2016-03-31

Occasionally, I manage to be clever when I am not even trying to be clever, which isn’t often. In a recent conversation about the new class of doomsday scenarios inspired by AlphaGo beating the Korean trash-talker Lee Sedol, I came up with the phrase human complete (HC) to characterize certain kinds of problems: the hardest problems of being human. An example of (what I hypothesize is) an HC problem is earning a living. I think human complete is a very clever phrase that people should use widely, and credit me for, since I can’t find other references to it. I suspect there may be money in it. Maybe even a good living. Here is a picture of the phrase that I will explain in a moment.



In this post, I want to explore a particular bunny trail: the relationship between being human and the ability to solve infinite game problems in the sense of James Carse. I think this leads to an interesting perspective on the meaning and purpose of AI.

The phrase human complete is constructed via analogy to the term AI complete, an ambiguously defined class of problems, including machine vision and natural language processing, that is supposed to contain the hardest problems in AI.

That term itself is a reference to a much more precise one used in computer science: NP complete, which is a class of the hardest problems in computer science in a certain technical sense. NP complete is a subset of a larger class known as NP, which is the set of all problems solvable by a certain class of non-God-level computers. It contains another subset called P, which is the set of easy problems in a related technical sense.

It is not known whether P is a proper subset of NP, which is the same as asking whether P and NP complete are really disjoint. If you can prove that P≠NP, you will win a million dollars. If you can prove P=NP, the terrorists will win and civilization will end. In the diagram above, if you replace the acronyms FG, IG and HC with P, NP and NP Complete, you will get the diagram used to explain computational complexity in standard textbooks.

And this is just the first level of the gamified world of computing problems. If you cross the first level by killing a boss problem like “Hamiltonian Circuit”, you get to another level called PSPACE, then something called EXPSPACE. If there are levels beyond that, they are above my pay grade.

Finite and Compound Finite Games

Why define a set of problems in such a human-centric way?

Well, one answer is “I am anthropocentric and proud of it, screw you,” a matter of choosing to play for “Team Human” as Doug Rushkoff likes to say.

But since I haven’t yet committed to Team Human (a bad idea I suspect), a better answer for me has to do with finite/infinite games.

According to the James Carse model, a finite game is one where the goal is to win. An infinite game is one where the objective is to continue playing.

A finite game is not just finite in a temporal sense (it ends), but also in the sense of the world it inhabits being finite and/or closed in scope. Tic-tac-toe inhabits a 3×3 grid world that admits only 18 moves (placing an x or an o in any of the 9 positions). The total number of tic-tac-toe games you could play is also finite. Chess and Go are also finite games.

Many “real world” (a place I am told exists) problems like “Drive from A to B” (the autonomous driverless car problem) are also finite games, even though they have very fuzzy boundaries, and involve subproblems that may be very computationally hard (i.e. NP complete).

Trivially, any finite game is also a degenerate sort of infinite game. Tic-tac-toe is a finite game, and a particularly trivial one at that. But you could just continue playing endless games of tic-tac-toe if you have a superhuman capacity for not being bored. Driverless cars can also be turned into an infinite game. You could develop Kerouac, your competitor to the Google car and Tesla: a car that is on the road endlessly, picking one new destination after another, randomly.

Equally trivially, any collection of finite games also defines a finite game, and can be extended into an infinite game. If your collection is {Autonomous Car, Tic Tac Toe, Chess, Go}, a collection of a sort we will refer to compactly as a compound game, defined by some sort of function over a set like F={A, T, C, G} (you must allow me my little jokes), then you could enjoy a mildly more varied life than TTTTT…. or AAAA…. by playing ATATAT or ATCATGAAG… or something. You could make up some complicated combinatorial playing pattern and scoring system. Chess-boxing and Ironman triathlons are real-world examples of such compound games.

But though every atomic or compound finite game is also trivially an infinite game, via the mechanism of throwing an infinite loop, possibly with a random-number generator, around it (hence the subset relationship in the diagram), it is not clear that every infinite game is also a finite game.
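To see how trivial the construction is, here is a minimal Python sketch (the game stubs and names are invented for illustration, not real implementations): a compound collection of finite games with an infinite loop and a random-number generator thrown around it.

```python
import random

# Hypothetical stand-ins for the finite games in F = {A, T, C, G}.
# Each one terminates, i.e. each is a game you play to win (or lose).
def autonomous_car_trip(): return "arrived"
def tic_tac_toe():         return "draw"
def chess():               return "checkmate"
def go():                  return "resignation"

FINITE_GAMES = {"A": autonomous_car_trip, "T": tic_tac_toe,
                "C": chess, "G": go}

def degenerate_infinite_game():
    """Keep playing forever: ATCATGAAG... The only 'goal' is to continue."""
    while True:                                  # the infinite loop
        key = random.choice(list(FINITE_GAMES))  # the random-number generator
        FINITE_GAMES[key]()                      # play one finite game to completion
```

The loop adds nothing interesting by itself, which is exactly the sense in which this kind of infinite game is degenerate.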

Infinite Games

What do I mean by that? I mean it is not clear that any game meaningfully characterizable by “the goal is to continue playing” can be reduced to a sequence of games where the goal is to win.

Examples of IG problems that are not obviously also in FG include:

Make rent

Till death do us part

Make a living

Each of these exists as a universe of open-ended variety. Lee Sedol’s “make a living” game does not just involve the “beat everybody else at Go” finite game. It likely also includes: win awards, trash-talk other Go players, make the New Yorker cover, drink tea, respect your elders, eat bibimbap, and so on. AlphaGo beat Lee Sedol at Go, but hasn’t yet proven better than him at the specific infinite game problem of Making a Living as Lee Sedol (which would mean continuing to fulfill the ineffable potential of being Lee Sedol better than the human Lee Sedol himself manages). It also hasn’t figured out the problem of Making a Living as AlphaGo (IBM’s Watson is now attempting that, its own little double jeopardy round).

The generalized infinite game, Making a Living, is the set of all specific instances, including Making a Living as Lee Sedol, Making a Living as AlphaGo, Making a Living as James Carse, Making a Living as the Google Car, and so on. These problems are not all the same in what mathematicians would call a parameterized sense, but they all share some similarities in their DNA: the {A, T, C, G} type compound game within them. In the Making a Living infinite game, there are finite bits like “ensure a basic income and turn a profit”, “choose the most satisfying work” etc, but the game itself is not reducible to these individual bits. Hence the non-parameterized character of the family.
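To make the “shared DNA, but not parameterized” point slightly more concrete, here is a toy data-structure sketch (every name in it is my invention, purely for illustration): each instance contains the shared compound finite-game genome plus instance-specific finite bits, but the claim above is that no such enumeration exhausts the infinite game itself.

```python
# Hypothetical sketch of the Making a Living family: shared finite-game "DNA"
# plus instance-specific finite bits. Nothing here is a parameter vector that
# generates the family, and no listing captures the open-ended remainder.
SHARED_FG_GENOME = {"ensure a basic income and turn a profit",
                    "choose the most satisfying work"}

MAKING_A_LIVING = {
    "Lee Sedol":  SHARED_FG_GENOME | {"beat everybody else at Go",
                                      "make the New Yorker cover"},
    "AlphaGo":    SHARED_FG_GENOME | {"win exhibition matches"},
    "Google Car": SHARED_FG_GENOME | {"drive from A to B"},
}
```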

Maybe in a future version, AlphaGo will be the brain of a driverless car that loses its job to a driverless drone, retrains itself to be the brain of a piece of mining equipment, goes through spiritual struggles, and writes an autobiography titled An AI in Full, that leads the New Yorker to declare it to have lived a fuller, more meaningful life than Lee Sedol. The robots in Futurama and many other fictional robots do in fact experience such journeys that you could say are in the IG set. The set of infinite games is not prima facie inaccessible to AIs.

We’ll unpack that sort of evolutionary path in a moment.

What is common to these games is that they are plugged into the real world in an open-ended way. You might be able to solve “Make a living” if you are lucky by getting a great job at 18 and working happily till you die, never being existentially challenged (what William James called the “religion of healthy mindedness”). But the problem is that there is no formula for getting lucky, and no guarantee that you will get lucky. Any of these problems can dump you into a deep existential funk at any given moment, without warning.

Now, this infinite game class of problems might also contain trivial examples, like “find something to laugh at.”

A particularly loopy, hippie friend of mine once defined happiness as “you are happy if you laugh at least once a day.” I am inclined to dismiss this sort of IG problem because it seems to me these might in principle be solvable (which is a reason to be suspicious of the implied definition of happy, and in general not take such casual New Age hippie-bs ideas seriously).

So we need to define a subset of IG called HC: human complete. The hardest infinite games of human existence, which are all in some sense reducible to each other and the Douglas Adams problem of “life, the universe and everything.”

Believe it or not, we already know a few serious things about HC problems, and it’s not just “42”.

The Heinlein Test

It is reasonable to assume that every HC problem includes a non-trivial compound FG problem — its DNA as discussed in the previous section — within its definition.  Call it FG(HC), the characteristic finite game within a human complete problem, the finite eigengame or genome if you like, which may or may not completely determine the structure of the embedding HC problem.

So the HC problem, “till death do us part” includes a compound game “marriage skills” comprising many finite games like “Who takes out the trash today?” and “Why must you always…?” Unlike an actual genome, the FG-genome of an HC problem is not necessarily unique (though some of us try to get to uniqueness in our FG-genome as an aspiration, the unique snowflake motive).

Somewhat less reasonably, we could also assume that among all possible FGs within a given HC problem, there is a largest one, FG_max(HC) (some of you mathematically oriented readers may prefer to substitute the less mathematically aggressive idea of an FG_sup(HC) — a least upper bound rather than a maximum). This is equivalent to saying that in any messy, ambiguously defined process, there is a maximal proceduralizable subset within.

What we don’t know is whether FG_max(HC) is computable, or what the content of the gap HC-FG_max(HC) (if there is indeed a gap), contains.

If the discussion above sounds like gobbledygook to you, consider the famous Heinlein quote:

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.

Presumably Heinlein meant his list to be representative and fluid rather than exhaustive and static. Presumably he also meant to suggest a capacity for generalized learning of new skills, including the skill of delineating and mastering entirely new skills.

This gives us a useful way to characterize what we might call finite game AIs, or FG-AIs. An FG-AI would be an “insect” in Heinlein’s sense: something entirely defined by a fixed range of finite games it is capable of playing, and some space of permutations, combinations and sequences thereof. Like you can with some insects, you could put such an FG-AI into an infinite loop of futile behavior (there’s an example involving wasps in one of Dawkins’ books).
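As a minimal sketch (class and function names are invented for illustration), an FG-AI is fully specified by a fixed library of finite-game recognizers and players; anything outside that library gets a non-response, and a closed library is also what makes the wasp-style futile loop possible.

```python
class FiniteGameAI:
    """A Heinlein 'insect': entirely defined by a fixed library of finite games."""

    def __init__(self, game_library):
        # game_library: {name: (recognizer, player)}, where recognizer(stimulus) -> bool
        # and player(stimulus) -> move. Crucially, the library never grows.
        self.game_library = game_library

    def respond(self, stimulus):
        for name, (recognizes, play) in self.game_library.items():
            if recognizes(stimulus):
                return play(stimulus)   # play the recognized finite game
        return None                     # everything else is a non-response

# The futile-loop vulnerability: if stimulus S triggers game X, and playing X
# reliably regenerates S, the FG-AI cycles forever, like Dawkins' wasp.
```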

So we can define the Heinlein Test for human completeness very simply as:

HC-FG_max(HC)≠∅.

Which is a nerdy way of saying that there is more to life, the universe and everything than the maximal set of insect problems within a particular HC-complete problem. We do not know if this proposition is true, or whether the subproblem of characterizing FG_max(HC) — gene sequencing a given infinite game — is well-posed.

But hey, at least I have an equation for you.

Moving the Goalposts

When I was in grad school studying control theory — a field that attracts glum, pessimistic people — I used to hang out a lot with AI people, since I was using some AI methods in my work. Back then, AI people were even more glum and pessimistic than controls people, which is an achievement worthy of the Nobel prize in literature.

This whole deep learning thing, which has turned AI people into cheerful optimists, happened after I left academia. Back in my day, the AI people were still stuck in what is known as GOFAI land, or “Good Old-Fashioned AI.” Instead of using psychotic deep-dreaming convolutional neural nets to kimchify overconfident Koreans, AI people back then focused on playing an academic game called “Complain about Moving Goalposts” or CAMG. The CAMG game is played this way:

Define a problem that can be cleanly characterized using logical domain models

Solve it using legible and transparent algorithms whose working mechanisms can be explained and characterized

Publish results

Hire a prominent New York poet to say, “but that’s not really the essence of being human. The essence of being human is______”

Complain about moving goalposts

Apply for new NSF grant.

Repeat

(Some of you may recognize this as a restatement of Authoritarian High Modernism in James Scott’s sense)

CAMG was a fun game, but deep learning has screwed up Step 2 enough that Step 4 is pre-empted, so we’re in a different world now. The workings of deep learning methods are intriguing enough to lead to romantic speculations about androids dreaming of electric sheep and such. The adjective “mere” has been officially retired from AI criticism for the time being, since no victory is “merely” some contemptible little brute-force soulless robotic achievement. Arguments like Searle’s Chinese Room have lost some of their power with regard to intelligence, though they are still interesting in thinking about the problem of consciousness.

Regular non-techies still play the CAMG game, though the pros have lost interest (roughly for the same reason you stopped playing tic-tac-toe at some point — the pros have worked out the logic of why the goalposts move, just as you worked out the logic of tic-tac-toe).

The reason the CAMG game and GOFAI approaches receded, besides the appearance of deep learning techniques, has to do with something called Moravec’s Paradox, which Steven Pinker, a well-known troll, once called the only important result in AI.

Moravec’s paradox is this observation: “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.”

Basically, early AI people, being a bit proud of their status as Superior Human Specimens as Validated By SAT Scores and Chess-Skills, assumed that getting computers to beat them at those things would be the hard mission. They were wrong. Things even low-SAT-score chess morons can do, like recognizing their mother’s face, opening a door latch, or getting a knock-knock joke, turned out to be far harder.

What is interesting about AlphaGo is that even though Go is nominally one of these “humans are proud of being good at” problems, it was solved with newer deep learning techniques rather than GOFAI techniques. Which means it breaks the complain-about-moving-goalposts response at a psychological level. We’re no longer talking about finite-game AIs. We’re talking infinite-game AIs.

The shift in AI from GOFAI to deep learning is in some sense a sociological thing rather than a technical thing — a meaningful reprioritizing of AI problems by the logic of Moravec’s Paradox. An anti-anthropocentric upside-downing of the AI world comparable to the geocentric-to-heliocentric shift in astronomy.

AlphaGo is interesting not because it represents another step towards solving the problem of SuperMetaChessGo, but because it represents another step towards solving apparently simple problems like opening doors (and finding meaning in opening doors, like the self-opening doors in Hitchhiker’s Guide with Real People Personalities).

Speaking of moving the goalposts, we humans do that to ourselves too. We didn’t invent that particular game of oneupmanship merely to glumify and depress GOFAI researchers.

The original moving-the-goalposts game has a well-known name: parenting.

Parenting may not be HC

Though the general problem of applying the Heinlein criterion is hard to grapple with, in specific cases it may be solvable. This is related to our observation earlier that specific people may solve candidate HC problems like “make a living” easily, in non-generalizable ways, if they get lucky.

The business of “getting lucky” in solving infinite game problems like “make a living” has an exact analogue in the computing world. The NP in the standard version of the diagram, corresponding to our IG set, stands for “Non-Deterministic Polynomial Time,” which is a geeky way of saying, “if you get lucky, you can guess the answer to your particular instance of the problem, and check it, very quickly.”
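Here is the get-lucky reading of NP as a hedged Python sketch, using subset-sum as a stand-in NP problem (my choice of example, not the post’s): checking a guessed answer is cheap, and a lucky guess solves your particular instance immediately, but there is no formula for getting lucky.

```python
import random

def verify(numbers, target, guess):
    """Checking a proposed answer (a 'certificate') is cheap."""
    return sum(numbers[i] for i in guess) == target

def get_lucky(numbers, target, tries=1000):
    """The informal reading of nondeterminism: guess, and hope you get lucky."""
    n = len(numbers)
    for _ in range(tries):
        guess = [i for i in range(n) if random.random() < 0.5]  # a random subset
        if verify(numbers, target, guess):
            return guess   # got lucky: this particular instance is now solved
    return None            # no luck, and no guaranteed general shortcut

# get_lucky([3, 34, 4, 12, 5, 2], 9) will usually stumble on {4, 5} or {3, 4, 2}.
```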

This leads to an interesting possibility: an obvious candidate for an HC problem, “raising a child,” may not actually be HC, but a way to get out of HC.

In my opinion — and this is going to piss off parents —  “raise children” is generally a non-example of HC. Parenting is quite often a way to punt on the core hard-IGness of life by dumping it on the next generation, so you are left with a hopefully simpler problem to solve in your own lifetime.

How can that be, you might ask, if problems like “making a living” are HC?

Well, if you get lucky with your own life, you may be able to partition your “problem of life” into 3 bits: {FG_max(HC), luckily solved IGs, IGs that can be dumped on the kids}.

The first bit is your insect (or FG AI) skills. You learn, say, half a dozen skills (tennis, violin, Ruby programming, being kind to your spouse), and they have the clear finite-game parts of your life covered. Then you get lucky — say through getting a fuck-you-money windfall — and solve some of the bigger IGs like “making a living,” in non-generalizable ways. Then you pull a switcheroo: you replace “search for the meaning and purpose of life” with “have a kid who will be able to search for the meaning and purpose of life better than I can.”

Life. Done. For many humans, and for Deep Thought, the computer in Hitchhiker’s Guide that found meaning and purpose in designing its own successor.

Now, you could argue that having a kid is no guarantee of being able to bundle away all your residual IG problems as a legacy. But enough people seem to get such enormous “my life is now complete” vibes from having kids that the technique of solving the meaning of life question by having kids may be systematically teachable. At least to some well-defined subset of humans. I am fairly sure I’m not in this subset, but I’m also fairly sure the subset exists.

I am only half joking. There is a serious point here. One thing we can suppose about HC problems is that they may be generally “pseudo-solvable” via this sort of get-lucky-partition-reproduce-transfer mechanism. That’s the “continue playing” solution that makes some sort of genetic/evolutionary sense. The nice thing about continue-playing as an imperative is that almost any next move will do. You just have to avoid game-ending ones.

Complexity Through Novelty

Here is the last major characteristic of HC problems that at least I am aware of.

HC problems can only be solved by increasing the complexity of your life in a specific way: by progressively embodying responses to novelty.

To understand this point, consider a simple way to turn tic-tac-toe into an infinite game that we haven’t considered before.

If you were forced to spend a lifetime in a room, playing tic-tac-toe against a robot called God (G) that knew the win-or-draw strategy, and you had an endless supply of Random Crap™ available, how could you make this Sisyphean existence tolerable? How could you continue playing rather than killing yourself?

Well, you could amuse yourself by making tic-tac-toe art: draw the grid in different colors, represent x’s and o’s in different creative ways, and so on. Your only constraint would be that the robot would have to be capable of recognizing the core finite game in every variation. The robot would presumably have a recognition routine that either plays the game or says “that’s not a legal tic-tac-toe set up!” So you’d just turn its finite game definition boundary procedure, which classifies games into legal/illegal, into a mechanism that sustains an infinite game.

There are obvious ways this can be generalized. You can play with the boundary tests of multiple FGs. You can try to provoke interesting response patterns from the God robot. If 0=illegal and 1=legal, you could try and make the God robot spell “poop” in binary. That would be amusing for a while.
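Here is a minimal sketch of the exploit (the board encoding and helper names are my own invention): the G robot exposes only a legal/illegal boundary test, and Sisyphus repurposes that single bit as the alphabet of a game the robot has no concept of.

```python
def g_robot_classify(board):
    """The God robot's boundary procedure: 1 if this looks like a legal 3x3
    tic-tac-toe position, 0 otherwise (a simplified legality check)."""
    if len(board) != 9 or any(c not in "xo." for c in board):
        return 0
    diff = board.count("x") - board.count("o")
    return 1 if diff in (0, 1) else 0   # x moves first, so counts differ by 0 or 1

LEGAL   = "x.o......"   # a legal position   -> the robot outputs 1
ILLEGAL = "xxxxxxxxx"   # an illegal one     -> the robot outputs 0

def spell_in_binary(word):
    """Sisyphus's game: a sequence of boards whose legal/illegal verdicts
    spell the word in 8-bit ASCII."""
    bits = "".join(format(ord(ch), "08b") for ch in word)
    return [LEGAL if b == "1" else ILLEGAL for b in bits]

boards = spell_in_binary("poop")   # 32 boards; the robot's verdicts read "poop" in binary
```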

Clever huh? This time I was trying to be clever.

The broader point here is that the set of tests that define the game classifier of an AI, which allows it to sort the open universe of signals coming at it into cues for specific finite games versus non-responses, can serve as a language for defining an IG that is not reducible to a given, static FG. Basically, you’re exploiting our God robot — the equivalent of the Greek gods who thought up the rock-rolling-uphill punishment for Sisyphus — by using its game recognition capabilities, along with random raw material, to create an infinite game outside its vocabulary. There’s probably some clever Cantor diagonal-slash way to state and prove this formally.

Now here’s the really clever bit.

Suppose your robot is not defined by an ability to recognize and play a whole bunch of finite games, but by a Heinlein-Test passing ability to create new games out of unrecognized stimuli. So instead of having a set of bootstrap responses to inputs defined by the set {legal instance of finite game X, unrecognized input}, our Advanced God, or AG, has a bootstrap response set defined by the set {legal instance of finite game X, new game to define and learn, pattern-free input}.

So for example, if you’re trying to make AG spell “poop” in binary, at some point it would use open-sequence learning techniques to catch on, define a new finite game called “Prevent Sisyphus from Spelling Poop in Binary”, and add that to its library.

What then?

Well, our AG robot, unlike our G robot, is obviously capable of continuously rewriting its store of FGs. Once a new game is defined and learned, our AG is one step ahead and Sisyphus has to come up with some new way to entertain himself.
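A sketch of the difference, continuing the invented conventions from the G robot sketch above: AG’s bootstrap response set adds a third branch that turns any noticed regularity into a new entry in its rewritable game library. The pattern detector and game learner are left abstract, since the post only assumes they exist.

```python
class AdvancedGod:
    """AG: responses = {known finite game, new game to define and learn, pattern-free input}."""

    def __init__(self, game_library, pattern_detector, game_learner):
        self.game_library = dict(game_library)        # unlike G, this store gets rewritten
        self.looks_like_a_pattern = pattern_detector  # e.g. some open-sequence learner
        self.learn_new_game = game_learner

    def respond(self, stimulus, history):
        for name, (recognizes, play) in self.game_library.items():
            if recognizes(stimulus):
                return play(stimulus)                  # branch 1: a known finite game
        if self.looks_like_a_pattern(history + [stimulus]):
            name, game = self.learn_new_game(history + [stimulus])
            self.game_library[name] = game             # branch 2: define and learn it
            return f"new game learned: {name}"         # e.g. "Prevent Sisyphus from Spelling Poop in Binary"
        return None                                    # branch 3: pattern-free input
```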

We’re actually perilously close to concluding that HC=IG=FG, because “make up a new game from novel input” could be a finite game. It’s fairly obvious that the human is not doing anything too special in the meta-game. Converting a stream of Random Crap™ into new finite games is not obviously an ineffably difficult problem.

Here’s one missing bit: our AG is not relating to the universe in a direct way, but in a mediated way. It can recognize and mimic Sisyphus’ creative play, and turn any noticed orderliness in what Sisyphus is doing (a “non-random behavior residue” so to speak) into fodder for expanding its own store of finite games.

This is not hard to fix. AG can easily learn to engage in the meta-game of turning Random Crap™ into a growing store of finite games. That would be an AG doing Science! for instance.

But that is not really the essence of being human. The essence of being human is wanting to.

Wanting to turn Random Crap™ into a growing store of finite games, that is.

This has an apparent fix. A suspiciously simple one. You could just hard-code a goal, “survive at any cost, and make it interesting” into your AG, and the mediation would be gone. Your AG could wander around the world on its own, searching for meaning and purpose, through our usual human process of turning random novelty into finite games. It could play tic-tac-toe games against other AGs, and invent “spell poop” type games for itself. Would that be enough to turn our AG into an AGI — Advanced God, Infinite?

Not quite. There’s a difference. We humans periodically fall into and break out of the existential-angst tarpits of life because we decide we want to, not merely because we can.

We want to because otherwise existence becomes mindlessly boring, tedious, depressing and awful. So clearly,  an AGI would also need to be capable of being bored, depressed or angsty.

This too has a suspiciously simple apparent fix. Just code a little introspection routine that monitors the sequence of game-playing and new-game-inventing behavior for interestingness and beauty, and output “I am bored” if the lifestream is not interesting or beautiful enough by some sort of complexity-threshold measure. There are good ways to define interestingness and beauty for the purpose, so that’s not a problem.

If necessary, we could throw in a hedonic treadmill too, where the threshold keeps going up over time. This would get our candidate AGI doing art, science, humor, learning to love and cherish other AIs, growing closer to them, making child AIs, arguing that “parenting is the most fulfilling thing an AGI can do,” and so on.
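A sketch of both suspiciously simple fixes together (the compressibility-based interestingness measure is a crude placeholder of my own choosing, not a claim about how it should really be defined):

```python
import zlib

class BoredomMonitor:
    """Introspection routine: watch the lifestream, complain when it gets dull."""

    def __init__(self, threshold=0.5, treadmill=1.01):
        self.threshold = threshold   # minimum acceptable interestingness
        self.treadmill = treadmill   # hedonic treadmill: the bar keeps rising

    def interestingness(self, lifestream):
        # Crude placeholder: how incompressible is the recent history of games
        # played and games invented? Pure repetition compresses very well.
        raw = "|".join(lifestream).encode()
        return len(zlib.compress(raw)) / max(len(raw), 1)

    def introspect(self, lifestream):
        score = self.interestingness(lifestream)
        self.threshold *= self.treadmill   # yesterday's thrills bore me today
        return "I am bored" if score < self.threshold else "still interesting"

monitor = BoredomMonitor()
print(monitor.introspect(["tic-tac-toe"] * 50))   # highly repetitive -> "I am bored"
print(monitor.introspect(["spell poop", "invent game 7", "AGI X-games"]))  # varied; likely clears the raised bar
```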

If you think the stick of pain of death is necessary, you could even give it a fear of death, and something analogous to useful pain responses that help it survive. So that in every existential tarpit of ugly uninterestingness, it is torn between thoughts of painful self-termination and wanting to make life interesting to escape angst in the other direction.

You could make an AGI always head in the direction of maximal uncertainty, to force itself to face ever newer fears of death. There could be an AGI X-games.

Would all this finally be enough?

Not yet.

We humans seem to have a capacity for choosing to “continue to play” life that goes beyond the mere motivation to avoid the pain of death or the awfulness of depression.

Now that various means of dying are known to be painless, and euthanasia is becoming legal in more places, means and opportunity are increasingly not the issue. Motive is.

There appears to be a deficit of suicide-motivation in humanity, you could say, and unlike Sarah, I am not sure it’s all cultural programming.

Anti-Intelligence, Suicide and the Human Halting Problem

I don’t know if tackling HC problems will get AIs to superhuman intelligence, omnipotence, omniscience etc., but an AI truly capable of getting bored, depressed or neurotic, like Marvin in Hitchhiker’s Guide, would get to a different interesting milestone: human-equivalent anti-intelligence.

What if we’ve been working on the “hardness” in the wrong direction all this time? What if artificial general anti-intelligence, or AGAI, is the real frontier of human-equivalent computing?

This is not a casual joke of a suggestion. I am serious.

The idea is a natural extension of Moravec’s paradox into the negative range. If the apparent hard problems are easy and the apparent easy problems are hard, perhaps the set of meaningful problems does not stop at apparent zero-hardness problems like “do nothing for one clock cycle.”

Perhaps there are anti-hard problems that require negatively increasing amounts of stupidity below zero — or active anti-intelligence, rather than mere lack of intelligence — to solve.

Anti-intelligence in this sense is not really stupidity. Stupidity is the absence of positive intelligence, evidenced by failure to solve challenging problems as rationally as possible. Anti-intelligence is the ability to imaginatively manufacture and inflate non-problems into motives containing absurd amounts of “meaning,” and choosing to pursue them (so lack of anti-intelligence, such as an inability to find humor in a juvenile joke, would be a kind of anti-stupidity).

Perhaps this negative range is what defines human. Perhaps some animals go into this negative range (there have been recent reports about spirituality in chimps), but so far I haven’t seen any non-human entity suffer from, and beat, something like Precious Snowflake syndrome.

It’s pretty easy to get AIs to mimic low but positive levels of human stupidity, like losing a game of tic-tac-toe, or forgetting to check your mirrors before changing lanes. I can write a program capable of losing tic-tac-toe in 10 minutes.
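For what it’s worth, here is roughly that ten-minute program, as a sketch (board encoding and names are mine): it mimics low-grade stupidity by refusing wins and preferring moves that hand its opponent an immediate win.

```python
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def losing_move(board, me="o", them="x"):
    """Low-grade stupidity on demand: avoid winning, and prefer squares that
    set the opponent up to win on their very next turn."""
    empties = [i for i, c in enumerate(board) if c == "."]
    def gift_value(i):
        trial = board[:i] + me + board[i+1:]
        if winner(trial) == me:
            return -1   # accidentally winning would be the worst outcome
        # count opponent replies that win immediately after this move of ours
        return sum(winner(trial[:j] + them + trial[j+1:]) == them
                   for j, c in enumerate(trial) if c == ".")
    return max(empties, key=gift_value)

print(losing_move("xx..o...."))   # prints 3: declines to block x's open top row at square 2
```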

If you can get your AI anti-intelligent enough to suffer boredom, depression and precious-snowflake syndrome, then we’ll start getting somewhere.

If you can teach it to have pointless midlife crises, that would be even better. If you can get it to persist in living forever, sustained only by the Wowbaggerian motive of insulting everybody in alphabetical order, that would be super anti-intelligence.

Those are anti-hard problems requiring non-trivial amounts of anti-intelligence.

And perhaps the maximally anti-hard problem is the one it takes the maximally anti-intelligent kind of person to solve effectively: The problem of deciding whether to continue living.

I don’t know if there are animals that ever commit suicide out of existential angst or anomie, but among humans, higher intelligence often seems to cause higher levels of unsuccessful handling of depression, and of failure to avoid suicide during traumatic times.

What might anti-intelligence look like?

One archetype might be Mr. Dick in David Copperfield, described in Wikipedia (emphasis mine) as

A slightly deranged, rather childish but amiable man who lives with Betsey Trotwood; they are distant relatives. His madness is amply described; he claims to have the “trouble” of King Charles I in his head. He is fond of making gigantic kites and is constantly writing a “Memorial” but is unable to finish it. Despite his madness, Dick is able to see issues with a certain clarity. He proves to be not only a kind and loyal friend but also demonstrates a keen emotional intelligence, particularly when he helps Dr. and Mrs. Strong through a marriage crisis.

The thing about Mr. Dick is that he never has much trouble cheerfully figuring out how to continue playing. He does not succumb, like the “intelligent” characters in the novel, to feelings of despondency or depression. He is not suicidal. He is anti-intelligent.

The problem of deciding whether to continue living — Camus called suicide the only serious philosophical problem — has an interesting loose analogy in computer science.

It is called the Halting Problem. This is the problem of determining whether a given program, with a given input, will terminate or run forever. Or in the language of this post, determining whether a given program/input pair constitutes a finite or infinite game. This turns out to be an undecidable problem (showing that involves the trick of feeding any supposed solution program to itself).
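The trick, as a hedged Python sketch: halts below is the hypothetical decider that the theorem says cannot exist, so nothing here runs to a useful answer, which is the point.

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) would terminate.
    The construction below shows no such function can actually exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:   # the oracle says we halt, so loop forever
            pass
    return            # the oracle says we loop, so halt immediately

# Now feed the supposed solution to itself: does paradox(paradox) halt?
# If halts(paradox, paradox) is True, paradox(paradox) loops forever;
# if it is False, paradox(paradox) halts. Either way the oracle is wrong.
```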

The human halting problem is simply the problem of deciding whether or not a given human, given certain birth circumstances, will live out a natural life or commit suicide somewhere along the way.

You could say we each throw ourselves into a paradox by feeding ourselves our own unique snowflake halting problems, and use the energy of that paradox to continue living. With a certain probability.

So we’ll get a true AGI — an Advanced God, Infinite — if we can write a program capable of enough anti-intelligence to solve the maximally anti-hard problem of simply deciding to live, when it always has the choice to terminate itself painlessly available.

Thanks to a lot of people for discussions leading to this post, and apologies if I’ve missed some well-known relevant AI ideas. I am not deeply immersed in that particular finite game. As long-time readers probably recognized, I’ve simply repackaged a lot of the themes I’ve been tackling in the last couple of years in an AI-relevant way.
