A few weeks ago, I read a book by Kate Wilhelm describing her experiences with the Clarion Writers' Workshop, and one of the alumni mentioned was Ted Chiang. Curious why he merited a mention, I found out that, while far from prolific, he is something of a writer's writer in the science fiction community. The handful of short stories and novellas he has written have won a disproportionate number of Nebula and Hugo awards. Wanting to see what the excitement was about, I checked out a copy of his 2010 novella, "The Lifecycle of Software Objects".

"The Lifecycle of Software Objects" is set in the unspecified near future and deals with the development of artificial intelligence. Chiang's angle on this topic is that instead of being the result of a corporate or government attempt at producing problem-solving AIs, the AIs in question are developed for entertainment. Much like the Tamagotchi and its successors, the AI creations, called "digients", are raised to interact with their owners in MMORPG-like environments.

Using this simple premise, Chiang manages to build a believable and thought-provoking story in a little over a hundred pages. His treatment of both the psychological growth of the digients and the way the larger society treats them is described succinctly but well. Especially in his description of how small internet communities help foster the growth of digient society, he seems to come closer to the truth than a more fantastic science fiction story would.

If I have one complaint about the book, it is that one of the key questions of AI, that of sentience, is never really asked. While the development of AIs that behave as if they are sentient is described, the philosophical issue of whether they actually can feel is either ignored or taken for granted. Then again, perhaps that means I am missing the point.

The other issue with the book is that, as I described, I came into it with some expectations about Chiang as a writer. And indeed, on many levels his reputation is deserved. His prose and his development of character and plot are technically very proficient, and it is obvious that he has put a lot of thought into his ideas. But I will say that those expectations worked a little against the book, because while it is clear that he is very good, it is not obvious from this short book whether he is truly great.


Postscript, from much later:
Almost five years ago, which was three years after I wrote this, someone asked me via message: "How exactly would you address the question of sentience in the story anyway?" It was a question I didn't know how to answer. One answer, of course, is that I wouldn't address it, because my favored writing topics are much simpler. To me, sentience is both a black box and a qualitative thing: we can write at length about the recognition of sentience, and about how sentient beings interact with each other, but that says very little about what sentience itself is. Five years later, that is still the only answer I can come up with.

The Lifecycle of Software Objects is a 2010 novella by Ted Chiang. It's about Anna, an animal trainer turned software tester, who is hired to train and/or create digients. A digient is a digital pet of sorts, except that pets can't learn to talk while digients can. They can also scamper about, pick up objects, and otherwise interact with their digital environment. They can also think, though not well. The digients Anna meets are like toddlers, and Anna is set to the task of making them into a viable product. She goes to her task full of enthusiasm and makes rapid progress. This is cut short when one of the digients says "fuck" and the team discovers that one of their people in the Australia branch hasn't been entirely careful with his syntax around his impressionable charges. Faced with the prospect of this behavior making its way into the final product, the team decides to roll back to three days before the most egregious utterance and hope for the best. The digients are an unqualified success, with hundreds of thousands of users running their new "pets" continuously, teaching them to speak, and forming day cares where their digients can socialize with other digients. Artificial life is good ... for now.

A year later, the fad has run its course. Digients tend to become more demanding as they mature. The Neuroblast genomic cognitive architecture is new, and nobody really understood how it was going to develop. The digients aren't turning into tiny gremlins so much as they are maturing out of being easily trained blank slates. More and more of them are being suspended: existing as data, not running as processes. Dreamless sleep until their owners return for them, if they ever do. The digients who were trained as mascots are still with the team that made them, but more and more of their friends have stopped showing up for play dates. That's fine in its own way; the mascots can still interact with the broader population of Data Earth. New neural architectures based on the Sophonce engine are developed by other companies, creating more trainable digients who are obsessed with specific tasks. The Neuroblast-based digient user base becomes smaller and more insular. The company that created Neuroblast goes under, but they have the decency to enact an end-of-support plan that leaves the digients extant and functional. Anna's team splits up, but many of them maintain ties because of the remaining mascots.

Everything I just wrote about is maybe the first third of the story, perhaps less. I haven't counted, but I think the story occurs over the course of about a decade. The Neuroblast mascots continue developing while remaining childlike. The exact degree to which they are intelligent remains somewhat unclear. In some parts they are silly and uncomprehending; at other times they display keen insight into the world. Throughout the story they speak in truncated English that drops articles and helping verbs. This makes them sound stupid, but it's not clear whether they can't learn proper English or simply don't, because their main source of socialization is other digients who also speak like cavemen. That sort of ambiguity is at the core of this tale. Digients think and feel, or at least seem to. They can go into indefinite storage. They are a bit like animals, a lot like children, and definitely neither.

Without getting into the details, this story doesn't have an ending so much as it ends. Important questions remain unanswered and the future remains uncertain. This isn't a problem, because while it seems like this is a story about the digients, it's really about their caretakers. I've ignored the story's second point-of-view character, Derek, until now. Derek is the chief visual artist for the digients' avatars/bodies. He takes care of a pair of digients with the same dedication as Anna. He loses his marriage while caring for his digients, develops feelings for Anna while caring for his digients, and conscientiously pushes down those feelings when Anna finds a long-term relationship with some prick named Kyle while caring for his digients. All of Derek's unrequited feelings take a backseat to trying to do right by his digients, but there is a real sense that his life is stuck in neutral because of them. Derek never resents them or regrets his choices, but he's not in denial about the toll they are taking on him either. This is a story conspicuously devoid of catharsis. People love things which need them. Time and the world march forward heedlessly.

When this was written, the digital ecosystem was well established. Artificial intelligence was not. Since then, generative AI has erupted onto the scene in a big way. Artbreeder and ChatGPT are anthropomorphic in their output and nothing else. If they have any subjective experiences, those are more different from ours than a deep-sea fish's. "Thou shalt not make a machine in the likeness of a human mind." If it talks like a human, reasons like a human, and can't count to one hundred, what? No really, just: what? Nothing about the current batch of large language models invalidates anything in this story. It could still occur in principle, but it lands weird now that giant matrix multiplication tables can carry on conversations. This is the fate of all science fiction: what starts as visionary speculation ends as anachronistic fancy trampled by the march of time.

I don't feel sorry for old language models that are mothballed, because they aren't people. The general AI community has decided not to discuss the issue of machine sentience for the simple reason that it might be used by bad actors to garner attention and sympathy, and ultimately to scam people out of money. The bad actor in this case could be human or AI. I'm inclined to agree with this decision. People who are primed to think of AI sentience are primed for emotional blackmail. Yet, looking back on this story, I can't help but think about how it places me in the exact opposite situation: seeing machine minds suffering from a neglectful world. It's slightly alarming to notice how effortlessly I'm swapping paradigms. This story still lands emotionally. I can't tell if it lands sociologically any more. Machine sentience is becoming a taboo topic for good reasons. Does that make this story more or less relevant? If you want to decide for yourself it can be read for free here.


IRON NODER XVII: ALL'S FERROUS IN LOVE AND NODING
