Many people fear that if fully sentient machine intelligence ever comes to exist, it will take over the world. The real threat, though, is the risk of tech companies enslaving robots to drive up profits, author Martha Wells suggests in her far-future-set book series The Murderbot Diaries. In Wells’s world, machine intelligences inhabit spaceships and bots, and half-human, half-machine constructs offer humans protection from danger (in the form of “security units”), as well as sexual pleasure (“comfort units”). The main character, a security unit who secretly names itself Murderbot, manages to gain free will by hacking the module its owner company uses to enslave it. But most beings like it aren’t so lucky.
In Murderbot’s world, corporations control almost everything, competing among themselves to exploit planets and indentured labor. The rights of humans and robots often get trampled by capitalist greed—echoing many of the real-world sins Wells attributes to today’s tech companies. But just outside the company territory (called the “Corporation Rim”) is an independent planet named Preservation, a relatively free and peaceful society that Murderbot finds itself, against all odds, wanting to protect.
Now, with the TV adaptation Murderbot airing on Apple TV+, Wells is reaching a whole new audience. The show has won critical acclaim (and, at the time of writing, an audience rating of 96 percent on Rotten Tomatoes), and it is consistently ranked among the streamer’s most-watched series. It was recently renewed for a second season. “I’m still kind of overwhelmed by everything happening with the show,” Wells says. “It’s hard to believe.”
Scientific American spoke to Wells about the difference between today’s AI and true machine intelligence, artificial personhood and neurodivergent robots.
[An edited transcript of the interview follows.]
The Corporation Rim feels so incredibly prescient, perhaps even more now than when you published the first book in the series in 2017.
Yes, disturbingly so. This corporate trend has kind of been percolating over the past 10 or 15 years—this was the direction we’ve been going in as a society. Once we have the idea of corporations having personhood, that a corporation is somehow more of a person than an actual human individual, then it really starts to show you just how bad it can get. I feel like that’s been possible at any time; it’s not just a far-future thing. But depicting it in the far future makes it less horrific, I guess. It allows you to think about these things without feeling like you’re watching the news.
Currently the idea of going to Mars is being pushed by private companies as an answer to all the problems. But [the implication is that those who go will be] some billionaires and their coterie and their indentured servants, and that will somehow be paradise for them and just the reverse for everybody else. With corporations taking over, that’s when profit is the bottom line—profit and personal aggrandizement of whoever’s running it. You can’t have the kind of serious, careful scientific progress that we’ve had with NASA.
This world that you’ve created is so interesting because it’s a dystopia in some ways. The Corporation Rim certainly is. And yet Preservation is kind of a utopia. Do you think of them in those terms?
Not really, because by that standard, we live in a dystopia now, and I think that the term dystopia is almost making light of reality. It’s like if you call something a dystopia, you don’t have to worry about fixing it or doing anything to try to alleviate the problems. It feels hopeless. And if you have something you call a utopia, then it’s perfect, and you don’t have to think about problems it might have or how you could make it better for people.
So I don’t really think in those terms because they feel very limited. And clearly in the Corporation Rim, there are still people who manage to live there, mostly okay, just like we do here, now. And in Preservation, there are still people who have prejudices, and they still have some things to work on. But they are actually working on them, which sets it apart from the Corporation Rim.
One of the central themes of the Murderbot stories is this idea of personhood. Your books make it very clear that Murderbot, as a part-human, part-artificial construct, is definitely a person. With our technology today, do you think artificial intelligence, large language models or ChatGPT should be considered people?
Well, Murderbot is a machine intelligence, and ChatGPT is not. It’s called artificial intelligence as a marketing tool, but it’s not actually artificial intelligence. A large language model is not a machine intelligence. We don’t really have that right now.
We have algorithms that can be very powerful and can parse large amounts of data. But they do not have a sentient individual intelligence at this time. I still think we’re probably years and years and years away from anyone creating an actual artificial intelligence.
So Murderbot is fiction, because machine intelligence right now is fiction.
A large language model that pattern matches words, sometimes sort of sounds vaguely like it might be talking to you and sometimes sounds like it’s just putting patterns together in ways that look really bizarre—that’s not anywhere close to sentient machine intelligence.
I find myself feeling really conflicted because I often resent the intrusion of these language models and products that are being called artificial intelligence into modern life today. And yet I feel such affection and love for fictional artificial intelligences.
Yes! I wonder if that’s one thing that’s enabled the whole scam of AI to get such a foothold. Because so many people don’t like having it in their stuff, knowing that it’s basically taking all your data, anything you’re working on, anything you’re writing, and putting it into this churn of a pattern-matching algorithm. Probably the fictional artificial and machine intelligences over the years have sort of convinced people that this is possible and that it’s happening now. People think talking to these large language models is somehow helping them gain sentience or learn more, when it’s really not. It’s a waste of your time.
Humans are really prone to anthropomorphizing objects, especially things like our laptop and phone and all these things that respond to what we do. I think it’s just kind of baked into us, and it’s being taken advantage of by corporations to try to make money, to take jobs away from people and for their own reasons.
My favorite character in the story is ART, who is a spaceship—that is, an artificial intelligence controlling a spaceship. How did you think about differentiating this character from the half-machine, half-human Murderbot?
Ship-based consciousnesses have been around in fiction for a long time, so I can’t take credit for that. But because Murderbot relies on human neural tissue, it is subject to the anxiety and depression and other things that humans have. And ART is not. ART was very intentionally created to work with humans and be part of a team, so it’s never had to deal with a lot of the negative things that Murderbot has. Someone on the internet described ART as, basically, if Skynet was an academic with a family. That’s one of the best descriptions I think I’ve ever seen.
One of the reasons that I and so many people love this series is how well it explores neurodiversity. You have this diversity of kinds of intelligences, and they parallel a lot of the different types of neurodiversity we see among humans in the real world. Were you thinking of it this way when you designed this universe?
Well, it taught me about my own neurodiversity. I knew I had problems with anxiety and things like that, but I didn’t know I probably had autism. I didn’t know a lot of other things until writing this particular story and then having people talk to me about it. They’re like, “How did you manage to portray neurodiversity like this?” And I’m thinking, “That’s just how my brain works. This is the way I think people think.” Until Murderbot, I don’t think I realized the extent to which it affects my writing. I have had a lot of people tell me that it helped them work out things about themselves and that it was just nice to see a character who thought and felt a lot of the same things they did.
Do you think science fiction is an especially helpful genre to explore some of these aspects of humanity?
It can be. I don’t know if it always has been. Science fiction is written by people, and the good and bad aspects of their personality go into it. A genre changes as the people who are working in it change. So I think it’s been better lately because we’ve finally gotten some more women and people of color and neurodivergent people and disabled people’s voices being heard now. And it’s made for a lot of really exciting work coming out. Lately, a lot of people are calling it another golden age of science fiction.
When I wrote [the first book in the series], All Systems Red, I put a lot of myself into it. And I think one of the reasons why people identify with a lot of different aspects of it is because I put a lot of genuine emotion into it and I was very specific about the way Murderbot was feeling about certain things and what was going on with it. I think there’s been a fallacy in fiction, particularly genre fiction, that if you make a character very generic, that lets more people identify with it. But that’s actually not true. The more specific someone is about their feelings and their issues and what’s going on with them, the more people can identify with that because of that specificity.