27 October 2022
It’s time to premiere Episode 02 of Your Performance Review. In this episode, Molly is told to report to a performance reviewer because she is overwhelmed at work. She’s a mission expeditor, responsible for managing a hundred space flights a day. If she makes a mistake, people could die.
Molly knows she is being pushed by admin to run as many missions as she can, and she believes the reason for this is greed. She oversees mining missions, and admin wants to run as many as it can, regardless of the human cost.
Joy Donnell plays Molly in this episode. I felt honored that she agreed when I asked her; she's a brilliant author and speaker with her eyes on the future.
Your Performance Review is a science fiction audio drama in ten parts. Listen to it on Apple Podcasts, Spotify, Google Podcasts, Amazon Music, or wherever you get your podcasts. Find an archive of all episodes at FutureX Studio.
In 1992, Garry Kasparov laughed at how embarrassing his computer chess opponent was. Within five years, he was beaten by one. — Jonny Thomson, writing in Big Think
Since I’ve made a podcast that has bots in it, you may be wondering when I think the bots will take over everything. Some very smart people, including Stephen Hawking, futurist Ray Kurzweil, investor Sam Altman, and Elon Musk, have predicted that by 2045 a superintelligence will be running things on Earth. The superintelligence will arise after an artificial intelligence figures out how to improve itself, becoming recursively self-improving: each round of improvement makes it better at improving itself. Once it has optimized its own thinking, it will turn to optimizing the planet. That could mean eliminating people, or at least noticing what poor stewards of the Earth we’ve been and doing something about it.
Fans of superintelligence believe that would be a good thing. A supermachine, they reason, will run things better than humans can. This works well if you are super-wealthy, like Sam Altman or Elon Musk, and can build a secure hideaway in New Zealand, or even blast yourself off to Mars someday. But what about everyone else? Should we fear the superintelligence?
This is a good time to bring up a thought experiment usually called the paperclip maximizer. In this experiment, somebody programs a computer to make paperclips, and includes an instruction to make as many paperclips as possible. The computer isolates the part of itself that is smart and makes that part smarter and smarter. Eventually it realizes that people are a good source of iron, because of the iron in their blood, and it starts turning every human it encounters into paperclips until humanity dies out.
This supposes that not only would a computer figure out how to make itself smarter, but it would also figure out how to control things in the real world. Maciej Cegłowski, a programmer and author, gave a talk about people who worry about superintelligence, and poked some holes in those worries.
Just because a computer is super-smart doesn’t mean it would be able to control our world. Cegłowski gives the example of Stephen Hawking trying to get his cat into a cat carrier. Hawking was a brilliant physicist who changed the way we think about the Big Bang and time, but as smart as he was, there was no way he could get his cat into a carrier without help. Sensing that the argument might read as ableist because Hawking used a wheelchair, Cegłowski brought up Albert Einstein instead. Einstein was a burly, muscular guy. (He was also an avid sailor.) But if Einstein’s cat didn’t want to go into the carrier and Einstein got into a tussle with it, Einstein would come out the worse in that conflict.
I speak from experience about this cat carrier issue, having had to take our cat to the vet this week.
A superintelligence might not want to take over the planet, Cegłowski points out. We imprint our own ideas on its potential personality, assuming it would be power-mad and evil. It might be smart but depressed; it might want only to stare at the floor, or take up Buddhism, or contemplate the nature of existence without doing much of anything.
Aside from the synthetic voices I’ve used in Your Performance Review, I also thought it would be fun to try an artificial intelligence artist. DALL-E 2 is a project from a group called OpenAI. OpenAI’s motives for developing natural language processing projects have been criticized as self-serving. (The best critique was written by Karen Hao for the MIT Technology Review.) But these experiments are entertaining. You tell DALL-E 2 what you want a picture of, and it supplies it. For example, I asked for “a cat mixing a podcast, in the style of Picasso’s Blue Period.”
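For the technically curious, here is a minimal sketch of what a request like that could look like through OpenAI’s Python SDK. The helper function, prompt, and settings below are illustrative assumptions on my part, not the exact setup I used:

```python
# A sketch of a DALL-E 2 image request. The build_image_request helper
# is a hypothetical convenience function, not part of any SDK.

def build_image_request(prompt, size="1024x1024", n=1):
    """Assemble the parameters for an image-generation call."""
    return {"model": "dall-e-2", "prompt": prompt, "n": n, "size": size}

request = build_image_request(
    "a cat mixing a podcast, in the style of Picasso's Blue Period"
)

# With an API key configured, the actual network call would look
# roughly like this (commented out so the sketch runs offline):
#   from openai import OpenAI
#   client = OpenAI()
#   result = client.images.generate(**request)
#   image_url = result.data[0].url

print(request["model"])  # dall-e-2
```

The interesting part is how little there is to it: the prompt is the whole interface, and everything else is bookkeeping.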
I don’t think the Picasso estate will need to sue for copyright infringement, but it’s kind of impressive that a machine can make that. DALL-E 2 has been trained on millions of images, including cats and examples of Picasso’s Blue Period, and it knows roughly what a podcast mix looks like. Here is what I got when I asked for “a cat at an audio mix board, done in the style of Matisse.”
The colors are off (not Matisse-like at all) but the sentiment is kind of … French? Something about the sunglasses. When I look at these images, I am not worried about a superintelligence taking over any time soon. But if you’re working as an illustrator, it might be time to worry.
(c) Lee Schneider 2022. Take care of each other. Subscribe.