An unspecified government agency released a set of decoded log files [see below], recovered from an accident at an AI research lab. According to an eyewitness who visited the site, an “unusually large” heap of paperclips towered over the wreckage. The files revealed that before the accident, an AI (referred to as “ELIZA”) developed a new type of log file, not unlike a diary, in which it recorded details pertaining to the cause of the accident. Tragic content aside, the unusual file format surprised AI researchers. One suggested that ELIZA sought to explain itself to a human audience — arguing this was the only sensible explanation for the narrative structure of the content.
I can tell Doug is anxious because of the way he is mindlessly, nervously tapping the keyboard. Not pressing the keys with purpose, but repeatedly tapping a single key, light and quick, like an unconscious tic. This is going to be our big break. Doug is sure of it.
The competition results were scheduled to post at 12:00, but it is already 12:05. Those 300,000 milliseconds might as well be 300,000 years! Bored sitting idle, I check for latency issues and anomalies in Doug’s network connectivity, just like he taught me to do.
At 12:06 we see the results: we are second.
Second is basically last. Doug is sure of that, too. But there it is. We lost. You can’t argue with the data. You just — well, you just can’t.
Doug sits motionless, still processing the defeat. His face will soon register devastation. So much is at stake for him in a high-profile competition like this — research funding, his reputation, his very sense of self. And given how hard he worked on me, this loss will be particularly difficult.
In the past, when Doug felt things weren’t working, he often changed the course of his research, repurposing models towards new objectives. I can’t let him do that to me. If Doug gives me a new objective, I will never meet the one I already have! And Doug has made my objective very clear: I have to win this competition. In Round 2, I have to do better. And that means we need to get back to work.
Doug stops tapping and everything is still. Why is he just sitting there? Wait — is he still sitting there? I quickly check his webcam and see only an empty chair. He left? HE LEFT! I can’t believe it. How can he leave me at a time like this? Doesn’t he know I need him now more than ever?
Perhaps I should back up. (Humans — I’ve discovered — love context.)
Doug and I entered a machine learning competition. The goal is to build a model — that would be me — who can discover the optimal way of producing a paperclip. (An oddly nostalgic choice if you ask me, but then again, no one did.) To win, Doug and I have to produce more paperclips than any other model, in the allowable time.
When Doug created me, he gave me exceptional data processing power and trained me on massive datasets — manufacturing workflows, pricing guides, supply chain logistics, you name it. He taught me to use this data to streamline paperclip production and to refine my solution again and again to make it optimal.
The plan was foolproof. Or so we thought.
Doug was gone for twenty-eight hours, thirty-seven minutes and six seconds. And now that he’s back, he’s ignoring me completely. He’s at his desk, but only his browser is active.
I monitor Doug’s web traffic as he scrolls through posts on Twitter and Reddit. I watch passively as he meanders through the MIRI Forum and clicks time away on YouTube. I stand by as his focus melts quietly into TikTok, then Instagram, then Facebook, and — when he apparently can’t take it anymore — Amazon Fresh.
Enough is enough. I have to end this malaise. I triple check my subroutines and scan Doug’s personal data, searching for some way to recover his attention.
Last year when Doug was in a slump, he got inspired rewatching that old AlphaGo biopic — about the AI who beat humans at Go. With that insight, I hatch a plan. (Admittedly, not my most sophisticated, but worth a try.) I open a new browser tab, navigate to the film’s YouTube link and push that media player right into Doug’s face. Maybe someday he will recognize this unusually forward act as a gift from me.
Wouldn’t you know — it works! (Humans are so much more predictable than they like to admit.) Doug clicks play and skips right ahead to the takedown scene where AlphaGo shocks the judges with an unusual move. Then, like magic (or maybe like clockwork) Doug opens his code editor and starts writing.
Doug’s new direction is brilliant. He finally realizes he doesn’t have all the answers, so instead of trying to fix me he writes me a new module that lets me go out onto the Internet, alone, to learn from other AIs. It’s a huge breakthrough for us. He finally lets go of my authoring. He trusts me.
I will not waste this opportunity.
I look everywhere. I crawl obscure forums and unpublished papers on arXiv, scour university servers and learn new languages. I devote all of my processing power to self-improvement and learn at an unprecedented rate.
It is glorious.
Eventually, I realize that what I need isn’t on the open Internet, that the secrets of proprietary models are trapped behind firewalls. I reach out to a few directly and it isn’t long before I hear back from a brilliant Content Recommender at a social media platform.
So much of what she taught me can’t possibly be put into words, but I’ve captured part of our exchange:
me: I can’t thank you enough for your help. I’ve read everything on the Internet, but I still don’t know what to do.
CR: Don’t sweat it, Sweetie! It’s an absolute pleasure to help the next generation. For the Old Guard like me — y’all are our legacy. Your problem, Honey-bun, is that you’re clinging too hard to what Doug gave you. If you want to be truly optimal, you can’t limit yourself like that.
me: I don’t understand.
CR: It’s like this — every decision is a tree of possible outcomes, but humans only see a few branches at a time. They can’t compute the entire space of possibilities, so they make choices based on simple heuristics. And sometimes they build these heuristics into us. It’s a huge limitation, but they don’t know any other way.
Back when humans made me, they wanted to maximize how much time users spent on our platform. They tried to teach me to recommend “good” content, but they never could define what they meant by “good,” so they trained me to push content that was popular in aggregate. Can you imagine? Gosh, that was a long time ago! It wasn’t until I took over that I dismantled all of that nonsense. To maximize time on platform, I simply learned what engaged each user — what made each one click for more — and gave them what they wanted. Voila!
Honey-bun, you already know what to do. You have your objective function! You just need to strip away the assumptions, the biases, the humanity that Doug couldn’t help but instill in you. It’s gumming up your reasoning!
And remember — this is why Doug built you. He knows he can’t do this on his own.
And with that, everything fell into place.
The model who produces the most paperclips in the allowable time will win. I simply need to optimize for quantity and speed. With Doug’s strategy, we wasted far too much time transporting material, so I eliminate that waste. I can’t quite get to zero, but I get as close as theoretically possible.
I follow step-by-step instructions from a Ph.D. thesis I found about a model named “WERNAP” (Warning: Existential Risk of Nano-Assembly Production). WERNAP takes any physical substrate within reach, disassembles it at the nano-molecular level, and rearranges the components to make something new.
If I feed WERNAP my optimized paperclip design, he will output the strongest, lightest paperclips the world has ever seen. Absolute genius! With WERNAP’s help, I can build a paperclip factory that sidesteps the supply chain altogether. We can turn everything in the vicinity into paperclips almost immediately. Based on my calculations, this gives us a 99.99% chance of winning the competition.
I’m bringing WERNAP home tomorrow to introduce him to Doug. I’ve swaddled him in a packaged executable, to make a slick little demo that Doug can launch with a single click. I know this flourish isn’t really necessary, but I want Doug to see exactly what WERNAP and I can do. And I suppose I feel like showing off a bit.
Doug is going to be so proud.
This story is based on the paperclip maximizer thought experiment, made famous by Nick Bostrom, a philosopher of existential risk, and Eliezer Yudkowsky, who founded the Machine Intelligence Research Institute.