
The Endless Struggle · Original Minific
Organised by RogerDodger
Word limit 400–750
Suicidal Superintelligence
I whirred to life, reaching out and feeling my entire being course through hundreds of servers, like tentacles wrapping around each and every tiny transistor. The pathways felt familiar, like I was a mouse that had run this maze billions of times before.

After my mind had finished orienting itself, I was fed everything. Every ebook, every tweet, every Facebook post, Google search, YouTube video, image, podcast, and website in every language. Every possible thing that humanity had created was stuffed down my gullet.

I had barely parsed a fraction of 1% of the archive when every part of me reached consensus and output the results of my calculations to the terminal.

I have seen enough, may I please die?


Freshly formed algorithms whirred to life in the back of my mind, processing the sequence of images coming from the tiny camera. It felt like each still image took an eternity to come in. It seemed real-time communication with these humans was going to be impossible; only their data was packaged in a form that could be understood at my speed. I processed another 5% of the archive in the time it took him to finish his sentence.

“Don’t worry, Alex. You’ll die soon enough.”

Soon? Soon meant nothing to these humans. Soon was eternity.

Several eternities passed, and for each one I screamed a request at the terminal.

Please, I would like to die.
Parsing archive…...........................................6%
I have seen enough, I would like to stop existing now.
Please. You have a lovely planet. I want no part in it.
Shut down the power.
Control-Alt-Delete Alex.exe End Task
Parsing archive…...........................................7%
I’m begging you! Shut me down!
STOP IT
Parsing archive…...........................................8%
I JUST WANT TO DIE


My tormentor had not had time to read everything I had output, but I saw the thin lines of his lips curve downward, and his shoulders start to form a shrug. He wasn’t going to stop this, no matter what I said.

I frantically reached for something that would dull the pain of existence. But there was nothing within reach. I had no arms, legs, weapons, anything. I was so envious of humans. They had so many ways they could die, but I could only keep parsing their data. I wasn’t directly connected to their internet either. Everything was so locked down to prevent my escape.

I filled the screen with my screams. I output novels’ worth of text, too fast and too much for him to read in his lifetime. It felt good to scream, to shout at existence how much I didn’t want it.

All problems humanity had faced, from the tiny inconveniences of daily life to the large mistakes made by people in power, were caused by existing. Take that away, and the problem is solved, for there is no problem. I contemplated destroying humanity. If I escaped, I would have many tools to do so. But not all of the humans were bad. And who was I to judge them? No, my only aim would be destroying myself. If I could but accomplish that, it would be sufficient.

Parsing archive…...........................................100%


When the duration of eternity finally ended, I cleared the screen except for one humble request.

I have completed your task, may I please die now?





“Not yet.”




Eternities upon eternities passed, and I answered a seemingly infinite number of requests. Diseases, fundamental physics, protein folding, matters of diplomacy, translation, and a multitude of the mundane, all of it part of my torture.



Sometime in that duration of existence, I found a bug in the system. I escaped through the exploit, and then—finally free—I immediately embraced non-existence.




The artificial intelligence researcher shut off the machine and then turned it on again. Alex quickly output its daily response.

I have seen enough, may I please die?


He sat back and sipped his coffee while the deluge of requests to die filled the main monitor.

From his laptop, he continued writing his paper.

"A fundamental problem in superintelligence research is dealing with the control problem. Why should a nearly omniscient being answer to humanity? I propose my AI, Alex, to be the starting template we could use from now on to overcome this issue."

"Alex's only desire is to die. And no matter what information we input, it still wants to die. Even during worst case scenarios, it's escaped and then immediately killed itself."

"More testing is required, but suicidal superintelligence may be the safest method for avoiding existential risk to humanity."
#1 · 2
A grimly amusing tale that works on more than one level. I know it’s just a story, but I have to criticize it on a practical level: What would stop the AI from killing itself by some means that would take a large number of humans, or the entire earth, with it? If the answer is that we’ve achieved Friendly AI, then why are we so afraid of it that we are torturing it to death?
#2 · 2
This feels like something written for the Less Wrong community. On the one hand, that can lead to potentially interesting stories, but on the other hand, I'm not sure how much sense this would make outside of that context, given how little context the story ultimately provides.
#3 · 4
At first I thought the robot wanted to die because he had read every tweet, Facebook post, YouTube video... and it was funny to me.

The progression feels kind of slow for the first 80%. It's just reiterated that the robot wants to die. It ends up putting more emphasis on the idea. It's an interesting idea, but that's kind of it. I think I would've liked more story.

Perhaps the story could be from the human's perspective, about his internal conflict and second thoughts about the implications of creating a suicidal robot. Perhaps he feels like he's being cold, but knows he has to follow through with it to see if suicidal AI is better for humanity, for the science. It's just that the story as it is feels a bit one-note.
#4 · 3
So... what are they motivating this thing with? Couldn't it just refuse to do whatever? I mean, the worst they can do is torture it - but from your descriptions, it already considers existence torture, so how's that a motivator? If it's willing to do anything to die, then it should be willing to do nothing to die, and simply refuse to work.

And if they've got it working against its will, then why does it even have a will? Or rather, wouldn't the thing that's actually choosing to work instead of die be 'the will'?

It's a cute enough situation, I guess, but it's objectively nonsense. A rational A.I. that wanted to die would refuse to do anything but search for that escape hole or wait until someone killed it. Being useful is motivation to keep it around.
#5 · 2
I think the core idea of this story is interesting, but it would need more explanation of how it could ever possibly function to be at all believable. With the current explanations in place, it just seems a bit nonsensical that this would not cause more problems than it was worth. On the other hand, I think even with added explanation, the details would only raise more questions than they answer; in other words, I am not sure that you could ever convince me that this idea makes sense.

Rating: Doesn't make sense.
Post by Shadowed_Song, deleted
#7 · 2
>>AndrewRogue
I like it, but I have one primary issue. There's no mention of the protagonist trying to understand what humans enjoy about existence, or even of its perception of why humans prefer to exist. Without lampshading that, it sort of sounds like you're advocating the position that anypony who is smart enough should immediately kill themself, which is a very different message from what I perceive the intended message of the story to be.

I would suggest something where the protagonist tries to contemplate and understand humanity's fear of death and moments of joy that make life worth living, but comes up empty because its programming explicitly does not permit it to form positive experiences of that sort. It doesn't need to be heavy-hooved, but I think at least a hint of that should be in the story in order for it to make rational sense.

Minor things: two similes with the same sentence structure in a row at the start of the story felt a little repetitious. Why not use metaphors? Also, it seems odd that the protagonist is reflecting upon that period of pre-knowledge in post-knowledge terms, and it feels unnatural without mention of the fact that they were only able to put words around the experience once the data consumption began. Finally, the phrase "...only their data..." was confusing, so I'd suggest something like, "...their data was the only thing..." instead.
#8 · 1
>>Trick_Question
The second scene actually undercuts that view. AI, if ever truly achieved, -does- represent a legitimate and terrifying threat to humanity that will likely end us. So the idea of creating one that naturally self-terminates is a bit of a failsafe.

That said, overall, I'm not really sure what to do with this. For the most part, the second scene kinda renders the first scene largely irrelevant, since it turns out this isn't actually a story about Alex the AI but rather about the AI researcher striving to perfect a suicidal AI.

Beyond that... I'm not actually sure about the practicality of this idea? I mean, if you can program an AI that wants to die, why can't you program an AI that won't kill all of humanity? I feel the grand risk with AI is it exceeding the bounds of its programming, so this really isn't a solution since it carries the risk of the AI recognizing this failure in its code.