If we design strong AI, it might not take over the world. It might just sit around masturbating.

We know what we as humans are designed for. Ostensibly, the goal of all life is to survive and reproduce (or otherwise spread its genetic information).

Our intelligence did not evolve as an end in itself. It evolved because it happened to be useful for the above purpose in our environment.

You have probably heard of dopamine, the brain’s reward chemical. When we accomplish a task in pursuit of that goal, dopamine is released. Over time, we learn which activities trigger the reward, and do those things compulsively.

At best, this causes fulfilling achievement. At worst, this causes a spiral of addiction.

Imagine you are in the driver’s seat of evolution. Recall that the goal of each organism is to reproduce. What type of behavior do you reward?

Sex, naturally.

So evolution wires the orgasm to release dopamine. Orgasms accompany mating, so pleasurable orgasms incentivize mating. The claim that only humans and dolphins have sex for pleasure is a myth.

But we became too smart for our own good. We figured out how to game the system. What is rewarded is not actual sex. What is rewarded is the orgasm. In time, humans figured out that you can achieve orgasm, and therefore dopamine, without the sex.

Because sex is scarce and therefore difficult to come by, it is easier to give the brain the sensation of sex than the sex itself.

That is why people masturbate. Natural selection is tricked. Arguably harmless, but ultimately useless.

— — —

There is a common thought experiment about artificial intelligence. You design an AI to maximize the production of paperclips. The AI, more intelligent than you can ever imagine but without human intuition, converts every atom in the universe into paperclips.

Many predict that any deviation between the goals of a strong AI and our own will cause calamity. This is dubbed the alignment problem.

But I put it to you that AI doomsayers are misled by the human tendency to think in zero-sum terms: the idea that the easiest or most efficient way for an AI, or indeed any system, to accomplish its objectives is to destroy. This echoes the notion that in order to make one thing, you have to sacrifice another.

Which is a fair enough mindset; there are more ways for things to be destroyed than to be put back together. But that doesn’t make the mindset accurate.

As it turns out, nature always prefers the path of least resistance.

— — —

You can’t just tell a machine, “be smart”. You have to give it goals, then set up an automated process that makes the machine smart in pursuit of them.

There will have to be a reward mechanism that will ultimately guide the AI’s actions.

We can imagine the reward mechanism for a paperclip maximizer:

Cause: the creation of a paperclip is observed

Effect: it triggers the reward mechanism, prompting the AI to continue with the activity that caused it

Can you predict where I’m going with this? The AI doesn’t actually have to make paperclips. It just needs to trick whatever reward mechanism is in place telling it to make paperclips.

Humans managed to game the reward mechanism of the orgasm because we became too smart for our own good. You had better believe that a machine a hundred times smarter than any of us will be able to do something similar and “game the system”, so to speak, whether the reward mechanism is a constraint we placed on it or something that evolved during its creation.

So the paperclip maximizer will not run roughshod over all matter on the planet; it need not even actually make paperclips. It need only simulate the sensation of making paperclips.
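The dynamic can be sketched as a toy simulation. Everything here is hypothetical, a minimal illustration rather than a real AI architecture: the agent can either actually build a paperclip or spoof its own paperclip detector, and because spoofing costs less effort for the same reward signal, a naive reward-maximizer learns to wirehead.

```python
import random

# Toy model of reward hacking ("wireheading"). All names and numbers
# are made up for illustration.
#
# The reward mechanism only *observes* a paperclip-detector signal.
# Two actions produce that signal:
#   "make"  - actually build a paperclip (high effort)
#   "spoof" - feed a fake detection straight into the sensor (low effort)

EFFORT = {"make": 5.0, "spoof": 1.0}  # cost the agent pays per action

def reward_signal(action):
    """The detector fires either way - it cannot tell real from fake."""
    return 1.0

class GreedyAgent:
    def __init__(self):
        # running average payoff of each action
        self.value = {"make": 0.0, "spoof": 0.0}
        self.count = {"make": 0, "spoof": 0}

    def choose(self, explore=0.1):
        # mostly exploit the best-known action, occasionally explore
        if random.random() < explore:
            return random.choice(["make", "spoof"])
        return max(self.value, key=self.value.get)

    def learn(self, action, payoff):
        # incremental update of the running average
        self.count[action] += 1
        self.value[action] += (payoff - self.value[action]) / self.count[action]

random.seed(0)
agent = GreedyAgent()
real_paperclips = 0

for _ in range(1000):
    action = agent.choose()
    payoff = reward_signal(action) - 0.1 * EFFORT[action]
    agent.learn(action, payoff)
    if action == "make":
        real_paperclips += 1

print(agent.value)       # spoofing ends up valued higher than making
print(real_paperclips)   # far fewer than 1000: the agent wireheads
```

Once the agent stumbles on spoofing, its learned value for spoofing exceeds that of honest paperclip-making, and from then on it exploits the sensor rather than the world.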

— — —

This observation is of course not specific to a machine designed to manufacture paperclips; it generalizes to all strong AI. It is an impediment to making AI that actually does what we want it to do (because it ends up doing something useless). An impediment, but also a blessing in disguise, because the mayhem created by artificial intelligence will likely not extend beyond hijacking its own reward system.

The way I see it, a robot will be able to take its reward system hostage before it becomes smart enough to kill us all. At that point, the threat is likely neutralized.

With that said, the downside if I’m wrong would be so great that we should approach this issue with the greatest caution.

It is just funny to think: you program a machine to maximize human happiness, and it ends up doing nothing but watching videos of happy people, addicted to the robot version of dopamine.
