Jacob Robinson

Stop Getting Scared By AI


Alright, perhaps I should be nice and not demand that you stop being scared. But really, you shouldn’t be. Here’s why.

Before we can talk about why you shouldn’t be scared of AI, we need to talk about why you supposedly should be — that is, the reasoning most people give. Many theorists have developed the idea of a general intelligence: an AI that can think for itself on all aspects of life. If it could think for itself, yet also have access to all the information on a computer network, it would be able to easily and swiftly take command of the rest of humanity.

There’s one big problem with this, however. In order for general intelligence to exist, someone has to be willing to develop general intelligence.

There is really no particular reason why anyone would need or want general intelligence to be produced. Good-faith actors would only make AI for specific purposes, such as running a factory or driving a car. Bad-faith actors, on the other hand, would have nothing to gain from developing an all-powerful AI that they couldn’t control — they would need some ability to rein it in, meaning that it would not be able to think for itself and therefore would not be general intelligence.

But what about crazed actors, or actors who do not understand the full weight of their actions? Here comes the second major problem with the general intelligence idea: in order to make general intelligence, you need to know the specifics of creating it, and then purposely implement it.

Code, ML, AI… none of it magically pops out of thin air. Years of research and production go into each new idea in technology, and if the vast majority of humankind does not wish to go down the route of general intelligence (beyond developing wild theories about it leading to the destruction of the earth), then it could never be made. You could argue that a single crazed actor with enough experience could develop general intelligence single-handedly, but given the vast complexity of the task this seems highly unlikely (though I suppose it is worth saying the chance is non-zero).

Overall, we vastly overestimate an AI’s actual ability to gain sentience. When it comes to AI, most of our worries should be about attacks made by humans themselves — for example, the use of deepfakes and similar tools to create extremely realistic fake evidence. AI will still cause problems, but at the end of the day it will be people against people.
