A lot of people are very wary of artificial intelligence, and I think I know why. We’ve all been fed this rhetoric about AI and robot overlords (you gotta love science fiction), but that’s not the reality at all – at least not for now. Unfortunately for those people, trying to humanise computers is not the answer to their fears. For AI to work, we need computers to think like computers. That’s the bottom line.
Busting some myths
There’s still a lot of confusion around what AI actually is, which has led to a widespread misunderstanding of how AI should work – one that may actually be stunting progress towards machines that are completely autonomous. We need to learn to distinguish between AI that needs humans to define exactly what and how it should learn, and AI for which humans are actually something of an obstacle – AI that is left to work out how to solve a problem based on its own experience and capabilities.
The AI that helps Google Assistant crack jokes, or lets Netflix and Spotify learn your taste, is an example of the former. It’s machine learning: a way of programming machines to perform tasks that humans have defined for them. Neural networks, loosely modelled on the human brain, are a more advanced form of this human-driven AI. Neural networks are incredible, but they still only do what we’ve trained them to do. This is where the problem comes in: it’s concerning when we begin to confuse a machine capable of writing a book with a machine that has the creativity to write a book. These machines will always do what we ask of them, in the way we ask them to do it. That is not autonomy. Truly autonomous machines will need to think for themselves, not like the people who created them. For AI to work, we need computers to think like computers.
What will this look like?
Look, we can never actually teach a computer intuition, and building a machine that thinks like a human isn’t a goal we should aspire to anyway. Rather, we should encourage machines to understand a given task in their own way, and to come up with solutions that use the advantages they do have to compensate for not being human.
Truly autonomous AI can only be achieved once we overcome our fear of AI and our narrow-mindedness about what learning and understanding can look like. Centering humans in discussions around AI is completely unhelpful when we’re trying to advance the technology. We need to accept that for AI to work, we need computers to think like computers. They’ll be vastly more efficient, and we’ll all be better off for it.