AI has a lot of fantastic use cases. In fact, it really can help us out in so many interesting ways. However, its utility largely comes down to what humans decide to do with it – and we can be total idiots. Let’s take a look at the 5 most unethical AI projects to date.
5. AI Kalashnikovs
Kalashnikov is a Russian weapons manufacturer, known for their (frankly) insane products. They recently announced plans to develop an AI weapon capable of targeting and firing on humans. It will use a neural network, plus an onboard camera and computer, to accurately pinpoint its targets and neutralize them.
Just one of these things would be terrifying enough, but Kalashnikov has plans to make at least three of them. It has the potential to cause some serious harm, particularly because it learns by taking out targets. Yikes.
4. Brain nanobots
Ever wish that you could learn anything, anytime? Human brains are fallible, and sometimes they need a little help. Well, thanks to Ray Kurzweil – futurist, inventor, and Director of Engineering at Google – this could be a reality sooner rather than later. Kurzweil predicts that by 2030, “nanobots [implanted] in our brains will make us godlike.” Essentially, we’d have tiny AI robots mixing things up in our brains, letting us learn anything we wanted in a matter of minutes.
The ethics surrounding this are still pretty questionable. We don’t really have the full picture when it comes to how brains work, so implanting nanobots into our grey matter could cause some serious trouble. Even worse, the nanobots will be connected, potentially giving a powerful AI the option to hack our brains and turn us into AI zombies. Uh oh.
3. Biased search results
You don’t really expect your search engine to be biased, but anything goes these days. Even basic internet searches can be tainted with prejudice. UCLA professor Safiya Umoja Noble found that after googling “black girls” in hopes of finding interesting sites to show her nieces, she was instead shown page after page of pornography. By contrast, searches for “CEO” have historically returned thousands of images of white men.
Google’s AdWords has also been found guilty of bias. Research out of Carnegie Mellon University and the International Computer Science Institute showed that male job seekers were more likely than women to be shown ads for high-paying, top-level positions.
2. Biased admissions screening
In what is possibly the earliest report of a tainted AI system, a program created in 1979 by an admissions dean at St George’s Hospital Medical School in London systematically downgraded minority and female applicants. In 1986, staff members flagged the issue after estimating that at least 60 such applicants were being unfairly excluded each year.
According to reports, a non-European name automatically took 15 points off an applicant’s score. The British Medical Journal called this unethical system “a blot on the profession.” Ultimately the school was reprimanded and made some minor reparations – including offering places to applicants who had initially been turned away.
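The St George’s program itself was never published, but the reported behavior amounts to a hard-coded scoring rule. Here is a minimal sketch of what such a rule looks like in code – the 15-point deduction comes from the reports above, while the function name and the penalty for female applicants are hypothetical stand-ins (the exact figure for the latter was not given):

```python
# Illustrative sketch only: the actual St George's program is not public.
# The 15-point deduction matches the reported rule; the 3-point penalty
# for female applicants is a hypothetical placeholder.

def screen_applicant(score: int, has_non_european_name: bool, is_female: bool) -> int:
    """Return an adjusted admissions score, mimicking the reported rules."""
    if has_non_european_name:
        score -= 15  # automatic deduction described in the reports
    if is_female:
        score -= 3   # hypothetical figure; reports confirm a penalty existed
    return score

# An applicant with a non-European name starts 15 points behind
print(screen_applicant(70, has_non_european_name=True, is_female=False))   # 55
print(screen_applicant(70, has_non_european_name=False, is_female=False))  # 70
```

The point is that this wasn’t a machine “learning” bias – a human wrote the discrimination directly into the rules.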
1. Amazon’s recruiting engine
Anyone else surprised that one of the most unethical companies in the world tops the list of the 5 most unethical AI projects to date? According to reports from Reuters, Amazon put together a team in 2014 that built around 500 models to automate the resume-review process – specifically for engineers and coders. The team trained the system on the existing resumes of Amazon’s software teams – which were almost entirely male.
You already know where this is heading: the system taught itself to penalize any candidate who attended a women’s college or who listed women’s organizations on their CV.
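Unlike the St George’s case, nobody at Amazon wrote that rule by hand – the model inferred it from skewed training data. Here is a deliberately tiny, hypothetical toy (not Amazon’s actual system, which used far more sophisticated models) showing the mechanism: a scorer that weights words by how often they appeared on accepted versus rejected resumes, and so learns a negative weight for “women’s”:

```python
from collections import Counter

# Toy illustration of bias learned from skewed data (NOT Amazon's system).
# The resumes and labels below are invented; 1 = hired, 0 = rejected.
training = [
    ("software engineer chess club", 1),
    ("coder hackathon winner", 1),
    ("software developer chess club", 1),
    ("women's chess club captain software engineer", 0),
    ("women's college graduate coder", 0),
]

hired, rejected = Counter(), Counter()
for resume, label in training:
    (hired if label else rejected).update(resume.split())

def word_weight(word: str) -> int:
    # Positive if the word showed up more among hires, negative otherwise
    return hired[word] - rejected[word]

def score(resume: str) -> int:
    return sum(word_weight(w) for w in resume.split())

# Two otherwise-identical resumes: the one mentioning "women's" scores lower,
# purely because past (biased) decisions are baked into the counts.
print(score("software engineer chess club"))          # higher
print(score("women's software engineer chess club"))  # lower
```

No one told the scorer that gender matters; it simply reproduced the pattern in its training data – which is exactly what happened, at scale, in the Amazon case.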
Amazon won’t be the last company to deal with this kind of issue. As of 2016, 72% of job candidates’ resumes were screened by software rather than by people.
AI is not inherently unethical
While we’ve looked at the 5 most unethical AI projects to date, it’s important to remember that not all AI is inherently unethical. In fact, there are far more stories of AI actively helping people, rather than hindering them. The unfortunate thing is that AI’s decisions are still largely controlled by people (at least to a certain extent), so they’re influenced by our gross misconceptions about others. We have to be better, for AI to be better.