Global military forces are looking to AI as a way to improve their efficiency on the battlefield and beyond. Now, US Defense Department officials have announced a list of five principles intended to ensure the ethical use of AI. As the Pentagon outlines AI ethics, let's take a look at the recommendations they've given.
How will they be used?
According to DOD CIO Dana Deasy, the principles will likely be worked into various sectors: "We need to be very thoughtful about where that data is coming from, what was the genesis of that data, how was that data previously being used," he said. "You can end up in a state of [unintentional] bias and therefore create an algorithmic outcome that is different than what you're actually intending."
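Deasy's point about data provenance can be illustrated with a toy sketch. Everything below is invented for illustration (the loan-style "history," the groups, and the threshold are hypothetical, not anything the DOD described): a model trained on skewed historical decisions reproduces that skew, even though nobody intended it.

```python
# Toy illustration of unintentional bias: a model trained on skewed
# historical decisions reproduces the skew. All data here is invented.

from collections import defaultdict

# Hypothetical past decisions as (group, approved) pairs. Group A's
# approvals are over-represented relative to Group B's in the record,
# regardless of the applicants' actual qualifications.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

def train(records):
    """Learn each group's historical approval rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: ok / total for g, (ok, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Approve when the group's historical rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
# Two otherwise-identical applicants get different outcomes purely
# because of the skew baked into the training data.
print(predict(model, "A"))  # True  (historical rate 0.8)
print(predict(model, "B"))  # False (historical rate 0.4)
```

No one wrote a rule discriminating between the groups; the disparity comes entirely from "where that data is coming from," which is exactly why the DOD's principles emphasize auditing a dataset's genesis before building on it.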
Deasy said that an AI steering committee will also be in charge of developing more principles on how to bring in data, build algorithms, and train operators. They’ll work on procurement guidance, technological safeguards, risk mitigation, and training measures.
What are the five principles outlined so far?
As the Pentagon outlines AI ethics, we should examine the principles that they’ve given so far. The DOD has called for AI use that conforms to the following:
- DOD personnel must exercise appropriate levels of care and judgment while remaining responsible for the development, deployment, and use of AI.
- They must take deliberate steps to ensure that unintended bias in AI is minimized.
- The DOD’s AI capabilities must be developed and used in such a way that relevant personnel understand the technology.
- The department’s AI capabilities must have explicit, well-defined uses, and their safety and effectiveness will be tested against those defined uses throughout their life cycles.
- The department will design AI to fulfill its intended function while avoiding unintended consequences. AI that doesn't conform may be deactivated.
Potential for growth
While these are great principles to start off with, they're definitely still lacking. For example, how will the department address the inherent biases of the people tasked with building AI? Studies have shown that human prejudice shapes the AI we build, as it's almost impossible to remain completely neutral. How will the department tackle those biases when they inevitably surface?
While this is something the department is addressing, it needs to go more in depth. Ultimately, it will be on the personnel building these machines to confront their own biases, and the answers might not be pretty. We can't develop ethically neutral AI unless we ourselves are ethically neutral, and we are not.
A real opportunity here
While the Pentagon outlines AI ethics, it’s important that the following are taken into account:
- How they will address the biases inherent to human nature.
- How to build machines that do not perpetuate human prejudice, particularly in a military setting, where the consequences could be catastrophic.
- How to develop AI that can address those prejudices, and work past them.
Ultimately, this is a step in the right direction. Still, it’s only a matter of time before these principles aren’t enough.