Artificial intelligence (AI) is the study and development of computer systems that can learn and think much like the human mind. Based on what the machine has learned, it can apply knowledge to make a decision, come to a conclusion, or solve a problem.
But what if the decision is harmful? Should that be controlled? Can it be controlled?
Technology develops and reshapes the world so quickly that it carries inherent fears and ethical implications. The quality of the data itself poses an ethical dilemma: if we allow algorithms to decide our future based on data from the past, we may repeat the same mistakes. In that case, social progress could even move backward.
It’s important to examine the positives and negatives of artificial intelligence. Do the benefits outweigh the risks? Is there such a thing as ethical AI?
The Ethical Concerns of AI
There’s a common misconception about AI: the more data, the better.
Collecting big data doesn’t ensure that the results are reliable, relevant, or current. In turn, those results may not serve democracy, equality, justice, and well-being. This is especially worrying in areas with a global impact, like autonomous weapons and mass surveillance.
A couple of issues come into play. First, we need to consider why and how this incredibly powerful tool will be used. Second, there’s the question of whether ethical means were used to build it. Introducing bias or discriminatory parameters in the development of these machine systems—even unintentionally—has consequences. We’re haunted by the image of unforeseen repercussions, or by the frequently used movie theme of a machine gone rogue.
I would categorize the main areas of risk as:
- Biased data sets: Building a model that doesn’t accurately reflect its intended population creates bias. A real-world example is facial recognition technology that is less accurate for people of color and for women because it was trained on a data set of predominantly white men. Humans are also behind AI system programming, so there’s a risk of software engineers building their own underlying biases into these programs.
- Biased history: Even if the data set used to build the model accurately represents the intended population, the history from which decisions are made could be unfair. Predictive policing is one example, since arrest records are historically biased against certain races.
- Unethical use: Could systems be used for unethical purposes? There’s concern that if a machine operates at a large enough scale, unethical use could have destructive ramifications.
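To make the first risk concrete, here is a minimal, hypothetical simulation (every number is invented for illustration, not a real benchmark): a one-parameter classifier is fit on a pool that is 95% one group, and the decision threshold it learns ends up serving the underrepresented group poorly.

```python
# Hypothetical sketch: how an unrepresentative training set skews error rates.
# Groups, boundaries, and the 95/5 split are all assumptions for illustration.
import random

random.seed(42)

def sample(group, n):
    """Generate (feature, label) pairs; the true label boundary differs by group."""
    boundary = 0.5 if group == "majority" else 0.3  # assumed group difference
    return [((x := random.random()), int(x > boundary)) for _ in range(n)]

# Training pool: 95% majority, 5% minority -- the imbalance under discussion.
train = sample("majority", 950) + sample("minority", 50)

def accuracy(th, data):
    """Fraction of samples where 'predict 1 if feature > th' matches the label."""
    return sum((x > th) == bool(y) for x, y in data) / len(data)

# "Train" a one-parameter model: pick the threshold that best fits the pool.
thresholds = [i / 100 for i in range(101)]
best = max(thresholds, key=lambda th: accuracy(th, train))

# The learned threshold sits near the majority's boundary (0.5), so fresh
# minority data (true boundary 0.3) is classified noticeably less accurately.
for group in ("majority", "minority"):
    print(group, round(accuracy(best, sample(group, 2000)), 3))
```

Under these assumptions, the pooled model is nearly perfect for the majority group and markedly worse for the minority group, even though nothing in the code is explicitly discriminatory; the skewed sample alone does the damage.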
The Right to Forget
Learning from history is not the same as keeping a list of mistakes. If technology cannot forget and learn lessons from the past, we may build a rigid, unfair society based on a historical list of wrongs.
Directing the Power of AI to Boost Social Good
Along with risks, artificial intelligence has countless potential benefits. Let’s turn over the tarnished side of the coin and take a look at its shiny side.
Deep learning—the form of machine learning that teaches a computer to perform tasks based on text, sound, or images—has proven far faster and more efficient than humans at identifying, processing, and classifying data. This is immense power with real-world applications for improving society.
AI technology is already used in the financial services sector to protect consumers against fraud. Audio-sensor data aids environmental conservation efforts around the globe. In healthcare, disease-detection artificial intelligence systems can examine skin images for cancer diagnoses.
At Citibeats, we’ve seen our AI text analytics make a significant impact on the development of social and hate speech policy, the strategic approach to meeting UN Sustainable Development Goals (UN SDGs), and response times in areas affected by a disaster.
Minimizing the Risks of AI
The ethical concerns of AI are serious. However, the positive effects of AI cannot and should not be ignored.
Ways to minimize these risks include:
- Aggregating data into insights on cohorts of citizens to reduce bias
- Using advanced categorization systems to minimize unreliable data, filtering out bot activity and disqualifying fake news
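As a hypothetical sketch of the first mitigation above, cohort aggregation can be as simple as counting records per cohort and suppressing any cohort too small to protect individuals (the field names and the minimum size of 10 are assumptions, not a real system’s parameters):

```python
# Hypothetical sketch: publish cohort-level aggregates instead of per-person
# records, suppressing tiny cohorts so no individual can be singled out.
from collections import Counter

K_MIN = 10  # assumed floor: suppress any cohort smaller than this

def cohort_insights(records, k_min=K_MIN):
    """records: iterable of dicts with 'district' and 'concern' keys (assumed schema)."""
    counts = Counter((r["district"], r["concern"]) for r in records)
    # Keep only cohorts large enough that no single person stands out.
    return {key: n for key, n in counts.items() if n >= k_min}

records = (
    [{"district": "north", "concern": "housing"}] * 14
    + [{"district": "north", "concern": "transit"}] * 3   # too small: suppressed
    + [{"district": "south", "concern": "housing"}] * 11
)
print(cohort_insights(records))
# → {('north', 'housing'): 14, ('south', 'housing'): 11}
```

The design choice here is the same one behind k-anonymity: insights are reported about groups of citizens, never about identifiable individuals, which both protects privacy and discourages decisions targeted at single people.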
We’ve also seen entities take ownership of this responsibility by establishing their own set of guidelines to maintain a human-centered approach to the research, development, operation, and use of AI.
On May 29, 2019, NTT Data announced its AI ethics guidelines. Its five principles are:
- Realizing Well-Being and Sustainability of Society
- Co-creating New Values by AI
- Fair, Reliable, and Explainable AI
- Data Protection
- Contribution to Dissemination of Sound AI
This is a good start towards paving the way for others in developing a harmonious coexistence between AI and society. Whether this is enough to keep AI ethical is a matter of time, dedication, and persistence.
Vision for the Future
The ethics of data usage and artificial intelligence impacts us all. It is a trap to believe that technology can solve ethical, social, or political problems. That task falls on us. We are ultimately responsible for maintaining fairness, peace, and equality.
Ethical AI is a challenge worth pursuing. The ideal vision is that machine learning will be used solely as a powerful tool for social good, toward beneficial and ethical ends.
So is ethical AI real? Can it exist? In our vision of the world, we believe it can.