Artificial Intelligence Ethics
Intelligent machine systems are already making our daily lives easier and more personalized. From social media feeds to Google Maps to Netflix recommendations, many applications of intelligent software already run quietly in the background of our lives.
As these systems get smarter and computing capacity grows, they’ll only become more powerful. Artificial intelligence (AI) is the technology to watch both today and 50 years from now.
At the same time, AI poses a huge number of ethical questions that will challenge our society. Although many of the questions surrounding AI are broad, some of these issues, like the automation of labor, are fairly easy to run a cost-benefit analysis on.
Beyond that, AI also presents far more existential problems: Who should own the wealth created by machines? What happens if the machines become too intelligent? These are problems that fall outside our current modes of decision-making.
Artificial intelligence offers real benefits to both individuals and society, but we need to be prepared to make some tough ethical decisions along the way. Here are some of the ways AI could challenge our ethics.
How AI Influences the Direction of Business
AI is having a huge impact on business in areas as diverse as cybersecurity, product development, and credit card processing. The biggest story, however, is that AI technologies promise to augment human work: by automating routine day-to-day tasks, they free people to focus on higher-value work.
AI will also be a boon to businesses as today's Big Data grows beyond human scale: it will be AI that makes sense of that data and derives actionable insights from it.
However, there are real risks to integrating AI into our businesses. One of these risks lies in the questions it creates regarding liability. What happens if the intelligent program makes a mistake, and what if that mistake becomes a structural flaw that threatens people’s lives?
Autonomous cars are a perfect example of the moral choices inherent in AI appearing in everyday decision making. If an autonomous car hits a pedestrian instead of making another choice (e.g., driving into a parked car), who is responsible? Is it the automaker who owns the proprietary AI algorithm? Is it the driver or the consumer? Is it the developer of the algorithm rather than the automaker? The answer is complicated, because the self-driving car has to make an ethical judgment with an imperfect set of rules. What's more, a lack of transparency in those algorithms could make the scenario even more complicated.
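To make that concrete, consider a deliberately oversimplified sketch of the kind of hard-coded fallback rule a vehicle might rely on. Everything here, from the function name to the cost values, is hypothetical; real systems are vastly more complex, but the point stands: someone ends up encoding the ethical judgment.

```python
# A deliberately oversimplified, hypothetical sketch of a collision
# fallback rule. Real autonomous systems are far more complex, but the
# ethical judgment still ends up written down by someone.

def choose_maneuver(obstacles):
    """Pick the obstacle with the lowest hard-coded 'cost' to collide with."""
    # Someone had to choose these numbers, and someone is arguably
    # liable for them.
    collision_cost = {"pedestrian": 100, "cyclist": 80, "parked_car": 10}
    return min(obstacles, key=lambda o: collision_cost.get(o, 100))

# If braking fails and these are the only options, the code "decides":
print(choose_maneuver(["pedestrian", "parked_car"]))  # -> parked_car
```

Whoever chose those cost numbers made a moral decision long before the car ever left the factory.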
Though not all matters related to AI will be life and death, the same kinds of issues will face any business that either sells a product equipped with AI or uses AI in its core operations. Who should take responsibility when something goes wrong?
Ethical Issues in Using AI to Solve Societal Problems
The potential offered by AI is immense, even if the technology is still in its infancy. Some of the most exciting applications of AI right now are helping solve some of the world's biggest mysteries, like how the brain works.
Applications of artificial intelligence are also helping protect us from financial crises, environmental disasters, and famine.
Perhaps one of the most interesting applications is the use of AI to help alleviate a persistent problem that impacts everyone: poverty. Poverty is not only a problem in itself; it is also incompatible with so many of modern society's systems. Being poor is very expensive.
Low wages and high costs of living force people to make choices that create a cycle of unfortunate events, ones that can tip a person from instability into poverty or deepen existing poverty. Being subjected to human bias and discrimination is often one of these events, because it can affect not only a person's earning potential but also their ability to do things like get good rates on a mortgage or buy a home at all.
One of the mantras of computer science is "garbage in, garbage out." Algorithms, even intelligent ones, start from the data humans give them to learn from. When humans code their prejudices into an algorithm, whether consciously or unconsciously, the algorithm takes those prejudices on board. What's more, the data it is trained on can bend it further.
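As a minimal sketch of how this happens in practice, here is a toy scikit-learn model trained on fabricated loan-approval data. The dataset, the features, and the lending framing are all invented purely for illustration:

```python
# A toy illustration of "garbage in, garbage out": a model trained on
# biased historical decisions reproduces that bias in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical past loan decisions. Column 0 is a credit score (0-1);
# column 1 encodes a demographic group (0 or 1). Suppose past human
# reviewers approved group 0 far more readily at the same score.
X = np.array([
    [0.9, 0], [0.8, 0], [0.6, 0], [0.5, 0],  # group 0 applicants
    [0.9, 1], [0.8, 1], [0.6, 1], [0.5, 1],  # group 1 applicants
])
y = np.array([1, 1, 1, 1,   # group 0: all approved
              1, 0, 0, 0])  # group 1: mostly rejected

model = LogisticRegression().fit(X, y)

# Two applicants with identical credit scores but different groups:
print(model.predict_proba([[0.7, 0]])[0, 1])  # higher approval probability
print(model.predict_proba([[0.7, 1]])[0, 1])  # lower, despite the same score
```

No one wrote a rule saying "treat group 1 differently"; the model simply learned the pattern hidden in the biased labels.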
We have already seen early real-world examples of exactly this: Google's image recognition program labeled Black people as gorillas, and a Microsoft chatbot turned antisemitic within a day.
Microsoft's chatbot was an AI project followed closely by its creators. But what happens when such programs run on autopilot? AI has no moral compass, at least not in the way humans do. It can be fooled, instructed, or biased into making consequential decisions that exacerbate poverty or even punish the poor. Again, whose fault is it if the algorithm goes wrong? And are reparations in order if human lives are harmed?
Is AI the Best or Worst Thing to Happen to Humanity?
As Stephen Hawking put it, powerful AI "will be either the best, or the worst thing, ever to happen to humanity."
AI represents the best of human potential, and it may well think faster and, at least in terms of data processing, smarter than we can. But it is also a human product. Algorithms don't form in a vacuum: they're programmed by people whose ways of seeing the world, both conscious and unconscious, shape what they build.
Whether AI will ultimately help or hurt remains to be seen, but as long as it has the potential to do an immense amount of good in so many areas, it's probably worth the risk.