Artificial Intelligence Ethics Through The Common Good Approach


This blog post is a slightly condensed version of an ethical dilemma paper I wrote for one of my high school classes. We were given a week to write it and I was only able to work on it for a few days, so please excuse any possible mistakes.

I will post the link to the full version below as a PDF (before revisions).

Download the PDF Here


Artificial Intelligence (AI) Ethics Through The Common Good Approach

In 2016, the computer program AlphaGo beat Lee Sedol, an 18-time world champion Go player, in a five-game match. AlphaGo used deep neural networks trained both by supervised learning on human expert moves and by reinforcement learning through self-play.

Shortly after, DeepMind introduced AlphaGo Zero, a new algorithm based purely on reinforcement learning. This algorithm is given only the rules of the game, with no data from human players' moves; it essentially teaches itself to play through prediction and self-play. AlphaGo Zero beat its predecessor 100-0 (Silver et al. 2017).
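To make the self-play idea concrete, here is a minimal toy sketch in Python. It is not DeepMind's algorithm: instead of Go, deep neural networks, and tree search, it uses a simple stone-taking game (Nim) and a plain lookup table of position values, and every name and constant in it is an illustrative assumption. The point is only to show an agent that is given nothing but the rules and improves by playing against itself.

```python
import random
from collections import defaultdict

# Toy illustration of the self-play idea (not DeepMind's actual algorithm):
# the agent is given only the rules of a simple game (Nim) and learns a value
# function purely by playing games against itself.

PILE, MAX_TAKE = 21, 3          # rules: take 1-3 stones; taking the last stone wins
values = defaultdict(float)     # estimated value of each pile size for the player to move
EPSILON, LR = 0.1, 0.1          # exploration rate and learning rate

def best_move(pile):
    """Pick the move whose resulting position looks worst for the opponent."""
    moves = range(1, min(MAX_TAKE, pile) + 1)
    return min(moves, key=lambda m: values[pile - m])

def self_play_episode():
    """Play one game against itself and update value estimates from the result."""
    pile, history = PILE, []
    while pile > 0:
        if random.random() < EPSILON:
            move = random.randint(1, min(MAX_TAKE, pile))   # explore
        else:
            move = best_move(pile)                          # exploit current knowledge
        history.append(pile)
        pile -= move
    # The player who just moved took the last stone and won; walk backwards,
    # flipping the outcome sign for each earlier position (alternating players).
    outcome = 1.0
    for state in reversed(history):
        values[state] += LR * (outcome - values[state])
        outcome = -outcome

for _ in range(20000):
    self_play_episode()

print({pile: round(values[pile], 2) for pile in range(1, 13)})
```

After enough self-play games, the table assigns clearly negative values to the known losing positions of this game (pile sizes that are multiples of four), even though no human strategy was ever provided.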

While this Go playing program is only narrow AI, it shows the potential of how rapidly things can progress in the AI field. With the increased implementation of artificial intelligence across many disciplines, a new set of ethical issues is presenting itself. Ethical issues within AI include existential risk, superintelligence and the singularity, machine bias, weaponry, wealth distribution, unemployment, medical uses, and privacy.

If researched and developed safely, AI could be the greatest tool ever created. The benefits of a symbiotic relationship with AI could far outweigh the disadvantages by enhancing humans and contributing to creating a greater society. If not researched and developed safely, AI could be the tool that destroys us.

Ethical Question

The ethical question that must be addressed is whether a strong push towards advancement in AI would benefit the community as a whole, and not just some elite members.

With the risks in mind, further advancement in AI will not benefit the community as a whole unless carefully monitored with measures taken to prevent the possible existential and social risks.

Background

In 1950, Alan Turing proposed the Turing test, an experiment to judge a machine's ability to behave in an intelligent manner indistinguishable from a human. In the same year, Isaac Asimov published his book I, Robot, which contained his Three Laws of Robotics:

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws (Asimov 1950).

One year later, in 1951, Marvin Minsky and Dean Edmonds built the first artificial neural network, designed to simulate a rat's brain solving a maze.

An artificial neural network (ANN) is a biologically inspired computing system made up of artificial neurons and the connections between them, loosely modeled on the neurons and synapses of the human brain.
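As a rough illustration (not any particular library or real system), a single artificial "neuron" just computes a weighted sum of its inputs, adds a bias, and passes the result through an activation function; a network is layers of these feeding into one another. The weights and values below are arbitrary examples.

```python
import math

# A minimal sketch of a single artificial neuron: it sums weighted inputs,
# adds a bias, and squashes the result with an activation function --
# a very loose analogy to a biological neuron firing based on its inputs.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through the sigmoid activation."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# A "network" is just layers of such neurons feeding into each other.
hidden = [neuron([0.5, 0.8], [0.4, -0.6], 0.1),
          neuron([0.5, 0.8], [0.9, 0.2], -0.3)]
output = neuron(hidden, [1.2, -0.7], 0.05)
print(round(output, 3))
```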

This time period marked the beginning of the exponential advancement in AI.

In 2018, AI is used everywhere: natural language processing in cell phones, medical image analysis, stock trading in financial markets, Netflix's recommendation system, and more.

Since the beginning, though, the question has always been whether it is possible to create a machine with human-level intelligence, a goal that some AI experts say will be reached by 2050, while others predict it will not arrive until the end of the 21st century (Sandberg and Bostrom 2011).

Companies working on artificial general intelligence (AGI), like OpenAI and DeepMind, are consistently making breakthroughs at an unprecedented rate. With no laws governing the advancement of AI, one can see the power that would rest in the hands of the first team to create a machine with human-level intelligence.

Progress will not stop with AGI. In fact, following the creation of AGI, progress will only grow at unimaginable rates. This is known to many as an intelligence explosion: a theoretical period of rapid technological advancement, triggered by the creation of AGI, leading to artificial superintelligence (ASI) and the singularity. ASI is a system that surpasses the most intelligent humans by inconceivable amounts. ASI and the intelligence explosion are best described by I.J. Good in his 1965 paper “Speculations Concerning the First Ultraintelligent Machine”:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. (Good 1966)

Once a superintelligent system is created, its capability of self-improvement will allow it to update itself whenever necessary without any human interaction. In other words, we will lose control over it as soon as we create it. Therefore, policies for the advancement of AI should be implemented now, and improved as frequently as possible to increase the likelihood of our survival.

Imagine losing control over a machine vastly smarter than even the most intelligent human minds. Hopefully it is obvious that this scenario is not in the best interests of the human species.

Morals of The Machine

The first issue to discuss is the morals of the machine itself.


Question:

Even if it were possible to program morals and ethics into an intelligent machine, what morals would we program into it? Morals and ethics vary by religion, region, personality, and more. How does this get decided?


If machines do not have morals by the time they reach general intelligence, they may never have them. A superintelligent machine might design a way to program morals into itself, but if it deems morals a weakness, why would it? Society would then have a superintelligent psychopath in its midst. This does not immediately mean that humans would be hunted by a Skynet type of force.

The relationship would be more like a human's relationship with a bug. Most humans do not have a goal of destroying every bug on Earth; that said, most have no problem stepping on one that is in their way. A superintelligent system would likely relate to us similarly: it may not have a goal of destroying all humans, but we may be so insignificant to it that it would not hesitate to step on one of us the way a human steps on a bug.

Good Intentions Gone Wrong

A machine with no morals or emotional intelligence may cause problems even with good intentions. For this situation, assume that an artificially superintelligent machine is successfully programmed to aid humans and cannot cause any harm. Even so, the machine may misinterpret what a human is requesting, or may see a way to complete the goal that would be terrible for humans.

Nick Bostrom, in his book Superintelligence: Paths, Dangers, Strategies, gives an excellent example of how things can go horribly wrong.

He says to imagine giving an AI the goal of “Make us happy”. The human's intention was most likely for the AI to do something simple, like tell a joke. Instead, the AI may see it as more efficient to “implant electrodes into the pleasure centers of our brains”.

For the goal “Make us smile”, Bostrom writes that the AI may “paralyze human facial musculatures into constant beaming smiles” (Bostrom 120).

It is also possible, though, that with more human behavior data and the possible development of morals in the machine, this problem could be solved and the machine would understand enough to know what the human was actually requesting.

Privacy Concerns

Privacy has also been a big concern lately with the vast usage of social media and the internet.

Companies gather individuals' data for personalized advertising and persuasion. Everything someone does on the internet leaves a cyber footprint.

A nice illustration of the link between internet usage and AI is the 2015 movie Ex Machina.

In the movie, an internet company's CEO creates the beginning stage of ASI using data acquired through his company, using his internet services as a human data-collection experiment.

This is not hard to imagine happening in reality. The internet can serve as a vast record of human behavior, constantly gathering and storing data. If a human-level intelligence wanted to collect more data and improve itself, specifically on human behavior, where better to look than the internet?

It is likely that citizens would not want to feel like guinea pigs in an experiment controlled by a machine.

Aided by AI, companies will be significantly better equipped to design personalized ads and motivate their targets to become consumers. This technology is already used today to some extent in streaming services such as Netflix.

Netflix has been researching deep neural network algorithms for its recommendation system, looking for better ways to keep users interested and motivated to come back and continue watching movies and shows. It is clear that the recommendation system is working: 80% of all Netflix streaming hours come from the recommender system, while the other 20% come from search (Gomez-Uribe and Hunt 2016).
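Netflix's actual system is far more sophisticated and proprietary, but the basic idea behind a recommender can be sketched in a few lines. The following hypothetical example uses simple user-to-user collaborative filtering; the users, titles, and ratings are all invented for illustration.

```python
import math

# A minimal, hypothetical sketch of collaborative filtering -- not Netflix's
# actual recommender, just the core idea: score titles a user has not seen
# by looking at what similar users watched.

ratings = {                      # toy viewing data: user -> {title: rating}
    "ana":  {"Drama A": 5, "Sci-Fi B": 4, "Comedy C": 1},
    "ben":  {"Drama A": 4, "Sci-Fi B": 5, "Thriller D": 4},
    "cara": {"Comedy C": 5, "Thriller D": 2},
}

def cosine_similarity(u, v):
    """Similarity between two users based on the titles they both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[t] * v[t] for t in common)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=2):
    """Rank unseen titles by similarity-weighted ratings from other users."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        for title, rating in their_ratings.items():
            if title not in ratings[user]:
                scores[title] = scores.get(title, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ana"))   # e.g. ['Thriller D']
```

A real system works with millions of users and learned representations rather than hand-written dictionaries, but the principle of scoring unseen titles by what similar users watched is the same.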

A more advanced system would have even better persuasion techniques and would outsmart all humans.

Invincible War Machines

Another ethical issue is the use of autonomous war machines.

With a technology as advanced as AI in the world, it is expected that it will be used in wars. AI would be capable of cyber attacks, and of attacks with deadly precision on the battlefield. Many would never stand a chance against an artificial killing machine. Drones, for example, are already a widely used technology in war, along with many other types of robotics. Combine them with an artificially intelligent system and a nearly invincible machine would be created. Would governments unleash AI weapons and hope they make the right choices? How do we train, and with what data do we train, a model that must distinguish what is a threat and what is not?

Control

Humans cannot and will not ever control artificial superintelligence.

Is it possible to just pull the plug if the machine is acting up? In short, most likely not. It is a lot more complicated than just “pulling the plug”. This machine will most likely prefer to stay turned on and retain control over itself.

By the definition of superintelligence, it is impossible for a human to think up a way to “outsmart” this machine without having another, even smarter superintelligent system.

If truly superintelligent, the machine will have already calculated and prevented every possible way of turning it off, most likely before any human can begin to even recognize that it should be turned off.

An artificial superintelligence could be invincible in every sense of the word.

Conclusion

While artificial superintelligence may still be years, maybe even a century, away, AI needs policymakers to regulate and monitor its development now.

Humans need to be very careful when walking on the thin ice that covers artificial superintelligence. The further advancement of AI is a double-edged sword with the capability to pose an existential threat to humans.

After researching the risks, the further development of AI, without any policies to regulate it, is not in the community's best interest, no matter how helpful it is in the present and short-term future.

Final Note

What happens when Earth’s resources diminish and humans have to fight for them against the ASI system? As mentioned earlier, we could be competing against an invincible, superintelligent psychopath.

Who would win?