The Aliens Are Here — and They Want Our Morals

Strad Slater
14 min read · Dec 21, 2020

Imagine. After years of trying to make contact with aliens, we finally get a response. You freak out. I freak out. The whole world freaks out! The government hires a team to decipher the message to make it readable to humans. After a couple of days of work from the team and anticipation from us, we finally get to hear the message:

“Hello, we have received your messages throughout the years. We plan to come visit your planet. Taking precautions, we want to get an idea of your species before we land. Send up a subject who best represents the morals and values of your species. If we feel safe we will come down peacefully. If not, we will come down not so peacefully. Thank you.” -Aliens

Oh no. This doesn't sound good. How the aliens know English is the least of our worries right now. Our species is being put on stage to showcase our morals. If we fail to make the aliens feel safe, it seems we are in for a very rude awakening.

Aliens waiting for the arrival of our moral saint

After getting over the fact that we have just read the first-ever message from aliens, we calm down and realize all we have to do is send up the most moral person. Easy as that. Just send up one of Gandhi's descendants.

But after some thought, we realize just how tough this challenge is.

It's easy to think of good morals as being kind to everyone and treating people how you want to be treated, but those ideas are too generic. When we think of actual situations in which you have to apply morals, the answers become a lot foggier.

For one thing, most of us would agree killing is wrong. But not everyone would agree that killing is wrong no matter what. Some people argue for the death penalty, killing those who have committed horrendous, high-level crimes. Others would say this is wrong and that no one should have the power to take away another person's right to live.

What about stealing? Most of us, again, would agree that it's bad. But it's unlikely we agree that stealing money from the bank out of pure greed is as morally wrong as stealing canned goods from a grocery store to feed a poverty-stricken family.

There are many more reasons why figuring out the morally good path in a situation is tough, and we will address them soon, but first let's step back into reality.

Why should we care about making this decision? As cool and as scary as it would be, we haven't actually received any messages from aliens recently. We don't have to figure out these moral problems right now. Right?

It appears we have had this message for a while now, but it’s not from aliens.

Artificial Aliens?

Code turning into an artificially intelligent being

Yes, these aliens are none other than the machines being built to think just like humans, better known as artificial intelligence.

Artificial intelligence has been around for decades and has been woven into many aspects of our lives. AI is a field dedicated to making the world ever more autonomous by creating computers with human, and ultimately superhuman, intelligence.

One area where AI is prevalent is e-commerce and media sites. Websites like Amazon, Netflix, and Instagram constantly use AI to enhance the user's experience. Amazon uses an algorithm that draws on our previous purchases to predict what we will want to buy next. Netflix does the same with movies, as does Instagram with posts. What all three platforms have in common is that they use AI to make decisions for us.

The website that tracks everything you buy for your own good: Amazon
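To make that prediction step concrete, here is a minimal sketch of the kind of "customers who bought this also bought" logic such recommenders are built on. The item names, data, and scoring rule are all invented for illustration; real systems at Amazon, Netflix, and Instagram are far more sophisticated.

```python
# A toy, hypothetical recommender: score each candidate item by how often
# it was bought together with items already in the user's history.
from collections import Counter

# Invented "bought together" pairs; in reality this would come from
# millions of purchase logs.
co_purchases = [
    ("headphones", "phone case"),
    ("headphones", "charging cable"),
    ("phone case", "charging cable"),
    ("laptop", "laptop sleeve"),
]

def recommend(user_history, co_purchases, top_n=3):
    """Rank items by co-occurrence with things the user already owns."""
    scores = Counter()
    for a, b in co_purchases:
        if a in user_history and b not in user_history:
            scores[b] += 1
        if b in user_history and a not in user_history:
            scores[a] += 1
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"headphones"}, co_purchases))
# -> ['phone case', 'charging cable']
```

Even in this cartoon version, the point stands: the platform, not the user, decides which handful of options ever get seen.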

As AI heads into more complicated fields such as medicine and law, it will be tasked with choices that directly relate to the fundamental morals of society, usually the ones that cause the most controversy. How will a machine determine who gets life-saving treatment when only a limited amount is available? How will a machine decide whether a criminal deserves a prison sentence, rehabilitation, or the electric chair? These decisions depend on your moral views of life and punishment, and deciding which views to build into AI is a daunting task.

One technology where this is a significant problem is self-driving cars. Imagine you are in a self-driving car. To the left is a van with a family of four and to the right is a man on a motorcycle. As you drive, crates from a truck in front of you fall, blocking your path. The car now has a moral decision to make.

Should it hit the van, since the extra protection of the vehicle makes death less likely for the family? Or should it hit the biker in order to protect more lives? Or should it keep going straight, protecting the most lives while putting you, the passenger, in the most danger?

A self-driving car that would have to make a moral decision
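To see why this is a moral choice rather than an engineering one, here is a hedged sketch of how a planner might rank the three options by "expected harm." Every number and weight below is made up purely for illustration; no real self-driving stack works off a table like this.

```python
# Hypothetical options with invented numbers: how many people are at risk
# and how likely serious harm is for each path.
options = {
    "hit_van":     {"people_at_risk": 4, "prob_serious_harm": 0.2},  # family, but shielded by the car body
    "hit_biker":   {"people_at_risk": 1, "prob_serious_harm": 0.8},  # fewer people, far less protection
    "go_straight": {"people_at_risk": 1, "prob_serious_harm": 0.6},  # the passenger (you) takes the risk
}

def expected_harm(option):
    # One possible moral assumption: harm = people at risk x probability of serious harm.
    return option["people_at_risk"] * option["prob_serious_harm"]

for name, opt in options.items():
    print(name, expected_harm(opt))

print("chosen:", min(options, key=lambda name: expected_harm(options[name])))
# With these invented numbers the car sacrifices the passenger ("go_straight").
```

Change the invented probabilities, or add a rule like "never endanger the passenger," and the chosen path flips; that is exactly the disagreement the next paragraphs explore.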

There is no right answer to a question like this, and luckily, humans don't usually have to make the decision because the scenario plays out in a few seconds, forcing instinct to take over. But now, since we're programming machines to drive for us, we have more time to think about these scenarios and decide on the "best" path to take.

You can experience this decision-making process yourself with MIT's Moral Machine, which shows you different scenarios in which a self-driving car has to make a moral decision. You then decide the "best" path for the car to take, in your view.

How do we decide what is “best?” People are bound to have different opinions and it's not clear how we choose who is “right.”

Differences across space and time

The problem becomes exacerbated as you start to expand your moral horizon out to more people. Our world is full of rich diversity due to factors such as religion, culture, government, and values in general. Each of these factors creates different views on morality, sometimes to a significant degree.

Take homosexuality. People in the United States are, on average, much more accepting of homosexuality than those in countries such as Saudi Arabia and Indonesia, where being gay is a crime. Religions such as Christianity and Islam view it as a sin, while those who lean more towards atheism or agnosticism tend to view it as a matter of human rights.

What about individualism versus collectivism? Many societies value a government that plays a large role in protecting the collective, while others value a smaller government that allows more independence for the individual.

The main point is that there are many differences across the world that touch on morally significant questions. Whether you agree or disagree with the positions above, it's hard to make the other side believe they're wrong.

Individualism — Valuing life at the individual level/ Collectivism — Valuing life at the collective level

When it comes to people's values, it's difficult to use facts and evidence to persuade them the other way, because values are not things you can prove. You can't use facts to show why someone valuing the collective over the individual is wrong. You can only argue why one might be worse than the other, and then they can choose to agree or disagree.

You can also look at morality through the lens of time. The values we have now are not the same as those of our distant ancestors. Take the obvious example of slavery.

Today we think back in disgust and are horrified that humans were able to commit such heinous acts. But back then when it was occurring, many people viewed it as morally okay. The perception that whites were superior to blacks allowed humans to justify the idea. It allowed them to still feel as if they were morally good, despite enslaving a whole race of people.

Imagine if AI had been invented in the 1700s or 1800s. We would have a system of machines running our lives, drawing their moral framework from plantation owners. We'd be living in a much different world, a morally evil one, as I'm sure most of us would argue. It's a scary thought.

Showing what life was like on a plantation during the era of slavery

This brings up the question: how do we know whether our morals right now are good? How do we know we aren't creating machines with evil values, like those hypothetical machines built during the time of slavery?

We often think that we live in the most moral time period and that we have figured out all the answers, but this is the bias of the present.

Think about how we treat animals. Billions of land animals such as chickens, pigs, and cows are killed every year for our consumption. These animals are kept in cages barely the size of their bodies and are exploited from birth to death just to feed us, treated as badly as, or arguably worse than, slaves were back then. Many of us go through our day without even thinking about this.

You might argue that animals are inferior because they have lower levels of consciousness and don't think the same way as humans, making it morally okay. But keep in mind that these were the same arguments used to justify the enslavement of African Americans. This argument breaks down further given the considerable evidence that animals suffer both physical and mental pain in ways very similar to humans.

Pigs at farms are forced to cram on top of each other due to overly small cages

The point is, it's naive for us to think that we are moral angels and our predecessors were the monsters. We are biased towards the present and ignorant of what we have not yet figured out.

When creating AI, especially at the human level, we do not have a lot of room for mistakes. If we had created AI during the period of slavery, it might have been too late to change the algorithms once we changed our minds.

Things we view as moral right now might be seen as disgusting in the future, so it's crucial that we take the time to evaluate our morals, especially when trying to implement them into AI.

But in an era of rapid growth among the superpowers of the world, such as the US and China, taking our time isn't usually encouraged.

The Race for AI

Anyone who understands the Cold War understands the idea of an arms race. One country starts building some type of weapon, in this case nuclear bombs, and in order to protect itself and keep up, the other country builds weapons as well. This is what happened between the US and the Soviet Union during the Cold War.

The problem with an arms race is that it's a race. We aren't just making weapons for their own sake; we are trying to keep up with our competitors, and that means we have to work fast. This kind of motivation encourages taking shortcuts in order to catch and surpass everyone else.

For example, let's say you and I each have our own computer manufacturing company. We are both trying to create a computer that can run 10x faster than the fastest one today. As manufacturing companies, we both rely heavily on coal and other fossil fuels.

Now you are told that your company is significantly contributing to climate change, as is mine, and to fix that you could switch over to renewables such as solar. The problem is that switching to solar will be more expensive and less reliable, which will likely slow your progress on the 10x computer.

While you care about the environment, you notice that my company decides to stick to fossil fuels in order to keep production at the speed it is. The only way for you to have a chance at keeping pace with me, based on our current energy technologies, is to stick with fossil fuels. From this, you make the decision to sacrifice the environment to keep up with me.

What our computer factories would look like

This situation plays out all the time today. We want to help the environment, but if the US completely switches to renewables while countries like China and Russia don't, it could lose its status as a hegemon.

Now take this scenario but swap the 10x computer for AI, and environmental damage for bad morals. Right now, countries like the US and China are in an arms race to be the first to create human-level artificial intelligence, and for good reason.

During the Cold War, being ahead of the game wasn't as big a deal because you could not do much with the weapons you made. They served more as a deterrent than anything else. Basically: I won't shoot you because you have the weapons to shoot me back.

With AI, however, there is a lot more at stake. Whoever reaches human-level, and then superhuman, artificial intelligence first would gain an immense amount of power over other countries. They would have a machine that could think thousands of times faster than a human and work 24 hours a day without food or sleep.

Imagine the rate of development they'll have compared to countries that haven't reached that level of advancement yet. They could create weapons, chemicals, and modes of transportation we didn't even know could exist.

The big thing keeping countries from attacking each other World War II-style is deterrence from nuclear weapons. But superintelligent AI could allow one country to significantly overpower another, giving them just what they need to invade.

Elon Musk looking like the president of the United States next to the flag

As alarmist as this sounds, this is one of the factors motivating researchers to rush towards AI. This pressure is what encourages people to take shortcuts with morality and not view it with the seriousness it deserves in AI.

Take the bias that image recognition algorithms have been shown to have against African Americans. Some of the "best" algorithms have failed to recognize Black faces because of biased datasets. This wasn't the result of deliberate racism so much as of not paying proper attention to the ethical issues that come with AI, such as bias.
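One simplified safeguard is to measure a model's accuracy separately for each demographic group before deployment. The sketch below is a generic illustration of that kind of audit, with invented numbers; it is not any particular company's process.

```python
# A minimal, hypothetical audit: compare a model's accuracy per group.
# A large gap between groups is the symptom of the biased training data
# described above.
def accuracy_by_group(predictions, labels, groups):
    """predictions/labels are 0/1 lists; groups names the group of each example."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# Toy numbers, invented purely for illustration.
preds  = [1, 1, 1, 0, 0, 0, 0, 0]
labels = [1, 1, 1, 1, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(preds, labels, groups))
# -> {'A': 0.75, 'B': 0.25} — a gap like this should stop a release.
```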

This goes to show how careful we have to be in regards to ethics and AI. Having the pressure of needing to win a race makes this difficult to prioritize.

You might argue that professionals understand how important this issue is, so they are unlikely to rush past it. If we know the power AI can have over society, wouldn't we take the time to make sure we program it with good morals? You could have asked the same question about the atomic bomb.

When Oppenheimer was working on the Manhattan Project, he and his colleagues knew just how dangerous a weapon like this could be if it were created and used by other countries. They knew these bombs would have the power to destroy most of civilization. They weren't nearly as sure as they should have been about the political and social ramifications that would follow dropping the bomb. That didn't stop them from dropping it.

Their focus was beating Germany in the race to make an atomic bomb. Their focus was to stop Japan from continuing the war. Sometimes the fear of the enemy overrides the fear of possible mistakes and AI research isn't an exception to this idea.

What Oppenheimer helped create

So it seems we are being attacked from both sides. On one side, we have moral questions that must be dealt with, which slows down the creation of AI. At the same time, we have external pressure from other countries to win the AI race and continue to hold power.

What do we do now?

Objective: Find Out What Humans Want

Stuart Russell, a professor of computer science at UC Berkeley known for his insights and contributions to AI, and his team might have an answer. The solution comes from how we program objectives into AI.

To understand, he makes a comparison between human and artificial intelligence:

Humans are useful when their actions further their given objectives, whether those objectives come from themselves or from others. From this, you would assume artificial intelligence is useful when its actions further its given objectives.

Russell argues that this logic is faulty as there are many situations in which you can imagine an AI machine perfectly achieving its human-given objective at the cost of hindering humans.

For example, you tell an AI to increase overall happiness, and it hooks up the human race to dopamine machines for 24 hours a day. Obviously not what we want.

Although an extreme example, it shows that we have to tell the machine exactly what we want, along with all the restrictions it should respect along the way. This can be incredibly tough, especially when we are asking it to solve problems we have little idea how to solve. How do we know what restrictions to give it when we don't know all the possible paths it could take?
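A toy way to see the problem: an agent that literally maximizes a happiness score will happily pick the dopamine-machine plan, and it only avoids it if we remembered to write the relevant restriction down. Everything below, including the plans and scores, is invented for illustration.

```python
# Hypothetical plans with made-up scores, to illustrate objective misspecification.
plans = [
    {"name": "wire everyone to dopamine machines", "happiness": 100, "preserves_autonomy": False},
    {"name": "improve healthcare and education",   "happiness": 70,  "preserves_autonomy": True},
    {"name": "do nothing",                         "happiness": 50,  "preserves_autonomy": True},
]

def naive_choice(plans):
    # The literal objective: maximize happiness, nothing else.
    return max(plans, key=lambda p: p["happiness"])

def constrained_choice(plans):
    # Same objective, but only among plans that satisfy a restriction we
    # remembered to specify. Restrictions we forgot are still missing.
    allowed = [p for p in plans if p["preserves_autonomy"]]
    return max(allowed, key=lambda p: p["happiness"])

print(naive_choice(plans)["name"])        # -> wire everyone to dopamine machines
print(constrained_choice(plans)["name"])  # -> improve healthcare and education
```

The catch is the one raised above: we can only filter on restrictions we thought to specify in advance.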

Morals can be used as a guide for assigning these restrictions but since we don't have concrete answers to all moral questions, it is unlikely we will provide the perfect set of restrictions the first time.

Going back to Russell: he argues that artificial intelligence is beneficial when its actions help achieve our objectives.

It's not enough to give a machine an objective that is in line with ours and then let it execute. Human objectives change. There are nuances to human objectives that even we don't know about.

Instead, we need a system where the machine’s objective is to constantly find out what humans want. This process keeps humans in the loop. The machine does not directly know what the human’s objective is so it will continually need to get feedback from humans to continue any task.

It's as if you are constantly asking your teacher if you are doing a problem right before going on to the next. This prevents you from messing up all the problems, as you continually learn from the mistakes of the previous problem.

Student asking for advice on homework so he doesn't go overboard and destroy the human race
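Here is a toy, hypothetical version of that check-in loop: the machine asks for feedback before committing to each step and only acts on what the human approves. It is a cartoon of Russell's idea that the machine's objective should be to learn what humans want, not his actual game-theoretic formulation, and every name below is invented.

```python
# A toy human-in-the-loop agent: check in with the human before each step,
# act only on approval, and treat a rejection as a signal to re-plan.
def human_feedback(step):
    # Stand-in for a real human overseer: approve everything except
    # obviously harmful steps.
    return "approve" if "harm" not in step else "reject"

def run_with_oversight(plan):
    completed = []
    for step in plan:
        answer = human_feedback(step)      # the machine checks in first
        if answer == "approve":
            completed.append(step)         # only then does it act
        else:
            print(f"skipping '{step}' and asking what the human actually wants")
    return completed

plan = ["gather data", "optimize schedule", "harm bystanders to save time"]
print(run_with_oversight(plan))
# -> skips the harmful step and returns ['gather data', 'optimize schedule']
```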

This also allows humans to make mistakes when implementing morals in AI, because it gives us a chance to readjust the machine's objective. The machine won't try to complete some external objective at all costs, because its only objective will be to do what humans want, which requires constant input from humans.

While this solution doesn’t solve the problem of how to answer these moral questions, it does give us a way to implement our ideas into AI without having to get it right the first time. It's our moral safety net.

Conclusion

I hope it's clear now that the "aliens" are already here and we had better start working on their moral request. AI is here now, it's starting to demand answers to these moral questions, and we should be in a position to provide them.

Robot Alien here to judge “Milky Way’s Got Morals” with Earth as the first contestant

This might be a blessing in disguise, though. With the state of the world right now, people's values seem to be clashing more and more, whether over views on COVID, the recent protests, or anything else causing friction. Maybe the threat of evil morals in AI can encourage us to talk more about our values rather than fight each other. It will take people from all different backgrounds and mindsets to answer these questions, so communication will be key.

Only then will the “aliens” come down peacefully.


Strad Slater

I am an undergraduate and TKS innovator in Las Vegas. I am interested in nanotechnology, philosophy, and physics.