The moral system of an AI will not mirror ours, for the very simple reason that machines are not human.
Artificial systems will not be benevolent on their own; they pursue their goals without any moral compass built into them. They are simply trying to achieve the objective they were built for.
Giving a machine the immense power to think and act on its own instincts is a step closer to the doom of humanity.
Take an example: what if an AI machine running the world tried to solve the world hunger problem by killing enough people on the planet? To the machine, this is a perfectly plausible solution that fulfills its objective: fewer people, less shortage of food.
There is no such thing as a benevolent machine; the very concept is alien. A machine simply follows the instructions and rules laid out for it, working to fulfill its objective while learning and evolving along the way. But it evolves only in terms of speed and precision.
Don’t get me wrong; it’s not that I’m against the advancement of science and technology. I hold a Master’s in Computer Applications and have worked in the technology field for almost a decade. I strongly support the advancement of science and technology for the benefit of mankind.
The point I’m trying to stress is this: using machines to ease our daily lives, and harnessing their immense computational power to help the world in one way or another, is a noble idea. But giving an AI system so much power that it can make autonomous decisions on behalf of humans is reckless.
A human moral compass can never be built into them. They can never appreciate the beauty of nature or the hidden moments of life, which is the very essence of being human. A machine can never have the best interests of humanity at heart. The idea of a benevolent machine is a fallacy.
Joseph Weizenbaum, a German-American computer scientist and professor emeritus at MIT, argued that there are certain roles in which service providers should not be replaced by AI machines, because authentic feelings toward the people in their care are essential. For example:
1. A therapist
2. A police officer
3. A judge
How many times have we read in the news about drones wrongly attacking and killing civilians instead of the perpetrators? “To err is human,” they say. Then what about an AI that makes autonomous decisions? Won’t it make errors too?
Machine ethics (also called machine morality or roboethics) is the field of research concerned with designing Artificial Moral Agents (AMAs): robots or artificially intelligent computers that behave morally, or as though they were moral. The field is still in its infancy.
As humans, we are all turning a blind eye to the irrevocable danger posed by AI and machine learning systems, because we are trading our safety for their speed and precision.
To put it simply:
“They are gods, without hearts.”
Photo by Tim de Groot on Unsplash
This post is in response to the daily prompt Brilliant