Technologies that operate without human intervention have been emerging rapidly in diverse applications, from commercial transportation such as driverless cars to military vehicles like remote-controlled tanks. Support from eager investors is abundant, perhaps in anticipation of a market effect similar to what the IoT produced. While an autonomous system [abbreviated AS for the rest of this article] has its perks, relegating tedious manual tasks to machines in exchange for a more comfortable consumer lifestyle, it also attracts a lot of worry because the concept is new and lacks a firm foundation for guaranteeing safety. The risk of an unanticipated catastrophe is very real. Too little literature addresses questions of safety, and much of the literature that does address these qualms falls short of a satisfactory answer.

In fact, Mr. Frenzel shares his objections to autonomous vehicles in his blog posts (read “Forget this self-driving car nonsense” and “Just say no to driverless cars”). Personally, I have a similar distaste for autonomous vehicles, because I do not find machine learning (or at least, the current state of its algorithms) sufficient for unmanned driving. I do not condemn it altogether, though. Perhaps more accurate GPS technology working in concert with a more effective learning algorithm would boost my confidence.

Aside from immediate consequences, such innovations are potentially deleterious in the long run, necessitating regulation and proper training for the designers and engineers involved. As the adage goes, an ounce of prevention is worth a pound of cure. Integrating preemptive measures against plausible long-term threats into an AS can save a company from lawsuits or bankruptcy, and a consumer his or her life. After all, human well-being is always the top priority no matter what the case may be. [Have you heard of Aristotle’s Eudaimonia?]

Some significant issues and considerations in designing an AS are discussed below.


The Autonomous Intelligent System vs. Human Beings


The interaction between an AS and a human being is unique to each situation. But how do we know that human rights are not being infringed, given varying degrees of social and cultural norms? Obviously, it is not realistic to specify a universal set of constraints for everyone to follow. Rules tailored to where, and for what purpose, an AS will be deployed have to be laid out. A clear delineation is a must to avoid compromising situations. The A.I. of an industrial robot that handles equipment on a production line, for example, must differ greatly from the A.I. of a robot in healthcare, where far greater sensitivity to the environment is required.
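
To make this concrete, here is a minimal sketch, in Python, of how deployment-specific constraints might be encoded as profiles rather than as one universal rule set. Every name and threshold here is a hypothetical placeholder of mine; real values would have to come from domain regulation and risk analysis.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SafetyProfile:
        """Deployment-specific constraints for an autonomous system."""
        max_speed_m_s: float           # hard cap on actuator speed
        min_human_clearance_m: float   # halt if a person is closer than this
        requires_human_override: bool  # must a human be able to stop it?

    # Hypothetical profiles: an industrial arm versus a healthcare robot,
    # which must operate close to people and therefore far more gently.
    PROFILES = {
        "industrial": SafetyProfile(2.0, 1.5, True),
        "healthcare": SafetyProfile(0.3, 0.1, True),
    }

    def action_allowed(domain: str, speed_m_s: float, clearance_m: float) -> bool:
        """Check a proposed action against the profile for its deployment domain."""
        p = PROFILES[domain]
        return speed_m_s <= p.max_speed_m_s and clearance_m >= p.min_human_clearance_m

Even this small example shows why a clear delineation matters: the same motion can be acceptable on a production line and unacceptable in a hospital ward.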

There are many existing documents that define the rights of an individual, the “Universal Declaration of Human Rights” being the most well-known. Cultural diversity, however, makes the interpretation of those rights particular to location. Because of this multiplicity of norms, the odds of conflict between values are not remote; an AS can therefore carry algorithmic biases that disadvantage a particular group no matter what. In my opinion, there is one universal element in defining human rights, regardless of geography or culture: the safeguard against physical harm of any form. Morally offensive matters, while still unacceptable, can be remedied once the problem is discovered, and the tensions that arise can be pacified when both sides approach them with tolerance and maturity. Unfortunately, physical harm, or at worst death, is irreversible. It can set off unprecedented chain reactions of violence, hatred and anger. Remember what the death of one Austrian, on the fateful day of June 28, 1914, did to the world?
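
As one illustration of what algorithmic bias can look like in practice, here is a minimal sketch, in Python, of a disparity check comparing a system's error rates across groups, the kind of audit a designer might run before deployment. The group labels and data are entirely hypothetical.

    from collections import defaultdict

    def error_rates_by_group(records):
        """records: iterable of (group, predicted, actual) tuples.
        Returns the fraction of wrong decisions per group."""
        errors, totals = defaultdict(int), defaultdict(int)
        for group, predicted, actual in records:
            totals[group] += 1
            if predicted != actual:
                errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

    # Hypothetical audit data: (group, system decision, correct decision).
    audit = [
        ("group_a", "stop", "stop"), ("group_a", "go", "go"),
        ("group_b", "go", "stop"),   ("group_b", "stop", "stop"),
    ]
    print(error_rates_by_group(audit))  # {'group_a': 0.0, 'group_b': 0.5}

A large gap between groups is a red flag worth investigating before the system ever reaches the field.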


The Need for Methodologies to Guide the Design Process


When a robotics engineer is asked how he or she designed a robot to earn an adequate level of trust from the people it will interact with, the answer can’t just be: “Oh! I just kept in mind Isaac Asimov’s three laws of robotics. I’m sure it won’t even hurt a fly. Sha-lala-lala…”. Even once all the definitions have been identified, actually merging them into the design process remains convoluted. How do I make my robot comply with this culture’s taboos? Should I program it to treat cattle with the same level of sensitivity it shows people [cattle are considered sacred in Hinduism]? Will my robot offend anyone when it makes this gesture, or if it is shaped this way? Again, the absence of an elucidated official guide to such pressing concerns will yield answers characterized by wide variability.
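
Suppose a designer tried the naive route and simply encoded culture-specific constraints as a lookup table. A sketch of that approach, with hypothetical locales and rules of my own invention, might look like this:

    # A naive encoding of culture-specific constraints. Real guidance would
    # have to come from ethicists and local stakeholders, not a lookup table;
    # these locales and gestures are hypothetical placeholders.
    FORBIDDEN_GESTURES = {
        "locale_x": {"thumbs_up", "pointing_with_finger"},
        "locale_y": {"showing_sole_of_foot"},
    }

    def gesture_permitted(locale: str, gesture: str) -> bool:
        return gesture not in FORBIDDEN_GESTURES.get(locale, set())

    assert gesture_permitted("locale_y", "thumbs_up")
    assert not gesture_permitted("locale_x", "thumbs_up")

Even this toy exposes the problem: who compiles the table, who keeps it current, and what happens at the boundaries between cultures? Those are exactly the questions an official methodology would need to answer.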

Academic institutions seldom prioritize courses that discuss ethics in AS. Maybe such a trifling topic does not warrant in-depth study or discussion; the underlying arguments, after all, seem to follow from common sense. Do we really need models for intercultural education to account for specific issues in an AS?

How often do you see a news article in which an unmanned aerial vehicle [UAV] has struck the wrong target? Imagine those poor victims, whose lives were unreasonably cut short, and because of what? A measly glitch in the AS? How about accidents involving driverless cars? Is there a need to raise awareness of such loopholes in these systems so that the tech community can provide prompt solutions? Do we need better documentation practices for such events so that the next designers will not repeat the same flaws?
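
If better documentation is part of the answer, one can imagine a structured incident record along the lines of the minimal Python sketch below. Every field name and value here is my own hypothetical choice, not an established reporting standard.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class IncidentReport:
        """A structured record of an AS failure, so lessons are not lost."""
        system: str            # e.g. "delivery UAV, model X"
        description: str       # what went wrong
        suspected_cause: str   # sensor fault, model error, misuse...
        harm_caused: bool
        corrective_actions: list = field(default_factory=list)
        occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    report = IncidentReport(
        system="hypothetical delivery drone",
        description="Misidentified landing zone in low light",
        suspected_cause="perception model undertrained on night imagery",
        harm_caused=False,
        corrective_actions=["retrain with night data", "add altitude failsafe"],
    )

Captured consistently, such records would let the next designer search for prior failures instead of rediscovering them the hard way.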

In such accounts, the AS is obviously accountable. But what about cases where accountability is obscure? There is also the challenge of building a system that can properly identify when the AS itself is at fault, so that an effective solution can be implemented.
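
Establishing fault after the fact presupposes that the AS's decisions were recorded in the first place. A minimal sketch of such a black-box decision log, in Python, might look like the following; the structure and field names are assumptions, not an industry standard.

    import json, time

    class DecisionLog:
        """Append-only log of what the AS sensed and decided, for later audit."""
        def __init__(self, path: str):
            self.path = path

        def record(self, sensor_inputs: dict, decision: str, confidence: float):
            entry = {
                "t": time.time(),
                "inputs": sensor_inputs,   # what the system perceived
                "decision": decision,      # what it chose to do
                "confidence": confidence,  # how sure it claimed to be
            }
            with open(self.path, "a") as f:
                f.write(json.dumps(entry) + "\n")

    # An investigator can later replay the log to see whether the fault
    # lay in perception, in the decision rule, or somewhere else entirely.
    log = DecisionLog("as_decisions.jsonl")
    log.record({"obstacle_m": 0.8, "speed_m_s": 1.2}, "emergency_stop", 0.97)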

The AS designer is also exposed to the risk of self-bias. A false sense of security can set in even while an imminent peril exists. A third party responsible for auditing the AS’s value alignment can avert such a danger.

Finally, an AS is bound to evolve as technology moves forward. Its ethical and safety issues will change with it, and it is incumbent upon the manufacturers of the AS to adjust appropriately.