Robots are inevitably shaping the future of humankind with their undeniably significant contributions to science and technology. But with obedience in question, should they do everything we ask them to?
Maybe we first have to ask ourselves: should we always do what other people say?
At first glance, it might seem robots should be obedient all the time. However, if we think of all the times we should not mindlessly carry out the instructions of others, we may want to hold robots to the same standard.
According to Inverse, Matthias Scheutz explained that such situations may warrant giving robots the ability to say no.
For instance, an elderly care robot might be asked by a forgetful owner to wash "dirty clothes" even though they have just come out of the washer. A preschooler might order the daycare robot to throw a ball out the window, or a student might command her tutor robot to do the homework instead of doing it herself.
These are all cases where robots receive innocent commands that ideally should not be carried out, because they lead to unwanted outcomes. But not all cases will be that innocuous, even if the commands appear to be.
For example, a robot car could be instructed to back up while a dog is sleeping in the driveway behind it, or a kitchen robot could be asked to move forward with a knife in its hand while a chef stands in front of it.
According to Inverse, harm from this kind of "innocent" obedience can still be avoided. Either way, it is important for autonomous machines to detect the potential harm their actions could cause and to react to it, either by attempting to avoid it or, if the harm cannot be avoided, by refusing to carry out the instruction.
So how would a robot know what will happen next? According to Inverse, we may have to develop robotic controls that make simple inferences based on human commands and their likely outcomes. These inferences can determine whether the robot should carry out a command as instructed or reject it entirely because it would violate an ethical principle the robot is programmed to obey.
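To make that idea concrete, here is a minimal sketch of what such a check might look like in Python. It is not Scheutz's actual system or any real robot control API; the Command class, the rule functions, and the situations they test for are all illustrative assumptions.

```python
# Illustrative sketch of a command filter that either accepts an instruction
# or rejects it when a simple inference flags potential harm.
# All names here (Command, RULES, the rule functions) are hypothetical.

from dataclasses import dataclass

@dataclass
class Command:
    action: str    # e.g. "throw", "back_up"
    target: str    # e.g. "ball", "car"
    context: dict  # facts the robot has inferred about the situation

# Each hypothetical rule inspects a command plus its context and returns a
# reason string if the predicted outcome would violate the rule.
def rule_no_property_damage(cmd):
    if cmd.action == "throw" and cmd.context.get("target_location") == "out_the_window":
        return "throwing objects out the window may break something or hurt someone"
    return None

def rule_no_physical_harm(cmd):
    if cmd.action == "back_up" and cmd.context.get("obstacle_behind") == "sleeping_dog":
        return "backing up would hit the dog sleeping in the driveway"
    return None

RULES = [rule_no_property_damage, rule_no_physical_harm]

def decide(cmd):
    """Carry out the command unless a rule predicts an ethical violation."""
    for rule in RULES:
        reason = rule(cmd)
        if reason is not None:
            return f"Refused: {reason}"
    return f"Executing: {cmd.action} {cmd.target}"

# Example: the daycare robot is told to throw the ball out the window vs. to a child.
print(decide(Command("throw", "ball", {"target_location": "out_the_window"})))
print(decide(Command("throw", "ball", {"target_location": "to_child"})))
```

The point of the sketch is only the shape of the decision: infer what the command would lead to, then accept or refuse based on that inference rather than on the command alone.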
Telling robots when and why to disobey is far easier said than done. Figuring out what harm or problems may result from an action is not simply a matter of looking at direct outcomes. Context can make all the difference.
It is difficult for robots today to determine when it is okay to throw a ball, as when playing catch with a child, and when it is not, as when throwing it out the window.
Understanding these dangers involves a lot of background knowledge, as the robot needs to consider both the outcomes of actions themselves and the intentions of the people giving the instructions.
In general, robots should never perform illegal actions, nor should they perform legal actions that are not desirable. Therefore, robots need "representations" of laws, moral norms and even etiquette in order to be able to determine whether the outcomes of an instructed action might violate these principles.
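One way to picture such "representations" is as declarative data the robot consults before acting, rather than checks hard-coded into its controller. The sketch below is purely illustrative and assumes a toy outcome predictor; the norm list, predicate names, and weights are made up for the example, and predicting outcomes is far harder in a real robot.

```python
# Illustrative sketch: laws, moral norms and etiquette represented as data
# the robot consults before acting. Everything here is an assumption for the
# sake of the example, not a real robotics framework.

NORMS = [
    {"kind": "law",       "forbids": "injure_person",   "weight": 3},
    {"kind": "moral",     "forbids": "damage_property", "weight": 2},
    {"kind": "etiquette", "forbids": "interrupt_owner", "weight": 1},
]

def predicted_outcomes(action, context):
    """Toy outcome prediction; a real robot would need rich world knowledge."""
    outcomes = set()
    if action == "move_forward" and context.get("holding") == "knife" and context.get("person_ahead"):
        outcomes.add("injure_person")
    if action == "throw" and context.get("toward") == "window":
        outcomes.add("damage_property")
    return outcomes

def evaluate(action, context):
    """Refuse if any predicted outcome violates a represented norm."""
    violations = [n for n in NORMS if n["forbids"] in predicted_outcomes(action, context)]
    if violations:
        worst = max(violations, key=lambda n: n["weight"])
        return f"Refuse ({worst['kind']} norm violated: {worst['forbids']})"
    return "Proceed"

print(evaluate("move_forward", {"holding": "knife", "person_ahead": True}))
print(evaluate("throw", {"toward": "window"}))
```

Keeping the norms as data rather than code is one possible way to let the same robot be updated for different households, workplaces or jurisdictions without rewriting its controller.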