Article - Issue 45, December 2010
Are we ready for autonomous systems?
Dr Natasha McCarthy and Lambert Dopping-Hepenstal FREng
A report on the social, legal and ethical issues surrounding Autonomous Systems was published by The Royal Academy of Engineering in the summer of 2009. The media response worldwide was so strong that a follow-up conference was held in September 2010. Dr Natasha McCarthy and Lambert Dopping-Hepenstal FREng summarise the hopes and concerns for the technology aired by academics and industrialists at that meeting.
BAE Systems has developed Taranis, an unmanned combat aircraft system to demonstrate the viability of autonomous technology for long range strikes. Flight trials are due to take place in 2011 © BAE Systems
Predicting the impact of new technologies has become an increasingly common concern, whether to identify potential markets or to prepare for negative reactions to those technologies. Autonomous systems are an area of technology that carries both the promise of benefits and returns for business, and the worry of unforeseen risks and social rejection.
Autonomous systems differ from an area like nanotechnology or GM because the issues here are not about the potential physical harm that may arise from using materials with certain properties; they relate to the place of these technologies in society, in communities and our individual personal lives.
Early attention to the issues raised by these technologies is important to ensure that their introduction serves the public interest and has appropriate support. Public engagement on the issues surrounding autonomous systems, and debate about their impact, are valuable: they promote understanding, address the genuine expectations and concerns of the public, and allow these to be taken into account in the development and implementation of the technologies. There are significant benefits to be gained from the development of these technologies.
What is an autonomous system?
The essential difference between a merely automated system (elevators and tube trains for example) and an autonomous system is that a truly autonomous system is capable of operating without human intervention, and can do so because it has a capacity for adaptation and learning.
Driverless vehicles are an example of autonomous technologies that have already been developed – although they are more likely to be introduced as fleets of unmanned trucks on major highways than as individual cars finding their own way through busy cities. Medical robots that operate where the surgeon cannot be present are further examples, be it in an ambulance en route to hospital or in confined spaces such as an MRI scanner. Military applications are also in use: unmanned air vehicles for surveillance and reconnaissance, and battlefield droids, are being developed by a number of countries.
Technologies where the human operator is merely absent can be contrasted with technologies designed to actually take the place of a person. In social care, autonomous systems have been designed for various purposes, from sensing the movements of a person in their home, to watch over them and look out for unusual behaviour, to robots designed to feed those unable to feed themselves. Babysitting robots, developed for use in Japan, are actually intended to be companions or playmates for children, taking the place of adults or other carers.
While such applications no doubt have benefits, the vision they create – armies of fighting droids; unmanned air vehicles sweeping the skies and watching over us; fleets of driverless trucks; robot friends for children – cannot fail to bring a chill. The idea of a society where such technologies are rife feels inhuman and dystopian, even if the reality could be a safer society, with fewer accidents and less risk to human life.
These technologies may be so odd and disconcerting to us because we naturally lack the conceptual framework to entertain the idea of entities that are autonomous but inanimate, or companions that are neither human nor other living creatures. We do not assess such systems at face value, but understand them as pseudo-humans, imagining them able to imitate human powers of decision making and emotion.
The lack of a place for autonomous systems in our psychological and social frameworks is twinned with the lack of a governance or regulatory framework within which to develop and control these technologies. This is a problem, because without such a framework their development cannot move forward. These technologies need innovation in regulation as much as in engineering – there is little point investing in research and development in this area if regulation does not exist to allow the technologies to be used. Without regulation there can be no application and no market.
The development of regulation demands that we first address some fundamental ethical and ideological issues about the use of autonomous systems. Society needs to identify what kind of benefits autonomous systems must deliver in order to outweigh the sometimes unforeseeable risks of their application. We need to address whether and how these technologies can be developed in a way that allays our unease about them. Indeed, society as a whole needs to consider whether this is an area of technology that should be pursued at all.
Autonomy or isolation?
Caring for an ageing population is an issue of significant political concern and a growing area of application in biomedical engineering. Telemedicine and the remote monitoring of patients are allowing people to stay in their homes when their health and mobility deteriorate, as well as supporting them in self-care. Autonomous systems are a significant development of these technologies, with sensor systems designed to monitor the movement of a person in their home. They can also be programmed to recognise regular patterns of movement and raise the alarm when a situation is out of the ordinary – for example, if the occupant of a house is still for a long time in the middle of the day.
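As a purely illustrative sketch of the pattern-recognition idea described above – the sensor data format, thresholds and function names here are all assumptions, not drawn from any real telecare product – such a system might learn an occupant's normal hourly activity and flag daytime stillness:

```python
def learn_baseline(history):
    """Learn the average number of motion events per hour of day.

    history: a list of past days, each a 24-element list of motion
    counts (one per hour) from a home movement sensor.
    """
    days = len(history)
    return [sum(day[h] for day in history) / days for h in range(24)]

def check_anomaly(baseline, today, hour, threshold=0.25):
    """Flag an hour as out of the ordinary.

    Raises a flag when the occupant is normally active at this hour
    (baseline above 1 event) but today's activity has fallen below
    `threshold` times the learned norm - e.g. stillness at midday.
    """
    expected = baseline[hour]
    return expected > 1.0 and today[hour] < threshold * expected

# A week of regular activity: quiet nights, busy middle of the day.
history = [[0] * 8 + [5, 6, 4, 5, 6, 5, 4, 3] + [2] * 8 for _ in range(7)]
baseline = learn_baseline(history)

# Today the occupant is still at midday (hour 12).
today = [0] * 8 + [5, 6, 4, 5, 0, 5, 4, 3] + [2] * 8
print(check_anomaly(baseline, today, 12))  # True: unusual midday stillness
print(check_anomaly(baseline, today, 3))   # False: stillness at night is normal
```

Real monitoring systems would use richer statistical models and more sensor types, but the principle – learn the routine, alert on deviation – is the same.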
A further development of these technologies is systems that can take the place of a carer to some extent – by providing a reminder when medicines are not taken, for example. A concern is that the more people can be monitored remotely and prompted without anyone present, the less they will see and interact with others. Autonomy for the older person might also bring with it isolation.
Might autonomous systems be able to allay that isolation? Could the voice that reminds a person to take their medicine be a caring voice (a recorded voice of a relative for example), asking questions, giving reassurance, making them feel less alone? When it is impossible for an older or ill person to have company all day, might an autonomous companion of sorts be a good second best?
Or should human contact be valued more highly than independence and the ability to stay in one’s own home? Here the issues become complex, dependent on individual values and preferences. One person’s independence and freedom might be another’s idea of loneliness.
Robots that feed patients in hospital have already been developed, and while to many these may seem cold and dehumanising, to others they may spare patients the indignity of being spoon-fed by a nurse or carer. Autonomous systems have the power both to preserve and to take away dignity, with the same system having a quite different impact on the quality of life of different people.
It is surely impossible to make general judgements about the right way to care for people. Such judgements raise subjective concerns, relating to individual self-perception and personal aspirations. However, with the challenge of caring for an ageing population a priority, these issues should be addressed.
The ethical and ideological questions surrounding the use of autonomous systems may often seem straightforward to formulate, if difficult to answer. On the battlefield, the fundamental issue appears to boil down to saving lives. Autonomous systems are good if they mean fewer casualties on the battlefield and less ‘collateral damage’ as the result of aerial bombing raids.
How can autonomous systems reduce casualties? Obviously, if you send out a battlefield robot to seek out mines, to defuse bombs, or to perform reconnaissance missions where enemy personnel are highly likely to be lying in wait, then the worst that can happen is that an expensive machine is damaged or destroyed. When that same task is carried out by a soldier, there is the potential that a young person receives life-changing injuries or loses their life.
There is also the hope that autonomous systems might be created that are more accurate and, by their nature, less emotionally driven than a human soldier. The young soldier on reconnaissance might, fearing the enemy, be too quick to shoot and kill an innocent person, perhaps a young child who has strayed where they should not. Unmanned aerial vehicles can select targets, and while currently dependent on a human in the control loop before any action is taken, they could ultimately be more accurate in their selection and aim. They could, at least, function even when the human controller is incapacitated.
But is this vision realistic? There are concerns that these aims are driving the implementation of battlefield robots even though the technology is far from sufficiently developed to deliver the necessary levels of discrimination and accuracy. It can be argued that deploying autonomous war-fighting robots on the ground is unacceptable, because AI (artificial intelligence) is not at a level that supports discrimination between civilian and combatant. There are concerns about the proliferation of the technology before it is sufficiently mature, given that 40 countries currently have programmes to develop autonomous systems for military use.
Another view is that modern war is not just about winning territory, but about winning hearts and minds. Even if autonomous robots introduced into the theatre of war were capable of drastically reducing fatalities, it might not follow that they are straightforwardly a good thing. Rules of engagement are based, to an extent, on shared social values and a sense of fairness.
‘Just’ wars must adhere to international conventions based on agreed rules of engagement. Is it part of those rules to allow one side in a conflict, no doubt the richer countries, to field robots whilst another is forced to put the lives of human soldiers at risk? And how much more unacceptable do civilian casualties become in a war where one side does not even risk the safety of their troops? It is difficult to place a non-human combatant into this framework and expect standard military and social norms to apply.
In technological regulation, the focus is generally on weighing benefits against risks. In the area of autonomous systems the risks are difficult to quantify because they are largely unknown; they are also, to a significant degree, social risks. The fear is of the proliferation of soulless machines in the place of people.
But there are many areas of technology where apparently fundamental social norms have been challenged. Heart transplants and IVF are areas where the benefits of technology have prompted us to change our perspective and develop a conceptual framework for things we previously found repellent or immoral. Autonomous systems may likewise change our perspective, and become an unquestioned part of our lives. The essential thing is that we do this mindfully and not robotically – with thought and empathy, rather than sleepwalking in the direction that technology pulls us.
Regulators and policy makers need to catch up with autonomous technologies, and put in place the legislation that will allow their deployment, and ensure that they are developed and used responsibly. This will require consultation with engineers and the public to address the difficult questions that autonomous systems present.
Read Autonomous Systems at www.raeng.org.uk/autonomoussystems