Can we technologically evolve humans into a post-human, cyborg state? This article tells the story of the self-experimentation implant research carried out over the last few years by the author.
The term ‘cyborg’ has been widely used in the world of science fiction, yet it aptly describes a field of research still in its infancy. The Oxford English Dictionary describes a cyborg as ‘a person whose physical abilities are extended beyond normal human limitations by machine technology (as yet undeveloped)’. Meanwhile, others see the class of cyborgs (cybernetic organisms – part human, part machine) as including those with heart pacemakers or artificial hips, even those riding bicycles (Hayles, 1999). In this discussion, however, the concept of a cyborg is reserved for humans whose physical and/or mental abilities are extended by means of technology integral with the body.
One interesting feature of cyborg research is that the technology developed can be considered in one of two ways. On one hand it can be seen as potentially augmenting all humans, giving them abilities over and above those of other humans. Alternatively, it can be viewed as helping those who have a physical or mental problem, such as a paralysis, to do things they would otherwise not be able to do. This dichotomy presents something of an ethical problem with regard to how far the research should be taken and whether it is a good thing or bad thing to ‘evolve’ humans in a technical, rather than biological, way.
Reasons for experimenting
The primary question is: why should we want to extend human abilities? Despite the success of humans on Earth, this is something we have generally always been trying to do. Indeed, it could be regarded as an important part of what it means to be human. We have obvious physical limitations and in the last few centuries in particular we have employed technology to dig tunnels, lift heavy loads, communicate instantly around the world, repeat mundane tasks accurately and rapidly and, perhaps most diversely of all, enable us to fly.
However, with their finite, limited brain size, humans also exhibit only a small range of mental abilities. Such a statement can, though, be difficult for some humans to accept, largely because of their finite, limited brain size. By comparing the human brain with a machine (computer) brain, one can see distinctly different modes of operation and, in some ways, advantages of the machine in terms of its performance.
Some of the machines’ ‘mental’ advantages have been put to good use in recent years. For example, a computer’s brain is able to carry out millions of mathematical calculations, accurately, in the same time it takes a human to do one calculation inaccurately. Also the memory capabilities of a networked computer are phenomenal in comparison with a human’s memory. Surfing the web for a host of information that the human brain cannot hope to retain has become commonplace. Such mathematical and memory abilities of machines have led to considerable redefinitions of what ‘intelligence’ is all about and have given rise to an ongoing controversy about what machine intelligence is and what it might be capable of (Warwick, 2001).
Technology has also been used to improve on the human’s limited range of senses, and to give us information about aspects of the world around us that are not obvious in everyday life. So now technology can give us information about X-ray signals, what’s going on in the infrared or the ultraviolet spectrum and even ultrasonic images of the world around. In most cases such signals are converted into visual images that humans can understand.
Computers are also employed nowadays to process data, to ‘think’, in many dimensions. One reason for this is that human brains have evolved to think in, at most, three dimensions, perhaps extending to four if time is included as a dimension. Space around us is, of course, not three-dimensional, as humans categorise it, but quite simply can be perceived in as many dimensions as one wishes. Machines therefore have the capability of understanding the world in a much more complex, multidimensional way in comparison with humans. This multidimensionality is an extremely powerful advantage of machine intelligence.
When one human communicates either with a machine or with another human, the human brain’s relatively complex electrochemical signals are converted to mechanical signals, sound waves in speech or movement with a keyboard, perhaps. Realistically this is a very slow, limited and error prone means of communication in comparison with direct electronic signalling. Human languages are, as a result, finite coding systems that cannot appropriately portray our thoughts, wishes, feelings and emotions. In particular, problems arise due to the wide variety of different languages and cultures and the indirect relationships that exist between them. Machine communication is, by comparison, tremendously powerful, not least because it usually involves parallel transmission whereas human communication is, by nature, serial.
When we compare the physical and mental capabilities of machines with those of humans, it is apparent that physically humans can benefit from the technological abilities of machines through external implementation. In other words, we sit in cars or on planes; we don’t need to become one with them. When it comes to the mental area, humans can also benefit, as we already do in many cases, through external cooperation. For example, a telephone helps us communicate or a computer provides us with an external memory source. But a much more direct link up could offer us so much more. For example, by linking human and computer brains together, could it be possible for us, in this cyborg form, to understand the world in many dimensions? Might it also be possible to tap directly the mathematical and memory capabilities of the machine network? Why should the human brain remember anything when a machine brain can do it so much better? What are the possibilities for feeding in other (non-human) sensory information directly? What would a human brain make of it? And perhaps most pertinent of all, by linking the human brain directly with a computer, might it be possible to communicate directly, person to machine and person to person, purely by electronic signals – a phenomenon that could be regarded as thought communication?
All of these questions, each one of which is valid in its own way, provide a powerful driving force for scientific investigation, especially as the technology is now becoming available to enable such studies. It is a challenge that perhaps provides the ultimate question for human scientists. Can we technologically evolve humans into a post-human, cyborg state?
The 1998 experiment
By the mid- to late 1990s numerous science fiction stories had been written about the possibilities of implanting technology into humans to extend their capabilities. At the same time several eminent scientists had started to consider what might be achievable now that appropriate technology had become available. For example, in 1997, Peter Cochrane, who was then head of British Telecommunications’ Research Laboratories, wrote, ‘Just a small piece of silicon under the skin is all it would take for us to enjoy the freedom of no cards, passports or keys. Put your hand out to the car door, computer terminal, the food you wish to purchase, and you would be dealt with efficiently. Think about it: total freedom; no more plastic.’ (Cochrane, 1997). Despite such predictions, perhaps surprisingly, little or no research had been done in this direction. In particular, no actual scientific tests or trials had been carried out by that time.
As a first step, on 24 August 1998 a silicon chip transponder was surgically implanted in my upper left arm. With this in place the main computer in the cybernetics building at Reading University was able to monitor my movements. The transponder, being approximately 2.5 cm long and encapsulated in glass, was in fact a radio frequency identification device. At various doorways in the building, large coils of wire within the door frames provided a low power, radio frequency signal which energised the small coil within the transponder. This in turn provided the current necessary for the transponder to transmit a uniquely coded signal such that the computer could identify me. In this way signals were transmitted between my body and the computer – the reverse transmission was also possible. In order to demonstrate the capabilities of an individual with a transponder implant:
- the door to my laboratory opened as I approached;
- the computer was aware of exactly what time I arrived in certain rooms and when I left;
- the corridor light came on automatically;
- a voice box in the entrance foyer of the cybernetics building welcomed my arrival each morning with ‘Hello Professor Warwick’.
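The building computer's response to a transponder read, as described above, can be sketched as follows. The tag ID, location names and action strings here are illustrative assumptions, not details recorded from the experiment.

```python
# Sketch of the building computer's response to a transponder read.
# Tag IDs, locations and actions are hypothetical, for illustration only.
KNOWN_TAGS = {0x4B57: "Professor Warwick"}

ACTIONS = {
    "laboratory": "open door",
    "corridor": "switch on light",
    "foyer": "greet",
}

def on_tag_read(tag_id, location, log):
    """Handle one uniquely coded reply from an energised transponder."""
    name = KNOWN_TAGS.get(tag_id)
    if name is None:
        return None                     # unknown tag: ignore
    log.append((location, name))        # record arrival time/place for monitoring
    action = ACTIONS.get(location)
    if action == "greet":
        return f"Hello {name}"          # voice box greeting
    return action
```

The same log that makes the doors and lights convenient is, of course, exactly the movement record that raises the ‘Big Brother’ issues discussed below.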
As far as we were concerned the experiment was successful, and hence the implant was removed nine days after its insertion.
One reason for carrying out the experiment was to take a look at some of the ‘Big Brother’ tracking and monitoring issues. In fact, as a one-off test, it was difficult for me to assess these. Personally I was quite happy with the implant in place: after all, doors were being opened and lights came on. It is therefore difficult to conclude anything firm with regard to the ‘Big Brother’ issues. If I had to make a judgement, however, it would be that, if we feel we are gaining from more monitoring, then we would probably go ahead and move into a ‘Big Brother’ world.
One surprise was that, mentally, I regarded the implant as being part of my body. Subsequently, I discovered that this feeling is shared by those who have artificial hips, heart pacemakers and transplanted organs. However, it was clear that the implant had only a limited functional use. The signals it transmitted were not affected by what was going on in my body and any signals sent from the computer to the implant did not affect my body in any way. To achieve anything along those lines we needed something a lot more sophisticated. Hence, after concluding the 1998 tests, we immediately set to work on a new implant experiment.
The 2002 experiment
On 14 March 2002, at the Radcliffe Infirmary, Oxford, an array of 100 silicon needle electrodes was surgically implanted into the median nerve fibres of my left arm. The array itself measured 4 mm × 4 mm, with each of the electrodes being 1.5 mm in length. The median nerve fascicle was estimated to be approximately 4 mm in diameter and hence the electrodes penetrated well into the fascicle. A first incision was made centrally over the median nerve at the wrist and this extended to 4 cm proximally. A second incision was made 16 cm proximal to the wrist, this incision itself extending proximally for 2 cm. By means of a tunnelling procedure, the two incisions were connected, ultimately by means of a run of open tubing. The array, with attached wires, was then fed down the tubing from the incision nearest the elbow to that by the wrist.
Once the array and wires had been successfully fed down the tubing, the tubing was removed, leaving the array sitting on top of the exposed median nerve at the point of the first (4 cm) incision. The wire bundle then ran up the inside of my arm to the second incision, at which point it linked to an electrical terminal pad which remained external to my arm. The array was then pneumatically inserted into the radial side of the median nerve under microscopic control, the result being that the electrodes penetrated well into the fascicle.
With the array in position, acting as a neural interface, it was possible to transmit neural signals directly from the peripheral nervous system to a computer, either by means of a hard wire connection to the terminal pad or through a radio transmitter attached to the pad. It was also possible to stimulate the nervous system, via the same route, sending current signals from the computer to the array in order to bring about artificial sensations (Warwick et al., 2003). By this means a variety of external devices could be operated successfully from neural signals, and feedback from such devices could be employed to stimulate the nervous system (Gasson et al., 2002).
The project was conducted in association with the National Spinal Injuries Centre at Stoke Mandeville Hospital, Aylesbury. One key aim was to see if the type of implant used could be helpful in allowing those with spinal injuries either to bring about movements otherwise impossible or, at least, to control technology which would, as a result, bring about a considerable improvement in lifestyle. In an extreme case the aim would be to implant the same device directly in the brain of a severely paralysed individual to enable them to control their local environment, to some extent, by means of neural signals – in popular terminology to perhaps switch on lights or drive their car, just by thinking about it. Our experiment of 2002 was therefore a first step in this direction, and in that sense provided an assessment of the technology.
The electrodes allowed neural signals to be detected from the small collection of axons around each electrode. As the majority of signals of interest, such as motor neural signals, occurred at frequencies below 3.5 kHz, low pass filters were used to remove the effects of high frequency extraneous noise. Distinct motor neural signals could be generated quite simply by making controlled finger movements. These signals were transmitted immediately to the computer, from where they could be employed to operate a variety of technological implements.
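A minimal sketch of the low-pass filtering step described above, assuming a single-pole filter and an illustrative 40 kHz sampling rate; the article specifies only the 3.5 kHz cutoff.

```python
import math

def low_pass(samples, cutoff_hz, sample_rate_hz):
    """Single-pole IIR low-pass filter: passes motor neural frequencies
    below the cutoff and attenuates higher-frequency extraneous noise.
    Filter order and sampling rate are illustrative assumptions."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # filter time constant
    dt = 1.0 / sample_rate_hz              # sample period
    alpha = dt / (rc + dt)                 # smoothing factor
    filtered, prev = [], samples[0]
    for x in samples:
        prev = prev + alpha * (x - prev)   # exponential smoothing step
        filtered.append(prev)
    return filtered
```

In practice the hardware filters would sit before digitisation, but the effect is the same: motor neural components below 3.5 kHz pass largely untouched while high-frequency noise is strongly attenuated.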
In experiments to ascertain suitable voltage/current relationships for stimulating the nervous system, it was found that currents below 80 µA had, in the first instance, little perceivable effect. Such thresholds are not fixed over time, however: the brain initially processes out unrecognised signals, then gradually learns to recognise stimulation signals more readily as it adapts to the input. In order to realise this current, voltages of 40 to 50 V were applied to the array electrodes. The exact voltage depended on the electrical resistance met by each individual electrode, which, because of the variability of the human body, was not strictly the same from day to day.
It was further found in stimulation experiments that currents above 100 µA had little extra effect, the stimulation switching mechanisms in the median nerve fascicle exhibiting a non-linear, thresholding characteristic. The current was, in each case, applied as a bi-phasic signal with 100 µs inter-signal break periods. This signal waveform in fact closely simulates the first harmonic of the motor neural signals recorded.
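The charge-balanced bi-phasic waveform just described can be sampled as below. The 100 µs break is from the experiment; the phase width, amplitude and sampling step are illustrative assumptions.

```python
def biphasic_pulse(amplitude_ua, phase_us, break_us=100, step_us=10):
    """Sample a charge-balanced bi-phasic current pulse: a positive
    phase, a 100 microsecond inter-signal break (as in the experiment),
    then a matching negative phase. Phase width, amplitude and the
    sampling step are illustrative, not values from the article."""
    positive = [amplitude_ua] * (phase_us // step_us)
    rest = [0] * (break_us // step_us)
    negative = [-amplitude_ua] * (phase_us // step_us)
    return positive + rest + negative
```

The equal and opposite phases mean the net charge injected into the nerve is zero, which is what makes such stimulation safe for repeated use.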
In the first stimulation tests, while I was wearing a blindfold, a mean correct identification of 70% was achieved. In simple terms this indicates that, without prior warning, I could successfully detect when a signal had been injected, and when not, seven times out of ten on average. This figure is though somewhat misleading, as it would usually take a few sets of tests to get my brain ‘into the mood’ for an experimentation session. Subsequently, after about an hour of inputting signals, my brain would appear to ‘get fed up’ and results would tail off. Hence experimental sessions usually lasted for an hour at most, with about one hour for alternative activities before the next session commenced. Results from the middle time period of a session were frequently a lot higher than the 70% average.
Towards the end of the entire 2002 implant experiment, which concluded with the implant’s extraction on 18 June 2002, a mean stimulation perception rate of over 95% was being achieved. Given the nature of the tests described in the previous paragraph, this meant that, to all intents and purposes, recognition of stimulation was by this time usually 100%. All sorts of side effects were liable to disrupt a pure 100% return, though, ranging from phantom signals to interference from local mobile phone texting and, in extreme cases, potential pickup from local radio stations.
The applications carried out were quite wide ranging (Gasson et al., 2002; Warwick, 2002) and included the bidirectional control of an articulated hand. The aim of the hand, known as the SNAVE hand, is to mimic the control mechanisms apparent in a human hand. Sensors in the fingertips allow for the grip shape to be adapted as well as for the applied force to be modified as necessary. In this way tension applied to an object can be adjusted to avoid slippage or to apply a force appropriate to the object being gripped.
In tests, during which I wore a blindfold, the articulated hand’s movements were controlled directly from signals taken from the implanted array, i.e. my motor neural signals. Further to this, sensory data were fed back via the implant and the grip force was recorded. The object of the exercise was for me, without any visual stimulus, to apply the lightest touch to an object, just sufficient for a very light grip. As more force was applied to an object, so the amount of neural stimulation was increased. Over the course of a two-week period, I learnt to judge accurately the force just sufficient to grip an object.
On 20 May 2002 I visited Columbia University, New York City, and an Internet link was set up between the implant in my arm, in New York, and the SNAVE hand, which was still back in Reading University in the UK. Signals from the neural implant in the USA were transmitted across the Internet to control the remote hand. Coupled with this, with myself wearing a blindfold, feedback information was sent from the UK to the implant to successfully stimulate my nervous system in a series of trials. A 100% signal recognition rate was achieved and the SNAVE hand was controlled adequately, despite the apparent delay in signal transmission.
Data taken from the neural implant were directly employed to control the movement of an electric wheelchair, by means of a simple sequential state machine. Neural signals were used to halt the machine at a point related to the chosen direction of travel – forwards, backwards, left and right. In the first instance, experiments involved selectively processing signals from several of the implant electrodes over time, in order to realise direction control.
With only a small amount of learning time (about one hour), reasonable drive control of the wheelchair was achieved. For this task, however, a short-range digital radio link was established between the implant and the wheelchair’s driver control mechanism. The radio transmitter/receiver unit was worn on my lower left arm, being housed in a lightweight gauntlet. Extensive trials were subsequently carried out around a fairly cluttered outdoor environment, with considerable success.
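One way to realise the sequential state machine described above is as a scanning selector: the controller cycles through the four directions in turn, and a detected neural pulse halts the scan at the direction current at that moment. The dwell time per direction and the pulse encoding here are assumptions for illustration.

```python
from itertools import cycle

DIRECTIONS = ["forwards", "backwards", "left", "right"]

def select_direction(neural_pulses, dwell_steps=5):
    """Scan through the travel directions; a detected neural pulse halts
    the scan and selects the current direction. neural_pulses is one
    boolean per time step (pulse detected or not). The dwell time per
    direction is an illustrative assumption."""
    scan = cycle(DIRECTIONS)
    current = next(scan)
    for step, pulse in enumerate(neural_pulses, start=1):
        if pulse:
            return current            # halt the scan: direction chosen
        if step % dwell_steps == 0:
            current = next(scan)      # dwell expired: offer next direction
    return None                       # no selection made
```

A scanning selector of this kind needs only a single reliably detectable neural event, which is why only a small amount of learning time was required.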
Another application was the use of neural stimulation to feed in extrasensory input. Two ultrasonic sensors were positioned on the peak of a baseball cap. The output from these sensors was fed down to the gauntlet, to bring about direct neural stimulation. When an object was positioned adjacent to the sensors, the rate of stimulation was high. As the distance between the object and the sensors increased, the rate of stimulation was reduced in a linear fashion with regard to distance. In this way I was able to obtain a highly accurate ultrasonic sense of distance.
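The mapping from ultrasonic range to stimulation rate can be sketched as below. The linear, inverse shape is from the text; the maximum range and the rate limits are illustrative assumptions.

```python
def stimulation_rate_hz(distance_m, max_range_m=3.0,
                        min_rate=1.0, max_rate=50.0):
    """Map ultrasonic distance to a neural stimulation rate: a nearby
    object gives a high rate, falling off linearly with distance, as in
    the experiment. The numerical range and rate limits are illustrative
    assumptions, not values from the article."""
    d = min(max(distance_m, 0.0), max_range_m)   # clip to sensor range
    return max_rate - (max_rate - min_rate) * (d / max_range_m)
```

Because the pulse rate varies smoothly and monotonically with range, the brain has a single, consistent quantity to learn, which may help explain how quickly the new sense was acquired.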
Tests were carried out in a normal laboratory environment and, with a blindfold on, I was readily able to navigate around objects in the laboratory. My personal, albeit one-off, experience was that my brain adapted very quickly, within a matter of minutes, to the new sensory information it was receiving. The pulses of current being witnessed were clearly directly linked to the distance of a nearby object. Further, when an object was rapidly brought into my ultrasonic ‘line of sight’ an ‘automatic’ recoil type response was witnessed, causing my body to back away from what could have been a dangerous situation.
The final experiment of scientific note involved the assistance of my wife, Irena. She had two electrodes inserted into her median nerve in, roughly speaking, the same location as my own implant, a process referred to as micro-neurography. Via one of the electrodes in particular, motor neural signal responses could be witnessed. The output from the electrodes was then directly linked to a computer. In tests, signals generated by my wife’s nervous system were transmitted through the computer in order to stimulate my own nervous system, with the process also being initiated in the reverse direction. Effectively we had brought about a direct electrical connection between the nervous systems of two individuals. This link we then employed to send motor neural signals directly from person to person. So if Irena generated three such signals, I witnessed three signal stimulations on my own nervous system and vice versa. In this way we had successfully achieved a simple radio telegraphic signalling system between our nervous systems. Clearly, with implants positioned not in the peripheral nervous system but directly in the motor neural brain region, the same type of signalling could be regarded as the first, albeit rudimentary, steps in thought communication.
Conclusions so far
The range of applications carried out with the 2002 implant, a full description of which is given in Warwick, 2002, gives rise to a number of implications. With implants subsequently positioned in the motor neural brain region it means we can look forward to a variety of technological control schemes purely initiated by thought. For those who are paralysed, this should open up a new world, with them being able to switch on lights, make the coffee and even drive a car – just by thinking. Extrasensory input, such as the ultrasonics employed already, could also provide an alternative sense for those who are blind.
Issues of infection and rejection were high on the agenda during the experimental period. It can be reported that at no time was any sign of infection witnessed. As regards rejection of the implant, results were far more encouraging than could initially have been hoped for. When the implant was removed, 96 days after implantation, no signs of rejection were observed. Indeed fibrous scar tissue had grown around the implant itself, firmly pulling it towards the median nerve bundle. It appeared that the implant had neither lifted nor tilted from the nerve trunk and the electrodes were still embedded.
One negative aspect of the trial was the gradual loss of electrodes, most likely due to mechanical wire breakdown, at the point of exit from my arm. By the end of the 96-day study only three of the electrodes remained functional, all others having become open-circuit. Post-extraction examination indicated that the electrodes themselves appeared to be still intact and serviceable. However, the gradual decline in the number of channels still functioning was one of the main reasons that the experiment was brought to an end. Clearly, for long-term implantation, the mechanical design aspects will need to be looked at in detail.
Our research in this area has now been refocused towards a potential brain implant, possibly in the motor neural area. However, many decisions need to be taken in the meantime as to the exact positioning of implanted electrodes, the number and type of electrodes to be implanted and the extent of signals to be investigated. High on the list of experiments to be carried out is a series of tests involving thought communication. Necessarily this will involve implanting at least one individual other than myself, which may present ethical difficulties.
The whole programme, in fact, presents something of an ethical dilemma. Very few would argue against the development of implants to help those who are paralysed to control their environment, including some aspects of their own bodily functions. Alternative senses for those who are blind or deaf would also be seen by most to be a good cause. The use of such technology to upgrade humans, turning them into cyborgs, presents a much more difficult problem. Who gets an implant and who doesn’t? Who controls their use? Indeed should humans be allowed to upgrade their capabilities and become ‘super humans’?
Humans now have the potential to control another aspect of their own destiny. It will be interesting to see how quickly and easily this will be brought about. I, for one, will be at the front of the queue.
- N. K. Hayles, How We Became Posthuman, University of Chicago Press, 1999.
- K. Warwick, QI: The Quest for Intelligence, Piatkus, 2001.
- P. Cochrane, Tips for the Time Traveller, Orion Business Books, 1997.
- K. Warwick, M. Gasson, B. Hutt, I. Goodhew, P. Kyberd, B. Andrews, P. Teddy and A. Shad, ‘The application of implant technology for cybernetic systems’, Archives of Neurology, to be published 2003.
- M. Gasson, B. Hutt, I. Goodhew, P. Kyberd and K. Warwick, ‘Bidirectional human machine interface via direct neural connection’, Proc. IEEE International Workshop on Robot and Human Interactive Communication, Berlin, pp. 265–270, Sept. 2002.
- K. Warwick, I, Cyborg, Century, 2002.
Professor Kevin Warwick
University of Reading
Kevin Warwick is Professor of Cybernetics at the University of Reading, UK, where he carries out research in artificial intelligence, control, robotics and cyborgs. He is also Director of the University TTI Centre, which links the university with SMEs and raises over £2 million each year in research income. Following a PhD and research post at Imperial College, London, he held positions at Oxford, Newcastle and Warwick universities before being offered the Chair at Reading. Professor Warwick has been awarded higher doctorates both by Imperial College and by the Czech Academy of Sciences, Prague. He was presented with The Future of Health Technology Award at MIT and was made an Honorary Member of the Academy of Sciences, St Petersburg. In 2000 he presented the Royal Institution Christmas Lectures, entitled ‘The Rise of the Robots’. His recent autobiography I, Cyborg gives a full picture of his life, cyborg research and aim to link his own brain to a computer.