
Brain to Computer Communication: Ethical Perspectives on Interaction Models

  • Original Paper
  • Published in: Neuroethics

Abstract

Brain-Computer Interfaces (BCIs) enable one to control peripheral ICT and robotic devices by processing brain activity on-line. The potential usefulness of BCI systems, initially demonstrated in rehabilitation medicine, is now being explored in education, entertainment, intensive workflow monitoring, security, and training. Ethical issues arising in connection with these investigations are triaged by taking into account the technological imminence and pervasiveness of BCI technologies. By focusing on imminent technological developments, ethical reflection is grounded in realistic protocols of brain-to-computer communication. In particular, it is argued that human-machine adaptation and shared control distinctively shape autonomy and responsibility issues in current BCI interaction environments. Novel personhood issues are identified and analyzed as well. These notably concern (i) the “sub-personal” use of human beings in BCI-enabled cooperative problem solving, and (ii) the pro-active protection of personal identity which BCI rehabilitation therapies may afford, in the light of so-called motor theories of thinking, for the benefit of patients affected by severe motor disabilities.


Notes

  1. General motives for avoiding purely speculative approaches in technoethics, that is, ethical reflection on technological scenarios that are beyond the reach of technological foresight, are discussed in [32] as far as neuroethics is concerned, and in [33] as far as nanoethics is concerned.

  2. Imminence is regarded as a chief dimension for neuro-ethical triage in [32].

  3. Accordingly, the broad philosophical context of transhumanism - in the framework of which issues of cyborg identity, rights, and responsibilities are often examined [34] - is hardly relevant here.

  4. The distinction between personal and sub-personal levels of explanation in psychology was introduced in [25]. “It is only on the personal level that explanations proceed in terms of the needs, desires, intentions and beliefs of an actor in the environment.”[25], p.164. In connection with the explanation of pain states, Dennett remarks: “Since the introduction of unanalysable mental qualities leads to a premature end to explanation, we may decide that such introduction is wrong, and look for alternative modes of explanation. If we do this we must abandon the explanatory level of people and their sensations and activities and turn to the sub-personal level of brains and events in the nervous system.” [25], p. 93, emphasis mine. For a more recent analysis of this distinction, see [26].

  5. So-called input BCIs, which are chiefly based on invasive transduction technologies, fall outside the scope of this paper. Input BCIs establish computer-to-brain communication by collecting, processing, and transmitting to the brain signals that are produced by a source external to the human body.

  6. See [5] for a more detailed description of these functional components.

  7. For a more general discussion of this epistemic issue in connection with the justification of inductive inference, see [35], and in connection with ethical issues concerning learning robots, see [36].

  8. The possibility of deploying on-line learning methods to deal with BCI learning problems is analyzed in [15].

  9. For an introduction to individualistic and relational conceptions of autonomy, see [37]. The promotion of autonomy of locked-in patients afforded by BCI systems is appropriately emphasized in [11], pp. 127–129. On more general grounds, however, one should carefully note, as Hansson does, that “subordination to technology will probably become an increasingly serious problem as enabling technology is developed that exhibits more and more intelligent behavior” [38, p. 264].

  10. The Charter of Fundamental Rights of the European Union, art. 26, states: “The Union recognizes and respects the right of persons with disabilities to benefit from measures designed to ensure their independence, social and occupational integration and participation in the life of the community”.

  11. It is not clear, however, that the more appropriate liability ascription policies for brain-actuated robots in the near future will be those based on economically oriented criteria, in view of the free exchange of technological resources which has become standard practice within the BCI research community: “The non-invasive BCI community overcame commercial temptations with the BCI 2000 website allowing laboratories worldwide access to the necessary hard- and software.” [12, p. 482].

  12. This issue is examined in the light of the distinction between negative and positive rights in [11]. Moreover, Fenton and Alpert discuss possible enhancement effects in LIS subjects deriving from the use of BCI systems in the light of extended mind theories in the philosophy of mind, according to which BCI-controlled peripheral devices may enable one to augment neural structures for cognitive processing [11, p. 127].

  13. http://www.darpa.mil/dso/thrusts/trainhu/nia/index.htm (site visited on February 12, 2009).

  14. The authors of this study go as far as claiming that “...the presence of reproducible and task-dependent responses to command without the need for any practice or training suggests a method by which some non-communicative patients, including those diagnosed as vegetative,... may be able to use their residual cognitive capabilities to communicate their thoughts to those around them by modulating their own neural activity.” [29] p. 1402.

  15. See, for discussion, [39], pp. 249–271 and, more specifically in connection with BCI systems, [40].

  16. Rather than as a paralyzing maxim which uniformly blocks the use of technologies if one cannot exclude undesirable consequences with absolute scientific certainty. This construal of the precautionary principle is arguably incoherent, insofar as it is oblivious to the fact that scientific theories and models are inherently fallible.

  17. Early ethical reflection on affective computing is found in [41].

  18. This scenario reminds one of the psychoanalytic variation of the know thyself maxim that Freud set out as a main goal for psychoanalytic interactions, that is, “…to strengthen the ego, to make it more independent of the superego, to widen its field of perception and enlarge its organization, so that it can appropriate fresh portions of the id. Where id was, there ego shall be. It is a work of culture - not unlike the draining of the Zuider Zee.” [42, p. 80 of the English translation].

References

  1. Birbaumer, N., N. Ghanayim, T. Hinterberger, B. Kotchoubey, A. Kuebler, J. Perelmouter, E. Taub, and H. Flor. 1999. A spelling device for the paralyzed. Nature 398:297–298.

  2. Hochberg, L.R., M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh, A.H. Caplan, A. Branner, D. Chen, R.D. Penn, and J.P. Donoghue. 2006. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442:164–171.

  3. Birbaumer, N. 2006a. Breaking the silence: Brain-computer interfaces for communication and motor control. Psychophysiology 43:517–532.

  4. Wolpaw, J.R., N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan. 2002. Brain-computer interfaces for communication and control. Clinical Neurophysiology 113:767–791.

  5. Millán, J. del R., F. Renkens, J. Mouriño, and W. Gerstner. 2004. Brain-actuated interaction. Artificial Intelligence 159:241–259.

  6. Galán, F., M. Nuttin, E. Lew, P.W. Ferrez, G. Vanacker, J. Philips, and J. del. R. Millán. 2008. A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots. Clinical Neurophysiology 119:2159–2169.

  7. Friedman, D., R. Leeb, L. Dikovsky, M. Reiner, G. Pfurtscheller, and M. Slater. 2007. Controlling a virtual body by thought in a highly immersive virtual environment. In Proceedings of GRAPP 2007, Barcelona, Spain, 83–90.

  8. Nijholt, A., D. Tan, B. Allison, J. del R. Millán, and B. Graimann. 2008. Brain-computer interfaces for HCI and games. In Proceedings of CHI08, 3225–3228. ACM.

  9. Gerson, A.D., L.C. Parra, and P. Sajda. 2006. Cortically coupled computer vision for rapid image search. IEEE Transactions on Neural Systems and Rehabilitation Engineering 14(2):174–179.

  10. Yahud, S., and N.A. Abu Osman. 2007. Prosthetic hand for the brain-computer interface system. IFMBE Proceedings 15:643–646. Springer, Berlin.

  11. Fenton, A., and S. Alpert. 2008. Extending our view on using BCIs for locked-in syndrome. Neuroethics 1:119–132.

  12. Birbaumer, N. 2006b. Brain-computer interface research: Coming of age. Clinical Neurophysiology 117:479–483.

  13. Buxton, R.B. 2002. An introduction to functional magnetic resonance imaging: Principles and techniques. Cambridge: Cambridge University Press.

  14. Linderman, M.D., G. Santhanam, C.T. Kemere, V. Gilja, S. O’Driscoll, B.M. Yu, A. Afshar, S.I. Ryu, K.V. Shenoy, and T.H. Meng. 2008. Signal processing challenges for neural prostheses: A review of state-of-the-art systems. IEEE Signal Processing Magazine 18.

  15. Millán, J. del R. 2004. On the need for on-line learning in brain-computer interfaces. In Proceedings of the International Joint Conference on Neural Networks.

  16. Vapnik, V. 2000. The nature of statistical learning theory. 2nd ed. New York: Springer.

  17. Reath, A. 1999. Autonomy, ethical. In Routledge encyclopedia of philosophy, ed. E. Craig. London: Routledge.

  18. MacKay, D.J.C. 2003. Information theory, inference, and learning algorithms. Cambridge: Cambridge University Press.

  19. Arkin, R. 1998. Behavior-based robotics. Cambridge: MIT Press.

  20. Nehmzow, U. 2006. Scientific methods in mobile robotics. London: Springer.

  21. Matthias, A. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6:175–183.

  22. Miall, R.C., and D.M. Wolpert. 1996. Forward models for physiological motor control. Neural Networks 9:1265–1279.

  23. Kawato, M. 1999. Internal models for motor control and trajectory planning. Current Opinion in Neurobiology 9:718–727.

  24. Bufalari, S., F. Cincotti, F. Babiloni, L. Giuliani, M.G. Marciani, and D. Mattia. 2007. EEG patterns during motor imagery based volitional control of a brain computer interface. International Journal of Bioelectromagnetism 9:214–219.

  25. Dennett, D. 1969. Content and consciousness. London: Routledge & Kegan Paul.

  26. Hornsby, J. 2000. Personal and Sub-Personal: A Defence of Dennett’s Original Distinction. In New Essays on Psychological Explanation, Special Issue of Philosophical Explorations, eds. M. Elton, and J. Bermudez, 6–24.

  27. Kanizsa, G. 1955. Margini quasi-percettivi in campi con stimolazione omogenea. Rivista di Psicologia 49:7–30.

  28. Philiastides, M.G., and P. Sajda. 2006. Temporal characterization of the neural correlates of perceptual decision making in the human brain. Cerebral Cortex 16:509–518.

  29. Owen, A.M., M.R. Coleman, M. Boly, M.H. Davis, S. Laureys, and J.D. Pickard. 2006. Detecting awareness in the vegetative state. Science 313:1402.

  30. Kant, I. 1983. Grounding for the Metaphysics of Morals. In Kant’s Ethical Philosophy, ed. J.W. Ellington. Indianapolis: Hackett.

  31. Millán, J. del R. 2007. Tapping the mind or resonating minds? In European visions for the knowledge age, a quest for new horizon in the information society, ed. P.T. Kidd, 125–132. Macclesfield: Cheshire Henbury.

  32. Farah, M.J. 2002. Emerging ethical issues in neuroscience. Nature Neuroscience 5:1123–1129.

  33. Nordmann, A. 2007. If and then: A critique of speculative nanoethics. Nanoethics 1:31–46.

  34. Warwick, K. 2003. Cyborg morals, cyborg values, cyborg ethics. Ethics and Information Technology 5:131–137.

  35. Tamburrini, G. 2006. Artificial intelligence and Popper’s solution to the problem of induction. In Karl Popper: A centenary assessment. Metaphysics and epistemology, vol. 2, eds. I. Jarvie, K. Milford, and D. Miller, 265–284. London: Ashgate.

  36. Santoro, M., D. Marino, and G. Tamburrini. 2008. Robots interacting with humans. From epistemic risk to responsibility. Artificial Intelligence and Society 22:301–314.

  37. Christman, J. 2003. Autonomy in moral and political philosophy. Stanford encyclopedia of philosophy, http://plato.stanford.edu/entries/autonomy-moral/

  38. Hansson, S.O. 2007. The ethics of enabling technology. Cambridge Quarterly of Healthcare Ethics 16:257–267.

  39. Merkel, R., G. Boer, J. Fegert, T. Galert, D. Hartmann, B. Nuttin, and S. Rosahl. 2007. Intervening in the brain: Changing psyche and society. Berlin: Springer.

  40. Lucivero, F., and G. Tamburrini. 2008. Ethical monitoring of brain-machine interfaces: A note on personal identity and autonomy. AI and Society 22:449–460.

  41. Reynolds, C., and R.W. Picard. 2004. Affective sensors, privacy, and ethical contracts. In Proceedings of CHI04, 1103–1106. ACM.

  42. Freud, S. 1933. New introductory lectures on psycho-analysis. The standard edition of the complete psychological works of Sigmund Freud, vol. 22, 1–182. London: Hogarth.

Acknowledgments

I wish to thank an anonymous reviewer, Giuseppe Trautteur, Federica Lucivero, and Giovanni Boniolo for helpful and stimulating comments. I benefited from discussions on BCI systems and ethics with Febo Cincotti, Edoardo Datteri, José del R. Millán, Donatella Mattia, Stefano Rodotà, and Matteo Santoro.

Author information

Correspondence to Guglielmo Tamburrini.

Cite this article

Tamburrini, G. Brain to Computer Communication: Ethical Perspectives on Interaction Models. Neuroethics 2, 137–149 (2009). https://doi.org/10.1007/s12152-009-9040-1

