Aug 02, 2011

In another blog post, Victor Pikov raised provocative points that speak to the iterative integration of neurotechnology (if not technology in general) into the fabric of human life and being. In this light, I think that we need to view, with renewed interest and vigor, Manfred Clynes and Nathan Kline’s conceptualization of the “cyborg” as a multi-step process. As humans, we use tools to gain knowledge and to exercise the power of such know-how over our environment and condition. Technology provides both investigative and articulative tools that allow us to know and to do at increasing levels of sophistication, complexity and capability. Indeed, our current and future state might be seen as that of Homo sapiens technologicus (one aspect of which is Pikov’s somewhat tongue-in-cheek “twittericus”).

 

I agree with these perspectives, and offer that we are seeing the human-in-transition: a form of “trans-humanism” that is defined by, and reliant upon, technology and a technologically enabled worldview in the evolution and development of our species. This is evidenced by our rapid, technologically enabled access to unprecedented amounts of information, the increasing integration of technology into human embodiment, our technologically mediated engagement with each other, and our capabilities for manipulating and controlling our external and internal environments. As Victor Pikov notes, in this way we are poised before a horizon of possibility and of potential problems.

 

Yet, any progression into and through a new era will incur individual and social attitudinal changes in relation to the capabilities and effects offered by new science and/or technology, and the effects of, and our relationship to (and through), neural interfacing will be no different. It is interesting to speculate on how the cyborgization of Homo technologicus will occur, and I wonder how we as individuals, communities and a species will direct and handle such change. A “one-size-fits-all” approach to the employment of any neurotechnology – be it diagnostic or interventional – is at the very least pragmatically inefficient, and at worst, inapt on both technical and ethical grounds.

While we might skirt some (but not all) of these technical issues when dealing with certain forms of neuroimaging (such as fMRI and DTI), the possibility of runaway (a.k.a. Wexelblatt) effects – that is, unanticipated consequences – incurred by interventional neurotechnologies looms large, and the ethico-legal and social issues become all the more prominent with increasing use of any neurotechnology in the public sphere.

I believe that the issue boils down to an intersection of two major unknowns: the first is the persistent uncertainty of the so-called “hard questions” of neuroscience (namely, how consciousness/mind originates in/from brain), and the second is how any neurotechnology can and does affect the nervous system. These uncertainties are not mutually exclusive – the tools-to-theory heuristics of neuroscience are sustained by the use of neurotechnology to forge an ever-deepening understanding of the structure and function of the brain, and theory-to-tool heuristics enable the development of successively more complicated and capable neurotechnologies to assess, access and control neural functions. Yet, navigating what and how technologies can be used, versus what technologies should be used, in whom, and in which ways, requires stringency in the guidelines and policies that direct neurotechnological research and its application(s).

 

As Don DuRousseau and I have recently noted, this may be increasingly important given a pervasive “market mindset” that has fostered more widespread use of neurotechnologies, and a tendency to side-step evidence-based, pragmatically grounded approaches in favor of lore rather than the most currently validated science. Clearly, further research, development, testing and evaluation (RDTE) of various neurotechnologies is required to more fully define 1) what constitutes evidence-based versus non-evidence-based claims; and 2) the capabilities, limitations – and potential risks – of employing various neurotechnologies in both clinical and non-clinical settings.

 

We have called for a uniform and enforced screening mechanism for all neurotechnology product developers to ascertain whether their products may pose risks to the general public, and for regulation of the industry, as well as of the clinical and public use of these technologies and devices (see: Giordano J, DuRousseau D. Use of brain-machine interfacing neurotechnologies: Ethical issues and implications for guidelines and policy. Cog Technol 2011; 15(2): 5-10).

 

But it is important to note that the field (and use) of neurotechnology is evolving, and with this evolution comes the development of new techniques, knowledge and capabilities. So, perhaps what is required is an “evo-devo” orientation not only to the ways that neurotechnology can affect the human condition, but also to the ongoing development and use of the technology itself. As more data become available, prescriptions and proscriptions regarding the use(s) of particular neurotechnologies should be re-examined, re-assessed, and altered as necessary, consistent with the spirit and dictates of science to remain self-reflective, self-critical and self-revising. To do otherwise would be anachronistic, if not downright de-evolutionary.
