Sep 13, 2011
 

As Victor Pikov astutely noted in an earlier blog post, there are robust efforts to increase neurotechnology research and development (R&D), and production in China. This is not incidental; neurotechnology affords considerable capability and potential to improve quality of life – both directly and indirectly. Directly, it enhances medical care and human performance: one need only think of the ability to assess, discern and better diagnose neurological disorders using neurogenetics, neuroproteomics, and various forms of neuroimaging, and of the therapeutics made possible through selective neurotropic drugs, peripheral and central neural stimulating devices, transcranial magnetic and deep brain stimulation, and neuroprosthetics. Indirectly, the benefits of neurotechnology are financial, accruing to neurotech companies and to the national economies that profit from their revenues.

 

Therefore, it becomes important to consider how neurotechnology could be used to leverage economic – and socio-political – influence on the world stage. The old adage that “the one who controls the chips controls the game” is metaphorically appropriate in that efficient production of neurotechnologies can foster a presence in worldwide biotech markets, and the use of neurotechnologic devices in China (for example, conducting neuroscientific and neurotechnological research in Chinese medical institutes) can be attractive to global investment partners, due to the frequently reduced costs and time required to execute such studies. And given that much of the microcomputational circuitry used in neuroS&T (neuroscience and technology), irrespective of where it is made, is being increasingly produced in China, the adage may have literal validity, as well.

 

This steadily growing prominence of non-Western nations in the field of neuroS&T gives rise to a number of important considerations and concerns. First, we are witnessing a shift in global economics, influence, and capabilities, and neurotechnology is a factor in the current and future re-balancing of this power equation. It’s no longer simply a case of “…the West and the rest”, but rather that non-Western countries such as China are becoming a scientific, technological and economic force to be reckoned with.

 

Second, the needs, desires, ideals and practices of Western societies may not be relevant or applicable to the ways that enterprises such as neuroS&T research, development, testing and evaluation (RDTE) and use are viewed and conducted in non-Western nations. This generates “who’s right?” scenarios that involve issues of whether and how the values and practices of a particular group of people can and should be regarded and responded to – a point raised by philosopher Alasdair MacIntyre and recently addressed by Alan Petersen of Monash University in Australia. For example, should a stance of “when in Rome, do as the Romans do” be adopted, and if so, does this mean the employment of certain guidelines and regulations in the country that is involved in neurotech research and product development, and different guidelines and regulations for each and every country that utilizes such neuroS&T? Or could some uniform codes of research and use be viable in any and all situations – and if so, how might these codes be developed and articulated?

 

Third, technological and economic capabilities engender “cred and clout” at international bargaining tables, and so the social and professional values of those countries that are gaining and sustaining momentum in neurotechnological research and production will become ever more prominent, important, and therefore necessary to acknowledge.

 

Working in our group, Misti Andersen and Nick Fitz are studying these issues, and together with Daniel Howlader, are addressing how various philosophies and ethics inform national neurotechnology policies (in the USA, EU, and Asian nations, including China).  Collaborating with social theorist Roland Benedikter of Stanford University, we are examining how the shifting architectonics of biotechnological capability are affecting the philosophical and ethical Zeitgeist that characterizes the “new global shift” and its manifest effects in healthcare, public life and national security on the world stage.

 

These issues span from the scientific to the social, in that neuroscience can be employed to explore, define, and manipulate human nature, conduct, and norms, and neurotechnology provides the tool kit for neuroscientific research and its uses (or misuses). Moreover, not every country that is dedicating efforts to neuroS&T maintains the same ethical standards for research and/or use that have become de rigueur in the West. How shall we engage those countries that do not strictly adhere to the Nuremberg Code or the Declarations of Geneva and Helsinki, yet generate products and devices capable of affecting the human predicament or condition (e.g.- by providing state-of-the-art treatments for neurological and psychiatric disorders, or performance enhancement), and in this way acquire significant economic power in global markets? Should we adopt some form of moral interventionism that would seek to enforce particular Western ethical standards upon the conduct of non-Western neurotechnological R&D, or do we posture toward more of an isolationist stance? And in the event, how would we then maneuver our neurotechnological R&D to retain a viable presence on the global technological and economic map?

 

In this blog and elsewhere, I’ve claimed that it is exactly this scientific-to-social span of neurotechnological effect that necessitates programs dedicated to the ethical, legal and social issues inherent to neuroS&T. But, as I mentioned in my earlier blog post, if neuroethics is to be globally relevant, then it must be sensitive to pluralist values, and can neither become an implicit form of neuroscientific and technological imperialism nor succumb to ethical laissez faire.

 

A complete discussion of my take on the fundamental premises and precepts of the discipline and practice(s) of neuroethics is beyond the scope of this blog. But one key point I believe is important to emphasize is that neuroethics must be grounded in a bio-psychosocial framework that recognizes the interaction and reciprocity of biology and the socio-cultural environment.

 

Culture is both a medium in which bio-psychosocial (e.g.- genetic, phenotypic, and environmental) variables are generated, and a forum that defines how such variables may be expressed. So, while our species certainly has a host of common biological features, we also differ – and these differences occur both as a consequence of cultural factors and as contributions to socio-culturally patterned variability in cognition and behavior.

 

The “take home” message here is that our biological, psychological and social aspects manifest both commonalities and differences, and any meaningful ethics would need to take these factors into account. Philosopher Bernard Gert’s concept of “common morality” may be viable to some extent, but ethical values and systems also manifest distinctions in standpoint, and therefore ethics would need to at least acknowledge, if not frankly recognize, these distinctions in perspective in a discursive way. This brings us back to MacIntyre’s question of “which rationality” should be used in approaching ethical issues and resolving ethical questions.

 

Perhaps it’s not so much a question of “either one form of rationality or another”, but rather more a position of “both/and” in these situations. If neuroethics is to authentically represent a naturalistic orientation to human cognition, emotion and behaviors, then I think that it’s vital to appreciate the ways that bio-psychosocial (viz.- cultural) differences are manifest, and in this appreciation, adopt an ethical approach that is more dialectical.  Thus, I’ve called for a cosmopolitan neuroethics that seeks to bring differing viewpoints to the discourse, and is not necessarily wedded to a particular theory or system, but instead is open to all, relative to the circumstances, benefits, burdens and harms that are in play and at stake.

 

Now, you might be thinking, “Isn’t cosmopolitan ethics a particular theory or system?” and to some extent you’d be right; but before we write off the term and concept as self-contradictory (i.e. an antinomy, something that cannot be “a” and at the same time claim “b”), let’s regard it more as a “way” of doing ethics that seeks complementarity in perspective, orientation and approach, so as to enable a richer, more complete discourse from which to foster synthetic solutions. This would allow us to move away from a “West and the rest” position toward more of a naturalist view of the human and the human condition, one that would be open to differing views and values, and would seek to define core concepts that could be employed in specific ethical situations and deliberations.

 

Neurotechnology can and likely will affect biological, socio-cultural, economic and political realities in numerous ways, and if we are to develop well-informed, ethically sound guidelines and policies that are best-suited to the complexity of these circumstances, then the need for an inclusive, cosmopolitan neuroethics becomes apparent. The really hard part is making it work.

Aug 16, 2011
 

In a recent piece in the journal Science and in a longer paper posted on the MIT website, Phillip A. Sharp and Robert Langer have spoken to the need for, and trend toward, convergence in biomedical science. As these prominent researchers note, convergence “emerges” as the foci and activities of several disciplines fuse so that the sum of their research and outcomes is greater than that of the constituent parts. Such convergence is occurring among the disciplines that create, employ, and constitute the “field” of neurotechnology – and so we witness a merging of physics, chemistry, nanoscience, cyberscience and engineering, and the engagement of genetics, anatomy, pharmacology, physiology and cognitive psychology, in ways that biologist E.O. Wilson might describe as “consilient.”

 

To be sure, this fosters and necessitates the “multilingual,” “convergence creole” capabilities of terminology, discourse, and knowledge- and resource-interdigitation that Sharp and Langer describe. I agree – a common language and working construct of convergence is vital if we are to realistically operationalize the de-siloing of the disciplines that could develop and employ neurotechnology, so as to maximize opportunities to define and solve novel problems in basic and translational biomedicine, and more broadly in the public sphere. That’s because this process is not merely a technical sharing; rather, it represents a synthetic mind-set that explicitly seeks to foster innovative use of knowledge-, skill-, and tool-sets toward (1) elucidating the nature and potential mechanisms of scientific questions and problems; (2) de-limiting existing approaches to question/problem resolution; and (3) developing novel means of addressing and solving such issues.

 

I posit that in this way, convergence enables concomitant “tools-to-theory” and “theory-to-tools” heuristics, and the translation of both heuristics and tools to practice. This is important because the current utility of many neurotechnologies is constrained by factors including (1) a lack of specificity of action and effect (e.g. transcranial and/or direct magnetic stimulation), (2) size restrictions and cumbersome configurations of micro- and macroscale devices, and (3) difficulties in matching certain types of neurologic data (e.g. from neuroimaging or neurogenetic studies) to databases that are large enough to enable statistically relevant and meaningful comparative and/or normative inferences. So the fusion of neuro-nano-geno-cyber science and technologies can be seen as an enabling paradigm for de-limiting current uses and utility, and fostering new directions and opportunities for use and applicability.
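
To make the third of these constraints concrete, here is a minimal sketch (in Python, with hypothetical variable names and invented illustrative numbers, not data from any actual study) of a normative comparison: a single patient's neuroimaging-derived metric is scored against a reference database, and the confidence interval around the normative mean shows why a small database weakens any comparative or normative inference.

```python
import numpy as np

def normative_comparison(patient_value, reference_sample, confidence_z=1.96):
    """Compare one patient's metric against a normative reference sample.

    Returns the patient's z-score relative to the reference distribution
    and the confidence interval around the estimated normative mean.
    A small reference database yields a wide interval, which weakens
    any claim that the patient deviates from the "norm".
    """
    ref = np.asarray(reference_sample, dtype=float)
    mean, sd = ref.mean(), ref.std(ddof=1)
    z_score = (patient_value - mean) / sd
    standard_error = sd / np.sqrt(len(ref))
    ci = (mean - confidence_z * standard_error,
          mean + confidence_z * standard_error)
    return z_score, ci

# Illustrative (invented) numbers: the same patient value scored against
# a small and a large normative database of a hypothetical cortical metric.
rng = np.random.default_rng(0)
small_db = rng.normal(loc=2.5, scale=0.4, size=30)     # 30 reference subjects
large_db = rng.normal(loc=2.5, scale=0.4, size=3000)   # 3000 reference subjects

for name, db in [("small database", small_db), ("large database", large_db)]:
    z, (lo, hi) = normative_comparison(patient_value=3.1, reference_sample=db)
    print(f"{name}: z = {z:+.2f}, 95% CI of normative mean = ({lo:.2f}, {hi:.2f})")
```

The patient's z-score is roughly the same in both cases; what changes with database size is how confidently the normative mean itself is estimated, and hence how much weight the comparison can bear.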

 

Once silos are dissolved, limitations can be diminished or removed, but so too may be the ability to recognize relative limits upon the pace and extent of scientific discovery, and the use of its knowledge and products. As I’ve previously mentioned in this blog and elsewhere, the result may be that we then encounter effects, burdens, and harms that were as yet unknown, and/or unforeseen. There is real risk that the pace, breadth and depth of neuroscientific and technological capability may outstrip that of the ethical deliberations that could most genuinely evaluate its social impact, and in response, appropriately direct such innovation and steer its use.

 

What is needed is a systematic method of, and forum for, inquiry into what the convergence approach in neuroS&T (neuroscience and technology) will and might yield, and how its outcomes and products may change the values and conduct of science and society. Appropriate questions for such inquiry would include: (1) how convergence approaches can be employed in neuroscience; (2) what practical and ethical issues, concerns, and problems might arise as a consequence; and (3) what systems of risk analysis and mitigation might be required to meet these challenges and guide the employment of neuroS&T. Given the power of convergent science to affect the speed and scope of neuroscientific discovery and neurotechnological innovation, I argue that such an approach to the ethical, legal and social issues (ELSI) is needed now, not after the fact.
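
As a purely illustrative sketch of what the simplest such "system of risk analysis and mitigation" could look like (the scales, example concerns, and scores below are my own assumptions, not an established framework), consider a likelihood-by-severity screen used to triage candidate ELSI concerns for deliberation:

```python
from dataclasses import dataclass

@dataclass
class ELSIConcern:
    """A candidate ethical, legal, or social concern raised by a neuroS&T use."""
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain) -- assumed ordinal scale
    severity: int     # 1 (negligible) .. 5 (catastrophic) -- assumed ordinal scale

    @property
    def risk_score(self) -> int:
        # Classic likelihood x severity screen; the scale is illustrative only.
        return self.likelihood * self.severity

def triage(concerns):
    """Sort concerns so the highest-scoring ones are deliberated first."""
    return sorted(concerns, key=lambda c: c.risk_score, reverse=True)

# Hypothetical example entries, for illustration only.
review_queue = triage([
    ELSIConcern("unintended cognitive side effects", likelihood=3, severity=4),
    ELSIConcern("data privacy of neural recordings", likelihood=4, severity=3),
    ELSIConcern("inequitable access to the technology", likelihood=4, severity=4),
])
for c in review_queue:
    print(f"{c.risk_score:>2}  {c.name}")
```

Any real system would of course need richer inputs (stakeholders, time horizons, mitigation options), but even a toy screen makes the ordering of deliberation explicit rather than ad hoc.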

 

But any meaningful approach to the ELSI of convergent neuroS&T would require an equally advanced, integrative system of ethics that can effectively analyze and balance positive and negative trajectories of progress, increase viable benefits, and militate against harm(s). Obviously, this would necessitate evaluation of both the ethical issues germane to the constituent convergent disciplines, and those generated by the convergence model of neuroS&T itself. I believe that neuroethics can serve this role and meet this demand (although opinions on this certainly differ; see for example: “against neuroethics”). As a discipline, neuroethics can be seen as having two major “traditions” – the first being the study of neurological mechanisms involved in moral cognition and actions (what might be better termed “neuro-morality”), and the second being that which examines, addresses and seeks to guide ethical issues fostered by neuroS&T research and use (see: “Neuroethics for the New Millennium”).

 

I’ve posited that these two “traditions” are not mutually exclusive, and that, if and when taken together, they may afford a meta-ethics that both informs how and why we develop and act morally, and uses this information to intuit ways to employ existing systems of ethics and/or cultivate new ethical approaches that better reflect and decide upon the moral implications and ramifications of various uses and misuses of neuroS&T in the social sphere. Philosopher Neil Levy has claimed that neuroethics might be a new way of doing ethics, and this might be so. At the very least, I think that neuroethics will allow a more explicit and purposive focus upon how change, uncertainty and progress in neuroS&T are affected by – and affect – progress, not only in genetics, nanoscience and cyberscience as stand-alone entities or simple concatenations of scientific methods, tools and techniques, but as a true convergence that conflates ideas, processes and technologies and, in the event, changes the human predicament, the human condition, and the human being.

 

There are a number of excellent discussions about what neuroethics is and is not, and can and cannot do (see for example, Eric Racine’s fine book Pragmatic Neuroethics). My take on this is that in order to have any real value, neuroethics (as a discipline and practice) must (1) apprehend the changing realities of neuroS&T capability and effect(s); (2) identify which extant moral theories and systems may and/or may not be viable in ethical analyses and guidance; and (3) develop ethical tools that compensate for weaknesses in current ethical theories in order to more effectively weigh benefits and risks, and remain prepared for possible “less than best case” scenarios.
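
One way to picture the third requirement, offered only as a schematic sketch under my own assumptions rather than a method proposed in the post, is to pair a probability-weighted estimate of net benefit with an explicit worst-case check, so that an option attractive on average can still be flagged when its "less than best case" scenario is unacceptable:

```python
def weigh_option(outcomes):
    """Weigh an option described as a list of (probability, net_benefit) pairs.

    Returns both the probability-weighted expectation and the worst-case
    net benefit, so deliberation can consider "less than best case"
    scenarios alongside the average. All numbers are illustrative placeholders.
    """
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    expected = sum(p * v for p, v in outcomes)
    worst_case = min(v for _, v in outcomes)
    return expected, worst_case

# Hypothetical comparison of two neurotechnology deployment options.
cautious_rollout   = [(0.7, +2.0), (0.2, +1.0), (0.1, -1.0)]
aggressive_rollout = [(0.6, +4.0), (0.3, +1.0), (0.1, -6.0)]

for name, option in [("cautious", cautious_rollout), ("aggressive", aggressive_rollout)]:
    exp, worst = weigh_option(option)
    print(f"{name}: expected net benefit = {exp:+.2f}, worst case = {worst:+.2f}")
```

The design choice here is simply to keep the worst case visible next to the expectation rather than letting the average absorb it, which is one modest way an "ethical tool" can compensate for weaknesses in purely consequence-summing analyses.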

 

A simple precautionary principle won’t work, for the simple reason that neuroS&T pushes the boundaries at the frontier of the known and unknown: (1) conditions “at the edge” are always risky; (2) while apparent benefits may compel each new step forward, burdens, risks and harms can be less than obvious because they often follow as consequences of our beneficent intentions (for those of you who are sci-fi fans, there is a host of writings and films that play to this; think, for example, of Mimic, Surrogates, and Limitless, just to name a few); and (3) the longer S&T remains in the public sphere, the greater the likelihood of its being influenced by economic and/or socio-political agendas.

 

In other words, stuff happens, and we need to be aware that it can and likely will, and be prepared if and when it does – not by trying to grind neuroS&T to a halt or by imposing unrealistic proscriptions, but by supporting a convergent approach to both neuroS&T and the ethical systems that guide its use in an ever-more pluralist society and on a changing world stage.

Aug 02, 2011
 

In another blog post, Victor Pikov raised provocative points that speak to the iterative integration of neurotechnology (if not technology in general) into the fabric of human life and being. In this light, I think that we need to view Manfred Clynes and Nathan Kline’s conceptualization of the “cyborg” as a multi-step process, with renewed interest and vigor. As humans, we use tools to gain knowledge and exercise the power of such know-how over our environment and condition. Technology provides both investigative and articulative tools that allow us to both know and do at increasing levels of sophistication, complexity and capability. Indeed, our current and future state might be seen as Homo sapiens technologicus (one aspect of which is Pikov’s somewhat tongue-in-cheek “twittericus”).

 

I agree with these perspectives, and offer that we are seeing the human-in-transition, a form of “trans-humanism” that is defined by and reliant upon technology and a technologically enabled worldview in the evolution and development of our species. This is evidenced by our technologically-enabled, rapid access to unprecedented amounts of information, increasing integration of technology into human embodiment, technologically-mediated engagement with each other, and capabilities for manipulation and control of our external and internal environments. As Victor Pikov notes, in this way, we are poised before a horizon of possibility, and potential problems.

 

Yet, any progression into and through a new era will incur individual and social attitudinal changes in relation to the capabilities and effects offered by new science and/or technology, and the effect(s) of, and our relationship to (and through), neural interfacing would be no different. It is interesting to speculate on how the cyborgization of Homo technologicus will occur, and I wonder how we as individuals, communities and a species will direct and handle such change. A “one-size-fits-all” approach to the employment of any neurotechnology – be it diagnostic or interventional – is at the very least pragmatically inefficient, and at worst, inapt on both technical and ethical grounds. And while we might skirt some (but not all) of these technical issues when dealing with certain forms of neuroimaging (like fMRI/DTI), the possibility of runaway (a.k.a. Wexelblatt) effects (i.e. unanticipated consequences of this nature) incurred by interventional neurotechnologies looms large, and ethico-legal and social issues become all the more prominent with increasing use of any neurotechnology in the public sphere. I believe that the issue boils down to an intersection of two major unknowns: the first is the persistent uncertainty of the so-called “hard questions” of neuroscience (namely, how consciousness/mind originates in/from the brain), and the second is how any neurotechnology can and does affect the nervous system. These uncertainties are not mutually exclusive – the tools-to-theory heuristics of neuroscience are sustained by the use of neurotechnology to forge an ever-deepening understanding of the structure and function of the brain, and theory-to-tools heuristics enable the development of successively more complicated and capable neurotechnologies to assess, access and control neural functions. Yet, navigating the possibilities of what and how technologies can be used, versus which technologies should be used, in whom, and in which ways, requires stringency in the guidelines and policies that direct neurotechnological research and its application(s).

 

As Don DuRousseau and I have recently noted, this may be increasingly important given a pervasive “market mindset” that has fostered more widespread use of neurotechnologies, and a tendency to side-step evidence-based, pragmatically grounded approaches and to rely upon lore rather than the most currently validated science. Clearly, further research, development, testing and evaluation (RDTE) of various neurotechnologies is required to more fully define 1) what constitutes evidence-based versus non-evidence-based claims; and 2) the capabilities, limitations – and potential risks – of employing various neurotechnologies in both clinical and non-clinical settings.

 

We have called for uniform and enforced screening mechanisms for all neurotechnology product developers to ascertain whether their products may pose potential risks to the general public, and for regulation of the industry, as well as of the clinical and public use of these technologies and devices (see: Giordano J, DuRousseau D. Use of brain-machine interfacing neurotechnologies: Ethical issues and implications for guidelines and policy. Cog Technol 2011; 15(2): 5-10).

 

But it’s important to note that the field – and use – of neurotechnology is evolving, and with this evolution comes the development of new techniques, knowledge and capabilities. So, perhaps what is required is an “evo-devo” orientation not only to the ways that neurotechnology can affect the human condition, but also to the ongoing development and use of the technology itself. As more data become available, pre- and proscriptions regarding the use(s) of particular neurotechnologies should be re-examined, re-assessed, and altered as necessary, consistent with the spirit and dictates of science to remain self-reflective, self-critical and self-revising. To do otherwise would be anachronistic, if not downright de-evolutionary.

Jul 18, 2011
 

The University of Michigan is developing a minimally-invasive low-power brain implant, termed “BioBolt”, that transmits neural signals to a computer control station, and may someday be used to reactivate paralyzed limbs.

 

While the BioBolt carries enormous potential, the issues of intellectual property and market partnership raise a number of neuroethical questions. In our current era of fast-emerging, innovative neurotechnology, we must critically confront the practical question of how such technologies will be provided to those who need them. In our modern society, commutative justice theories establish the disproportionate provision of goods based upon relative (and unequal) need. Their fundamental assumption is that all patients who need such interventions would be provided the access and means to acquire them. Implicit in this assumption are notions of neoclassical economics based upon Adam Smith’s construct of rational actors and unlimited resources (Smith, 1776). However, even a cursory analysis of the contemporary atmosphere of healthcare provision reveals such Smithian assumptions to be vastly unrealistic. In fact, resources are limited, and their provision is based upon a multidimensional calculus that determines the relative distribution of medical goods and services. Put simply, not everybody gets what they need, and this is particularly the case for high-tech medical interventions that are often only partially covered, and in some cases not covered at all, by the majority of health insurance plans. Moreover, some 57 million Americans are currently without health insurance (Wolf, 2010).
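
To suggest what such a “multidimensional calculus” might amount to in its barest form (the criteria, weights, and patient profiles below are hypothetical assumptions of mine, not a description of any actual allocation scheme), one can think of it as a weighted scoring of competing claims on a scarce intervention; notably, once a criterion like ability to pay or insurance coverage enters the weighting, allocation ceases to track need alone:

```python
# Hypothetical weights for a multi-criteria allocation score; in practice
# these would be contested, and insurance coverage often dominates them.
WEIGHTS = {"clinical_need": 0.5, "expected_benefit": 0.3, "ability_to_pay": 0.2}

def allocation_score(patient):
    """Weighted sum of the criteria, each assumed to be normalized to [0, 1]."""
    return sum(WEIGHTS[criterion] * patient[criterion] for criterion in WEIGHTS)

# Illustrative patient profiles (all values invented for the sketch).
patients = {
    "patient A": {"clinical_need": 0.9, "expected_benefit": 0.7, "ability_to_pay": 0.2},
    "patient B": {"clinical_need": 0.5, "expected_benefit": 0.6, "ability_to_pay": 0.9},
}
for name, profile in sorted(patients.items(),
                            key=lambda kv: allocation_score(kv[1]), reverse=True):
    print(f"{name}: priority score = {allocation_score(profile):.2f}")
```

Even this toy version makes the ethical point: the outcome depends entirely on which criteria are admitted into the calculus and how they are weighted.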

 

Now more than ever, we face the pragmatic charge of access: who will receive state-of-the-art neurotechnological interventions, such as the BioBolt? Will these approaches become part of a new ‘boutique neurology,’ or will there be active assertion and effort(s) to increase the utility and use of these interventions, so as to make them more affordable and more widely accessible within the general population of those patients who might require them? Will some newly developed medical criteria accommodate these decisions and actions, or, as is more likely, will the tipping points be governed by healthcare insurance provisions? How can and/or should healthcare reform(s) be adjusted and adjudicated in order to accommodate rapidly advancing science and the potential benefit(s) it might confer? While certain provisions of the new federal healthcare plan might support such directions, real availability and access will only be sustainable through a real shift toward a more demand-side health economics, which would constitute something of a sea change in our overall economic infrastructure. But rarely does such change occur all at once. Instead, it may be more viable to dedicate efforts to developing realistic designs for more equitable allocation of neurotechnologies. Such efforts, if appropriately subsidized and sustained, could be important droplets towards the sea change that may be necessary.

 

For further reference, see:

Giordano, J. (2010). Neuroethical Issues in Neurogenetic and Neuro-Implantation Technology: The Need for Pragmatism and Preparedness in Practice and Policy. Studies in Ethics, Law, and Technology. Vol. 4 (3): Article 4.

Giordano, J., Benedikter, R., and Boswell, M. V. (2010). Pain Medicine, Biotechnology and Market Effects: Tools, Tekne and Moral Responsibility. Ethics in Biology, Engineering, and Medicine. Vol. 1 (2): 135-42.