Cerebrum Spring 2021


Big Data, Big Concerns

BY PHILIP M. BOFFEY

An article in the Winter issue of Cerebrum magazine and a podcast episode with the author laid out a tantalizing vision of the enormous potential of advances in neuroimaging and so-called Big Data technologies to revolutionize the treatment of neurological disease. But the author made only fleeting mention of the ethical issues raised by these advances. Fortunately, the same author, Vince Calhoun (director of the Center for Translational Research in Neuroimaging and Data Science), co-authored a recent article with two experts from the Netherlands that did explore in depth the major ethical issues raised by these endeavors. It is a welcome effort to flag potential ethical problems while the field is still at an early stage of development.

Calhoun’s Center is well-situated to conduct this work. It is backed by three universities in Atlanta, GA, with complementary strengths and missions: Emory University, which has expertise in brain disorders; Georgia Tech, which is strong in data mining; and Georgia State, which is proficient in neuroscience and psychology.

Calhoun’s Cerebrum article lays out the promise, achievements, and disappointments of the field so far. On the plus side, the knowledge gained from big data and neuroimaging has provided new insights into the workings of the brain. But hopes that the advent of functional magnetic resonance imaging would lead to a clinical breakthrough in assessing and treating mental illness have not yet materialized. Indeed, Calhoun can’t point to any specific examples where neuroimaging is beginning to help the mentally ill. Progress has been slower than he would have liked. His hope is that within ten years we may have learned enough to update our psychiatric diagnostic criteria and refine the medications we prescribe to treat some mental health disorders. The research has been slow to reach the scale required.
Calhoun traced the evolution of the field through eras in which researchers studied small numbers of subjects, typically 5 to 20, then larger groups comprising hundreds of individuals, then the interactions between networks in the brain both at rest and while performing tasks. Each era added to the knowledge base, but none has yet led to clinical tools to treat mental health disorders or determine drug delivery strategies. We are now firmly in what he calls “the era of big data for neuroimaging and psychiatry.” Several studies already scan tens of thousands of individuals over time, and powerful “deep learning models” require large amounts of data and computing power. Previous studies have focused on group results and averages; the current goal is to make predictions for individuals about how their symptoms will progress and how they will respond to medications. Calhoun finds “considerable reason to be optimistic about the not-so-distant future.”

The article co-authored by Calhoun that explored the ethical issues was published in the journal Human Brain Mapping last July. It analyzed differing approaches in the European Union and the United States toward the use and dissemination of personal health data. Probably the most important distinction concerns who should be considered to “own” the data and thus have the major say in how it is handled and disseminated. Researchers and universities often believe that the data “belong” to them, and funding agencies in this country consider institutions the owners of the data. In some cases, the funding agencies dictate that the data be shared. By contrast, recent laws in Europe give the individuals who participate in studies more rights to determine the extent to which they want their data shared. That puts a greater burden on researchers to protect participants’ privacy and obtain their permission before disseminating personal health data. Depending on the circumstances, research journals may also demand that the data on which an article is based be uploaded at the time of publication, making them the effective owner of those data.

The chief risk in sharing data is that, if it escapes from the research realm or falls into the wrong hands, it can harm the individual whose data has been shared. For example, some studies collect information about substance use and abuse, diseases such as HIV/AIDS, or procedures such as gender reassignment surgery that can stigmatize an individual in some circles.
There are ways to protect the privacy of an individual’s health data without unduly hampering research. The trick is to strike an appropriate balance between risk and benefit. One approach is to “de-identify” data that directly identifies an individual, such as name, address, and date of birth, as well as information on an individual’s physical and mental health or treatments. All such information is stripped from the dataset and replaced by artificial identifiers that can’t be linked to individuals by third parties, such as insurers, but can be traced back by the host researchers if need be. More robust protection is provided by “fully anonymized” data, which has all personalized data removed and any path back to the original data deleted, making it extremely hard to trace the data back to an individual. However, even this is not

