The neuroscientist is spending a relaxing Sunday morning reading The New York Times while enjoying a cup of English breakfast tea. As Alison Barth turns the pages, a feature story grabs her attention—A Dying Young Woman’s Hope in Cryonics and a Future: Cancer claimed Kim Suozzi at age 23, but she chose to have her brain preserved with the dream that neuroscience might one day revive her mind.

The article goes on to assert that one day it may be possible for science to digitally replicate her consciousness in some capacity.

Suozzi’s hope for digital immortality haunts Barth. No wonder: she is interim director of Carnegie Mellon University’s BrainHub, an interdisciplinary CMU initiative focused on harnessing technology to explore the brain and behavior. Started in response to President Obama’s BRAIN Initiative, BrainHub pursues its mission through three initiatives: designing new tools to measure the brain, developing new methods to train the brain, and creating new computational methods to analyze data on the brain. The goal is to link discoveries in brain science with a deeper understanding of brain computation in order to provide insights that will improve approaches to treatment, facilitate the design of intelligent devices for therapeutic intervention, and enable better experiences in our everyday interactions with the digital world.

Throughout the rest of the day, and the following week, Barth keeps thinking about Suozzi. But rather than remain haunted, she realizes she can at the very least explore the feasibility of what the September 12, 2015, article suggested could be possible. She schedules an October 29 panel discussion, “Downloading Consciousness: Connectomics and Computer Science,” which will be open to the public and include several of her colleagues. The event will examine top-down and bottom-up approaches to replicating cognitive brain function: where are we now, what is likely in the near future, and what remains science fiction?

The discussion, slated for an hour, takes place in a CMU campus lecture hall that can seat more than 200 people. Good thing—because nearly every seat is taken by an audience that spans generations.

The moderator is Barth, whose lab is focused on understanding how experience assembles and alters the properties of neural circuits in the cerebral cortex, in both normal and disease states.

She introduces the four panelists:

  • Anind K. Dey, professor and director of CMU’s Human-Computer Interaction Institute: Two of his current projects involve modeling and predicting human behavior and creating salient summaries of experiences to diagnose and support memory issues.
  • Sandra Kuhlman, CMU biological sciences professor: She uses microscopic techniques to visualize how the circuitry in the brain changes as it learns new skills. By comparing and contrasting how young and old brains adapt to new situations, she seeks to understand how circuit construction evolves over time and how this impacts learning and disease.
  • David Touretzky, research professor in CMU’s Computer Science Department: An AI and robotics researcher, he is also a published computational neuroscientist.
  • Wayne Wu, associate director of CMU’s Center for the Neural Basis of Cognition: Also a faculty member in the Department of Philosophy, he focuses on attention, perception, action, and schizophrenia at the interface between philosophy and cognitive science.

The discussion is meant to be freewheeling:

How are circuits constructed to give rise to cognition? Have we nearly passed the Turing test? Neuroscientists are making great strides in investigating motifs for cellular and synaptic connectivity in the brain, with the hope that they might be able to reconstruct “thought” by understanding the component parts. Conversely, computer scientists are using different strategies to create better and better interfaces for devices to interact with us in a way that is indistinguishable from another human.

At the outset, Touretzky contends that despite all of the headlines, artificial intelligence hasn’t passed the Turing test, a litmus test established in the 1950s by scientist Alan Turing to determine whether a machine could exhibit intelligence indistinguishable from that of humans. To illustrate, Touretzky suggests asking Google a few questions: “What’s the third largest city in Botswana? What is the square root of ‘not quite 16?’ Which one of these is easy for people? Which one is hard for Google? Try it and see.” Google’s shortcomings demonstrate that current systems haven’t achieved anything that could truly be called “intelligence.” Touretzky does believe such “intelligence” will be achieved some day, “but I think that day is still pretty far off.”

For Dey, artificial intelligence isn’t measured so strictly. He tells the panel that if he can suspend his disbelief that the technology he’s engaging with is real, if it just feels real to him, even for a moment, then that might be enough. “The fact that I can ask natural language questions to my phone and have it answer, that’s impressive,” he says. “If I interact with a system that essentially can fake me out—it may not have a soul, it may just be a representation of rules underneath—but if something was compelling enough to me that I couldn’t tell, then it almost wouldn’t matter if it has a soul.”

This concept of engagement with machines is something neuroscientists and robotics engineers alike are focusing their research on. But a machine emulation of a human brain? Barth shakes her head. She contends we are still ages away from the technology to upload a human brain and have it be an exact representation of that person. She tells the group that although she respects many of the named sources in the article that brought them all there in the first place, she posted her skepticism on The New York Times website.

“There are so many other things about neural circuits that are not represented by the anatomy,” Barth says. “It’s a fantasy to think it would be sufficient to recreate somebody you would recognize.” Our brains are shaped by modulatory factors, epigenetics, and changes in the genome, all of which vary depending on cell type, and they operate in a dynamic state, one in constant flux. “The circuit map itself will not be sufficient to recreate someone’s identity. I’d say we’ve got some other fish to fry first.”

Those “fish” include more immediate technological advances that can “liberate the human experience.” For example, at coffee shops, robot employees could soon be pouring the coffee and adding the desired amount of cream and sugar, perhaps freeing baristas to pursue more creative, meaningful responsibilities. “What these devices will do will free us to actualize ourselves,” Barth says.

Like Barth, Kuhlman is in the trenches of basic brain science. She falls into the camp of people who prefer not to think too hard about the bounds of artificial intelligence. Instead, she uses another term: “Artificial intelligence means so many different things to people. I’m going to use the words ‘machine learning.’”

In her lab, she works on ways to improve machine learning using biology. Take facial-recognition software, for example. Human vision is a skill that doesn’t take place only in our eyes. “It’s the brain that’s doing the seeing,” Kuhlman points out.

Machine learning systems have typically been modeled on excitatory cells, the kind of cells Kuhlman likens to “go” cells. But the human brain contains two types of cells: excitatory, or “go,” cells, and inhibitory, or “stop,” cells. Using both types of “cells” should improve machine learning, because the inhibitory cells form sub-circuits that filter out unnecessary information. The end result is clarity, in the form of better facial recognition, or even sight for the blind.
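
To make that contrast concrete, here is a minimal, purely illustrative sketch in Python (the function names, the pooled-inhibition rule, and the scaling factor k are assumptions made for illustration, not a description of Kuhlman’s models): an excitatory-only layer passes every response forward, while adding a pooled inhibitory signal that is subtracted back suppresses weak, noisy responses and keeps only the strongest features.

```python
import numpy as np

def excitatory_only(x, w_exc):
    # Plain feedforward layer: every input feature is passed along ("go" cells only).
    return np.maximum(w_exc @ x, 0.0)

def excitatory_inhibitory(x, w_exc, k=0.5):
    # Same layer with a pooled inhibitory signal subtracted back ("stop" cells).
    # The inhibitory population here is just the mean excitatory drive scaled by k;
    # subtracting it suppresses weak, noisy responses so only the strongest
    # features survive -- a crude stand-in for the filtering described above.
    drive = w_exc @ x
    inhibition = k * drive.mean()   # pooled "stop" signal
    return np.maximum(drive - inhibition, 0.0)

# Toy demo: eight inputs, mostly weak noise plus one salient feature.
rng = np.random.default_rng(0)
w = rng.random((4, 8))
x = rng.random(8) * 0.1
x[3] = 1.0
print(excitatory_only(x, w))         # everything leaks through
print(excitatory_inhibitory(x, w))   # weak responses are largely filtered out
```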

But building a machine based upon brain-based principles is not the same thing as simulating human intelligence and behavior. Dey shows the AI-curious room a video example of AI in action, in the form of SimSensei, technology developed by a recent CMU hire.

Carnegie Mellon hired researcher Louis-Philippe Morency at the start of 2015 to lead the Multimodal Communication and Machine Learning Laboratory in the Language Technologies Institute. He contributed to the development of SimSensei, a virtual interviewer that provides decision support for healthcare practitioners—or perhaps actual healthcare support, depending on how deeply one connects to the pseudo healthcare interviewer. On average, humans engage with the virtual agent, named Ellie, for upwards of 40 minutes, an unusually long stretch for human-computer interaction. In that time, they share personal information aloud while SimSensei uses its MultiSense technology to quantify nonverbal cues as well, such as voice quality and facial expression. In this way, SimSensei can potentially help doctors make medical diagnoses with greater accuracy and efficiency.

But with SimSensei, users are never meant to forget they are engaging with a machine. Some people—as Suozzi did—hope that humans will one day interact with uploaded brains as if they were human, too. But will people really be able to have a relationship with AI, even if it’s emanating from a deceased friend or loved one?

“I don’t know. I just don’t,” Dey says.

Work is being done to improve the ways computers think and reason, which could end up helping machines seem more human than ever before. The NEIL (Never Ending Image Learner) computer program has been running 24 hours a day at CMU since 2013, all in the name of common sense. The research team behind this constant learner—Abhinav Gupta, assistant research professor in CMU’s Robotics Institute; Xinlei Chen, a PhD student in CMU’s Language Technologies Institute; and Abhinav Shrivastava, a PhD student in robotics—has found that as the computers analyze millions of images (more than 5 million so far), they are thinking more like people.

Such technology might naturally lead one to ponder the sci-fi, “Terminator”-inspired worst-case scenario that often comes into such discussions: Computers take over the world; humans are rendered unnecessary; and these machines, now capable of recursive self-improvement, simply get rid of us. For now, the panel agrees, we have much more to gain than to lose from these innovations.

The panel also realizes that part of what makes the story of Kim Suozzi in The New York Times so compelling is the idea that, although it is currently highly improbable, science is indeed heading toward the technology of replicating the precise neural fingerprint of a once-living person. Could it ever happen?

Touretzky—who, in addition to being an AI and robotics researcher, is also a published computational neuroscientist—believes it’s too early to speculate. We are at the very beginning of understanding the brain, and the fundamental theories that form the foundation for this field are just beginning to emerge.

“Suppose you read a sentence, ‘John kissed Mary.’ What happens in your brain that allows you to understand what that sentence means and remember it? We don’t know,” he says.

Much in the way chemistry was a well-established field long before we finally understood its basis in physics, understanding the brain requires a theoretical foundation still being established. It’s highly interdisciplinary, involving an understanding of neural pathways and psychology, and it’s hard to define concepts such as consciousness, which is what panelist Wayne Wu specializes in.

“If you want to find a full definition of consciousness, you’re not going to find one,” Wu tells the group. “But I would also point out that to study a lot of things in natural science, you don’t need to define them, right? If you’re observing tigers, you don’t have to tell me exactly what a tiger is as long as you can track it. So I think if we’re studying consciousness—which seems kind of ineffable in some ways, because it’s really hard to describe and no one’s got a definition of it—it might be enough if you can track it.”

Working in a field where basic concepts are impossible to define is an unsettling prospect, and one that can make folks uncomfortable.

But all of the researchers agree it’s well worth the effort. “It’s a better understanding of what it means to be human,” Touretzky says.

As for Suozzi, and her dying wish to one day revive her mind, Touretzky says it ultimately raises two questions:

  1. Can we understand in full detail the principles by which brains work?
  2. Can we somehow deconstruct a particular person's brain in sufficient detail that we can simulate it, and in that way make virtual copies of the person?

The second question, says Touretzky, is far more ambitious and is probably impossible without answering question number one. He adds that most neuroscientists think the second question is “impossible, period,” whereas the first one is probably achievable—though not in our lifetimes.

On the other hand, the panel discussion reminds Dey of a classmate of his from 20 years ago who told everyone he would one day upload his brain to a computer. “We all thought he was nuts,” recalls Dey. In retrospect, maybe not. He lists things once considered science fiction—walking on the moon, self-driving vehicles, surgery under anesthesia, even the washing machine.

In perhaps the most sobering moment of the discussion, Dey references Greek mythology, and how the gods once told human beings that they would never have fire, flight, or immortality. Only one of those remains true.