Learning Machines: Neurocomputing Explained by Chris Rozell

Welcome to Learning Machines, where we chat with faculty members from the Machine Learning Center at Georgia Tech (ML@GT) about their main research area and what the future holds for their field.

Today we talked with Chris Rozell, a professor in the School of Electrical and Computer Engineering and an ML@GT faculty member, who explained what exactly neurocomputing is, how it’s helping treat illnesses like depression, and what’s next for the field.

His research group, the Sensory Information Processing Lab (SIPLab), focuses on understanding how the brain produces intelligence and how to interface it with artificially intelligent systems for the benefit of society.

In the most basic sense, how would you describe neurocomputing?

Broadly, I think of this area as the intersection of artificial intelligence (AI) and machine learning (ML) with neuroscience and neuroengineering. This combination has two main roles.

First, the biological brain has been and continues to be a source of inspiration for how to build AI systems. The artificial neural network techniques that have been the workhorse of recent dramatic advances in machine intelligence were originally loosely inspired by computational structures observed in the brain. While there is plenty of great ML research that is not based on the brain, it is common for researchers in our community to return periodically to what we’re learning about biological brains in search of new inspiration for our most challenging problems in machine intelligence.

Second, modern AI/ML tools are giving us exciting new ways to analyze the data collected from neuroscience experiments. This has been especially powerful because neuroscience data is often very complex and noisy. Some of these tools (related to dimensionality reduction and dynamical systems) have started to show us that these noisy recordings may have structure that we didn’t know about but that can help us understand the computations being performed by the brain. The hope is that these technologies could eventually have an impact on clinical applications for neurologic disorders.
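To make the dimensionality-reduction idea concrete, here is a minimal sketch (the recording sizes, latent signals, and noise level are invented for illustration, not taken from any experiment Rozell describes): we simulate noisy, high-dimensional “recordings” that are secretly driven by two underlying signals, and PCA recovers that hidden low-dimensional structure.

```python
import numpy as np

# Minimal sketch of finding low-dimensional structure in noisy recordings.
# We fake "recordings" from 100 channels that are really driven by just two
# underlying signals, add noise, and recover the structure with PCA (via SVD).
# All sizes, signals, and noise levels are illustrative assumptions.

rng = np.random.default_rng(0)
T, channels, latent_dim = 1000, 100, 2
t = np.linspace(0, 10, T)
latents = np.column_stack([np.sin(t), np.cos(2 * t)])   # hidden low-D dynamics
mixing = rng.standard_normal((latent_dim, channels))    # how each channel mixes them
recordings = latents @ mixing + 0.5 * rng.standard_normal((T, channels))

X = recordings - recordings.mean(axis=0)                # center before PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)
print("variance explained by first 3 components:", explained[:3].round(3))
# The first two components capture most of the variance, exposing the
# low-dimensional trajectory hidden in the noisy, high-dimensional data.
```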

Rozell co-directed the first-ever Intelligent Interactions with the Brain (I2B) Workshop at Georgia Tech in November 2019.


How did you become interested in the connection between technology and neuroscience?

I actually started out as a double major between engineering and music, expecting to work in music technology. My first research experience came in that area, which is where I was first exposed to the excitement of pushing the boundary of what is known.

Through that experience I realized that the questions that were really interesting to me were about how our sensory systems decipher what they are hearing and seeing, and that those questions could be better answered by studying neuroscience than by studying music.

However, during that first research experience I was also blown away by the ML tools I was being exposed to. It seemed to me that they were the most powerful platform I could use to model, analyze and understand neural systems. So, I sought out a graduate school mentor who had a background at this intersection and decided to make that my expertise.

Recently it has become more common to see engineers making scientific contributions in neuroscience, but that was not the case when I started down this path. I’m fortunate that the community has come to recognize the value that engineering can bring to scientific discovery beyond just building tools to hand to scientists.

What kinds of problems does neurocomputing help solve? Can you give a specific example?

I’ll give two examples of research we’re working on right now that illustrate the two areas I discussed above. First, we’re looking into new results about how the biological visual system represents the transformations of objects in the visual world.

While we’re not necessarily trying to mimic this system exactly, we are trying to learn how it represents this type of visual information. We then use that knowledge to build mathematical models that we incorporate into ML systems. The result has been new insight into how machine intelligence systems can “imagine” transformations of images so that they can learn to recognize objects with fewer training examples.
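As a loose illustration of the “imagining transformations” idea, the toy sketch below simply generates shifted copies of a single training image. This is ordinary translation augmentation, not the lab’s actual transformation model, and the image and shift values are made up for the example.

```python
import numpy as np

# Toy illustration of "imagining" transformed versions of a training example so
# a recognition system sees more variation than it was given. This is ordinary
# translation augmentation, not the lab's actual transformation model; the
# image and shift values are made up for the example.

rng = np.random.default_rng(0)
image = rng.random((28, 28))                     # stand-in for one training image

def imagined_translations(img, shifts=(-2, -1, 1, 2)):
    """Return the original image plus shifted copies along both axes."""
    copies = [img]
    for s in shifts:
        copies.append(np.roll(img, s, axis=0))   # shift up/down
        copies.append(np.roll(img, s, axis=1))   # shift left/right
    return np.stack(copies)

augmented = imagined_translations(image)
print(augmented.shape)                           # (9, 28, 28): one example, nine views
```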

Second, we’re using ML techniques to help us understand diseases like depression and how to treat them. We’re working with a clinical team to pioneer an experimental therapy for patients with treatment-resistant depression. These patients have an electrode implanted in their brain that stimulates a malfunctioning circuit (much like a pacemaker), training the brain over months into patterns of activity more typical of healthy people.

While there have been good outcomes with this approach, little is known about how and why it’s working. For the first time, we have data recorded from deep in these patients’ brains while they are undergoing months of therapy. Using ML approaches, we are able to see changes being induced by the therapy before they are reflected in standard clinical measures, giving us both a better understanding of the disease and a way to build a more tailored therapy for each patient.

Honestly, as someone who expected to prove theorems for a living, it is a privilege to do work that has an impact on real patients.


What unique challenges does working with the brain present over working with other organs or parts of the body?

The brain is the most complex system we know of in the universe. It’s composed of almost 100 billion neurons, each having close to 10,000 connections (synapses) with other neurons. The activity in the brain is a mixture of electrical and chemical responses, and very little of it fits the simplifying assumptions we typically rely on when modeling a system. Put simply, the brain is the ultimate example of a complex system.

Beyond the complexity of structure and function, the brain is also challenging to study because it’s so difficult to specify exactly what we should be trying to understand about it. For other organs in the body, we can articulate the purpose they serve (something like an input-output relationship). With the brain, we tend to think of it as more central to our identity as human beings than our other organs. Despite this, it has been challenging even to get researchers to agree on a precise and quantifiable definition of its main functions and what we should be trying to learn about it.


People can get nervous thinking about AI being able to interact with our brains. What would you say to them and how will this research benefit people moving forward?

These are tremendously important things to think about. Neural interfaces are going to become more and more prevalent, and these interfaces are increasingly involving AI algorithms. As a society, it is critical that we determine what principles should be guiding these technology advances.

The intersection of AI and neurotechnology is going to make us think about issues like humanity, autonomy, privacy and security in ways we never have before. This conversation is so critical that we need to have it in as many ways as possible. I’m currently working with artists to see how we can use the performing arts to further this conversation in society.

What do you think is next for neurocomputing?

I believe we’re really at a watershed moment for neuroscience, and it’s a tremendously exciting time to be in the field. There were already revolutionary new advances in technology for reading and writing neural activity when the U.S. made the commitment to neurotechnology development through the BRAIN Initiative, and this investment has only accelerated the field. With these radical advances in our ability to observe and manipulate the brain, as well as the ideas of a new and diverse set of researchers brought into neuroscience by the excitement around these emerging technologies, I believe that we will know dramatically more about brain function in the next decade than we do now.

There are many challenges and opportunities that I see ahead, but one that I am focused on myself is the role of closed-loop stimulation for understanding the brain. Closed-loop systems are a class of algorithms that use real-time measurements to adjust the way we perturb a system so that it reaches a desired target behavior. Thermostats and autopilots are familiar examples, but this technology is used widely to give us much more precise control over a system’s behavior from moment to moment.
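To give a flavor of the closed-loop idea, here is a minimal thermostat-style sketch: at each step we measure the state, compare it to the target, and adjust the input accordingly. The toy dynamics, gain, and setpoint are invented for illustration and have nothing to do with neural stimulation.

```python
# Minimal thermostat-style sketch of a closed-loop controller: measure the
# state, compare it to the target, adjust the input. The toy room dynamics,
# gain, and setpoint below are invented for illustration only.

def simulate_closed_loop(target=21.0, steps=60, dt=1.0, gain=0.8):
    temp = 15.0                       # measured state (room temperature)
    leak = 0.1                        # toy model: heat lost toward a 10-degree exterior
    history = []
    for _ in range(steps):
        error = target - temp         # real-time measurement vs. desired behavior
        heat = gain * error           # adjust the perturbation based on the error
        temp += dt * (heat - leak * (temp - 10.0))
        history.append(temp)
    return history

# A pure proportional controller like this settles near (though not exactly at)
# the target; fancier controllers close that remaining gap.
print(simulate_closed_loop()[-3:])
```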

Recent advances in neural interfacing technology are allowing us, for the first time, to simultaneously stimulate and record from a neuron, meaning that we can now use closed-loop stimulation in neural circuits.

My lab and our collaborators are currently working on building the ML systems to allow us to do this in neuroscience experiments and have published some of the first papers demonstrating that we can successfully deploy these techniques in real life. While there are many challenges ahead, I believe that these tools hold great potential for us to perform much more controlled neuroscience experiments that allow us to dissect the causal function of circuit elements in a way we’ve never been capable of before.

What do you enjoy about working in this field?

There’s a lot to enjoy! First and most importantly, we are so early in our path toward understanding the brain that it still has a type of wonder about it. I imagine the feeling is similar to what the generation of kids who grew up during the space race felt. Each research advance we make stands to have an impact on either new machine intelligence systems or new clinical therapies.


Rozell and Garrett Stanley, a professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University, are leading a $1.6 million study to develop new algorithms to interact with neural circuitry.


What are some good resources for people to read, watch, or listen to if they want to learn more about work in neurocomputing?

There is a lot of great material out there, especially now with so much effort focused on making things accessible virtually.

For example, some colleagues and I have created Neuromatch, an entirely virtual neuroscience conference that uses ML tools to make the meeting as effective and accessible as possible. Our third and largest iteration is coming up in the last week of October, with almost 1,000 research presentations. The entire schedule is dynamically optimized using ML algorithms to create a custom experience for each attendee.

This organization also ran a wildly successful three-week summer school course in computational neuroscience last year for thousands of students, and we expect to run a similar course again next summer.

Tell us about a project you are either currently working on or one that you are particularly proud of.

I am proud of all of the work I get to collaborate on with the students and postdocs in my lab. I’m probably most known for some work on neuromorphic algorithms that started back in my own Ph.D. thesis. There is a type of computational problem known as sparse coding, which essentially boils down to solving an optimization to find the best way to represent a piece of data using the fewest building blocks in a large dictionary.

This was proposed as a theory of neural computation several years ago, and around the same time this approach became a very influential ML principle that has impacted a wide variety of applications. I developed some approaches to thinking about how these problems could be solved in analog neural circuits rather than on digital computers.
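For readers who want to see what a sparse coding problem looks like in code, here is a minimal numerical sketch in the spirit of the analog-circuit-style dynamics described above: neuron-like units with lateral inhibition settle on a representation that uses only a few dictionary elements. The dictionary, signal, threshold, and step sizes are invented for illustration; this is not the published formulation from the thesis work.

```python
import numpy as np

# Minimal numerical sketch of sparse coding solved by continuous-time,
# neuron-like dynamics with lateral inhibition. All sizes and parameters
# below are illustrative assumptions.

rng = np.random.default_rng(0)
N, M = 64, 256                        # signal dimension, dictionary size (assumed)
Phi = rng.standard_normal((N, M))
Phi /= np.linalg.norm(Phi, axis=0)    # unit-norm dictionary elements

true_a = np.zeros(M)                  # build a signal that really is sparse
true_a[rng.choice(M, size=5, replace=False)] = rng.standard_normal(5)
s = Phi @ true_a

lam, tau, dt, steps = 0.1, 10.0, 1.0, 500
b = Phi.T @ s                         # feedforward drive to each unit
G = Phi.T @ Phi - np.eye(M)           # inhibition between overlapping dictionary elements
u = np.zeros(M)                       # internal state of each unit

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

for _ in range(steps):
    a = soft_threshold(u, lam)        # only strongly driven units become active
    u += (dt / tau) * (b - u - G @ a) # Euler step of the competitive dynamics

a = soft_threshold(u, lam)
print("active elements:", np.count_nonzero(a), "of", M)
print("reconstruction error:", round(float(np.linalg.norm(s - Phi @ a)), 3))
```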

This work has formed the basis for efforts by several groups who have built neural coding models using this approach: ML groups building new algorithms for computer vision and remote sensing systems, as well as engineering groups who want to build custom hardware to solve ML problems more efficiently. These groups have used our approach as a testbed for demonstrating next-generation computing platforms for ML systems, including hardware implementations of neural networks, memristor circuits, and quantum computing systems. It’s very gratifying to see our work have an impact on such a wide variety of problems.
