Welcome to Learning Machines, where we chat with faculty members from the Machine Learning Center at Georgia Tech (ML@GT) about their main research area and what the future holds for their field.
Today we talked with Swati Gupta, Fouts Family Early Career Professor and assistant professor in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE).
Gupta explained optimization, how it is something everyone does all of the time, and why she chose to make a career of it. She also shed light on algorithmic bias and fairness and gave tips for how consumers can help fix these problems.
For those who are unfamiliar with optimization, can you explain what it is? Can you give some specific examples of how it’s used?
Optimization is trying to be the most efficient while also satisfying certain constraints. We find the best solution based on what we have and what we need within the parameters of the problem, along with many other factors. It’s something that we all do in our lives in all kinds of situations.
For example, when we are going to a party and need to buy a gift on the way, we don’t drive 20 miles out of the way for the gift. We optimize the route to minimize travel time while still accomplishing our tasks. Or, when we’re making our grocery list, we consider what we already have on hand, what we need to purchase, and our budget, and make decisions so that we have a variety of types of food instead of just eggs. On a larger scale, optimization helps manufacturers design assembly lines, helps airlines schedule flights, and determines what search results people see.
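The grocery-list example can be sketched as a tiny constrained optimization: pick items that maximize variety without exceeding a budget. The items, prices, and food groups below are invented purely for illustration, and brute-force search stands in for a real solver.

```python
from itertools import combinations

# Hypothetical grocery items (name, price, food group); all data invented for illustration.
items = [
    ("eggs", 3, "protein"),
    ("chicken", 7, "protein"),
    ("rice", 2, "grain"),
    ("bread", 3, "grain"),
    ("apples", 4, "fruit"),
    ("spinach", 3, "vegetable"),
]
budget = 12  # the constraint: total spend must not exceed this

def variety(basket):
    # Objective: number of distinct food groups covered (more variety is better).
    return len({group for _, _, group in basket})

# Brute force: enumerate every affordable basket and keep the most varied one.
best = max(
    (basket
     for r in range(len(items) + 1)
     for basket in combinations(items, r)
     if sum(price for _, price, _ in basket) <= budget),
    key=variety,
)
print(sorted(name for name, _, _ in best))  # → ['apples', 'eggs', 'rice', 'spinach']
```

Enumerating every subset only works for a handful of items; real optimization methods find the best solution without checking every possibility.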
What are some common challenges that you run into when working on an optimization problem?
Translating a real-life problem into mathematics is the first step to finding a solution and can be very challenging. If you frame the problem mathematically the right way the first time, you might find a solution faster, find a better solution, or solve multiple issues at once. This is hugely important because algorithms are just mathematical ways of telling a computer what to do. The computer can only find solutions based on what we tell it in the algorithm, so it’s important that we translate the worldview into math correctly.
There are also some types of optimization problems that deal with what we call uncertainty. An example of this would be trying to calculate your commute to work, but traffic differs depending on the time of day, weather conditions, roadwork, and so on. We still have to recommend the most efficient path, but there are always different factors that could derail the prediction.
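The commute example can be illustrated with a small simulation: each route's travel time is random, and we recommend the route with the lower average simulated time. The two routes and their travel-time distributions below are invented assumptions, not real data.

```python
import random

random.seed(0)

# Hypothetical travel-time models (minutes), invented to illustrate
# optimization under uncertainty.
def highway():
    # Usually fast, but heavy traffic about 30% of the time.
    return 20 if random.random() > 0.3 else 45

def back_roads():
    # Consistently moderate, with little variance.
    return random.uniform(28, 32)

def expected_time(route, trials=10_000):
    # Estimate the average commute by simulating many trips.
    return sum(route() for _ in range(trials)) / trials

routes = {"highway": highway, "back roads": back_roads}
best_route = min(routes, key=lambda name: expected_time(routes[name]))
print(best_route)  # the route with the lower average simulated commute
```

Here the highway wins on average (roughly 27.5 minutes versus about 30), even though on a bad-traffic day it is the worse choice; which is exactly why uncertain factors can derail any single prediction.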
What is it about working in optimization that excites you?
I love the fact that I get to solve problems. These are like brain teasers or puzzles to me and I have an inner happiness when I’m working on them. I feel like I’ve had a good day when I get a new idea on how to find a solution. It makes it so much better when I know that this solution is going to positively impact another person or our society as a whole.
I also love that optimization is something that can be applied to any industry. Whether it’s agriculture, cancer research, or transportation, optimization can be used to make our world better.
What’s next for optimization?
Two areas that I’m excited about are algorithmic fairness and optimization for quantum computing.
With algorithmic fairness, we can use optimization to make things more equitable for others. We can use data to show policymakers how people are being negatively impacted because of unfair algorithms.
In terms of optimization for quantum computing, I think this is something we’ll see more of in the next 20 years. Up to this point, we’ve thought about optimization for classical computers where we can add things easily but quantum computers work on a very different model. Over the last five years, we’ve seen a lot of optimization algorithms being adapted to the quantum setting. This is an exciting and challenging area because some of the fundamentals of quantum computing are still things that we don’t know how to do efficiently.
In pre-COVID-19 times, it was fairly common to catch you teaching a lesson on ML to elementary or middle school students. What motivates you to teach these lessons to younger students, and why do you think it’s important to teach ML concepts at such a young age?
Academia has a lot of parts to it that are very uncertain. You submit a paper and it may or may not get accepted to a conference. You submit a grant and it might get accepted or rejected. There are so many parts of the process that are rewarding but it’s a different kind of reward.
When I teach young students, I see their excitement. It’s tangible, instant gratification. Whenever I get to teach elementary and middle school students, I’m on a high that day. I love that younger students have no fear of asking a “stupid” question and think so creatively.
It’s important for young students to be exposed to machine learning concepts because it introduces them to science, technology, engineering, and math (STEM) and gets them excited about those subjects. This early exposure gives them confidence in these areas and makes them more likely to keep pursuing these fields later on.
For female students in particular, there are studies that show that they are more likely to want to work on projects that have a societal impact. If we can show them from a young age that STEM will allow them to do that, hopefully more of them will pursue a career in it.
You also do a lot of work related to fairness and bias in machine learning and AI. How do unfair algorithms impact people and what can consumers do to also help fix these problems?
Nobody sets out to write an algorithm that purposefully discriminates against a group of people. The unfairness and bias tend to come from the data we are generating, or from a seemingly neutral algorithm producing unfair outcomes.
People can try to help these issues by being considerate of the kind of data they are generating.
For example, consumers generate Uber driver ratings and those ratings directly impact the Uber driver’s ability to earn an income. We’re supposed to give a rating based primarily on the driver’s service, not on factors like how fancy their car is or whether or not English is their second language. However, people sometimes give drivers a lower rating for these reasons instead of considering, “did this driver pick me up on time and give me a safe and comfortable experience?” If people in a certain demographic consistently get low ratings, then they are more likely to be kicked out of the system. This isn’t the algorithm discriminating against this demographic, but a result of people generating data without giving much thought to its impact on others.
On the tech side, it’s important for us to remember the social context of the problems we are solving. We need to think about the impact algorithms have on people, not just the efficiency they provide.