A lot of invisible decisions take place between the grocery store and your morning bowl of cereal.
Where is it placed on the shelf, and does it share space with a lower-priced, generic equivalent? Is the buyer shopping for kids or health-conscious adults? Do they care how much sugar it contains per serving?
Hannah Sieg '24, a business analytics major from Philadelphia, was captivated by those hidden market forces during her introductory analytics course at Bucknell, where she became a cereal sleuth by seeking insights in survey data about grocery-buying behavior for a class project. The work appealed to Sieg because she's the type of person who likes to do more than read what others have discovered in books — she's driven to uncover insights for herself and apply what she learns as she's learning it.
"It was like we got to see the behind the scenes," she says of the project. "It wasn't necessarily about cereal — what was interesting was seeing how what we'd learned plays out in a real-life situation.
"My business analytics experience so far has been about making things actually happen or seeing why things happen," she adds. "I feel like I'm actually doing something, and I really enjoy that."
Projects like these have helped Sieg determine firsthand that her major in Bucknell's Freeman College of Management is right for her. She's also quickly moved to analyzing issues with much higher stakes than what to eat for breakfast.
Since summer 2021, Sieg has been working with Professor Thiago Serra, analytics & operations management, and a small team of student researchers on a project funded by the National Science Foundation. Together, they're working to examine, explain and try to eliminate biases buried within artificial intelligence platforms.
"Recently people have been paying more attention to problems of equity everywhere, and one of the places you might not expect to find bias is in computer algorithms," she says. "You might think that because a human is taken out of the equation, there won't be any bias. But in reality, humans are the ones writing the code."
One of the decisions human coders must make when aiming for equity is what counts as "fair," Sieg explains, a concept that can mean different things to different people. The basic question Sieg and her research partners are trying to unravel is what "fair" means to a neural network, the framework at the heart of many modern machine-learning systems, loosely modeled on the structure of the human brain.
To find out, the team trains neural networks to recognize particular classes of information in data sets, then compresses those networks. Next, they investigate whether certain classes of data are affected differently by the compression.
"Sometimes the accuracy of the entire model looks really good, say it's like 90% accurate, and you think, 'This model works really well,' " Sieg says. "But then you realize it identifies one specific class way worse than it identifies all the other ones, so the average looks a lot better than the individual classes do."
When a neural network like that is used to set loan interest rates or guide police surveillance, those hidden imperfections can lead to clearly discriminatory outcomes. Finding the right categories and questions to reveal bias in the system has tested Sieg's creativity, drawing out skills she never knew she had.
"It's crazy how much you realize that you don't know when you start a project like this," she says. "Even my professor has said, 'This is insane. There's so much that I don't even know.' And it's just a fun process of all of us together, learning it at the same time."
Sieg's short-term goal is to publish a paper on her research group's findings in an academic journal, but her involvement with the topic won't end there. She plans to continue studying the issue and working toward more tangible outcomes.
"I like seeing results from what I'm doing," she says.