Imagine a computerized admissions tool crafted to determine who should be invited to apply to a university. In creating the tool, developers use data from previous application cycles—who was invited, who applied, who was admitted, and who ultimately enrolled. The admissions tool is programmed to analyze the data and “learn” what an admitted college student looks like. From there, the computer follows a formula for finding those most likely to enroll, continuously learning as the process goes along.
This fictitious tool is what’s known as a machine-learning algorithm. An algorithm is simply a computational recipe: a process to achieve a specific result, a set of rules to solve a problem. With a machine-learning algorithm, the rules aren’t fixed by human logic; they’re continuously revised by the computer itself.
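The distinction can be made concrete with a small, entirely hypothetical sketch (the data, scores, and cutoff below are invented for illustration, not drawn from any real admissions system): a traditional algorithm applies a rule a human wrote down, while a machine-learning algorithm derives its rule from past data.

```python
# A traditional algorithm: the rule is fixed in advance by a human.
def old_enough(age):
    return age >= 13  # e.g., a minimum-age check when creating an account

# A machine-learning-style rule: the cutoff is derived from past data.
# Each (score, enrolled) pair is an invented record from prior cycles.
past_cycles = [(52, False), (61, False), (70, True), (75, True), (83, True)]

def learn_threshold(records):
    enrolled = [score for score, did_enroll in records if did_enroll]
    not_enrolled = [score for score, did_enroll in records if not did_enroll]
    # Place the cutoff midway between the two groups' boundary scores.
    return (max(not_enrolled) + min(enrolled)) / 2

threshold = learn_threshold(past_cycles)  # 65.5 for the invented data above

def likely_to_enroll(score):
    return score >= threshold
```

Feed the “learner” different historical data and it produces a different rule, which is exactly how patterns in past decisions, good or bad, get baked into future ones.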
Computer algorithms range from the simple (a computer user confirming they are 13 or older in order to set up an Instagram account) to the complex (large, decision-making software systems rapidly assessing a vast array of data for a variety of purposes or outputs).
“What’s not being looked at is the part of our culture that values and glorifies the process of shifting to algorithms to do certain kinds of work. Our mindset is to take risks and fix it later. Is that acceptable?”
The promise of mathematical objectivity has resulted in widespread reliance on algorithmic decision-making for loans, benefits, job interviews, and school placement (both higher ed and K-12), and even for decisions about who should get bail, parole, and prison time.
“Algorithms are a mathematical manipulation of the complexities of life,” says Jennifer Daryl Slack, distinguished professor of communication and cultural studies in Michigan Tech’s Department of Humanities. “An algorithm allows you to manage complexity, but it does so by simplifying, prioritizing, and valuing some things over others. It’s a fundamentally biased process.”
Machine-learning algorithms function on association—they group data into insular categories, connecting only the seemingly related and disregarding difference. Stefka Hristova, associate professor of digital media and Slack’s colleague in the Department of Humanities, says an algorithm works to create a structure of sameness and then builds on that structure—think Amazon’s recommendations for you, or your Netflix queue.
“It’s a system that precludes creativity and innovation because you get more of the same,” Hristova says. “It’s also a problematic structure when an algorithm is employed within society. What happens when you have a machine determining who will make a good employee? It connects what is similar, and that’s one of the places where bias comes in.”
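Hristova’s “structure of sameness” can be sketched in a few lines. The toy recommender below (users, titles, and the choice of Jaccard similarity are all assumptions for illustration, not how any real platform works) finds the most similar user and recommends only what that user has already watched:

```python
# Toy "more of the same" recommender; all data invented for illustration.
watched = {
    "ana":  {"Dune", "Arrival", "Interstellar"},
    "ben":  {"Dune", "Arrival", "Blade Runner"},
    "cara": {"Notting Hill", "Amelie"},
}

def recommend(user, histories):
    mine = histories[user]

    # Jaccard similarity: shared titles divided by all titles combined.
    def similarity(other):
        theirs = histories[other]
        return len(mine & theirs) / len(mine | theirs)

    # Group with the most-alike user; dissimilar users are disregarded.
    nearest = max((u for u in histories if u != user), key=similarity)
    return histories[nearest] - mine

recommend("ana", watched)  # {'Blade Runner'}: sameness begets sameness
```

Because “ana” is grouped with the most-alike viewer, she will never be shown anything from “cara”’s very different history; connecting only the similar is precisely where, as Hristova notes, bias comes in.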
Obviously, Slack says, the more diverse the perspectives among the people designing an algorithm, the less biased it will be. “But if you only focus on the design level, you’re missing a myriad of other issues that are going to come into play regardless of diversity in creation,” she says.
And many of those other issues stem from bias in an algorithm’s implementation. In deciding whether to deploy an algorithm, companies make value choices, weighing financial considerations against risk.
Slack and Hristova say we must take a look at how easily we hand decisions over to algorithms and ask what we’re prioritizing. Together, they’re developing a methodological approach for intervening in the design and implementation of algorithms in a way that allows humans to contemplate ethical issues, cultural considerations, and potential policy interventions. Their research will be housed in Michigan Tech’s new Institute for Policy, Ethics, and Culture.
“Every stage of algorithmic design and implementation offers different opportunities for intervention to fine-tune the equation,” Hristova says. “With artificial intelligence here to stay, we need a democratic, open, and ethical culture around the creation and deployment of algorithms.”
Institute for Policy, Ethics, and Culture
Algorithmic culture. Medicine and biotechnology. Autonomous and intelligent systems. Surveillance and privacy. The technological changes and disruptive forces of the 21st century are urgent, complex, and vast. To explore the policy implications, ethical considerations, and cultural significance of life in a connected world, Michigan Tech will launch a new Institute for Policy, Ethics, and Culture (IPEC) in fall 2019.
“Technology is a new culture, it’s not just a backdrop,” says Soonkwan Hong, associate professor of marketing in the College of Business. “People tend to take extreme stances—they celebrate technology or they criticize it. But the best path forward is a participatory stance, one where people—not algorithms—make choices about when to use technology, when to unplug, and what data is or isn’t shared.”
Many people aren’t aware of the rights they have, like something as simple as turning off the location data within phone apps. Privacy issues are widely discussed, Hong says, but they’re not the root conflict. Trust is. And to earn the trust of consumers, citizens, and critics, and to avoid shattering it, society’s makers and inventors must engage users in a constant negotiation. As a scholar of consumer culture theory, Hong examines these negotiations within the marketplace.
The impact is bigger than the marketplace, however. Sarah Green, professor of chemistry at Michigan Tech, notes that technological advances are necessary, “but not sufficient to address global challenges related to human well-being, ecosystem health, and a changing climate.” Green co-chaired the Science Advisory Panel for the United Nations’ Sixth Global Environmental Outlook (GEO-6) report and is a member of the University working group that developed IPEC. “IPEC will foster innovative and forward-thinking policies, grounded in science and cultural insight,” she says. “A primary goal of IPEC is to guide the ethical development and deployment of technology toward the ‘future we want.’”
“Technology is everywhere, it’s integrated into our lives—or more precisely, our lives are integrated into technology.”