Husky’s Research Helps Navy AI Systems Make Better Choices

Elijah Nieman sits at a desk facing his laptop screen. A water bottle sits near him, and the wall behind him displays a large 3D art piece imitating layers of moss.
Elijah Nieman, a Michigan Tech applied cognitive science and human factors Ph.D. student, studies human decision-making to help intelligent computer systems adapt to changing environments.

Husky Elijah Nieman is seeking the sweet spot between human adaptability and artificial intelligence's tirelessness. The Michigan Technological University researcher is studying AI decision-making behavior to help the U.S. Navy's intelligent onboard systems make better judgment calls in a constantly changing environment.

Just as electronic autopilots, digital chart-plotting, smart buoys and other technological innovations changed marine navigation, artificial intelligence is making an impact on vessel safety and efficiency, both in port and at sea. But to harness AI's potential, scientists must first understand how AI systems interact with human decision-making.

Michigan Tech graduate student Elijah Nieman, who is on track to receive his Ph.D. in Applied Cognitive Science and Human Factors in 2027, explored this topic firsthand after being selected to participate in the Naval Research Enterprise Internship Program (NREIP). For 10 weeks this past summer, he worked at the U.S. Naval Research Laboratory at the John C. Stennis Space Center in Mississippi.

"What my research does is extend our ability to, in a sense, know what we don't know," said Nieman. "With all of the hype around AI, it feels very practical to take a step back and establish the fundamentals of the human and the machine to inform how we integrate with new technology."

Nieman previously worked alongside his Ph.D. advisor, Jason Harman, on the research that led to his NREIP internship.

Few things are certain in marine environments, where weather, waves, wind, and both expected and unexpected obstacles create challenging conditions that can turn at the drop of a hat. Both Navy personnel and the intelligent systems they oversee onboard and offboard ships must be able to quickly and reliably adjust. For example, both navigators and their intelligent systems need to recognize barriers charted on maps in order to avoid collisions.

The study of human factors can offer valuable insights into how to accomplish that goal.

"My work was about examining a situation with some uncertainty and studying how we can quantify how a person is judging that uncertainty," said Nieman. "By modeling how a person's judgment and decision-making shift, we can provide that information to the intelligent system so it can adapt."

Intelligent systems face a major problem called concept drift, where changes in incoming data cause the system's decision-making to become less accurate over time. That's because systems are built on assumptions about the designer's and end user's goals and how they'll achieve them.
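Concept drift can be illustrated with a toy example. The sketch below (a simplification, not Nieman's actual models) hard-codes a decision rule learned for one environment, then scores it after the meaningful cutoff in the incoming data has shifted; the names `LEARNED_THRESHOLD` and `true_cutoff` are illustrative assumptions.

```python
import random

random.seed(0)

# A toy "intelligent system": label a reading HIGH if it exceeds a
# threshold fixed at design time, based on the original environment.
LEARNED_THRESHOLD = 0.5

def classify(reading):
    return reading > LEARNED_THRESHOLD

def accuracy(readings, true_cutoff):
    """Fraction of readings the fixed rule labels the same way a rule
    tuned to the current environment (cutoff = true_cutoff) would."""
    correct = sum(classify(r) == (r > true_cutoff) for r in readings)
    return correct / len(readings)

# Before drift: the environment matches the design assumption.
before = [random.random() for _ in range(10_000)]
print(accuracy(before, true_cutoff=0.5))   # -> 1.0

# After drift: the meaningful cutoff in the data has moved to 0.7,
# but the system's rule has not, so its decisions quietly degrade.
after = [random.random() for _ in range(10_000)]
print(accuracy(after, true_cutoff=0.7))    # roughly 0.8
```

The rule never "breaks"; it simply keeps answering a question the environment is no longer asking, which is why drift is hard to notice from inside the system.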

"Human factors as a field is largely concerned with studying and knowing how we form those design assumptions and whether they're accurate to the user," said Nieman.

Humans are better at recognizing and adapting to new situations without suffering from unwanted concept drift. However, human judgment can weaken over time due to fatigue or other factors, causing changes in decision-making even when the context has not changed.

Nieman is working to reconcile the two sets of challenges. His goal is to make intelligent oceanic navigation systems better at consistently adapting to the fluid external and internal variables of a marine environment. Bigger picture, his data also helps researchers understand human judgment more clearly and could help people understand when to evaluate and adjust their decision-making. Aboard a naval vessel, that could mean successfully maneuvering a ship by accurately gauging the length of a barrier or positioning it to provide the optimal margin of safety while meeting tactical demands.

"While my work focused on visual map data, the same logic of uncertainty and adaptability to situations applies to other inputs as well," said Nieman. "So while we might have satellite visuals in one case, the ocean floors take in a different sort of information, like sonar or acoustics, with the same end goal of identifying, accurately, the position of a barrier or object."

Humans Making AI Technology Work Better for Humans

Before intelligent computer systems can learn to rapidly and accurately adapt to changing data, scientists must further explore how humans make decisions.

Nieman's data collection begins by asking test participants to evaluate sigmoid curves: S-shaped curves of varying steepness and distortion that can represent a variety of datasets in different fields. In Nieman's research, the curves simulate uncertainty. Participants indicate where the curves move from high to low. Nieman then analyzes the responses to determine their consistency.

"We're looking to see if people use the same judgment policy or criteria every time, and whether the aspects of the signal itself determine how consistent people are," said Nieman.

Judgment policy refers to the consistent and often implicit rules or strategies that humans use to weigh information when making decisions in complex, uncertain environments. Sigmoid curves can illustrate the process with a slow start, a period of rapid growth, and then a plateau or eventual decline.
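The stimuli described above can be sketched in code. The snippet below is a minimal illustration, not Nieman's experimental software: it generates a signal that falls from a high state (1) to a low state (0) through a sigmoid transition, with assumed parameters `steepness` (how abrupt the drop is) and `noise` (distortion), plus one crude stand-in for a judgment policy.

```python
import math
import random

random.seed(1)

def stimulus(n=200, midpoint=100, steepness=0.1, noise=0.0):
    """A signal dropping from high (1) to low (0) via a sigmoid;
    `steepness` sets how abrupt the transition is, `noise` distorts it."""
    return [
        1.0 / (1.0 + math.exp(steepness * (x - midpoint)))
        + random.gauss(0.0, noise)
        for x in range(n)
    ]

# A steep, clean drop versus a shallow, noisy one: the first has an
# obvious transition point, the second forces a judgment call.
clean = stimulus(steepness=1.0, noise=0.0)
murky = stimulus(steepness=0.02, noise=0.15)

# One crude "judgment policy": mark the first sample below 0.5.
def first_crossing(signal, cutoff=0.5):
    return next(i for i, y in enumerate(signal) if y < cutoff)

print(first_crossing(clean))  # -> 101, just past the midpoint
```

Running `first_crossing` on the murky curve gives a different answer almost every run, which mirrors why shallow, distorted curves leave more room for individual judgment.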

A presentation slide from intern research conducted at the U.S. Naval Research Laboratory reads, 'Capturing Human Judgment for Intelligent Systems.' The slide shows and explains four sigmoid curves—digital shift is relatively unambiguous, sigmoid/decay shift is subject to judgement policy, sporadic noise, and segmented noise—with varying degrees of steepness and distortion. The slide defines the 'Threshold Detection Problem' of, 'When does a signal move from a high state (1) to a low state (0)?'
Participants in Nieman's study are shown a series of sigmoid curves with varying steepness and distortion and asked to mark the position where the curves move from "high" to "low."
(Graphic adapted from Elijah Nieman/U.S. Naval Research Laboratory 2025)

Rather than presenting a clearly correct answer, the varying shapes of the sigmoid curves in Nieman's study require participants to rely on their judgment. Some curves are repeated exactly, giving participants the chance to judge the same situation multiple times. Whether a response is "correct" matters less than what influenced it: Nieman wants to see which conditions lead to high consensus and which factors lead to changes in judgment. He said the response data will later be used to train machine learning models to better adapt to the judgment calls of human analysts.
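One simple way to quantify consistency across those repeated curves is to measure how much each participant's marks spread out over identical presentations. The sketch below uses invented data and an assumed metric (average per-participant standard deviation); the article does not specify which statistics Nieman uses.

```python
from statistics import mean, stdev

# Hypothetical responses: where three participants marked the
# high-to-low transition (sample index) on three identical
# presentations of a steep curve and of a shallow one.
marks = {
    "steep_curve":   [(101, 100, 101), (99, 101, 100), (100, 100, 102)],
    "shallow_curve": [(80, 120, 95), (110, 70, 130), (90, 125, 85)],
}

def within_person_consistency(repeats):
    """Average spread of each participant's marks across identical
    repeats: a lower value means a more stable judgment policy."""
    return mean(stdev(person) for person in repeats)

for curve, repeats in marks.items():
    print(curve, round(within_person_consistency(repeats), 1))
```

In this invented example the steep curve yields tightly clustered marks while the shallow curve scatters them, matching the pattern Nieman reports below.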

"This type of decision-making is subjective," said Nieman. "For this specific research, we want our intelligent systems to act more like a smart autocorrect. You can write the same 'objective' idea multiple ways, and you want your intelligent system to match how a specific person subjectively chose to write something."

Early data revealed that participants, ordinary people recruited through an online platform, could pick out changes in steep sigmoid curves more consistently, even when the curves were heavily distorted. Shallower sigmoid curves were harder for people to judge. Overall, however, most participants demonstrated a consistent approach when judging the curves.

"Only a few people had dramatic shifts in judgment policy," said Nieman. "But the trends over time show that there is something to keep exploring. There's still the same core question about consistency and changes in judgment policy, but we're planning to introduce more context or motivation to see how people adapt to different situations."

Nieman's work continues this fall through an extended winter internship with his NREIP program mentor, Jaelle Scheuerman.

His next step is to home in on judgment policies over time, introducing more variables from both test participants and the context of the experiment in an effort to predict when precision and consistency will be higher or lower. This will help create the framework for a similar experiment with a more intuitive task: asking test participants to mark entire boundaries.

"There are a lot of subtle ways systems interact with people's motivation, such as affecting how autonomous or competent a person feels, that drive the quality of their performance. Intelligent systems and human-machine teaming should be about capitalizing on those qualities of the human experience, which in our case is about making them highly adaptive as the world shifts in ambiguous and subjective ways," said Nieman.

Michigan Technological University is an R1 public research university founded in 1885 in Houghton, and is home to nearly 7,500 students from more than 60 countries around the world. Consistently ranked among the best universities in the country for return on investment, Michigan's flagship technological university offers more than 185 undergraduate and graduate degree programs in science and technology, engineering, computing, forestry, business, health professions, humanities, mathematics, social sciences, and the arts. The rural campus is situated just miles from Lake Superior in Michigan's Upper Peninsula, offering year-round opportunities for outdoor adventure.
