Active Awards (sorted by PI)
Dylan Gaines
Amount: $30,012
Sponsor: Michigan Tech Research Excellence Fund (REF)
Date Awarded: March 2024
Keith Vertanen
Amount: $144,017
Sponsor: National Science Foundation
Date Awarded: July 2018
Co-PI: Scott Kuhl
Amount: $515,392
Sponsor: National Science Foundation
Date Awarded: July 2019
Amount: $78,357
Sponsor: Oregon Health and Science University / National Institutes of Health
Date Awarded: October 2023
Select Past Awards
PI: Keith Vertanen
Sponsor: NSF
Amount Funded: $225,663
Date Awarded: August 2019
PI: Kelly Steelman
Co-PIs: Briana C. Bettin, Charles R. Wallace, Leo C. Ureel II
Sponsor: NSF
Amount Funded: $299,617
Date Awarded: July 2021
PI: Keith D. Vertanen, HCC, CS
Co-PI: Scott A. Kuhl, HCC, CS
Sponsor: National Science Foundation
Award: $499,552 | 3 Years
Awarded: August 2019
Abstract: Virtual Reality (VR) and Augmented Reality (AR) head-mounted displays are increasingly being used for computing-related activities such as data visualization, education, and training. Currently, VR and AR devices lack efficient and ergonomic ways to perform common desktop interactions such as pointing-and-clicking and entering text. The goal of this project is to transform flat, everyday surfaces into rich interactive surfaces. For example, a desk or a wall could be transformed into a virtual keyboard. Flat surfaces afford not only haptic feedback but also ergonomic advantages, providing a place to rest the arms. This project will develop a system in which microphones placed on a surface sense when and where a tap has occurred. Further, the system aims to distinguish different types of touch interactions, such as tapping with a fingernail, tapping with a finger pad, or making short swipe gestures.
This project will investigate machine learning algorithms for producing a continuous coordinate for taps on a surface along with associated error bars. Using the confidence of sensed taps, the project will investigate ways to intelligently inform aspects of the user interface, e.g., guiding the autocorrection algorithm of a virtual keyboard decoder. Initially, the project will investigate sensing via an array of surface-mounted microphones and design "surface algorithms" to determine and compare the location accuracy of finger taps on the virtual keyboard. These algorithms will experiment with different models, including an existing time-of-flight model, a new model based on Gaussian Process Regression, and a baseline classifier using support vector machines. For all models, the project will investigate the impact of the amount of training data from other users and of varying the amount of adaptation data from the target user. The project will compare surface microphones with approaches using cameras and wrist-based inertial sensors. The project will generate human-factors results on the accuracy, user preference, and ergonomics of interacting in midair versus on a rigid surface. By examining different sensors, input surfaces, and interface designs, the project will map the design space for future AR and VR interactive systems. The project will disseminate software and data allowing others to outfit tables or walls with microphones to enable rich interactive experiences.
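As a rough illustration of the Gaussian Process Regression approach named above, the sketch below maps hypothetical microphone features (e.g., time differences of arrival) to a 2D tap coordinate with per-axis standard deviations, the kind of "error bars" a keyboard decoder could use as tap confidence. The feature extraction, data, and kernel choices here are illustrative assumptions, not the project's implementation.

```python
# Illustrative sketch (not the project's actual code): estimating a 2D tap
# coordinate with uncertainty from surface-microphone features using
# Gaussian Process Regression, one of the models named in the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds time-difference-of-arrival
# features between microphone pairs; targets are known tap locations (cm).
X_train = rng.normal(size=(200, 6))          # stand-in for TDOA features
taps_xy = rng.uniform(0, 30, size=(200, 2))  # stand-in for tap coordinates

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

# Fit one GP per coordinate so each prediction comes with its own std. dev.
gp_x = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, taps_xy[:, 0])
gp_y = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, taps_xy[:, 1])

def locate_tap(features):
    """Return an (x, y) estimate and per-axis standard deviations ("error bars")."""
    f = np.atleast_2d(features)
    x_mean, x_std = gp_x.predict(f, return_std=True)
    y_mean, y_std = gp_y.predict(f, return_std=True)
    return (x_mean[0], y_mean[0]), (x_std[0], y_std[0])

xy, err = locate_tap(rng.normal(size=6))
print(f"estimated tap at {xy}, error bars {err}")
```

Fitting a separate GP per axis keeps each coordinate's uncertainty explicit; a time-of-flight model or the abstract's SVM baseline could be slotted into the same interface.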
PI: Keith D. Vertanen, HCC, CS
Sponsor: National Science Foundation
Award: $225,663 | 3 Years
Awarded: August 2019
Abstract: Smartphones are an essential part of our everyday lives. But for people with visual
impairments, basic tasks like composing text messages or browsing the web can be prohibitively
slow and difficult. The goal of this project is to develop accessible text entry methods
that will enable people with visual impairments to enter text at rates comparable
to those of sighted people. This project will design new algorithms and feedback methods for
today’s standard text entry approaches of tapping on individual keys, gesturing across
keys, or dictating via speech. The project aims to: 1) help users avoid errors by
enabling more accurate input via audio and tactile feedback, 2) help users find errors
by providing audio and visual annotation of uncertain portions of the text, and 3)
help users correct errors by combining the probabilistic information from the original
input, the correction, and approximate information about an error’s location. Improving
text entry methods for people who are blind or have low vision will enable them to
use their mobile devices more effectively for work and leisure. Thus, this project
represents an important step to achieving equity for people with visual impairments.
This project will contribute novel interface designs to the accessibility and human-computer interaction literature. It will advance the state of the art in mobile device accessibility by: 1) studying text entry accessibility for people with low vision in addition to people who are blind, 2) studying and developing accessible gesture typing input methods, and 3) studying and developing accessible speech input methods. This project will produce design guidelines, feedback methods, input techniques, recognition algorithms, user study results, and software prototypes that will guide improvements to research and commercial input systems for users who are blind or have low vision. Further, the project's work on the error correction and revision process will improve the usability and performance of touchscreen and speech input methods for everyone.
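As a rough sketch of the third aim above (combining probabilistic information from the original input, the correction, and an approximate error location), the hypothetical example below merges two per-position character distributions, weighting the correction more heavily near the suspected error. It is illustrative only and does not describe the project's actual decoder.

```python
# Illustrative sketch (hypothetical, not the project's algorithm): combining
# the probabilistic information from the original noisy input, a retyped or
# respoken correction, and an approximate error location to rescore characters.
import math

def combine_char_probs(original, correction, error_location_weight):
    """Merge two per-position character distributions in log space.

    original / correction: lists of dicts mapping characters to probabilities,
        one dict per character position (e.g., from a touch model or recognizer).
    error_location_weight: weights in [0, 1], one per position; higher values
        mean the user indicated the error is likely here, so the correction
        evidence should count for more at that position.
    """
    merged = []
    for orig_dist, corr_dist, w in zip(original, correction, error_location_weight):
        combined = {}
        for ch in set(orig_dist) | set(corr_dist):
            log_p = ((1 - w) * math.log(orig_dist.get(ch, 1e-9)) +
                     w * math.log(corr_dist.get(ch, 1e-9)))
            combined[ch] = log_p
        # Renormalize back to probabilities.
        z = sum(math.exp(v) for v in combined.values())
        merged.append({ch: math.exp(v) / z for ch, v in combined.items()})
    return merged

# Tiny example: the original input was ambiguous at position 1; the correction
# strongly suggests 'a' there, and the user pointed roughly at that spot.
original = [{"c": 0.9, "x": 0.1}, {"a": 0.4, "s": 0.6}, {"t": 0.95, "y": 0.05}]
correction = [{"c": 0.8, "x": 0.2}, {"a": 0.9, "s": 0.1}, {"t": 0.9, "y": 0.1}]
weights = [0.2, 0.8, 0.2]
print(combine_char_probs(original, correction, weights))
```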
PI: Keith D. Vertanen, HCC, CS
Sponsor: National Science Foundation
Award: $194,541 | 5 Years
Awarded: March 2018
Abstract: Face-to-face conversation is an important way in which people communicate with each
other, but unfortunately there are millions who suffer from disorders that impede
normal conversation. This project will explore new real-time communication solutions
for people who face speaking challenges, including those with physical or cognitive
disabilities, for example by exploiting implicit and explicit contextual input obtained
from a person's conversation partner. The goal is to develop technology that improves
upon the Augmentative and Alternative Communication (AAC) devices currently available
to help people speak faster and more fluidly. The project will expand the resources
for research into conversational interactive systems, with deliverables including
a probabilistic text entry toolkit, AAC user interfaces, and an augmented reality
conversation assistant. Project outcomes will include flexible, robust, and data-driven
methods that extend to new use scenarios. To enhance its broader impact, the project
will educate the public about AAC via outreach events and through the online community
the work will create. The PI will assemble teams of undergraduates to develop the
project's software, and he will host a summer youth program on the technology behind
text messaging, offering scholarships for women, students with disabilities, and students
from underrepresented groups. Funded first-year research opportunities will further
help retain undergraduates, particularly women, in computing.
This project will explore the design space of conversational interactive systems by investigating both systems that improve communication for non-speaking individuals who use AAC devices and systems that enhance communication for speaking individuals who face other conversation-related challenges. The project will consider context-sensitive prediction algorithms that use: 1) speech recognition on the conversation partner's turns; 2) the identity of the partner as determined by speaker identification; 3) dialogue state information; and 4) suggestions made by a partner on a mobile device. User studies will investigate the effectiveness and user acceptance of partner-based predictions. New methodologies will be created for evaluating context-sensitive AAC interfaces. The impact of training AAC language models on data from existing corpora, from simulated AAC users, and from actual AAC users will be compared. This research will expand our knowledge about how to leverage conversational context in augmented reality, and it will curate a public test set contributed by AAC users.
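As a rough illustration of partner-based prediction (item 1 above), the hypothetical sketch below re-ranks an AAC word-prediction list by boosting candidates that appear in the partner's last recognized turn. The base language-model probabilities and the boost factor are placeholder assumptions, not the project's algorithm.

```python
# Illustrative sketch (hypothetical, not the project's system): biasing an AAC
# word-prediction list toward the conversation partner's last recognized turn.
from collections import Counter

def rerank_predictions(base_lm_probs, partner_turn, boost=5.0):
    """Re-weight candidate words given the partner's recognized speech.

    base_lm_probs: dict mapping candidate words to language-model probabilities.
    partner_turn: the partner's last utterance as recognized by ASR.
    boost: multiplicative bonus for candidates that echo partner vocabulary.
    """
    partner_words = Counter(w.strip(".,!?").lower() for w in partner_turn.split())
    scored = {}
    for word, p in base_lm_probs.items():
        bonus = boost if word.lower() in partner_words else 1.0
        scored[word] = p * bonus
    z = sum(scored.values())
    return sorted(((w, s / z) for w, s in scored.items()),
                  key=lambda ws: ws[1], reverse=True)

# Example: after the partner asks about coffee, the candidate "coffee" rises.
base = {"coffee": 0.05, "tea": 0.05, "please": 0.10, "the": 0.40, "a": 0.40}
print(rerank_predictions(base, "Would you like some coffee?"))
```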
Publication: Adhikary, Jiban and Watling, Robbie and Fletcher, Crystal and Stanage, Alex and Vertanen, Keith. "Investigating Speech Recognition for Improving Predictive AAC," Proceedings of the Eighth Workshop on Speech and Language Processing for Assistive Technologies, 2019.
PI: Robert Pastel, HCC, CS
Sponsor: National Science Foundation (FPT-Northern Arizona University)
Award: $116,561 | 5 Years
Awarded: February 2019
Co-PI: Robert Pastel, HCC, CS
Sponsor: National Science Foundation (FPT-Northern Arizona University)
Award: $20,577 | 2 Years
Awarded: November 2019
Abstract: Flooding is the most damaging natural hazard in the U.S. and around the world, and
most flood damage occurs in cities. Yet the ability to know when flooding is happening
and communicate that risk to the public and first responders is limited. At the same
time there is a surge in digitally connected technologies, many at the fingertips
of the general public (e.g., smartphones). What is needed is new flood information, generated from primary observations collected in exactly the right places and at the right times, coupled with the ability to communicate this risk to communities more effectively. This project will develop the Integrated Flood Stage Observation
Network (IFSON), a system that can take in crowd-sourced information on flooding (from
cameras, a smartphone app, and social media), intelligently assess flood risk (using
machine learning), and communicate those risks in real time. IFSON will be scalable
to any community or city and will provide a backbone for new crowd-sourced technologies.
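As a rough illustration of the machine-learning risk assessment IFSON is described as performing, the hypothetical sketch below classifies an incoming crowd report into a flood-risk level from a few placeholder features; the features, labels, and model are illustrative assumptions rather than the project's pipeline.

```python
# Illustrative sketch (hypothetical, not IFSON's actual pipeline): assessing
# flood risk from crowd-sourced stage reports with a simple classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical features per report: [reported stage (m), rainfall in last
# hour (mm), number of nearby reports in the last 30 minutes].
X_train = rng.uniform([0, 0, 0], [3, 50, 20], size=(500, 3))
# Hypothetical labels: 0 = low risk, 1 = elevated, 2 = flooding.
y_train = (X_train[:, 0] + 0.02 * X_train[:, 1] > 2.0).astype(int) + \
          (X_train[:, 0] > 2.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_report = np.array([[2.7, 35.0, 12]])  # one incoming crowd report
risk = model.predict(new_report)[0]
probs = model.predict_proba(new_report)[0]
print(f"risk level {risk}, class probabilities {probs}")
```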
This project will i) integrate several new technologies (each of which directly engages with different communities) to provide new insights into and communication capacity around urban flooding hazards, ii) connect a range of communities to each other in near real time (from the general public to first responders to infrastructure managers) and develop flood sensing and avoidance capacities that can be used anywhere in the U.S. or even internationally, iii) develop new insights into how urban morphology contributes to flood risk, and iv) leverage prior funding by connecting practitioners from existing sustainability research networks and sending data to CUAHSI and eRams. Additionally, this research will develop outreach activities that will educate the public and practitioners on how flooding hazards occur, their impacts, and how to mitigate risks. The research will directly empower and engage local citizens in flood event reporting and response, and it will explore a concrete model for what it would mean to have a "smart and connected community" for minimizing flood risk. Although driven by a number of novel technologies and techniques, the central focus of this work is the interface of community with technology and, in particular, how modern network technologies can engage and bring together ordinary citizens, city planners, first responders, and other local stakeholders within a shared, collaboratively constructed information space; a broad range of educational and outreach opportunities are included to engage stakeholders and amplify project impact. In addition to training students through research positions, the project will create a summer Research Experience for Undergraduates (REU) program. It will also connect with national, state, and local societies across a number of disciplines. For example, the project will work with the City of Phoenix during their Monsoon Preparedness day to educate first responders on how to use project results. Interdisciplinary course modules that show how to engage various communities (including the public, first responders, and infrastructure managers) in mitigating flood risk will be developed and disseminated. Additionally, infrastructure managers will be recruited to participate in workshops on how project data will reveal new insights into the condition of infrastructure and what strategies can be employed to reduce hazards.
Publication: Lowry, Christopher S. and Fienen, Michael N. and Hall, Damon M. and Stepenuck, Kristine F. "Growing Pains of Crowdsourced Stream Stage Monitoring Using Mobile Phones: The Development of CrowdHydrology," Frontiers in Earth Science, v.7, 2019.