- Assistant Professor, Computer Science
- PhD, University of Cambridge, 2009
- MPhil, Computer Speech, Text, and Internet Technology, University of Cambridge, 2004
- MS, Computer Science, Oregon State University, 1999
- BA, Computer Science, Mathematics, University of Minnesota Morris, 1997
Dr. Vertanen specializes in designing intelligent interactive systems that leverage uncertain input technologies. His research focuses in particular on systems that enhance the capabilities of users with permanent or situationally induced disabilities. Dr. Vertanen serves as an associate editor for the International Journal of Human-Computer Studies and as a subcommittee chair for CHI 2020. Previously, he served as an associate chair for CHI 2019, CHI 2018, CHI 2017, IUI 2015, and MobileHCI 2014.
Areas of Expertise
- Human-Computer Interaction (HCI)
- Accessible Computing
- Speech and Language Processing
- Mobile Interfaces
Publications
Student co-authors denoted by *. Conference acceptance rates listed where available.
- Vertanen, K., Fletcher, C.*, Gaines, D.*, Gould, J.*, Kristensson, P.O. The Impact of Word, Multiple Word, and Sentence Input on Virtual Keyboard Decoding Performance. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2018). Acceptance rate: 26%
- Walker, J.*, Li, B.*, Vertanen, K., Kuhl, S. Efficient Typing on a Visually Occluded Physical Keyboard. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2017), 5457-5461. Acceptance rate: 25%
- Vertanen, K., Memmi, H.*, Emge, J.*, Reyal, S.*, Kristensson, P.O. VelociTap: Investigating Fast Mobile Text Entry using Sentence-Based Decoding of Touchscreen Keyboard Input. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2015), 659-668. Best paper. Acceptance rate: 23%
Grants
- Automatic Speech Recognition using Deep Neural Networks (PI), Michigan Tech Research Excellence Fund (REF) award (2018). This project will create a state-of-the-art speech recognition engine for use in interactive systems for instrumented environments and wearable devices, $45K.
- NSF CAREER: Technology Assisted Conversations (PI), National Science Foundation (2018). This project will investigate how technology can augment our conversations, including for individuals who use Augmentative and Alternative Communication (AAC) devices, $539K.
- Sensing and Feedback for On-Body Input (PI), Paul William Seed Grant, Michigan Tech's Institute of Computing and Cybersystems (2018). This project will investigate how to appropriate everyday surfaces, including one's own body, as an input device for interactive systems, $44K.
- Less is More: Investigating Abbreviated Text Input via a Game (PI), Google Faculty Research Award (2016). This project will investigate how to improve touchscreen text input by allowing users to abbreviate their input, $47K.