- Associate Professor, Computer Science
- PhD, University of Cambridge, 2009
- M.Phil, Computer Speech, Text, and Internet Technology, University of Cambridge, 2004
- MS, Computer Science, Oregon State University, 1999
- BA, Computer Science, Mathematics, University of Minnesota Morris, 1997
Dr. Vertanen specializes in designing intelligent interactive systems that leverage uncertain input technologies. A particular focus of his research is systems that enhance the capabilities of users with diverse abilities. He is the recipient of an NSF CAREER award and an ACM CHI best paper award. Dr. Vertanen served as a subcommittee chair for CHI 2021 and CHI 2020. He was an associate chair for CHI 2019, CHI 2018, CHI 2017, IUI 2015, and MobileHCI 2014. He served on the program committee for ASSETS 2020, ASSETS 2019, ASSETS 2018, IUI 2014, and MobileHCI 2011.
Areas of Expertise
- Human-Computer Interaction (HCI)
- Accessible Computing
- Speech and Language Processing
- Mobile Interfaces
Student co-authors denoted by *. Conference acceptance rates listed where available.
- Adhikary, J.*, Vertanen, K. Typing on Midair Virtual Keyboards: Exploring Visual Designs and Interaction Styles. Proceedings of INTERACT (2021). Acceptance rate: 27%
- Gaines, D.*, Kristensson, P.O., Vertanen, K. Enhancing the Composition Task in Text Entry Studies: Eliciting Difficult Text and Improving Error Rate Calculation. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2021). Acceptance rate: 26%
- Adhikary, J.*, Vertanen, K. Text Entry in Virtual Environments using Speech and a Midair Keyboard. IEEE Transactions on Visualization and Computer Graphics (TVCG 2021). Acceptance rate: 16%
- Vertanen, K. Probabilistic Text Entry—Case Study 3. In Intelligent Computing for Interactive System Design: Statistics, Digital Signal Processing, and Machine Learning in Practice (2021), 277–320. [publisher]
- Vertanen, K., Kristensson, P.O. Mining, Analyzing, and Modeling Text Written on Mobile Devices. Natural Language Engineering (NLE 2021), 33 pages. [preprint]
- Vertanen, K., Gaines, D.*, Fletcher, C.*, Stanage, A.M.*, Watling, R.*, Kristensson, P.O. VelociWatch: Designing and Evaluating a Virtual Keyboard for the Input of Challenging Text. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2019). Acceptance rate: 24% [pdf]
- Adhikary, J.*, Watling, R.*, Fletcher, C.*, Stanage, A.*, Vertanen, K. Investigating Speech Recognition for Improving Predictive AAC. In Proceedings of the Workshop on Speech and Language Processing for Assistive Technologies (SLPAT 2019, workshop). [pdf]
- Vertanen, K., Fletcher, C.*, Gaines, D.*, Gould, J.*, Kristensson, P.O. The Impact of Word, Multiple Word, and Sentence Input on Virtual Keyboard Decoding Performance. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2018). Acceptance rate: 26% [pdf]
- Walker, J.*, Li, B.*, Vertanen, K., Kuhl, S. Efficient Typing on a Visually Occluded Physical Keyboard. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2017), 5457–5461. Acceptance rate: 25% [pdf]
- Vertanen, K., Memmi, H.*, Emge, J.*, Reyal, S.*, Kristensson, P.O. VelociTap: Investigating Fast Mobile Text Entry using Sentence-Based Decoding of Touchscreen Keyboard Input. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2015), 659–668. Best paper. Acceptance rate: 23% [pdf]
- Rich Surface Interaction for Augmented Environments (PI), National Science Foundation (2019). This project explores how to use flat everyday surfaces for input in augmented and virtual reality, $516K. [details]
- Improving Mobile Device Input for Users Who are Blind or Low Vision (PI), National Science Foundation (2019). This project aims to improve touchscreen and speech input for users who are blind or low vision, $226K. [details]
- Automatic Speech Recognition using Deep Neural Networks (PI), Michigan Tech Research Excellence Fund (REF) award (2018). This project will create a state-of-the-art speech recognition engine for use in interactive systems for instrumented environments and wearable devices, $45K.
- NSF CAREER: Technology Assisted Conversations (PI), National Science Foundation (2018). This project will investigate how technology can augment our conversations, including for individuals who use Augmentative and Alternative Communication (AAC) devices, $539K. [details]
- Sensing and Feedback for On-Body Input (PI), Paul William Seed Grant, Michigan Tech's Institute of Computing and Cybersystems (2018). This project will investigate how to appropriate everyday surfaces, including one's own body, as an input device for interactive systems, $44K.
- Less is More: Investigating Abbreviated Text Input via a Game (PI), Google Faculty Research Award (2016). This project will investigate how to improve touchscreen text input by allowing users to abbreviate their input, $47K.