- Kelly Lancaster
- Damien Brockmann
- Length of Session: 1-hr
- Format: Interactive/Discussion
- Expertise Level: Intermediate
- Type of session: General Conference
We will consider the affordances that machine learning provides for accessibility in education technology by training a web application to associate voice and gesture commands with drag-and-drop actions in the browser.
We will begin with an overview of current methods for creating accessible learning tools, focusing on keyboard navigation, screen reader alerts, and sonification. Next, we will give a brief introduction to machine learning and to open-source libraries that enable voice and gesture recognition in web applications, and we will train a complex interactive to respond to voice and gesture commands. Finally, we will discuss the opportunities and challenges these interactions present for inclusive learning, and consider the possibilities for a future in education technology that does not require physical controls.
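The core mapping the session builds — recognized commands driving drag-and-drop movement — can be sketched as plain logic, independent of any particular recognition library. The command names and step size below are illustrative assumptions, not the session's actual command set:

```javascript
// Illustrative mapping from recognized voice/gesture labels to
// drag-and-drop movements. The labels ('left', 'right', ...) and
// the step size are assumptions for the sketch.
const STEP = 10; // pixels moved per recognized command

const COMMANDS = {
  left:  ({ x, y }) => ({ x: x - STEP, y }),
  right: ({ x, y }) => ({ x: x + STEP, y }),
  up:    ({ x, y }) => ({ x, y: y - STEP }),
  down:  ({ x, y }) => ({ x, y: y + STEP }),
};

// Apply a recognized label to the current position of a draggable
// element; unrecognized labels leave the position unchanged.
function applyCommand(label, pos) {
  const action = COMMANDS[label];
  return action ? action(pos) : pos;
}
```

In a browser, `applyCommand` would be called from the recognizer's callback, with the result written back to the element's position (for example via CSS `left`/`top`, or through the interactive's own positioning API).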
- Simply meeting accessibility standards does not ensure usability
- New machine learning libraries enable voice and gesture recognition in web applications
- Using multiple input modalities can enhance learning for all students
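Recognition libraries of the kind mentioned above typically report a probability score for every trained label on each prediction, so an application needs a selection step that ignores low-confidence results rather than triggering an action on every callback. A minimal sketch of that step, assuming the parallel `labels`/`scores` shape that libraries such as the TensorFlow.js speech-commands model pass to their callbacks (the threshold value is an assumption):

```javascript
// Pick the most probable label from a recognizer's per-label scores,
// returning null unless the best score clears a confidence threshold.
// This keeps background noise and uncertain gestures from firing actions.
function topLabel(labels, scores, threshold = 0.75) {
  let bestIndex = -1;
  let bestScore = threshold; // anything at or below the threshold loses
  for (let i = 0; i < scores.length; i++) {
    if (scores[i] > bestScore) {
      bestScore = scores[i];
      bestIndex = i;
    }
  }
  return bestIndex >= 0 ? labels[bestIndex] : null;
}
```

The returned label (or `null`) can then be fed to whatever command dispatch the application uses; raising the threshold trades responsiveness for fewer accidental triggers.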
Accessible Educational Materials
Kelly has over 10 years of experience in edtech design and development, first as a postdoctoral researcher with the PhET interactive simulations group at the University of Colorado Boulder, and most recently as the product owner for accessible, interactive content at Macmillan Learning. She has a PhD in computational chemistry from Georgia Tech.
Damien has over 20 years of experience in technology, government, and education, most recently working as a senior software engineer at Macmillan Learning. A Peace Corps alum with a Master’s in Instructional Technology from the University of Texas, Damien leverages his diverse coding, teaching, and writing skills to capture human issues and solve relational problems in both the public and private sectors.