Mission

Our mission is to collect street-level accessibility information for every street in the world and to enable the design and development of a novel set of location-based technologies for accessibility.

To achieve this, we are developing scalable methods for collecting street-level accessibility data using a combination of crowdsourcing, computer vision, and online map imagery such as Google Street View.

Team Members

Jon E. Froehlich: PI, Professor (Spring 2012 - Present)

David Jacobs: Co-PI, Professor (Fall 2013 - Present)

Manaswi Saha: Graduate Student (Fall 2016 - Present)

Anthony Li: Undergraduate Student (Fall 2016 - Present)

Maria Furman: Undergraduate Student (Spring 2017 - Present)

Rachael Marr: UX Designer (Spring 2017 - Present)

Alumni

Kotaro Hara: Graduate Student (Spring 2012 - Fall 2016)

Ruofei Du: Graduate Student (Fall 2013)

Jin Sun: Graduate Student (Spring 2013 - Fall 2014)

Ladan Najafizadeh: Graduate Student (Summer 2015 - Fall 2016)

Soheil Behnezhad: Graduate Student (Fall 2016)

Victoria Le: Undergraduate Student (Spring 2012 - Spring 2013)

Jonah Chazan: High School Intern (Summer 2013)

Robert Moore: Undergraduate Student (Fall 2012 - Fall 2013)

Sean Panella: Undergraduate Student (Fall 2012 - Fall 2013)

Zachary Lawrence: Undergraduate Student (Fall 2013 - Fall 2015)

Alex Zhang: Undergraduate Student (Fall 2014 - Fall 2015)

Anthony Li: High School Intern (Summer 2015)

Niles Rogoff: High School Intern (Summer 2015)

Christine Chan: Undergraduate Student (Summer 2015)

Stephanie Nguyen: Contributor (Spring 2016)

Daniil Zadorozhnyy: Undergraduate Student (Spring 2016 - Fall 2016)

Ji Hyuk Bae: Undergraduate Student (Spring 2017)

Sponsors


Publications

The Design of Assistive Location-based Technologies for People with Ambulatory Disabilities: A Formative Study
Hara, K., Chen, C., and Froehlich, J.
Proceedings of CHI 2016, San Jose, California, USA
In this paper, we investigate how people with mobility impairments assess and evaluate accessibility in the built environment and the role of current and emerging location-based technologies therein. We conducted a three-part formative study with 20 mobility-impaired participants: a semi-structured interview (Part 1), a participatory design activity (Part 2), and a design probe activity (Part 3). Parts 2 and 3 actively engaged our participants in exploring and designing the future of what we call assistive location-based technologies (ALTs): location-based technologies that specifically incorporate accessibility features to support navigating, searching, and exploring the physical world. Our Part 1 findings highlight how existing mapping tools provide accessibility benefits, even though they are often not explicitly designed for such uses. Findings from Parts 2 and 3 help identify useful features of future ALTs. In particular, we synthesize 10 key features and 6 key data qualities. We conclude with ALT design recommendations.
Characterizing and Visualizing Physical World Accessibility at Scale Using Crowdsourcing, Computer Vision, and Machine Learning
Hara, K. and Froehlich, J.
SIGACCESS Newsletter, Issue 113, 2015
Imagine a mobile phone application that allows users to indicate their ambulatory ability (e.g., motorized wheelchair, walker) and then receive personalized, interactive accessible route recommendations to their destination. In the following article, Kotaro Hara and Jon Froehlich talk about their research work aimed at developing scalable data collection methods for remotely acquiring street-level accessibility information and novel mobile navigation and map tools.
Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View: An Extended Analysis
Hara, K., Azenkot, S., Campbell, M., Bennett, C., Le, V., Pannella, S., Moore, R., Minckler, K., Ng, R., and Froehlich, J.
ACM Transactions on Accessible Computing (TACCESS), 2015
Low-vision and blind bus riders often rely on known physical landmarks to help locate and verify bus stop locations (e.g., by searching for an expected shelter, bench, or newspaper bin). However, there are currently few, if any, methods to determine this information a priori via computational tools or services. In this article, we introduce and evaluate a new scalable method for collecting bus stop location and landmark descriptions by combining online crowdsourcing and Google Street View (GSV). We conduct and report on three studies: (i) a formative interview study of 18 people with visual impairments to inform the design of our crowdsourcing tool, (ii) a comparative study examining differences between physical bus stop audit data and audits conducted virtually with GSV, and (iii) an online study of 153 crowd workers on Amazon Mechanical Turk to examine the feasibility of crowdsourcing bus stop audits using our custom tool with GSV. Our findings reemphasize the importance of landmarks in nonvisual navigation, demonstrate that GSV is a viable bus stop audit dataset, and show that minimally trained crowd workers can find and identify bus stop landmarks with 82.5% accuracy across 150 bus stop locations (87.3% with simple quality control).
Tohme: Detecting Curb Ramps in Google Street View Using Crowdsourcing, Computer Vision, and Machine Learning
Hara, K., Sun, J., Moore, R., Jacobs, D., and Froehlich, J.
Proceedings of UIST 2014, Honolulu, Hawaii, USA
Building on recent prior work that combines Google Street View (GSV) and crowdsourcing to remotely collect information on physical world accessibility, we present the first “smart” system, Tohme, that combines machine learning, computer vision (CV), and custom crowd interfaces to find curb ramps remotely in GSV scenes. Tohme consists of two workflows, a human labeling pipeline and a CV pipeline with human verification, which are scheduled dynamically based on predicted performance. Using 1,086 GSV scenes (street intersections) from four North American cities and data from 403 crowd workers, we show that Tohme performs similarly in detecting curb ramps compared to a manual labeling approach alone (F-measure: 84% vs. 86% baseline) but at a 13% reduction in time cost. Our work contributes the first CV-based curb ramp detection system, a custom machine-learning-based workflow controller, a validation of GSV as a viable curb ramp data source, and a detailed examination of why curb ramp detection is a hard problem, along with steps forward.
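
To make the dynamic scheduling idea concrete, below is a minimal sketch (in Python) of how a Tohme-style workflow controller might route scenes between the two pipelines. This is an illustration under assumptions, not the system's actual code: the Scene fields, the predict_cv_performance stand-in, and the 0.6 threshold are all hypothetical.

```python
# Illustrative sketch of a Tohme-style workflow controller (not the
# paper's actual code). A learned model predicts how well the automated
# CV pipeline will do on each scene; scenes where CV is predicted to
# struggle are routed to the costlier manual labeling pipeline.

from dataclasses import dataclass


@dataclass
class Scene:
    scene_id: str
    cv_confidence: float     # hypothetical: mean detector score for the scene
    scene_complexity: float  # hypothetical: e.g., a clutter/occlusion estimate


def predict_cv_performance(scene: Scene) -> float:
    """Toy stand-in for the learned performance predictor."""
    return scene.cv_confidence * (1.0 - scene.scene_complexity)


def route(scene: Scene, threshold: float = 0.6) -> str:
    """Choose a pipeline per scene based on predicted CV performance."""
    if predict_cv_performance(scene) >= threshold:
        return "cv_with_human_verification"  # cheap: humans only verify detections
    return "manual_labeling"                 # expensive: humans label from scratch


scenes = [
    Scene("intersection-001", cv_confidence=0.9, scene_complexity=0.1),
    Scene("intersection-002", cv_confidence=0.5, scene_complexity=0.7),
]
for s in scenes:
    print(s.scene_id, "->", route(s))
```

The design point reflected in the abstract is that routing only the hard scenes to manual labeling is what buys the time savings (13% in the paper) while keeping detection quality close to the all-manual baseline.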
Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View
Hara, K., Azenkot, S., Campbell, M., Bennett, C., Le, V., Pannella, S., Moore, R., Minckler, K., Ng, R., and Froehlich, J.
Proceedings of ASSETS 2013, Bellevue, Washington, USA
Low-vision and blind bus riders often rely on known physical landmarks to help locate and verify bus stop locations (e.g., by searching for a shelter, bench, or newspaper bin). However, there are currently few, if any, methods to determine this information a priori via computational tools or services. In this paper, we introduce and evaluate a new scalable method for collecting bus stop location and landmark descriptions by combining online crowdsourcing and Google Street View (GSV). We conduct and report on three studies in particular: (i) a formative interview study of 18 people with visual impairments to inform the design of our crowdsourcing tool; (ii) a comparative study examining differences between physical bus stop audit data and audits conducted virtually with GSV; and (iii) an online study of 153 crowd workers on Amazon Mechanical Turk to examine the feasibility of crowdsourcing bus stop audits using our custom tool with GSV. Our findings reemphasize the importance of landmarks in non-visual navigation, demonstrate that GSV is a viable bus stop audit dataset, and show that minimally trained crowd workers can find and identify bus stop landmarks with 82.5% accuracy across 150 bus stop locations (87.3% with simple quality control).
An Initial Study of Automatic Curb Ramp Detection with Crowdsourced Verification using Google Street View Images
Hara, K., Sun, J., Chazan, J., Jacobs, D., and Froehlich, J.
Poster Proceedings of HCOMP 2013, Palm Springs, California, USA
In our previous research, we examined whether minimally trained crowd workers could find, categorize, and assess sidewalk accessibility problems using Google Street View (GSV) images. This poster paper presents a first step toward using automated methods (e.g., machine vision-based curb ramp detectors) in concert with human computation to improve the overall scalability of our approach.
Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems
Hara, K., Le, V., Froehlich, J.
Proceedings of CHI 2013, Paris, France
Poorly maintained sidewalks, missing curb ramps, and other obstacles pose considerable accessibility challenges; however, there are currently few, if any, mechanisms to determine accessible areas of a city a priori. In this paper, we investigate the feasibility of using untrained crowd workers from Amazon Mechanical Turk (turkers) to find, label, and assess sidewalk accessibility problems in Google Street View imagery. We report on two studies: Study 1 examines the feasibility of this labeling task with six dedicated labelers including three wheelchair users; Study 2 investigates the comparative performance of turkers. In all, we collected 13,379 labels and 19,189 verification labels from a total of 402 turkers. We show that turkers are capable of determining the presence of an accessibility problem with 81% accuracy. With simple quality control methods, this number increases to 93%. Our work demonstrates a promising new, highly scalable method for acquiring knowledge about sidewalk accessibility.
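
The “simple quality control methods” mentioned above suggest a common pattern in crowdsourcing: collect redundant labels per image and aggregate them. The sketch below shows a basic majority vote over made-up data; it illustrates the general technique, and the label names and counts are assumptions, not the study's actual pipeline.

```python
# Illustrative majority-vote quality control over redundant crowd labels
# (made-up example, not the study's actual pipeline). Each image is
# labeled by several turkers; we keep the answer most workers agreed on.

from collections import Counter


def majority_vote(labels: list[str]) -> str:
    """Return the most common label; ties resolve arbitrarily."""
    return Counter(labels).most_common(1)[0][0]


# Hypothetical labels from three turkers per Street View image.
labels_by_image = {
    "gsv-0001": ["missing_curb_ramp", "missing_curb_ramp", "no_problem"],
    "gsv-0002": ["surface_problem", "surface_problem", "surface_problem"],
}

for image_id, labels in labels_by_image.items():
    print(image_id, "->", majority_vote(labels))
```

Aggregating redundant labels this way is one plausible reason accuracy can climb from 81% for individual turkers to 93% after quality control, as reported above.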
A Feasibility Study of Crowdsourcing and Google Street View to Determine Sidewalk Accessibility
Hara, K., Le, V., Froehlich, J.
Poster Proceedings of ASSETS 2012, Boulder, Colorado, USA
We explore the feasibility of using crowd workers from Amazon Mechanical Turk to identify and rank sidewalk accessibility issues from a manually curated database of 100 Google Street View images. We examine the effect of three different interactive labeling interfaces (Point, Rectangle, and Outline) on task accuracy and duration. We close the paper by discussing limitations and opportunities for future work.