Efficient, but Equitable Access to Information
by Mahika Phuthane
The Challenge
The New York City government website (nyc.gov) hosts pages for more than 50 city agencies and is visited by thousands of residents every day. Each agency is constantly updating its pool of resources, current events, legislative forms, and more. In this growing web landscape, how can visitors get prompt access to the information they need? More importantly, how do we ensure that this access to resources and information remains efficient and accessible to readers with disabilities?
The Mayor’s Office for People with Disabilities (MOPD) championed the effort of creating an accessible chatbot for NYC’s government website. Chatbots and large language models (LLMs) have emerged as promising technologies for sifting through enormous amounts of text and inferring information to respond to questions. I set out to explore these possibilities with MOPD as a Siegel Family Endowment PiTech PhD Impact Fellow in summer 2023, an effort that later grew to include partnering with the Office of Technology & Innovation (OTI) and other city agencies.
Building an Accessible Interface
One of the primary challenges we face with readily available chatbots is their inaccessibility and incompatibility with assistive technologies. For instance, blind and low-vision individuals use screen magnification or screen readers, which do not recognize chatbot applications and simply gloss over them. So, I began working on a front-end interface for a chatbot application that would not only be recognized by assistive software, but would also offer additional features and markers for usable accessibility.
The key to understanding web accessibility is ARIA: Accessible Rich Internet Applications.
ARIA is a collection of attributes that 1) bridges gaps in the code, and 2) augments web elements to create more equitable experiences for those using assistive technologies. Web design often relies on visual cues to convey usability: for instance, a grayed-out button, or mouse-hover actions. Implementing ARIA and augmenting the code with proper labels, descriptions, and properties ensures that screen readers and other assistive software can parse the content and convey the same usability cues. I began modifying an open-source chatbot application for MOPD’s purposes, and used ARIA properties to make it accessible. I added descriptive names and labels to each element of the application (see Figure 2a). Correctly incorporating ARIA into any web application can be tricky, but it is critical for the application’s accessibility and overall ease of use.
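To illustrate, here is a minimal sketch of what this labeling looks like in practice. The element IDs and label text are hypothetical assumptions, not the actual MOPD markup:

```typescript
// A sketch of adding ARIA names and roles to chatbot elements so that
// screen readers can identify and announce them. IDs and labels are
// illustrative, not the project's real code.
function labelChatbotElements(): void {
  const chatWindow = document.getElementById("chat-window");
  const input = document.getElementById("chat-input");
  const sendButton = document.getElementById("chat-send");
  if (!chatWindow || !input || !sendButton) return;

  // Identify the conversation area as a distinct region with a readable name.
  chatWindow.setAttribute("role", "log");
  chatWindow.setAttribute("aria-label", "Chat conversation");

  // Name the text field so it is not announced as an anonymous "edit" box.
  input.setAttribute("aria-label", "Type your question");

  // Describe the button's action, not just its icon.
  sendButton.setAttribute("aria-label", "Send message");
}
```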
A challenging ARIA concept was focus management. A comfortable user experience allows focus to transition intentionally and intuitively from one part of the screen to another, and many elements align with a specific focus order and depth (cite). In Figure 2b, we see the focus shift to the options presented by the chatbot, instead of focusing on other parts of the website, such as other headings or text fields.
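As a concrete sketch of this kind of focus shift, the snippet below uses a common "roving tabindex" pattern to move focus onto the options the chatbot has just presented. The container ID is a hypothetical assumption, and this is one plausible pattern rather than the project's actual implementation:

```typescript
// Sketch: after the chatbot presents options, move keyboard focus
// directly to them instead of leaving it elsewhere on the page.
function focusChatbotOptions(options: string[]): void {
  const container = document.getElementById("chat-options");
  if (!container) return;
  container.innerHTML = ""; // clear any previously shown options

  options.forEach((text, i) => {
    const button = document.createElement("button");
    button.textContent = text;
    // Roving tabindex: only the first option sits in the tab order;
    // focus is moved among the rest by script, keeping focus order shallow.
    button.tabIndex = i === 0 ? 0 : -1;
    container.appendChild(button);
  });

  // Land screen reader and keyboard users on the first choice offered.
  (container.firstElementChild as HTMLElement | null)?.focus();
}
```

The roving tabindex keeps the focus order shallow: a keyboard user reaches the option group in one stop rather than tabbing through every button on the page.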
Another common struggle with readily available chatbots is alert notification and the ordering of messages. Usually, screen readers begin reading messages from the top down, and do not shift focus or pause reading when new messages appear. I mitigated these challenges by allowing up- and down-arrow navigation of messages, and by adding ARIA live regions to new message fields. In Figure 3, we see the user read the most recent message; with a screen reader enabled, this message is parsed as: “Chatbot said Hi Mahika, how can I help you?”, whereas otherwise it would have been read as… Note that the logo image and other meta information about this message are not read. Simplicity and minimalism are key aspects of inclusive design, a vision that everyone at the MOPD and OTI offices shared.
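The sketch below illustrates both fixes under assumed element IDs: an ARIA live region so new messages are announced, plus up/down-arrow navigation through the message history. The “Chatbot said …” phrasing mirrors the screen reader output described above:

```typescript
// Sketch, with hypothetical IDs: announce new messages via a live region,
// and let users walk the message history with the arrow keys.
const log = document.getElementById("chat-messages")!;
log.tabIndex = 0; // let keyboard users tab to the message log itself
// "polite" waits for the screen reader to finish speaking before
// announcing the new message, rather than interrupting mid-sentence.
log.setAttribute("aria-live", "polite");

function appendMessage(sender: string, text: string): void {
  const msg = document.createElement("div");
  msg.tabIndex = -1; // focusable by script, but not in the tab order
  // Keep the accessible text simple: "Chatbot said ...", no logo or metadata.
  msg.textContent = `${sender} said ${text}`;
  log.appendChild(msg); // the live region announces this automatically
}

let current = -1;
log.addEventListener("keydown", (event: KeyboardEvent) => {
  const messages = log.children;
  if (messages.length === 0) return;
  if (event.key === "ArrowDown") {
    current = Math.min(current + 1, messages.length - 1);
  } else if (event.key === "ArrowUp") {
    current = Math.max(current - 1, 0);
  } else {
    return;
  }
  event.preventDefault(); // keep the page itself from scrolling
  (messages[current] as HTMLElement).focus();
});
```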
Exploring Artificial Intelligence Options
Beneath the interface, it was important to consider how the chatbot could synthesize text from the nyc.gov website and respond intelligently to the user. Recent advancements in large language models by companies such as OpenAI offer these services, but the government preferred not to use paid, privatized services and instead focus on open-source AI models. Going forward, I hope to collaborate with back-end engineers and cybersecurity teams within OTI to understand how AI can be safely integrated into the chatbot’s dialogue systems in ways that mitigate security concerns.
In summary, currently available chatbot solutions are inaccessible, which fueled the need to develop a customizable and scalable chatbot with accessibility built in from the start. It was a pleasure to work with MOPD on this initiative over the summer, and I am pleased to be continuing my work with them during the fall semester with support from the PiTech Initiative at Cornell Tech. I proposed building additional features, such as ASL interpretation and voice input, which we will co-design with members of the disability community. Through this project, we aspire to provide equitable and efficient access to nyc.gov for users with various disabilities and assistive software, a mission that the NYC government believes in. By documenting this process, through both code and natural language, we plan to make this chatbot open-source and easily reusable by other websites and agencies.