Monday, October 14, 2013

AT-SPI2

The Accessibility Toolkit (ATK) is a development toolkit from GNOME that allows programmers to use common GNOME accessibility features, such as high-contrast visual themes for the visually impaired and keyboard behavior modifiers for those with diminished motor control, to make GNOME applications accessible. Assistive technologies, such as screen readers or magnifiers, can use the logical representation of an application's interface that ATK provides to enable individuals with disabilities to browse and interact with applications.

The Assistive Technology Service Provider Interface (AT-SPI) is a toolkit-neutral way of facilitating accessibility in applications by using native accessibility APIs. AT-SPI2 is, as was the intention from the beginning, a platform-neutral framework providing bi-directional communication between assistive technologies (ATs) and applications. Through AT-SPI, the state, property, and role information of an application's components is communicated directly to the end user's AT, facilitating bi-directional (input and output) user interaction with, and control over, an application or compound document instance.
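To make the role/state/property idea concrete, here is a small self-contained sketch. This is not the real AT-SPI2 API (which is reached through D-Bus bindings such as pyatspi); the `AccessibleNode` class and `find_by_role` helper are invented for illustration. They model the kind of tree an AT walks when, say, a screen reader enumerates an application's buttons.

```python
from dataclasses import dataclass, field

# Illustrative model only (not the real AT-SPI2 API): each UI component
# is exposed to assistive technologies as a node with a role, a name,
# and a set of states, arranged in a tree.
@dataclass
class AccessibleNode:
    role: str                              # e.g. "push button", "text"
    name: str = ""
    states: set = field(default_factory=set)
    children: list = field(default_factory=list)

def find_by_role(node, role):
    """Walk the accessibility tree, collecting nodes with a given role,
    much as an AT enumerates an application's interactive components."""
    found = [node] if node.role == role else []
    for child in node.children:
        found.extend(find_by_role(child, role))
    return found

# A tiny application tree: a frame containing a text field and two buttons.
app = AccessibleNode("frame", "Editor", {"active"}, [
    AccessibleNode("text", "document", {"editable", "focused"}),
    AccessibleNode("push button", "Save", {"enabled"}),
    AccessibleNode("push button", "Close", {"enabled"}),
])

buttons = find_by_role(app, "push button")
print([b.name for b in buttons])   # ['Save', 'Close']
```

An AT consuming this information can both read it out (output) and trigger actions on the nodes (input), which is the bi-directional channel described above.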

Who is Kavita?

Kavita earned her B.S. in Computer Science and Mathematics. In 2001, she joined UMBC's graduate program in Computer Science to pursue her Ph.D. Kavita has spinal muscular atrophy (SMA); she can no longer physically attend classes and can only type with one finger. Last year, with the help of her mom and her wheelchair, Kavita was able to come to campus to attend classes and our meetings. This year, she can only continue her studies and classes via Skype from home. But even from home, Kavita continues to maintain the highest grades, a 4.0 GPA, while working on her research.

Tuesday, September 24, 2013

Proposal: Part 1

We decided to write and submit a proposal to present posters at the Tapia conference in order to gain experience presenting our work. We divided the project into two parts. My part was titled Simon Speech Recognition Integration with the Caribou On-Screen Keyboard (OSK). Our goal is to integrate GNOME's on-screen keyboard, Caribou, with Simon, KDE's speech recognition software, through the Assistive Technology Service Provider Interface (AT-SPI2), to build a multimodal interface that can be a powerful tool for people with disabilities, giving them the same capabilities as other users with minimal disadvantage. Integrating these two platforms through AT-SPI2 allows us to develop software for physically disabled people that gives them the means to perform many activities with minimal or no assistance. This assistive technology is implemented across two different platforms, both of which are open source.



In previous work, Simon was integrated with the gedit text editor through its AT-SPI2 interface, and there has also been previous work on programming in vi using voice recognition (see "Using Python to Code by Voice"). Our objective, however, is to create a voice interface to the keyboard itself, so that the user can program in any editor without having to define and memorize the roughly two thousand commands that the creator of that earlier interface did. It can be a powerful tool for their daily lives. The overarching goal of our work is to implement this integration with any IDE.

Monday, September 16, 2013

Proposal: Part 2

The 2014 Tapia Conference was accepting proposal submissions, and in our case we wanted to apply for the posters. We had already discussed most of our ideas for the project, but this was the opportunity to present them formally and get feedback. Our goal is to create an assistive technology that comprises two open source projects: KDE's Simon voice recognition software and GNOME's Caribou on-screen keyboard. By integrating these and expanding their features, we hope to create a tool for disabled individuals that might enhance their performance while programming and give them access to features that might previously have been unfeasible under their circumstances.

My part, Part 2, consisted of explaining how the programming interaction would be implemented and why it would be beneficial. This led us to discuss the technical limitations of the typical user interface and input mechanisms, and how we could enable a compact interface with advanced features. The integration with programming would come directly from a compatible IDE, which would expose its procedures and useful data through AT-SPI2 to Caribou and subsequently to Simon. Once the data is exposed, it would be managed for efficient display on the interface and for enabling voice commands. To achieve this, the Caribou interface must be extensively modified to create an area that presents the data according to its usefulness at any point.
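The IDE-to-Caribou-to-Simon data flow described above can be sketched as a small pipeline. All function names here are hypothetical, invented for illustration; they are not real Caribou or Simon APIs. The sketch shows the three stages: the IDE exposes context data, the on-screen-keyboard layer selects what fits on the display, and the speech layer turns each visible item into a voice command.

```python
def ide_context(prefix, symbols):
    """Hypothetical IDE side: expose the symbols relevant to the text
    the user has typed so far (e.g. completion candidates)."""
    return [s for s in symbols if s.startswith(prefix)]

def osk_layout(candidates, max_keys=4):
    """Hypothetical Caribou side: keep only as many candidates as fit
    in the dynamic area of the on-screen keyboard."""
    return candidates[:max_keys]

def voice_commands(visible):
    """Hypothetical Simon side: map each spoken word to the text it
    inserts, so every visible key is also a voice command."""
    return {word: word for word in visible}

symbols = ["print", "println", "process", "parse", "sorted"]
candidates = ide_context("pr", symbols)   # ['print', 'println', 'process']
keys = osk_layout(candidates)             # what the OSK actually shows
commands = voice_commands(keys)           # what Simon will listen for
print(commands["print"])                  # print
```

In the real system each arrow in this pipeline would be an AT-SPI2 interface rather than a direct function call, but the division of responsibilities is the same.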

As part of the proposal, a project draft was also required. This draft is to become the poster in the final work. The following is the text submitted:

The second phase of this assistive technology research project consists of integrating an on-screen keyboard (OSK) with an Integrated Development Environment (IDE) that implements a modified interface to expand functionality and facilitate interaction for users, specifically physically disabled individuals. Common assistive tools are on-screen keyboards and speech recognition software, such as Caribou, Simon, and Dragon NaturallySpeaking. Extensive research has been done on the efficacy of these tools, showing that they work well in controlled environments and for specific tasks, but may not be useful for conditions such as motor disorders that inhibit the user's mobility or pronunciation. Integrating their functionality to manage complex dynamic tasks is an aspect that this project explores.

Programming is an arduous task for individuals with physical disabilities who rely on independent tools to interact with their digital environment. Providing a multimodal Integrated Development Environment that supports programming, with its complex syntax and dynamic structure, is key to lessening this burden. By modifying the static structure of an on-screen keyboard to dynamically adjust to the criteria of the environment, our project provides flexibility in what type of information is displayed to the user at a given moment. While facilitating input and reducing keystrokes, the application's speech recognition mode can perform any task available through the interface, reducing stress on the extremities by relying on voice commands and taking advantage of convenient features such as word completion, word prediction, embedded application commands (open, close, save), and grammar-specific commands (comment, collapse, expand) that are available through both methods.
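The dynamic-layout idea can be sketched as a simple context-to-key-set mapping. The context names and key sets below are invented for illustration, not taken from Caribou; the point is only that the keyboard swaps in a different key set depending on where the cursor is, so grammar-specific commands appear only where they apply.

```python
# Hypothetical key sets per editing context. In the real system these
# would be derived from the IDE's state via AT-SPI2; here they are
# hard-coded for illustration.
LAYOUTS = {
    "code":    ["comment", "collapse", "expand", "save"],
    "comment": ["uncomment", "save"],
    "dialog":  ["open", "close", "save"],
}

def keys_for(context):
    """Pick the key set for the current context; fall back to the
    application-level commands when the context is unknown."""
    return LAYOUTS.get(context, ["open", "close", "save"])

print(keys_for("code"))     # ['comment', 'collapse', 'expand', 'save']
print(keys_for("unknown"))  # ['open', 'close', 'save']
```

Because the speech layer mirrors whatever the keyboard displays, the same lookup also determines which voice commands are active at any moment.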

"Free" vs "Open Source"

Before we started to work on the Kavita project, we read about "free" and "open source" software projects, and the difference between them. It's important to understand that "free" software is not about zero-cost software; the word "free" carries a moral connotation. It is about the freedom to share and modify software for any purpose, though it cannot be relicensed as proprietary software. "Open source" software, in contrast, can be shared and modified, and in some cases can also be incorporated into proprietary software. The term "open source" emerged as part of a vocabulary for talking about free software as a business development methodology, rather than as a moral cause.

Tuesday, July 16, 2013

Introduction

This blog is a collaboration between an Assistant Professor and me, an undergrad, to develop an assistive technology to help another student, at the graduate level, complete her dissertation. We are working on creating a multimodal On-Screen Keyboard (OSK) that can interface with an Integrated Development Environment (IDE). We hope to integrate this technology into one of the Humanitarian Free and Open Source Software (HFOSS) communities.