Tuesday, September 24, 2013

Proposal: Part 1

We decided to write and submit a proposal to present posters at the Tapia conference in order to gain experience presenting our work. We divided the project into two parts. My part was titled "Simon speech recognition integration with the Caribou On-Screen Keyboard (OSK)". Our goal is to integrate GNOME's on-screen keyboard, Caribou, with Simon, KDE's speech recognition software, through the Assistive Technology Service Provider Interface (AT-SPI2), in order to build a powerful tool for disabled people: one that gives them the flexibility to do the same things other users can, with almost no disadvantage. The result would be a multimodal interface that can be very useful to people with disabilities. Integrating these two platforms through AT-SPI2 would let us develop software for physically disabled people that gives them the means to perform many activities with minimal or no assistance. This assistive technology is built on two platforms that are both open source.
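
As a rough illustration of what AT-SPI2 provides, the following sketch uses the Python pyatspi bindings to list the applications the accessibility bus currently exposes; both Simon and Caribou would talk to these same accessible objects. This is only an illustration, not code from the project:

    import pyatspi

    # The desktop accessible is the root of the AT-SPI2 tree; each of
    # its children is a running application exposed to assistive tools.
    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        print(app.name, "-", app.getRoleName())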

In previous work, Simon was integrated with GNOME's gedit text editor through the AT-SPI2 interface, and there has also been work on training vi for programming by voice (see "Python to Code by Voice"). Our objective, however, is to create a voice interface to the keyboard itself, so that users can program in any editor without having to write and memorize the roughly two thousand commands that the creator of that earlier interface did. This could be a powerful tool for their daily lives. The overarching goal of our work is to make this integration work with any IDE.

Monday, September 16, 2013

Proposal: Part 2

The Tapia Conference 2014 was accepting proposal submissions, and in our case we wanted to apply for the poster session. We had already discussed most of our ideas for the project, but this was the opportunity to present it formally and get feedback on it. Our goal is to create an assistive technology that combines two open source projects: KDE's Simon voice recognition software and GNOME's Caribou on-screen keyboard. By integrating them and expanding their features, we hope to create a tool for disabled individuals that enhances their performance while programming and gives them access to features that might previously have been unfeasible under their circumstances.

My part, Part 2, consisted of explaining how the programming interaction would be implemented and why it would be beneficial. This led us to discuss the technical limitations of the typical user interface and input mechanisms, and how we could enable a compact interface with advanced features. The integration with programming would come directly from a compatible IDE, which would expose its procedures and useful data through AT-SPI2 to Caribou and subsequently to Simon. Once the data is received, it would be organized for efficient display on the interface and for enabling voice commands. To achieve this, the Caribou interface must be extensively modified to add an area that presents the data according to its usefulness at any given point.
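
To make that data flow concrete, here is a minimal sketch, using the pyatspi bindings, of how a tool on our side could notice an editor widget gaining focus. The role check is an assumption for the sketch; the real integration would forward much richer data to Caribou and Simon:

    import pyatspi

    def on_focus_changed(event):
        # detail1 == 1 means the widget gained focus (0 means it lost it).
        if event.detail1 != 1:
            return
        source = event.source
        # React only to editable text areas, e.g. an IDE's code view;
        # matching on the role name "text" is an assumption here.
        if source.getRoleName() == "text":
            app = source.getApplication()
            print("Editor focused in application:", app.name)

    # Subscribe to focus changes on any accessible object on the desktop
    # and start dispatching AT-SPI2 events (this call blocks).
    pyatspi.Registry.registerEventListener(
        on_focus_changed, "object:state-changed:focused")
    pyatspi.Registry.start()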

The proposal also required us to include a project draft, which will become the poster's text in the final work. The following is the text we submitted:

The second phase of this assistive technology research project consists of integrating an on-screen keyboard (OSK) with an Integrated Development Environment (IDE) through a modified interface that expands functionality and facilitates interaction for users, specifically physically disabled individuals. Common assistive tools are on-screen keyboards and speech recognition software, such as Caribou, Simon, and Dragon NaturallySpeaking. Extensive research has been done on the efficacy of these tools, showing that they work well in controlled environments and for specific tasks, but may not be useful for conditions such as motor disorders that inhibit the user's mobility or pronunciation. Integrating their functionality to manage complex, dynamic tasks is an aspect this project explores.

Programming is an arduous task for individuals with physical disabilities who rely on independent tools to interact with their digital environment. Providing a multimodal Integrated Development Environment in which programming, with its complex syntax and dynamic structure, remains manageable is key to lessening this burden. By modifying the static structure of an on-screen keyboard to adjust dynamically to the criteria of the environment, our project provides flexibility in what type of information is displayed to users at a given moment. Besides facilitating input and reducing keystrokes, the application's speech recognition mode can perform any task available through the interface, reducing stress on the extremities by relying on voice commands and taking advantage of convenient features such as word completion, word prediction, embedded application commands (open, close, save), and grammar-specific commands (comment, collapse, expand) that are available through both methods.
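
As a purely hypothetical sketch of the dynamic adjustment described above (none of these names come from Caribou's actual code), the keyboard could swap its visible key set based on the editing context the IDE reports through AT-SPI2:

    # Hypothetical context-to-keys mapping; real layouts would be richer
    # and generated from the IDE's data rather than hard-coded.
    CONTEXT_LAYOUTS = {
        "python-source": ["def", "class", "return", "import", "self"],
        "comment": ["TODO", "FIXME", "NOTE"],
        "default": list("abcdefghijklmnopqrstuvwxyz"),
    }

    def keys_for_context(context):
        """Pick the key set shown on the OSK for the current context."""
        return CONTEXT_LAYOUTS.get(context, CONTEXT_LAYOUTS["default"])

    # Example: the caret sits inside a comment, so the keyboard surfaces
    # comment-specific shortcuts instead of plain letters.
    print(keys_for_context("comment"))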

"Free" vs "Open Source"

Before starting work on the Kavita project, we read about "free" and "open source" software projects and the difference between them. It's important to understand that "free" software is not about zero-cost software; the word "free" carries a moral connotation. It is about the freedom to share and modify the software for any purpose, although it cannot be marketed as proprietary software. Open source software, on the other hand, can be shared and modified, and can also be used in proprietary software. The term "open source" arose as part of a vocabulary for talking about free software as a business development methodology rather than as a moral stance.