Archive for the ‘Accessibility’ Category

Fundamental barriers to the accessibility of digital materials

November 27, 2014

It’s nearly twenty years since I did my D.Phil. work on the principles of auditory access to complex notations. I’ve moved away from research into the HCI of accessibility since then, but it has always been on my mind, and recent conversations have prompted me to write a little about my thoughts on the underlying issues of using computers (digital presentations of material) when one cannot see.


My D.Phil. research was motivated by my experience of doing mathematics using algebra notation as part of my master’s in Biological Computation at the University of York. I took that course after giving up, as a consequence of sight loss, my previous Ph.D. work in biochemistry. There was a lot of mathematics in the master’s degree; not necessarily that hard as mathematics goes, but made much harder by not being able to see the notation and use it for manipulation and effective, efficient thinking.


At the root of my difficulty, it appeared to me, was not having a piece of paper upon which I could do my algebraic manipulations. The paper remembers the equation for me: I can write down re-arrangements of the symbols and cross things out, all the time only using my brain to work out what to do, both strategically and tactically, then remembering each bit by externalising it on the page in front of me. Without the paper, most of this had to happen in my head. I mimicked some of paper and pencil’s attributes in Braille or, even worse, in text in a text editor, but it isn’t the same at all, and this prompted me to think about exactly why it isn’t the same. I discussed these problems with Alastair Edwards and eventually did my master’s project with him, looking at rendering algebra written in a linear code as audio and browsing that audio presentation. This led on to my D.Phil. research with Alastair, where I looked at the human–computer interaction of doing algebra, and other complex notations, in audio.


There’s no need to go into the details of my D.Phil. work here, because I want to look at the basics of interacting with information when one cannot see; in particular, what’s possible (if not beautiful) in terms of interaction, and what the real “can’t do very well” problems are that, as far as I can tell, still remain.


Reading “plain” text (words written one after another in normal natural-language form) is more or less OK. I use a screenreader and I can read and write straightforward text without too much of a problem. My window upon the screen of text is rather small; it’s essentially one line. I can get my screenreader to read bigger portions of text, but the quick look, the scan, is still problematic. I can move around the text to find what I want and inspect it with relative ease; the interaction is, to my mind, clunky, but it’s all doable. As soon as one moves away from simple, linear strings of words into two dimensions, as in algebra notation, and into informationally dense material (again, algebra is dense and complex, or complex because it’s dense), speech-based screenreaders don’t offer an effective reading solution.


This brings me to two of the things that I worked out during my D.Phil.:

  1. A listening reader tends to be a passive reader. As a listening reader, I tend to lack agility in my control of information flow. In the worst case, e.g. with an audio book, I listen at the rate dictated by the reader, not the rate my eyes and brain would choose. Obviously I control information flow with keystrokes that make my screenreader say things, but it’s all a bit clunky, slow and intrusive compared with what one does with one’s eyes – they move around the screen (or paper) in a way that gets me to the right portion of text, either word by word or in bigger chunks, without my having to consciously do very much at all. So, speed and accuracy in the control of the flow of information turns the reader from passive to active.
  2. I lack an adequate external memory. The paper or the screen has the text upon it and remembers it for me, but because it’s slow and clunky to get at, I rely more on my brain’s memory, and that’s a bit fragile. Of course there is an external memory – the information I have access to on a computer – but it only really plays the role of an external memory if there is sensible (fast and accurate) control of access to it.


    The external memory, in conjunction with speed and accuracy in the control of information flow, makes eyes and paper/screen rather effective. It was these two issues that I addressed in my D.Phil. work.


Despite these issues, access to straightforward text is OK. I, along with lots of other people, read and write perfectly well with screenreaders and word processors. In the small the interaction works well, but I find reading and comprehending larger documents much harder work; it’s a burden on my memory, and flipping backwards and forwards in the text is relatively hard work – not impossible, but harder than it was when I could see.


Some of the difficulty I describe with this large-grained view of information comes from the ability, or the lack of it, to glance at material. Typesetters have spent centuries working out styles of layout that make things easy to read, and there are visual cues all over pages to aid navigation and orientation. Algebra notation is laid out to group operands in a way that reflects the order of precedence of the operators – it makes a glance at an expression written in algebra easier. Similarly, diagrams need at least to give the illusion of being able to see the whole thing (see below) – the glance at the whole diagram. Work on glancing has been done, including some by myself, and there are ways of doing it for individual information types, but I don’t know of a generic solution, and certainly not one that is available to me for everyday use.


  1. Glancing at information, to assess reading strategies, to help orientation and navigation, and to make choices about what to read, is difficult


My final chore is the “looking at two things at once” problem. Eyes give one the impression that two things can be looked at at once. In the small this is true: the field of accurate vision is narrow, but it does take in several things in detail at once. Beyond that, the speed and accuracy in control of information flow afforded by the eyes, combined with the layout of information (when done well) on an external memory, means that eyes can move back and forth between items of information rather well. This is hard in speech and audio, where so much layout information is lost. When reading research papers, moving back and forth between the narrative text and the references was easy with eyes; it’s hard with speech (what I do is have two windows open and move between them, which is hard work).


My interaction with spreadsheets always seems very clunky to me. My natural view with a speech-based screenreader is one cell at a time; for a sighted reader, looking at column or row headers to see what they are is naturally a matter of flicking one’s eyes up or along to regain orientation, and that’s fine. I can do this, but the means of doing so is intrusive. Similarly, dealing with any tabular information is painful. The ability to compare rows, columns and cells is central; indexing via column and row headings is vital. I have the keystrokes to do it all in my screenreader, but it’s hard work – in contrast, a sighted reader flicks their eyes back and forth and appears to be looking at two things at once. Tables are nothing, in terms of difficulty, compared with diagrams; even where there is access to the material (e.g., simple line graphs, histograms, and node-and-arc diagrams), one has to build up a picture of the whole thing piecemeal. The “looking at two things at once” ability of eyes makes this task relatively easy, and the inability to do it with speed, accuracy and so on means many interactions are either very hard or impossible.


  2. Looking at two things at once is nigh on impossible


In conclusion, I think there are still two main unsolved problems in audio interaction with information:

  1. Glancing;
  2. Looking at two things at once.

Once I have general solutions to these two things, I’ll be a much more effective and efficient reader, and one far more satisfied with my reading.

An accessible front end to Google Calendar

September 15, 2014

I’ve not written about being blind and using computers in this forum before, but I actually have something to say – my new Accessible Google Calendar (AGC) is ready and I like it. As can be appreciated, a calendar or diary is a tremendously useful thing. Not having effective (as far as I’m concerned) access to electronic calendars, and not being able to share the commonly used calendar mechanisms with colleagues, makes working more trying than it need be.


The advent of on-line calendars and so on should have made life easier, but the two-dimensional table layout of calendars/diaries makes it too much like hard work. In addition, the Web 2.0 nature of tools like Google Calendar is not to my screenreader’s liking and therefore not my cup of tea. As a consequence, for many years I had to organise my diary vicariously and, as a result, badly (just because of the overheads of communication, no fault of the people at the other end of my communications).


My first step along the path to a solution was a little command-line gadget made for me by Simon Jupp, one of my research associates. This gadget took some arguments that scoped time and then printed that portion of my Google Calendar diary to the screen as text, which was easy for my screenreader to handle. Additions to my diary, of course, had to be done by someone else.
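
I no longer have Simon’s gadget to hand, but its essence is easy to sketch. Something in the following spirit would do the job using Google’s .NET client library for the Calendar API; the code is my own illustration, not his, and the authentication set-up (the fiddly part) is assumed to have been done already, yielding an authorised CalendarService:

    // A sketch, not Simon's actual code: print a slice of a Google Calendar as
    // plain text using the Google.Apis.Calendar.v3 .NET client library.
    // Authentication is omitted; assume 'service' is an authorised CalendarService.
    using System;
    using Google.Apis.Calendar.v3;
    using Google.Apis.Calendar.v3.Data;

    static class CalendarDump
    {
        // Print every event between 'from' and 'to', one per line, so that a
        // screenreader can read straight down the screen.
        public static void PrintRange(CalendarService service, DateTime from, DateTime to)
        {
            EventsResource.ListRequest request = service.Events.List("primary");
            request.TimeMin = from;
            request.TimeMax = to;
            request.SingleEvents = true;   // expand recurring events into instances
            request.OrderBy = EventsResource.ListRequest.OrderByEnum.StartTime;

            Events events = request.Execute();
            if (events.Items == null) return;   // nothing in this range

            foreach (Event e in events.Items)
            {
                // All-day events carry a Date; timed events carry a DateTime.
                string start = e.Start.DateTime.HasValue
                    ? e.Start.DateTime.Value.ToString("ddd dd MMM yyyy HH:mm")
                    : e.Start.Date;
                string marker = e.Status == "tentative" ? "*" : " ";   // unconfirmed events
                Console.WriteLine($"{marker}{start}  {e.Summary}");
            }
        }
    }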

Dimitris Zlitidis then did his M.Sc. project with me on creating an accessible front end to Google Calendar, which allowed me both to read and to write to my Google Calendar. That project gave us the design of the AGC user interface I describe here, and I’ve been using it for many years. Google changing their calendar API prompted a re-write by Nikita Abramovs, a vacation student at the School of Computer Science of the University of Manchester, and it’s this re-write I now describe.


The Accessible Google Calendar (AGC) tool was written in C#; that gives it user interface components native to Windows, the operating system I use, so its interface is inclined to work with my screenreader JAWS immediately. I then looked at scoping and prioritising what I wanted done. There’s a lot that one can do with Google Calendar – a lot of calendar management: who can edit the entries, inclusion of schedules of public holidays, and so on. I left these out. When I want them I will work with the Web version, vicariously as necessary. The two things I really want to do are:


  1. Look at entries in the portions of time at which I most frequently wish to look;
  2. Add, modify and delete entries, with access to the facilities for specifying times (all-day events and fragments of days) and for creating recurring events.

As “the past is a foreign country”, the main things I want to look at are the “now” and “future” events in my diary. So there’s a list of simple patterns by which I choose the events to look at (a sketch of how such date ranges might be computed follows the list):


  1. Today and tomorrow;
  2. This week and next week;
  3. This month and next month;
  4. “Select month period” extends the month functionality by letting me choose months further into the future, with the option of a) a single month; b) all months; c) the intervening months;
  5. For the rare dates that fall outside this scope there’s a choose-date dialogue where I can specify start and end days;
  6. Finally, I can search for events by their content.
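
For concreteness, here is a sketch of how those ranges might be computed as simple date intervals; it is illustrative only, and not necessarily how AGC itself does it:

    // A sketch of the common date ranges as half-open intervals [Start, End);
    // illustrative only, not necessarily AGC's own implementation.
    using System;

    static class Ranges
    {
        public static (DateTime Start, DateTime End) Today()
            => (DateTime.Today, DateTime.Today.AddDays(1));

        public static (DateTime Start, DateTime End) Tomorrow()
            => (DateTime.Today.AddDays(1), DateTime.Today.AddDays(2));

        public static (DateTime Start, DateTime End) ThisWeek()
        {
            // Treat Monday as the first day of the week.
            DateTime today = DateTime.Today;
            int daysSinceMonday = ((int)today.DayOfWeek + 6) % 7;
            DateTime monday = today.AddDays(-daysSinceMonday);
            return (monday, monday.AddDays(7));
        }

        public static (DateTime Start, DateTime End) NextWeek()
        {
            var (_, end) = ThisWeek();
            return (end, end.AddDays(7));
        }

        public static (DateTime Start, DateTime End) ThisMonth()
        {
            DateTime first = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 1);
            return (first, first.AddMonths(1));
        }

        public static (DateTime Start, DateTime End) NextMonth()
        {
            var (_, end) = ThisMonth();
            return (end, end.AddMonths(1));
        }
    }

Each such range can then be handed to an event-listing request (as its TimeMin and TimeMax) to produce the events for that view.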


AGC’s Events tab is shown here:


Figure 1: An image of AGC’s Events tab showing a week’s events and the various controls for selecting events; details are in the rest of the text.


Events are shown as a simple list that I can move up and down with my cursor keys. Unconfirmed events are indicated by a “*” at the start of the entry. I can update an event by clicking on it (pressing return), which brings up an update-event dialogue (similar to the add-event dialogue described below). There’s a settings tab that allows me to specify things like: showing end times; a 12 or 24 hour clock; separators for the parts of dates (space, slash or dash); and whether errors are indicated by sounds or text.
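
The point about native Windows controls suiting JAWS can be made concrete with a minimal WinForms sketch; the names and example entries below are my own invention rather than AGC’s actual source, but a plain ListBox read with the cursor keys is exactly the kind of widget a screenreader handles with no extra work:

    // A minimal WinForms sketch (my illustration, not AGC's source) of an events
    // list built from standard controls, which JAWS reads without special effort.
    using System;
    using System.Windows.Forms;

    class EventsTab : Form
    {
        private readonly ListBox eventsList = new ListBox();

        public EventsTab()
        {
            Text = "AGC – Events (sketch)";
            eventsList.Dock = DockStyle.Fill;
            // Unconfirmed events are prefixed with "*", as in AGC.
            eventsList.Items.Add("*Mon 01 Dec 2014 10:00  Project meeting");
            eventsList.Items.Add(" Tue 02 Dec 2014 14:00  Seminar");
            // Pressing return on an entry would open an update-event dialogue.
            eventsList.KeyDown += (sender, e) =>
            {
                if (e.KeyCode == Keys.Enter && eventsList.SelectedItem != null)
                    MessageBox.Show("Update dialogue for: " + eventsList.SelectedItem);
            };
            Controls.Add(eventsList);
        }

        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.Run(new EventsTab());
        }
    }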


AGC’s add-event functionality is a moderately complex dialogue, but it flattens out any two-dimensional calendar presentation from which to pick dates. Nearly everything is done via little spin boxes that let me pick years, months and days with my cursor keys. As I fix the start time, the end time keeps track, defaulting to one hour later, to reduce the amount of “setting” I have to do. A checkbox for whole-day events limits the interaction to setting the day’s date, and a recurring-events checkbox exposes controls for setting how long the recurrence lasts and on which days the recurring event happens. Finally, the dialogue allows me to set a reminder time and whether or not the event is confirmed. There’s also an “add quick event” tab that lets me use Google’s controlled natural language for setting dates – “Dinner with Isaac Newton 7 p.m. next Friday” does what it says on the tin. There’s a menu of template CNL sentences from which to pick.
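
Underneath those dialogues, what AGC ultimately asks of the Calendar API amounts to something like the sketch below; again this is my own illustration, not the actual code, and the calendar id, event details and recurrence rule are made up for the example:

    // A sketch (illustrative, not AGC's code) of the Calendar API calls behind
    // the add-event and add-quick-event tabs, using Google.Apis.Calendar.v3.
    using System;
    using System.Collections.Generic;
    using Google.Apis.Calendar.v3;
    using Google.Apis.Calendar.v3.Data;

    static class AddEvents
    {
        // A one-hour event recurring weekly until the end of the year, with a reminder.
        public static void AddWeeklyEvent(CalendarService service)
        {
            var ev = new Event
            {
                Summary = "Group meeting",
                // A time zone is required when the event recurs.
                Start = new EventDateTime { DateTime = new DateTime(2014, 9, 22, 10, 0, 0), TimeZone = "Europe/London" },
                End   = new EventDateTime { DateTime = new DateTime(2014, 9, 22, 11, 0, 0), TimeZone = "Europe/London" },
                Recurrence = new List<string> { "RRULE:FREQ=WEEKLY;UNTIL=20141231T235959Z" },
                Reminders = new Event.RemindersData
                {
                    UseDefault = false,
                    Overrides = new List<EventReminder> { new EventReminder { Method = "popup", Minutes = 15 } }
                }
            };
            service.Events.Insert(ev, "primary").Execute();
        }

        // The "add quick event" tab hands Google's controlled natural language straight to the API.
        public static void QuickAdd(CalendarService service)
        {
            service.Events.QuickAdd("primary", "Dinner with Isaac Newton 7pm next Friday").Execute();
        }
    }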

The Add Event tab, with the recurring-events section visible, is shown here:


Figure 2: An image of AGC’s Add Event tab showing an event that recurs weekly from September to December.


I’ve used the original version of AGC for several years and it’s been a vital tool. Dimitris and I got the user interface more or less right and Nikita’s re-write and update has made it even better. I rarely need to get outside intervention in my diary setting and the view events tab has a nice regularity, symmetry and simplicity about it that I rather like. I rarely use the choose date and search functions (though they are nice to have for the odd occasion); just having today, tomorrow, this week, next week, this month and next month does it for me nearly all the time. The user interface, having been used for years, has had lots of testing and, while the user base is not extensive (me), it does all that I need to do on a frequent and regular basis. It’s good that Google have exposed the API to their calendar. Ideally I’d like the Web offering of their calendar to work well for me, but I need to do my diary now and AGC is my solution.


The AGC installer can be downloaded from https://github.com/TheOntologist/AGC/releases. A short readme file describing AGC’s functionality and how to install it can be found at https://github.com/TheOntologist/AGC/blob/master/README.md