OBOPedia: An Encyclopaedia made from Open Biomedical Ontologies

March 31, 2015

A little while ago I wrote a blog post about using an ontology or ontologies as background knowledge about a field of interest in order to learn about that domain, rather than simply annotating data or building some kind of hierarchical search interface. The idea is that an ontology captures knowledge about a field of interest; I should be able to look at that ontology and gain some kind of understanding of that domain by examining the terms used to name each class and the definitions that say how to recognise objects in that class (both the natural language definition and the axioms that describe that class’ objects in terms of relationships to other objects). In that blog article I conjectured that an encyclopaedia-style presentation of many ontology entries could work as a way of presenting the large amount of background knowledge captured in the ontologies the community has built. My feeling is that the standard graphical presentation of blobs and lines isn’t necessarily a good way of doing this, especially when there are several ontologies at which to look. Encyclopaedias are also meant for “looking things up” and finding out about them – and we can exploit Web technologies and the structure of an ontology to get the best of both worlds. The OBO ontologies are particularly attractive for an encyclopaedia because:

  • They cover a broad spectrum of biology – from sequence, through proteins, processes, functions, cells and cellular components, to gross anatomy.
  • Each entry in an ontology has a human-readable label, synonyms and a natural language definition, all of which are “standard” parts of an encyclopaedia entry.
  • The relationships between entries in the ontology can provide the “see also” links for an encyclopaedia entry (a sketch of this mapping is given just after this list).
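To make this mapping concrete, here is a minimal sketch in Python of turning a single OBO term stanza into an encyclopaedia-style entry. The stanza below is mocked up for illustration (the identifier and definition text are placeholders, not the real GO entry) and this is not OBOPedia’s own code; a real system would use a proper OBO or OWL parsing library.

    # A minimal sketch of mapping an OBO term stanza onto the parts of an
    # encyclopaedia entry (title, definition, synonyms, "see also" links).
    # The stanza is mocked up for illustration; a real system should use a
    # proper OBO/OWL parsing library rather than this hand-rolled splitter.

    SAMPLE_STANZA = """\
    [Term]
    id: GO:0000000
    name: ether metabolic process
    def: "A placeholder definition for illustration only." [X:ref]
    synonym: "ether metabolism" EXACT []
    is_a: GO:0008152 ! metabolic process
    """

    def stanza_to_entry(stanza: str) -> dict:
        """Map OBO tags onto the fields of an encyclopaedia-style entry."""
        entry = {"title": None, "definition": None, "synonyms": [], "see_also": []}
        for raw in stanza.splitlines():
            line = raw.strip()
            if ": " not in line:
                continue
            tag, value = line.split(": ", 1)
            if tag == "name":
                entry["title"] = value                      # the entry's title
            elif tag == "def":
                entry["definition"] = value.split('" [')[0].strip('"')
            elif tag == "synonym":
                entry["synonyms"].append(value.split('"')[1])
            elif tag == "is_a":                             # subsumption -> "see also"
                entry["see_also"].append(value.split("!")[-1].strip())
        return entry

    if __name__ == "__main__":
        print(stanza_to_entry(SAMPLE_STANZA))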

 

One of my undergraduate project students for this year, Adam Nogradi, has built OBOPedia for me as an example of this kind of presentation for a group of ontologies. OBOPedia may be found via http://www.OBOPedia.org.uk. The current version of OBOPedia has ten ontologies – the OBO Foundry ontologies plus a few more – and over 210,000 entries; the ontologies currently available are:

  • The Gene Ontology.
  • The Protein Ontology.
  • The Chemical Entities of Biological Interest ontology.
  • The Ontology for Biomedical Investigations.
  • The Phenotypic Quality Ontology.
  • The Zebrafish Anatomy and Development Ontology.
  • The Xenopus Anatomy and Development Ontology.
  • The Human Disease Ontology.
  • The Human Phenotype Ontology.
  • The Plant Ontology.

     

An example of an entries page for OBOPedia can be seen in the picture below:

 

This shows entries arranged alphabetically. The screen here shows some entries from “E”, after some scrolling; on view are “ether metabolic process” from GO and “ethmoid cartilage” from the Zebrafish Anatomy and Development Ontology. Each entry has the main label as the entry’s title, the various synonyms, the natural language definition and some “see also” links. The letters down the left-hand side take one to the beginning of the entries starting with that letter. Entries are shown 50 at a time. One nice aspect of this style of presentation can be the serendipity of looking at the entries surrounding the target entry and seeing something of interest; a typical hierarchical display automatically puts entries that are semantically related more or less in the same place – this encyclopaedia presentation doesn’t, but preserves the hierarchy via the “see also” links (though what those link to is rather hidden until arrival at the end of the link, which isn’t the case in most graphical presentations). Each entry shows the ontology whence the entry came – there are several anatomies containing the entry “lung” and knowing whence an entry comes is just a good thing. The picture also shows the (possible) exact, broader, narrower and related synonyms taken from the entries in the ontologies. At the moment OBOPedia only uses the subsumption links for its “see also”s, but the aim is to expand this to other relationships in the fullness of time. I’d also like to include the ability to use DL queries to search and filter the encyclopaedia, but time in this project has not permitted.

 

The picture below shows OBOPedia’s search in action.

The search was for “lung” and entries were retrieved from the Gene Ontology and the Human Disease Ontology; some of the entries brought back and available for viewing were “lung disease”, “lung leiomyoma”, “lung induction”, “lung lymphoma”, “hyperlucent lung” and many others…

Along with each OBOPedia entry there is also a “rate this definition” feature. This definition rating uses a simple five point scale that allows people to rate the natural language definition (capturing comments will come at some point). The idea here is that feedback can be gathered about the natural language definitions and eventually this will form an evaluation of the definitions.

 

OBOPedia is an encyclopaedia of biology drawn from a subset of the OBO ontologies (there’s no reason not to include more than the ten currently on show, except for resources), exploiting their metadata, especially their natural language definitions. OBOPedia is not a text-book and it’s not a typical blob-and-line presentation of an ontology. It’s an encyclopaedia that presents many ontologies at once, but without the reader necessarily knowing that he or she is using an ontology. It’s an attempt to give an alternative view on the knowledge captured by the community in a range of ontologies in a way that gives easy access to that knowledge. OBOPedia may be a good thing or a bad thing. Send comments to Robert.Stevens@manchester.ac.uk or add comments to this blog post.

Patterns of authoring activity when using Protégé for building an ontology

February 9, 2015

We’ve continued our work investigating the human-computer interaction of authoring an ontology. We had a couple of papers last year looking at some qualitative aspects of ontology authoring through interviews with experienced ontologists. We wanted to follow this up with quantitative work looking at the activities during the addition of axioms when authoring an ontology. I’m pleased to say we’ve just had a long paper accepted for CHI 2015 with the following details:

 

Markel Vigo, Caroline Jay and Robert Stevens. Constructing Conceptual Knowledge Artefacts: Activity Patterns in the Ontology Authoring Process. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems: CHI 2015; 18-24 Apr 2015; Seoul, Korea.

 

I reported some early work in this quantitative study. In this latest work we’ve taken the following approach:

  • We’ve instrumented a version of Protégé 4.3 (P4) to record every keystroke, mouse click and so on in a time-stamped log file (it’s called Protégé4US – the “US” is for “user studies”). We divided the events into interaction events (interacting with the ontology and its axioms via the class and property hierarchies and the axiom description window), authoring events (typing an axiom, class declaration, etc.) and environment events (invoking the reasoner, getting an explanation, etc.); a sketch of this categorisation is given after this list.
  • We had experienced ontology authors perform a series of tasks to build an ontology of potatoes. There were three tasks of increasing difficulty, involving making various defined classes over descriptions of some 15 potato varieties, the creation of which was also part of the tasks.
  • Whilst this happened we recorded what was happening on the screen.
  • Finally, we recorded eye-tracking data as to where the author’s gaze fell during the ontology authoring.
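By way of illustration only, each logged event can be thought of as a time-stamped record assigned to one of the three categories. The event names and the (timestamp, event) log format below are invented; they are not the actual Protégé4US log schema.

    # Sketch of categorising time-stamped authoring events into the three
    # groups used in the study. The event names and the (timestamp, event)
    # log format are invented for illustration; real Protégé4US logs differ.
    from collections import Counter

    INTERACTION = {"entity_selected", "description_selected", "hierarchy_expanded"}
    AUTHORING = {"edit_entity_start", "class_declared", "axiom_typed"}
    ENVIRONMENT = {"reasoner_invoked", "explanation_requested", "ontology_saved"}

    def categorise(event_name: str) -> str:
        """Assign an event to interaction, authoring or environment."""
        if event_name in INTERACTION:
            return "interaction"
        if event_name in AUTHORING:
            return "authoring"
        if event_name in ENVIRONMENT:
            return "environment"
        return "other"

    # A toy log: (timestamp in seconds from task start, event name).
    log = [(0.0, "hierarchy_expanded"), (2.1, "entity_selected"),
           (3.5, "edit_entity_start"), (9.8, "reasoner_invoked")]

    print(Counter(categorise(name) for _, name in log))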

 

In capturing eye-tracking data, the screen of Protégé4US is divided up into areas of interest (AOIs) as shown below. This picture shows the main view as an area of interest; other views involve classes, properties and individuals, and these have their own areas of interest defined. These AOIs are used to determine the dwell time of eye gaze during the tasks.
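As a sketch of how dwell times per AOI might be computed from raw gaze samples (the AOI names and rectangles below are invented for illustration; they are not the coordinates used in the study):

    # Sketch: summing gaze-sample durations per area of interest (AOI).
    # AOI rectangles are (left, top, right, bottom) in screen pixels; the
    # names and values are made up for illustration only.
    AOIS = {
        "class hierarchy": (0, 100, 400, 900),
        "edit entity dialogue": (400, 300, 900, 700),
        "description area": (900, 500, 1600, 900),
    }

    def aoi_for(x: float, y: float) -> str:
        """Return the first AOI containing the gaze point, or 'elsewhere'."""
        for name, (left, top, right, bottom) in AOIS.items():
            if left <= x <= right and top <= y <= bottom:
                return name
        return "elsewhere"

    def dwell_times(samples, sample_period=1 / 60):
        """samples: (x, y) gaze positions recorded at a fixed sampling rate."""
        totals = {}
        for x, y in samples:
            name = aoi_for(x, y)
            totals[name] = totals.get(name, 0.0) + sample_period
        return totals

    print(dwell_times([(100, 400), (120, 420), (1000, 600), (2000, 50)]))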

 

 

The patterns of ontology authoring activity we found were:

 

  1. An exploration cycle. The asserted class hierarchy is expanded after ontology loading – over 31% of the time an expansion is followed by another expansion, as users appear to familiarise themselves with the structure of the ontology. Eventually, this behaviour appears to become directed as an author chooses a class to edit. In contrast, the expansion of the inferred class hierarchy appears to be more exploratory, as the authors check what has happened post reasoning, perhaps answering the question “have I found all the changes?”.
  2. An editing cycle. Here an entity is selected, followed by selection of another entity 37% of the time or selection of the description area 29% of the time. Once selected, a description will be modified 63% of the time and followed by selection of another entity 59% of the time. This looks like selecting an entity, inspecting its description and then either editing it or moving on to another entity, each decision based on the content of the description.
  3. A reasoning cycle. Just prior to the reasoner being invoked, the ontology is saved 40% of the time or a defined class is created (17%). After the reasoner is run, 41% of the time participants observe the change on the asserted class hierarchy and then look at a description where the effects of reasoning can be seen. The inferred class hierarchy is inspected post-reasoning 30% of the time, which is again followed by the expansion of the hierarchy 43% of the time.

These activity patterns are shown in the following pictures.

 

Overall, we can see the following flow of events:

  • Initial exploration of the ontology.
  • A burst of exploration coupled with editing.
  • Reasoning followed by exploration.

 

An activity pattern is a common sequence of events. The details of our analysis that led to these activity patterns are in the paper, but some of the pretty pictures and the basic analysis steps that gave us these patterns are below.

 

 

This is a simple log plot of the number of each type of event recorded across all participants. The top three events – entity selected, description selected and edit entity:start – account for 54% of events. Interaction events account for 65% of events, authoring events for 30% and environment events for 5%. A small number of event types account for most of the activity, and interaction with P4 accounts for most things.

 

This picture shows the N-grams of consecutive events. We can see lots of events like expanding the class hierarchy (either asserted or inferred) occurring many times one after the other, indicating people moving down through the hierarchy – the class hierarchy seems to be a centre of interaction – looking for classes to edit and checking for the effects of reasoning.

Those are the events themselves, but what happens after each event? Below there is a plot of transitions from event to event (note the loops back around to the same event and the thickness of the lines indicating the likelihood of each transition occurring). A matrix of the number of transitions from event to event gives a fingerprint for each user. We see that the fingerprints revealed by these transitions from state to state are the same within individuals for each task; that is, each task is operationalised in P4 in the same way.

The inter-user similarity is also high, suggesting common patterns of events (though there is also evidence of some different styles here too). Below is a 16×16 matrix showing the correlation of the fingerprints (i.e., the transition matrices) of all participants.
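A small sketch of how such fingerprints and their pairwise correlation might be computed (the event names and sequences below are placeholders; the real event set and analysis are described in the paper):

    # Sketch: build a transition-count matrix ("fingerprint") from a user's
    # event sequence, then correlate two users' flattened fingerprints.
    # Event names and sequences are placeholders, not data from the study.
    import numpy as np

    EVENTS = ["entity_selected", "description_selected", "edit_entity", "reasoner_run"]
    INDEX = {name: i for i, name in enumerate(EVENTS)}

    def fingerprint(sequence):
        """Count transitions from each event to the next."""
        matrix = np.zeros((len(EVENTS), len(EVENTS)))
        for current, following in zip(sequence, sequence[1:]):
            matrix[INDEX[current], INDEX[following]] += 1
        return matrix

    user_a = ["entity_selected", "description_selected", "edit_entity",
              "entity_selected", "description_selected", "reasoner_run"]
    user_b = ["entity_selected", "edit_entity", "entity_selected",
              "description_selected", "edit_entity", "reasoner_run"]

    fp_a, fp_b = fingerprint(user_a), fingerprint(user_b)
    similarity = np.corrcoef(fp_a.ravel(), fp_b.ravel())[0, 1]  # Pearson r
    print(f"fingerprint correlation: {similarity:.2f}")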

 


 

The eye-tracking data showed that the class hierarchy received by far the most fixations (43%) and held the participants’ attention 45% of the time. The edit entity dialogue has 26% of the fixations and the same for attention, and the description area 17% of the fixations and 15% of attention. If we look at events over time we begin to see patterns, but with gaps. Some of these gaps can be filled by looking at where the eye gaze dwells – e.g., a user is looking at the description area but not interacting via events. The picture below shows the distribution of dwell times on each area of interest on the P4 user interface – note that these numbers tell the same sort of story as the Protégé4US event logging.

 


Each cell of the following matrix conveys the number of fixation transitions between areas of interest. In other words, it indicates where users will look at time t based on where they looked at t-1 (the x-axis indicates the origin while the y-axis is the destination). The darker the cell, the more transitions there are between areas. We find that, given a fixation on a given area, the most likely next fixation is on the same area.


We also see other transitions:

  • From class hierarchy to description area (and vice versa).
  • From the class addition pop-up to class hierarchy.
  • From the edit entity dialogue to the class hierarchy.
  • From the edit entity dialogue to the description area.

 

Again, we see the class hierarchy being central to the interactions.

 

To find the activity patterns themselves, we next merged the eye-tracking and N-gram analyses. First we collapsed consecutive events and fixations of the same type. We then took the resulting N-grams of size > 3 and extended them one event at a time until doing so yielded only repeated and smaller N-grams of the merged data. This analysis resulted in the editing, reasoning and exploration activities outlined at the top.
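A simplified sketch of the collapsing and N-gram counting steps (the event stream below is invented, and the extension/stopping criterion is only hinted at here; the full procedure, including the merge with the gaze data, is in the paper):

    # Sketch: collapse runs of identical consecutive events, then count the
    # N-grams of the collapsed stream. The stream below is invented; the
    # exact N-gram extension criterion is described in the paper.
    from collections import Counter
    from itertools import groupby

    def collapse(stream):
        """Collapse consecutive repeats of the same event into one event."""
        return [event for event, _ in groupby(stream)]

    def ngrams(stream, n):
        """Count all N-grams of length n in the stream."""
        return Counter(tuple(stream[i:i + n]) for i in range(len(stream) - n + 1))

    stream = ["expand", "expand", "select_entity", "edit", "edit",
              "select_entity", "edit", "save", "reason", "expand_inferred"]
    collapsed = collapse(stream)
    print(collapsed)
    print(ngrams(collapsed, 3).most_common(3))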

 

So what do we now know?

 

It appears that the class hierarchy is the centre of activity in Protégé. Authors look at the asserted class hierarchy to find the entity they wish to edit and then edit it in the description window. The inferred class hierarchy is used to check that what is expected to have happened as a result of reasoning has indeed happened. While activity in each of these windows involves a certain amount of “poking about”, the activity in the asserted class hierarchy looks more directed than that in the inferred class hierarchy. Design work that can ease navigation and orientation within the ontology and checking on the results of reasoning – all results, not just those the author expects to happen – would be a good thing.

 

Adding lots of axioms by hand is hard work and presumably error prone. Bulk addition of axioms is desirable, along with the means of checking that it’s worked OK.

 

There is a lot of eye-gaze transition between the class hierarchy and the editing area. In P4 these are, by default, top-left and bottom-right. Defaulting to adjacency of these areas could make authoring a little more efficient.

 

We see experienced authors tend to save the ontology before reasoning. An autosave feature would seem like a good thing – even if the reasoners/Protégé never fell over.

 

Finally, rather than letting the author hunt for changes in the inferred class hierarchy, changes should be made explicit in the display; in effect showing what has happened. This would be a role for semantic diff. Knowing what has changed between the ontology before and after a round of editing and reasoning could help – authors look for changes in the inferred class hierarchy, presumably the ones they are expecting to have happened; there may be other, unforeseen consequences of changing axioms and showing these semantic differences to users could be a boon. To do this we’ll be looking to exploit the work at Manchester on the Ecco semantic diff tool.

 

The paper has more detailed design recommendations. However, what this work does show is that we can gain insight into how ontologists operationalise their work by extracting patterns from log data. The task we used here (everyone built the same ontology to do the same tasks) is not so ecologically valid. It does provide a base-line and allowed us to develop the analysis methods. One pleasing aspect is that the findings in this quantitative work largely supported those of our qualitative work. The next thing is to use Protégé4US while people are doing their everyday authoring jobs (do contact me if you’d be willing to take part and do a couple of hours’ work on Protégé4US and receive our gratitude and an Amazon voucher). I expect to see the same patterns, but perhaps with variations, even if it’s just in timings, frequency and regularity.

The International Conference on Biomedical Ontology

January 24, 2015

It’s good to see the calls for the next International Conference on Biomedical Ontology coming out; this time it’s to be held in Lisbon, Portugal on 27-30 July 2015. I encourage biomedical ontology folk to write papers and get themselves along to Lisbon in July 2015.

 

The blurb about ontologies on the ICBO site is: “Ontologies are increasingly used in the semantic description of biological and medical data from creation to publication and consumption by semantically enabled applications. To be effective, such ontologies must be of high quality and work well together. Therefore, issues like techniques for ontology use, good ontology design, ontology maintenance, and ontology coordination need to be addressed. The International Conference on Biomedical Ontology (ICBO) series, http://www.icbo-conference.org, is designed to meet this need. ICBO 2015, the sixth event in the highly successful series, will bring together representatives of all major communities involved in ontology use and development in biomedical research, health care, and related areas.”

 

This covers the usual topics in biomedical ontologies, but that’s because there’s still a lot to do. Biomedical ontology is maturing, but there is still too much that is ad hoc and craft based; there are islands of good practice, but we need to learn from those and spread that best practice and really find out what works in the development, assessment and application of biomedical ontologies.

 

Here I especially want to encourage submissions to the workshops at ICBO that Melanie Courtot and Joao Ferreira are chairing. Workshops are a great opportunity to focus upon a particular aspect of biomedical ontologies and, ideally, explore that topic deeply in interaction with the participants – workshops where work is done…

The deadline for workshop submissions is 10 February 2015 via the ICBO 2015 EasyChair site. I want to attend a workshop on programmatic development of ontologies where I can experience the various ways of avoiding hand-crafting of axioms and achieving bulk upload of axioms according to patterns, and so on. So, someone organise that and I should come along (to that one), but I’ll come to ICBO 2015 anyway; ICBO is informative and fun, and Lisbon is ace.

Fundamental barriers to accessibility to digital materials

November 27, 2014

It’s nearly twenty years since I did my D.Phil. work on the principles for auditory access to complex notations. I’ve moved away from the field of research into the HCI of accessibility since then, but it’s always been in my mind, and recent conversations have prompted me to write a bit about some of my thoughts on the underlying issues of using computers (digital presentations of material) when one cannot see.

 

My D.Phil. research work was motivated by my experience of doing mathematics using algebra notation as part of my masters in Biological Computation at the University of York. I did this after giving up, as a consequence of sight loss, my previous Ph.D. work in biochemistry. There was a lot of mathematics in the master’s degree; not necessarily that hard as mathematics goes, but made much harder by not being able to see the notations and use the notations for manipulation and effective/efficient thinking.

 

At the root of my difficulty, it appeared to me, was not having a piece of paper upon which I could do my algebraic manipulations. The paper remembers the equation for me; I can write down re-arrangements of the symbols, cross things out, all the time only using my brain to work out what to do, both strategically and tactically, then remembering each bit by externalising it on the page in front of me. Without the paper, most of this had to happen in my head – I mimicked some of paper and pencil’s attributes in Braille or, even worse, in text in a text editor, but it isn’t the same at all – and this prompted me to think exactly why it isn’t the same. I discussed these problems with Alastair Edwards and eventually did my masters project with him, looking at rendering algebra written in some linear code in audio and being able to browse that audio presentation. This led on to my D.Phil. research work with Alastair where I looked at the human-computer interaction of the problem of doing algebra, and other complex notations, in audio.

 

There’s no need to go into the details of my D.Phil. work here, because I want to look at the basics of interacting with information when one cannot see; in particular, what’s possible (if not beautiful) in terms of interaction and what the real “can’t do very well” problems are that, as far as I can tell, still remain.

 

Reading “plain” text (words written one after another in normal natural language form) is more or less OK. I use a screenreader and I can read and write straight-forward text without too much of a problem. My window upon the screen of text is rather small; it’s essentially one line. I can get my screenreader to read bigger portions of text, but the quick look, the scan, is still problematic. I can move around the text to find what I want and inspect it with relative ease; the interaction is, to my mind, clunky, but it’s all doable. As soon as one moves away from simple, linear strings of words and into two dimensions, as in algebra notation, and into informationally dense material (again, algebra is dense and complex, or complex because it’s dense), speech-based screenreaders don’t offer an effective reading solution.

 

This comes to two of the things that I worked out during my D.Phil.:

  1. A listening reader tends to be a passive reader. As a listening reader, I tend to lack agility in my control of information flow. In the worst case, e.g., with an audio book, I listen at the rate dictated by the reader, not at the rate my eyes and brain want to go. Obviously I control information flow with keystrokes that make my screenreader say things, but it’s all a bit clunky, slow and intrusive compared to what one does with one’s eyes – they move around the screen (or paper) in a way that basically gets me to the right portion of text, either word by word, or in bigger chunks, without my having to consciously do very much at all. So, speed and accuracy in the control of the flow of information turns the reader from being passive to being active.
  2. I lack an adequate external memory. The paper or the screen has the text upon it and it remembers it for me, but as it’s slow and clunky to get at it, I rely more on my brain’s memory and that’s a bit fragile. Of course there is an external memory – the information I have access to on a computer – but it only really plays the role of an external memory if there is sensible (fast and accurate) control in the access to that external memory.

     

    The external memory in conjunction with speed and accuracy in control of information flow makes eyes and paper/screen all rather effective. It was these two issues that I addressed in my D.Phil. work.

     

Despite these issues, access to straight-forward text is OK. I, along with lots of other people, read and write perfectly well with screenreaders and word processors. In the small, the interaction works well, but I find reading and comprehending larger documents much harder work; it’s a burden on my memory, and flipping backwards and forwards in text is relatively hard work – not impossible, but harder work than it was when I could see.

 

Some of this difficulty I describe with the large-grained view of information comes from the ability, or the lack of it, to glance at material. Typesetters have spent centuries working out styles of layout that make things easy to read; there are visual clues all over pages to aid navigation and orientation. Algebra notation is laid out to group operands in a way that reflects the order of precedence of the operators – it makes a glance at an expression written in algebra easier. Similarly, diagrams need to at least give the illusion of being able to see the whole thing (see below) – the glance at the whole diagram. Work on glancing has been done, including some by myself, and there are ways of doing it for individual information types, but I don’t know of a generic solution, and certainly not one that is available to me for everyday use.

 

  1. Glancing at information to assess reading strategies, to help orientation and navigation, and to make choices in reading is difficult.

     

My final chore is the “looking at two things at once” problem. Eyes give one the impression that two things can be looked at at once. In the small this is true: the field of accurate vision is narrow, but it does see several things in detail at once. However, the speed and accuracy in control of information flow afforded by the eyes, combined with the layout of information (when done well) on an external memory, means that eyes can move back and forth between items of information rather well. This is hard in speech and audio – so much layout information is lost. When reading research papers, moving back and forth from the narrative text to the references was easy with eyes; it’s hard with speech (what I do is to have two windows open and move between the windows – this is hard work).

 

My interaction with spreadsheets always seems very clunky to me. My natural view with a speech-based screenreader is one cell at a time; looking at column or row headers to see what they are is naturally a matter of flicking one’s eyes up or along to remember the orientation, and that’s fine. I can do this, but the means of doing so is intrusive. Similarly, dealing with any tabular information is painful. The ability to compare rows, columns and cells is central; indexing via column and row headings is vital. I have the keystrokes to do it all in my screenreader, but it’s hard work – in contrast, one flicks one’s eyes back and forth and one appears to be looking at two things at once. Tables are nothing in terms of difficulty compared to diagrams; even if there is access to the material (e.g., simple line graphs, histograms, and node and arc diagrams), one has to build up a picture of the whole thing piecemeal. The “looking at two things at once” ability of eyes makes this task relatively easy, and the inability to do this with speed, accuracy and so on means many interactions are either very hard or impossible.

 

  2. Looking at two things at once is nigh on impossible.

 

In conclusion, I think there are still two main unsolved problems in audio interaction with information:

  1. Glancing;
  2. Looking at two things at once.

Once I have general solutions to these two things, I’ll be a much more effective and efficient reader who is satisfied with my reading.

Getting emotional about ontologies

October 22, 2014

It’s taken a long time, but we’re finally publishing our paper about evaluating the Emotion Ontology (EM). Evaluating ontologies always seems to be hard. All too often there is no evaluation, or it ends up being something like “I thought about it really hard and so it’s OK”, or “I followed this method, so it’s OK”, which really amounts to not evaluating the ontology. Of course there are attempts made to evaluate ontologies, with several appealing to the notion of “large usage indicates a good ontology”. Of course, high usage is to be applauded, but it can simply indicate high need and a least bad option. High usage should imply good feedback about the ontology; so we may hope that high usage would be coupled with high input to, for instance, issue trackers (ignoring the overheads of actually issuing an issue request and the general “Oh, I can’t be bothered” kind of response) – though here I’d like to see typologies of the requests issued and how responses were made.

 

Our evaluation of the Emotion Ontology (EM) fits into the “fitness for purpose” type of evaluation – if the EM does its job, then it is to some extent a good ontology. A thorough evaluation really needs to do more than this, but our evaluation of the EM is in the realm of fitness for purpose.

 

An ontology is all about making distinctions in a field of interest and an ontology should make the distinctions appropriate to a given field. If the ontology is doing this well, then an ontology delivers the vocabulary terms for the distinctions that need to be made in that field of interest – if we can measure how well people can use an ontology to make the distinctions they feel necessary, then the ontology is fit for purpose. In our paper we attempted to see if the EM makes the distinctions necessary (and thus the appropriate vocabulary) for a conference audience to be able to articulate their emotional response to the talks – in this case the talks at ICBO 2012. That is, the EM should provide the vocabulary distinctions that enables the audience to articulate their emotional response to talks. The nub of our null hypothesis was thus that the EM would not be able to let the audience members articulate their emotions such that we can cluster the audience by their response.

 

The paper about our evaluation of the EM is:

 

Janna Hastings, Andy Brass, Colin Caine, Caroline Jay and Robert Stevens. Evaluating the Emotion Ontology through use in the self-reporting of emotional responses at an academic conference. Journal of Biomedical Semantics, 5(1):38, 2014.

 

The title of the paper says what we did. As back-story, I was talking to Janna Hastings at ISMB in Vienna in 2011 and we were discussing the Emotion Ontology that she’d been working on for a few years, and this discussion ended up at evaluation. We know that the ontology world is full of sects and that people can get quite worked up about work presented and assertions made about ontologies. Thus I thought that it would be fun to collect the emotional responses of people attending the forthcoming International Conference on Biomedical Ontology 2012 (ICBO), where I was a programme co-chair. If the EM works, then we should be able to capture self-reported emotional responses to presentations at ICBO. We, of course, also had a chuckle about what those emotions may be in light of the well-known community factions. We thought we should try and do it properly as an experiment, thus the hypothesis, method and analysis of the collected data.

 

 

 

 

Colin Caine worked as a vacation student for me in Manchester developing the EmOntoTag tool (see Figure 1 above), which has the following features:

  • It presented the ICBO talk schedule and some user interface to “tag” each talk with terms from the EM. The tags made sentences like “I think I’m bored”, “I feel interested” and “I think this is familiar”.
  • We also added the means by which users could say how well the EM term articulated their emotion. This, we felt, would give us enough to support or refute our hypothesis – testing whether the EM gives the vocabulary for self-reporting emotional response to a talk and how well that term worked as an articulation of an emotion. We also added the ability for the user to say how strongly they felt the emotion – “I was a bit bored”, “I was very bored” sort of thing.
  • We also used a text-entry field to record what the audience members wanted to say, but couldn’t – as a means of expanding the EM’s vocabulary.

 

We only enabled tagging for the talks whose speakers gave us permission to do so. Also, users logged in via meaningless numbers which were just added to conference packs in such a way that we couldn’t realistically find out whose responses were whose. We also undertook not to release the responses for any individual talk, though we sought permission from one speaker to put his or her talk’s emotional responses into the paper.

 

The “how well was the EM able to articulate the emotion you felt?” score was significantly higher than the neutral “neither easy nor difficult” point. So, the part of the ICBO 2012 audience that participated felt the EM offered the appropriate distinctions for articulating their emotional response to a talk. The part of EmOntoTag that recorded terms the responders wanted, but which weren’t in the EM, included:

  • Curious
  • Indifferent
  • Dubious
  • Concerned
  • Confused
  • Worried
  • Schadenfreude
  • Distracted
  • Indifferent or emotionally neutral

 

There are more reported in the paper. Requesting missing terms is not novel. The only observation is that doing the request at the point of perceived need is a good thing; having to change UI mode decreases the motivation to make the request. The notion of indifference or emotionally neutral is interesting. It’s not really a term for the EM, but something I’d do at annotation time, that is, “not has emotion” sort of thing. The cherry-picked terms I’ve put above are some of those one may expect to be needed at an academic conference; I especially like the need to say “Schadenfreude”. All the requested terms, except the emotionally neutral one, are now in the EM.

 

There’s lots of data in the paper, largely recording the terms used and how often. A PCA did separate audience members and talks by their tags. Overall, the terms used were of positive valence – “interested” as opposed to “bored”. These were two of the most used terms; other frequent terms were “amused”, “happy”, “this is familiar” and “this is to be expected”.

 

The picture below shows the timeline for the sample talk for which we had permission to show the emotional responses.

 

Tags used were put into time-slot bins and the size of the tags indicates the number of times that tag was used. The EM appraisals are blue, the EM’s emotions are red and the EM’s feelings are green. We can see that, among the respondents, there was an overwhelming interest, with one respondent showing more negative emotions: “bored”, “bored”, “tired”, “restless” and “angry”. Of course, we’re only getting the headlines; we’re not seeing the reason or motivation for the responses. However, we can suspect that the negative responses mean that person didn’t like the presentation, but that there was a lot of interest, amusement and some pleasure derived from understanding the domain (“mastery pleasure”).

 

We think that this evaluation shows that the EM’s vocabulary works for the self-reporting of emotional response in an ontology conference setting. I’m prepared to say that I’d expect this to generalise to other settings for the EM. We have, however, only evaluated the ontology’s vocabulary; in this evaluation we’ve not evaluated its structure, its axiomatisation, or its adherence to any guidelines (such as how class labels are structured). There is not one evaluation that will do all the jobs we need of evaluation; many aspects of an ontology should be evaluated. However, fitness for purpose is, I think, a strong evaluation and when coupled with testing against competency questions, some technical evaluation against coding guidelines, and use of some standard axiom patterns, then an evaluation will look pretty good. I suspect that there will be possible contradictions in some evaluations – some axiomatisations may end up being ontologically “pure”, but militate against usability and fitness for purpose. Here one must make one’s choice. All in all, one of the main things we’ve done is to do a formal, experimental evaluation of one aspect of an ontology and that is a rare enough thing in ontology world.

 

Returning to the EM at ICBO 2012, we see what we’d expect to see. Most people that partook in the evaluation were interested and content, with some corresponding negative versions of these emotions. The ontology community has enough factions and, like most disciplines, enough less than good work, to cause dissatisfaction. I don’t think the ICBO audience will be very unusual in its responses to a conference’s talks; I suspect the emotional responses we’ve seen would be in line with what would be seen in a Twitter feed for a conference. Being introduced to the notion of cognitive dissonance by my colleague Caroline Jay was enlightening. People strive to reduce cognitive dissonance; if in attending a conference one decided, for example, that it was all rubbish, one would realise one had made a profound mistake in attending. Plus, it’s a self-selecting audience of those who, on the whole, like ontologies, so overall the audience will be happy. It’s a pity, but absolutely necessary, that we haven’t discussed (or even analysed) the talks about which people showed particularly positive or negative responses, but that’s the ethical deal we made with participants. Finally, one has to be disturbed by the two participants that used the “sexual pleasure” tags in two of the presentations – one shudders to think.

Patterns of bioinformatics software and database usage

September 27, 2014

 

I published a blog on the rise and rise of the Gene Ontology. This described my Ph.D. student Geraint Duck’s work on bioNerDS, a named entity recogniser for bioinformatics databases and software. In a survey of Genome Biology and BMC Bioinformatics full text articles we saw that the Gene Ontology is in the top ten of mentioned resources (a fact reflected in our survey of the whole of 2013’s PMC). This interesting survey was, however, a bit of a side-show to our goal of trying to extract descriptions of bioinformatics and computational biology method from text. Geraint has just presented a paper at ECCB 2014 called:

 

Geraint Duck, Goran Nenadic, Andy Brass, David L. Robertson, and Robert Stevens. Extracting patterns of database and software usage from the bioinformatics literature. Bioinformatics, 30(17):i601-i608, 2014.

 

This has edged us towards our ultimate goal of extracting bioinformatics and computational method from text. Ideally this would be in a form whereby people wishing to use bioinformatics tools and data to analyse their data could consult a resource of methods and see what was commonly done, how and with what it was done, what the latest method for a type of data is, who has done each method, and so on.

 

Geraint’s paper presents some networks of interacting bioinformatics software and databases that show patterns of commonly occurring pairs of resources appearing in 22,418 papers from the 2013 PMC corpus that had the MeSH term “Bioinformatics” as a tag. When assembled into a network, there are things that look remarkably like methods, though they are not methods that necessarily appear in any one individual paper. What Geraint did in the ECCB paper was:

 

  1. Take the results of his bioNerDS survey of the articles in PMC 2013 labelled with the MeSH term “Bioinformatics”.
  2. Remove all resources that were only mentioned once (as they probably don’t really reflect “common” method).
  3. Filter the papers down to their method sections.
  4. Get all the pairs of adjacent resources.
  5. For each pair, use a binomial test to find the dominant ordering (“Software A takes data from Database B” or “Data from Database B is put into Software A”) and assume that is the correct ordering (our manually sampled and tested pairs suggest this is the case).
  6. Label each resource as software or a database and construct a network by joining the remaining pairs together (a sketch of steps 4–6 is given just after this list).
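A minimal sketch of how steps 4–6 might be implemented for a single pair of resources (the counts are invented and this is not the code used in the paper; it assumes scipy’s binomtest and networkx are available):

    # Sketch of steps 4-6: decide the dominant ordering of a resource pair
    # with a binomial test, then add surviving ordered pairs to a directed
    # network. Counts are invented for illustration, not taken from the paper.
    import networkx as nx
    from scipy.stats import binomtest

    # Times each ordering of the pair was observed in method sections
    # (invented numbers): "UniProt then BLAST" vs "BLAST then UniProt".
    pair_counts = {("UniProt", "BLAST"): 40, ("BLAST", "UniProt"): 12}

    graph = nx.DiGraph()
    (a_before_b, n_ab), (b_before_a, n_ba) = pair_counts.items()
    result = binomtest(n_ab, n_ab + n_ba, p=0.5)   # is one ordering dominant?
    if result.pvalue < 0.05:
        dominant = a_before_b if n_ab > n_ba else b_before_a
        graph.add_edge(*dominant)                  # e.g. UniProt -> BLAST

    # Nodes could also carry a software/database label (step 6).
    print(list(graph.edges()))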

The paper gives the details of our method for constructing patterns of usage and describes the evaluations of each part of the method’s outputs.

 

Some pictures taken from the paper of these networks created from assembling these ordered pairs of bioinformatics resources are:

 

Figure 1 A network formed from the software recovered by bioNerDS at the 95% confidence level

 

This shows the network with only bioinformatics software. In Figure 1 we can see a central set of sequence alignment tools (split into homologue search, multiple sequence alignment and pairwise sequence alignment tools), which reflects the status of these core, basic techniques in bioinformatics-based analyses. Feeding into this are sequence assembly, gene locator and mass spectroscopy tools. Out of the sequence analysis tools come proteomic tools, phylogeny tools and then some manual alignment tools. Together these look like a pipeline of core bioinformatics tasks, orientated around what we may call “bioinformatics 101” – the core, vital tasks that many biologists and bioinformaticians undertake to analyse their data.

 

The next picture shows a network created from both bioinformatics software and databases. Putting in both software and databases in Figure 2, we can see what the datasets are “doing” in the pipelines above: UniProt and GEO are putting things into BLAST; GenBank links into multiple sequence alignment tools; PDB links into various sequence prediction and evaluation tools.

 

Figure 2 A network formed from the bioinformatics software and databases recovered by bioNerDS at the 95% confidence level

 

Finally, we have the same network of bioinformatics software and databases, but with the Gene Ontology node (which we count as a database) highlighted.

 

Figure 3 The same network of bioinformatics software and databases, but with the Gene Ontology and its associates highlighted.

 

In another blog I spoke about the significance of the Gene Ontology, as recorded by bioNerDS, and this work also highlights this point. In this network we’re seeing GO as a “data sink”, it’s where data goes, not where it comes from – presumably as it is playing its role in annotation tasks. However, its role in annotation tasks, as well as a way of retrieving data, fits sensibly with what we’ve seen in this work. It may well be that we need a more detailed analysis of the language to pick up and distinguish where GO is used as a means of getting a selection of sequences one wants for an analysis – or to find out if people do report this activity. Again we see GO with a central role in bioinformatics – a sort of confirmation of its appearance in the top flight of bioinformatics resource mentions in the whole PMC corpus.

 

What are we seeing here? We are not extracting methods from the text (and certainly not individual methods from individual papers). What we’ve extracted are patterns of usage as generalised over a corpus of text. What we can see, however, are things that look remarkably like bioinformatics and computational biology method. In particular, we see what we might call “bioinformatics 101” coming through very strongly. It’s the central dogma of bioinformatics: protein or nucleic acid sequences are taken from a database and then aligned. Geraint’s paper also looks at patterns over time – and we can see change. Assuming that this corpus of papers from PMC is a proxy for biology and bioinformatics as a whole and that, despite the well-known inadequacy of method reporting, the methods are a reasonable proxy for what is actually done, bioNerDS is offering a tool for looking at resources and their patterns of usage.

Being a credible virtual witness

September 19, 2014

Tim Clark introduced me to the notion of a scientific paper acting as a virtual witness upon a scientific investigation. We, the readers, weren’t there to see the experiment being done, but the scientific article acts as a “witness statement” upon the work for us to judge that work. There’s been a good deal of work in recent times about how poorly methods are described in scientific papers – method is key to being able to judge the findings in a paper and then to repeat and reproduce the work. Method is thus central to a scientific paper being a “credible virtual witness”. One of the quotes in Wikipedia’s description of a credible witness is “Generally, a witness is deemed to be credible if they are recognized (or can be recognized) as a source of reliable information about someone, an event, or a phenomenon”. We need papers to be credible witnesses on the phenomena they report.

 

We’ve recently added to this body of work on reproducibility with a systematic review of method reporting for ‘omic experiments on a set of parasite host investigations. This work was done by Oscar Florez-Vargas, a Ph.D. student supervised by Andy Brass and me; the work was also done with collaborators in Manchester researching into parasite biology. The paper is:

 

Oscar Flórez-Vargas, Michael Bramhall, Harry Noyes, Sheena Cruickshank, Robert Stevens, and Andy Brass. The quality of methods reporting in parasitology experiments. PLoS ONE, 9(7):e101131, July 2014.

 

Oscar has worked for 10 years on the immunogenetics of Chagas disease, which is caused by one of the Trypanosoma parasites. Oscar wished to do some meta-analyses by collecting together various results from ‘omics experiments. He came with one issue of apparently contradictory results – some papers say that the Th17 immune response, T regulatory cells and nitric oxide may be critical to infection and others say that they are not. Our first instinct is to go to the methods used in apparently similar experiments to see if differences in the methods could explain the apparent contradiction; the methods should tell us whether these results can be reasonably compared. Unfortunately the methods of the papers involved don’t give enough information for us to know what’s going on (details of the papers are in Oscar’s article). If we are to compare results from different experiments, we have to base that comparison on the methods by which the data were produced. In a broader context, method lets us judge the validity of the results presented and should enable the results in a paper to be reproduced by other scientists.

 

This need to do meta-analyses of Trypanosoma experiment data caused us to look systematically at a collection of ‘omic experiments from a series of parasite host experiments (Trypanosoma, Leishmania, Toxoplasma, Plasmodium, Trichuris and Schistosoma, as well as the non-parasitic Mycobacterium). Oscar worked with our collaborating parasitologists to develop a checklist of the essential parameters that should be reported in methods sections. This included parameters in three domains – the parasite, the host and the experimental infection. Oscar then used the appropriate PRISMA guidelines in a systematic review of 23 Trypanosoma spp. papers and 10 from each of the other organisms from the literature on these experiments (all the details are in the paper – we aimed to have our method well reported…).
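As a toy illustration of this kind of checklist scoring (the criteria are paraphrased from the three domains above and the booleans are invented; the real checklist and the per-paper scores are in the paper):

    # Toy sketch: score a paper's methods section against a reporting
    # checklist. Criteria are paraphrased examples from the three domains
    # (parasite, host, experimental infection); the booleans are invented.
    checklist = {
        "parasite strain reported": True,
        "parasite passage treatment reported": False,
        "host sex reported": True,
        "host housing and diet reported": False,
        "infection dose and route reported": True,
    }

    def completeness(scores):
        """Percentage of checklist criteria satisfied."""
        return 100.0 * sum(scores.values()) / len(scores)

    print(f"method reporting score: {completeness(checklist):.1f}%")  # 60.0%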

 

We looked for effects on the level of reporting from organism and publication venue (various bibliometric features such as impact factor, the journal’s h-index and citations for the article).

 

Perhaps unsurprisingly, the reporting of methods was not as complete as one may wish. The mean of the scores achieved by Trypanosoma articles against the checklist was 65.5% (range 32–90%). The method reporting in the other organisms was similarly poor, except in Trichuriasis experiments, which achieved the highest scores and included the only paper to score 100% in all criteria. We saw no effect of publication venue (some negative correlation with Google Scholar citation levels, though this is confounded by the tendency of older publications to have more citations). There has been no apparent improvement in reporting over time.

 

Some highlights of what we found were:

  • Species were described, but strains were not and it’s known that this can have a large effect on outcome;
  • Host’s sex has an influence on immunological response and it was sometimes not described;
  • The passage treatment of the parasite influences its infectivity and this treatment was often not reported;
  • Housing and treatment of hosts (food, temperature, etc.) affect infectivity and response to infection and these were frequently not reported.

     

We know method reporting tends to be poor. It is unlikely that any discipline is immune from this phenomenon. Human frailty is probably at the root – as authors, we’d like to think all the parameters we describe are taken into account in the experimental design. The fault is presumably in the reporting rather than the execution. Can we get both authors and reviewers to use checklists? The trick is, I suspect, to make such checklists help scientists do their work – not a stick, but some form of carrot. This is a similar notion to the one Phil Lord has used in discussing semantic publishing – the semantics have to help the author do their work, not be just another hindrance. We need checklists in a form that helps scientists write their methods sections. Methods need to become a first-class citizen in scientific writing, rather than a bit of a chore. Method is vital to being a credible virtual witness and we need to enable us all to be credible in our witness statements.

An accessible front end to Google Calendar

September 15, 2014

I’ve not written about being blind and using computers in this forum before, but I actually have something to say – my new Accessible Google Calendar (AGC) is ready and I like it. As can be appreciated, a calendar or diary is a tremendously useful thing. Not having effective (as far as I’m concerned) access to electronic calendars, or to the commonly used calendar mechanisms colleagues share, makes working more trying than it need be.

 

The advent of on-line calendars and so on should have made life easier, but the two-dimensional table layout of calendars/diaries makes it too much like hard work. In addition, the Web 2.0 nature of tools like Google Calendar is not to my screenreader’s liking and therefore not my cup of tea. As a consequence, for many years I had to organise my diary vicariously and, as a result, badly (just due to the overheads of communication, not the people at the other end of my communications).

 

My first step along the path to a solution was a little command line gadget made for me by Simon Jupp, one of my research associates. This gadget took some arguments that scoped time and then printed out that portion of my Google Calendar diary to the screen as text, which was easy for my screenreader to handle. Additions to my diary had to, of course, be done by someone else.

Dimitris Zlitidis then did his M.Sc. project with me on creating an accessible front end to Google Calendar, and this allowed me to both read and write to my Google Calendar. This project gave the design of the AGC user interface I describe here. I’ve been using this for many years. Google changing their calendar’s API has prompted a re-write by Nikita Abramovs, a vacation student at the School of Computer Science of the University of Manchester, and it’s this re-write I now describe.

 

The Accessible Google Calendar (AGC) tool was written in C#; this has all the user interface stuff that is native to Windows, the operating system I use, so its interface is inclined to work with my screenreader JAWS immediately. I then looked at scoping and prioritising what I wanted done. There’s a lot that one can do with Google Calendar – a lot of management of calendar-type stuff – who can edit the entries, inclusion of schedules of public holidays, etc. I left these out. When I want them I will work with the Web version and do so vicariously as necessary. The two things I really want to do are:

 

  1. Look at entries in the portions of time at which I most frequently want to look;
  2. Add, modify and delete entries. I want to do this with access to the facilities for specifying times (all day and fragments of days) and to do recurring events.

As the “past is a foreign country”, the main things I want to do are to look at the “now” and the “future” events in my diary. So, there’s a list of simple patterns of ways in which I choose events at which to look:

 

  1. Today and tomorrow;
  2. This week and next week;
  3. This month and next month;
  4. A “select month period” option extends the month functionality by being able to choose months further into the future, with the option of a) a single month; b) all months; or c) intervening months;
  5. For the rare dates that fall outside this scope there’s a choose date dialogue where I can specify start and end days;
  6. Finally, I can search for events by their content (a sketch of how the simpler date windows might be computed is given just after this list).
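A sketch of how the date windows behind the first three scopes might be computed (plain Python dates for illustration; AGC itself is written in C# and this is not its code):

    # Sketch: compute the date windows behind "today/tomorrow", "this week/
    # next week" and "this month/next month". Illustrative only; AGC itself
    # is written in C# and this is not its code.
    import calendar
    from datetime import date, timedelta

    def windows(today):
        """Return (start, end) date pairs for the simple viewing scopes."""
        monday = today - timedelta(days=today.weekday())
        month_len = calendar.monthrange(today.year, today.month)[1]
        first_of_next = date(today.year, today.month, 1) + timedelta(days=month_len)
        next_len = calendar.monthrange(first_of_next.year, first_of_next.month)[1]
        return {
            "today": (today, today),
            "tomorrow": (today + timedelta(days=1), today + timedelta(days=1)),
            "this week": (monday, monday + timedelta(days=6)),
            "next week": (monday + timedelta(days=7), monday + timedelta(days=13)),
            "this month": (date(today.year, today.month, 1),
                           date(today.year, today.month, month_len)),
            "next month": (first_of_next,
                           first_of_next + timedelta(days=next_len - 1)),
        }

    for name, (start, end) in windows(date(2014, 9, 15)).items():
        print(f"{name:11s} {start} .. {end}")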

 

AGC’s Event Tab is shown here:

 

Figure 1: An image of AGC’s Events tab showing a week’s events and the various controls for selecting events; details are in the rest of the text.

 

Events are shown as a simple list that I can move up and down with my cursor key. Unconfirmed events are indicated by a “*” at the start of the entry. I can update events by clicking (pressing return) on the event, which brings up an update event dialogue (similar to the add event dialogue described below). There’s a settings tab that allows me to specify things like: showing end times; a 12- or 24-hour clock; separators for parts of dates (space, slash or dash); and some sounds or text to indicate errors.

 

AGC’s add date functionality is a moderately complex dialogue, but it flattens out any two-dimensional calendar presentation from which to pick dates. Nearly everything is done via little spin boxes that let me pick years, months and days via my cursor keys. As I fix the start time, the end time dialogue keeps track, defaulting to one hour later, to reduce the amount of “setting” I have to do. A checkbox for whole-day events limits the interaction to setting the day date, and a recurring events checkbox exposes a dialogue for setting how long the recurrence holds and on which days the recurring event happens. Finally the dialogue allows me to set a reminder time and whether or not the event is confirmed. There’s also an “add quick event” tab that lets me use Google’s controlled natural language for setting dates – “Dinner with Isaac Newton 7 p.m. next Friday” does as it says on the tin. There’s a menu of template CNL sentences from which to pick.

The Add Event tab, showing the recurring events bit, is shown here:

 

Figure 2: An image of AGC’s Set events tab showing an event that recurs weekly from September to December.

 

I’ve used the original version of AGC for several years and it’s been a vital tool. Dimitris and I got the user interface more or less right and Nikita’s re-write and update has made it even better. I rarely need to get outside intervention in my diary setting and the view events tab has a nice regularity, symmetry and simplicity about it that I rather like. I rarely use the choose date and search functions (though they are nice to have for the odd occasion); just having today, tomorrow, this week, next week, this month and next month does it for me nearly all the time. The user interface, having been used for years, has had lots of testing and, while the user base is not extensive (me), it does all that I need to do on a frequent and regular basis. It’s good that Google have exposed the API to their calendar. Ideally I’d like the Web offering of their calendar to work well for me, but I need to do my diary now and AGC is my solution.

 

The AGC installer can be downloaded from https://github.com/TheOntologist/AGC/releases. A short readme file with a description of AGC’s functionality and how to install it can be found at: https://github.com/TheOntologist/AGC/blob/master/README.md

Learning about a domain from an ontology

June 20, 2014

One of the things I (and, I think, we collectively) have done to a great extent is to forget about or neglect the ontology as “tutorial”. We used to talk about this way back in TAMBIS days and others did so as well. The idea is that by looking at an ontology I can learn about a field of interest. Our idea in TAMBIS was that one should be able to look at the TAMBIS ontology and learn about the basics of molecular biology and an operational aspect of bioinformatics (though this exact idea was never explored or evaluated). Ontologies are often described as the “background” knowledge of a discipline; they contain the entities in a domain, their definitions, descriptions and inter-relatedness. From this, a “reader” of an ontology should be able to get some kind of understanding of a domain.

With an ontology, there are two ways I can learn about a field of interest: First, I can look at an ontology for that field, explore it and from that derive an understanding of how the entities of that field “work”; Second, I can write an ontology about that field and, in doing so, do the learning. This latter one only works for small topics or learning at a fairly superficial level. I’ve done this for heraldry; cloud nomenclature; anatomy of flowers; plate armour; galenic medicine; and a few others. This isn’t scalable; we can’t all write ontologies for a field of interest, just to learn about it. I have, however, found it a useful way to help myself structure my understanding, even if the resulting ontologies rarely, if ever, amount to very much at all (these have also largely been for fun and not an endeavour to drive some research).

 

Is this tutorial aspect of ontology going to give a full understanding? For most ontologies of which I’m aware, looking at that ontology will not act like a college course in that subject area. Looking at an ontology is more like looking at an encyclopaedia; it is a list of things and descriptions of those things, which is all an ontology is really trying to do. A so-called reference ontology can fit into this encyclopaedic role well; an application ontology should do so, but just for that application area. However, I should be able to look at an ontology or a collection of ontologies and get a decent overview of a domain.

 

Having said this, however, we can make quite a good encyclopaedia from an ontology or set of ontologies, especially if there are an adequate number of semantic relationships between entities, as well as good editorial and other metadata around those entities. I say “ontologies” as just having an encyclopaedia or ontology of molecular function (as an example) tells me what molecular functions there are and how they’re organised, but it doesn’t give me, as a learner, much of a biological context. This isn’t the fault of the ontology; I just need to look at a broader picture of biology to really learn anything. If I could ask questions such as “what molecular functions exist in the mitochondria of mammals and in what processes do they participate”, then I have something to work with (I suspect); a rough sketch of how such a question might be expressed is given below. There then, of course, remains the question of how all this knowledge should be presented. I feel there’s mileage in a standard sort of encyclopaedic form, using the label (term), synonyms and natural language definitions, together with the structure of the ontology, to present something useful.
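By way of illustration only, here is one way that kind of cross-ontology question might be phrased as a pair of defined “query” classes using the owlready2 Python library. The class and property names (MolecularFunction, occurs_in, part_of, has_part and so on) are made-up stand-ins rather than real GO, cellular component or taxonomy identifiers; with the actual ontologies loaded, a reasoner would classify real terms under these query classes.

    from owlready2 import Thing, get_ontology

    onto = get_ontology("http://example.org/biology-sketch.owl")

    with onto:
        class MolecularFunction(Thing): pass
        class BiologicalProcess(Thing): pass
        class Mitochondrion(Thing): pass
        class Mammalia(Thing): pass

        class occurs_in(Thing >> Thing): pass
        class part_of(Thing >> Thing): pass
        class has_part(Thing >> Thing): pass

        # "Molecular functions that occur in the mitochondria of mammals"
        class MammalianMitochondrialFunction(MolecularFunction):
            equivalent_to = [
                MolecularFunction
                & occurs_in.some(Mitochondrion & part_of.some(Mammalia))
            ]

        # "... and the processes in which those functions participate"
        class ProcessWithSuchAFunction(BiologicalProcess):
            equivalent_to = [
                BiologicalProcess & has_part.some(MammalianMitochondrialFunction)
            ]

    # With real ontologies imported, running owlready2's sync_reasoner()
    # would place actual terms under these two query classes.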

 

I’m still sort of taken with the idea of ontology as tutorial; I should be able to look at the ontologies from a field of interest and learn about that field. It probably won’t be in-depth learning; it will be shallower even than that offered by the excellent Wikipedia, which can readily be used as an introduction to a subject area. However, I should be able to get a decent enough view of a field of interest from its ontologies that I can structure my learning from other resources.

The Software Ontology (SWO)

June 19, 2014

Our paper on the Software Ontology (SWO) has just been published in the Journal of Biomedical Semantics (JBMS) thematic issue on ontologies. The paper is:

 

James Malone, Andy Brown, Allyson Lister, Jon Ison, Duncan Hull, Helen Parkinson, and Robert Stevens. The Software Ontology (SWO): a resource for reproducibility in biomedical data analysis, curation and digital preservation. Journal of Biomedical Semantics, 5(1):25, 2014.

 

There’s also a lot of information about how we went about making the SWO at the SWO blog.

 

We now have a range of bio-ontologies covering everything from sequences, gene products, their functions and the processes in which they participate, through cellular and gross anatomy, to diseases and phenotypes. These are primarily used to describe the entities in the masses of data biology now produces. More recently, there’s been work on describing the investigations by which these data were produced and analysed; this is where the SWO fits into the ontology landscape. The data are just a load of stuff; we detect things in these datasets with some software, and the provenance trail of how those entities were detected needs to include the software that was used.

 

The SWO describes software, the software suites of which it is a part, its inputs and outputs, the tasks it supports, its versions, its licensing, its interface and its developers. It doesn’t capture the hardware upon which the software runs, the software’s dependencies, cost of ownership (not the price in lucre, but whether it needs a lot of sys admin, that kind of thing), software architecture and so on (see the paper and blog for more).

 

The scope of the SWO is thus wide and we could have included a whole lot more than we did; much of the stuff not included is important and useful, but resources are scarce and some of the features, like the hardware, are very hard to represent. One of the major problems in writing an ontology is scope and mission creep – how do we stop ourselves modelling the world and spending inordinate amounts of time on pathological edge cases? To help us with this we used some Agile techniques in producing the SWO. Perhaps the most useful were the “planning poker” and “buy a feature” games we played. In the SWO project we used a group of stakeholders to help us out, and the use of these techniques in the SWO went something like this:

 

  1. We did the usual thing of asking for competency questions (which play the role of user stories), clustering them and drawing out a set of features that needed to be modelled.
  2. For the planning poker, we asked people to estimate, on a numeric scale, the effort needed to represent each feature. The trick here is that everyone has cards with notional costs written upon them, and all cards are held up simultaneously to prevent bias from the first person to reveal his or her card. Discussion ensues and a consensus effort for each ontological feature is decided upon.
  3. We then did the same thing for choosing features with “buy a feature”. Depending on the effort values, an amount of “money” is calculated and distributed evenly amongst the stakeholders; there is not enough money to buy everything. Each feature has a cost and each stakeholder can spend his or her money on the features he or she thinks most important. Negotiation and so on takes place, and features to be modelled are either bought or not bought (a small sketch of the arithmetic is given after this list).
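To make that arithmetic concrete, here is a tiny, purely illustrative sketch in Python; the feature names, costs, budget fraction and bids are all invented and are not the real SWO figures. Each stakeholder gets an equal share of a budget that is smaller than the total cost of the features, and a feature is bought only when the stakeholders collectively commit enough money to cover its cost.

    # Illustrative "buy a feature" arithmetic; all names and numbers are invented.
    feature_costs = {
        "versions": 8,
        "licences": 5,
        "hardware": 13,
        "interfaces": 3,
    }

    stakeholders = ["curator", "developer", "archivist", "data analyst"]

    # The budget is a fraction of the total estimated effort, split evenly,
    # so there is not enough money to buy everything.
    total_budget = int(sum(feature_costs.values()) * 0.6)   # 17
    per_person = total_budget // len(stakeholders)          # 4 each

    # How each stakeholder chooses to spend their money after negotiation.
    bids = {
        "curator":      {"versions": 2, "licences": 2},
        "developer":    {"versions": 3, "interfaces": 1},
        "archivist":    {"versions": 1, "licences": 3},
        "data analyst": {"versions": 2, "interfaces": 2},
    }

    for person, spend in bids.items():
        assert sum(spend.values()) <= per_person, f"{person} overspent"

    bought = [
        feature for feature, cost in feature_costs.items()
        if sum(spend.get(feature, 0) for spend in bids.values()) >= cost
    ]
    print("Features bought:", bought)  # ['versions', 'licences', 'interfaces']

In this toy run the expensive “hardware” feature goes unbought, which is roughly the kind of trade-off the game is designed to make explicit.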

This actually worked well and produced a list of prioritised SWO features. We didn’t do it often enough, as priorities and cost estimations change, but the features to be modelled could be seen to change from one iteration of the planning to the next. In the SWO we think this technique struck a good balance between what was needed and what was achievable.

 

We also needed to add content for these features to the SWO. In the first round this was driven by what our customers needed – largely, but not exclusively, the EBI’s Gene Expression Atlas. Later on we became a bit more systematic about what to put into the SWO. Using a named entity recogniser for bioinformatics software and databases (BioNERDS), we did a survey of the whole of PMC for mentions of said bioinformatics databases and software. We pulled out the top 50 of these software mentions and we’re slowly ploughing our way through them (I’ve put this list at the end of this blog post).

 

The paper itself is one in the JBMS thematic series on ontologies; it does for ontologies what the NAR annual database issue does for databases – it describes, in this case, an ontology, its state of play and what updates have happened. It gives the motivation – we need to know how our data were produced and analysed, and software plays a crucial role in that analysis. The paper describes which features were bought by our stakeholders, explains how we axiomatised descriptions of these software features and outlines some of the trickier modelling issues. My two favourite tricky bits were:

 

  1. Versions of software. The vast variety of versioning schemes is horrid to represent; we did it with individuals of the class “version name” representing a version of a given bit of software. These versions are linked to preceding and succeeding versions to support the obvious queries. It’s not beautiful, but it works well enough.
  2. Licences for software. Again, this has to cope with the multitude of licences, but the interesting thing here is to be able to infer that, for instance, a bit of software is open source – the paper describes the axiom pattern that does this trick (a rough sketch of the flavour of such a pattern is given after this list).
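To give a flavour of that second trick, here is a minimal sketch, using the owlready2 Python library, of one way such an inference can be set up with a defined class. The class and property names (Software, Licence, has_licence and so on) are illustrative stand-ins rather than the actual SWO terms, and this is not the exact axiom pattern from the paper.

    from owlready2 import Thing, get_ontology

    onto = get_ontology("http://example.org/swo-sketch.owl")

    with onto:
        class Software(Thing): pass
        class Licence(Thing): pass
        class OpenSourceLicence(Licence): pass

        class has_licence(Software >> Licence): pass

        # Defined class: any software with an open-source licence is,
        # by definition, open-source software.
        class OpenSourceSoftware(Software):
            equivalent_to = [Software & has_licence.some(OpenSourceLicence)]

        gpl = OpenSourceLicence("GPLv3")
        my_tool = Software("my_tool", has_licence=[gpl])

    # Running a reasoner (e.g. owlready2's sync_reasoner()) would then
    # classify my_tool under OpenSourceSoftware without that ever being
    # asserted directly.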

 

 

The paper also describes the SWO’s merger with EDAM, which has brought a lot of content into the SWO. The SWO is being used, and not just by the EBI (the paper has some examples), and will continue to grow. The SWO represents a complex field of human-developed artefacts and, in doing so, the SWO team has very much taken a pragmatic approach to its representation. The SWO is already quite complex, but we have tried to avoid being too baroque.

 

Here’s the top 50 as produced by BioNERDS (it’s actually 49, and there are a couple of glitches in these data, but it’s good enough):

 

R, PSI-BLAST, BLAT, Firefox, neighbor, BLAST, FASTA, Entrez, Tree View, PSSM, UCSC Genome Browser, MATLAB, RepeatMasker, Weka, SAM, Q, Apache, Image, PAML, Phred, Network, Cytoscape, MIPS, EMBOSS, TMHMM, ClustalW, BLASTN, DAVID, ClustalX, BLASTP, Bioconductor, SAM, MEME/MAST, T-COFFEE, MUMmer, Cluster, HMMER, MUSCLE, SOAP, Primer3, analysis, PHYLIP, PostgreSQL, Match, PhyML, Excel, MEDLINE, Microarray Suite, SEQUEST, MAFFT.

