The Minimal Information for Reporting an Ontology (MIRO) Guidelines

March 19, 2018

We have finally published our paper on the Minimal Information for Reporting an Ontology (MIRO). We have spent years on this paper; events intervened to turn it into a marathon. The paper's details are:

 

MIRO: guidelines for minimum information for the reporting of an ontology

N Matentzoglu, J Malone, C Mungall, R Stevens

Journal of Biomedical Semantics, 9(1):6, 2018.

 

The motivation for this came from the observation that the many ontology description papers published did not give a consistent, adequate description of the ontology in question. It is easy to make the observation, but less easy to say what should be contained in a "here's my ontology" paper.

 

This is not a matter of reproducibility – one shouldn’t expect to take a method from an ontology paper, follow that method, and get the same ontology. Perhaps roughly the same ontology – same topic area, same naming conventions, same patterns, etc. etc., but the exact same ontology isn’t possible. One should be able, however, to read an ontology description paper and understand the topic, scope, content and development process for an ontology. There isn’t much substitute for looking at the ontology in some form, but the report should be able to convey most of what one needs.

 

The ultimate goal is for the community to shape and then follow the MIRO guidelines so that ontology reports are adequate; the ontology community should therefore be involved in deciding what MIRO contains. This is an outline of what we did:

  1. James Malone, Chris Mungall and I drafted the guidelines.
  2. We put them up in an on-line survey tool.
  3. We gathered input from the community via this survey.
  4. We updated the MIRO guidelines.
  5. We then selected some extant ontology reporting papers and analysed them to see how well they complied with the MIRO guidelines.

The MIRO guidelines can be seen on their GitHub repository and the on-line survey still exists. The main sections of the MIRO guidelines are:

  1. Basics – name and so on.
  2. Motivation – why this ontology?
  3. Scope, requirements, etc. – what should be in the ontology?
  4. Knowledge acquisition – how was the ontology content obtained?
  5. Ontology content – what is in it and how is it represented?
  6. Managing change – how the ontology is maintained as it evolves.
  7. Quality assurance – is it any good?

 

At the time of the survey's analysis we had had 110 responses, which, given the size of the bio-ontology community, is a pretty good response. (MIRO is not focused on the bio-health community, but that community is such a big player, and I'm part of it, so it's a good yardstick.)

This is not the place for the details of the survey responses; they can be seen in the paper itself and via the RPubs site for MIRO. Overall, however, we had a very positive response to our proposal.

In the survey we gave only the names of the reporting guidelines; we did not operationalise them. This was deliberate: first, to focus attention on each guideline and its name, and second, so that the comments could be used to add to the existing operationalisations without too much bias as to how that operationalisation should be done. We got much useful input on names and on how to operationalise the guidelines. There were, however, no substantial changes to the MIRO guidelines themselves.

One bit of community feedback was that almost everything in the MIRO guidelines is mandatory and that this may be too onerous. I make no apologies for this – MIRO is minimal, so it contains only, or nearly only, that which is essential; most optional things lie beyond the minimum. Complying with MIRO may still be an onerous task, but sensible development techniques can make it less so.

Two items that were deemed to be of less importance surprised us:

  1. Ontology metrics – numbers of classes, axioms and so on. This is easy to compute (see the sketch after this list) and gives some idea of scale – though obviously no detail – of an ontology. It may be that readers of ontology reports find these kinds of numbers less useful, or easy enough to find by looking at the ontology itself.
  2. How the content was chosen – this is not just scope, but also prioritisation (though we asked for that exercise to be reported too). An explanation of what is in and out of scope would seem important; it may be, as we say in the paper, that scope is implied by the requirements and so on.
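On the metrics point: such numbers are cheap to gather with almost any ontology toolkit. As a minimal sketch, assuming Python with rdflib and a local copy of the ontology in RDF/XML (the filename is a placeholder), counting declared classes, object properties and triples might look like this:

    from rdflib import Graph, URIRef
    from rdflib.namespace import RDF, OWL

    g = Graph()
    g.parse("my-ontology.owl", format="xml")  # placeholder file; RDF/XML serialisation assumed

    # Named (non-anonymous) classes and object properties declared in the file
    classes = [s for s in g.subjects(RDF.type, OWL.Class) if isinstance(s, URIRef)]
    obj_props = [s for s in g.subjects(RDF.type, OWL.ObjectProperty) if isinstance(s, URIRef)]

    print("classes:", len(classes))
    print("object properties:", len(obj_props))
    print("triples (a rough proxy for axiom count):", len(g))

An OWL-aware toolkit will give richer metrics (axiom counts by type and so on); the point is only that the basic numbers are cheap to produce and to report.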

 

Our review of ontology reporting papers showed that only 41% of the MIRO items were covered. Some of the main areas in which the papers did not comply with MIRO were:

  • Testing the ontology – not an easy task, but even simply reporting that the ontology has no unsatisfiable classes and can answer queries that show its core competence would be a good start (a small sketch of such a check follows this list).
  • Evaluation reporting was scant – is it the ontology that people want in the way that they want it?
  • Versioning.
  • Sustainability – will the ontology carry on being developed?
  • Entity deprecation policy – as the ontology develops and entities change, is there a process involved?
  • Metadata policy – what metadata (usually decorating a class) is supplied; this was not well reported.
  • The ontology's licence – surprisingly poorly reported. (Here we did not check whether the ontology itself was released with a licence, only whether the paper reported one.)
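On the testing point: even the simplest check can be automated. As a hedged sketch, assuming Python with owlready2 (which drives its bundled HermiT reasoner and needs Java available; the path is a placeholder), looking for unsatisfiable classes might be done like this:

    from owlready2 import get_ontology, sync_reasoner, default_world

    onto = get_ontology("file:///path/to/my-ontology.owl").load()  # placeholder path

    with onto:
        sync_reasoner()  # runs the bundled HermiT reasoner (requires Java)

    # Classes inferred to be equivalent to owl:Nothing are unsatisfiable
    unsatisfiable = list(default_world.inconsistent_classes())
    if unsatisfiable:
        print("Unsatisfiable classes:", unsatisfiable)
    else:
        print("No unsatisfiable classes found")

Reporting that such a check was run, together with the competency-question queries the ontology can answer, would already go a long way.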

 

 

The survey of compliance generally supports the original motivation for the MIRO guidelines – that ontologies are not well reported in the literature. We have had good community input into shaping the guidelines; all we need now is adoption. Most of the information items are easy to report, and community support for MIRO should make ontology reports better. I recommend the guidelines to the community.

Reporting the Age and Sex of mice in Research Papers

August 2, 2016

Oscar Florez-Vargas has just published a paper in the journal eLife as part of his Ph.D. work here in Manchester. The paper is:

Oscar Flórez-Vargas, Andy Brass, George Karystianis, Michael Bramhall, Robert Stevens, Sheena Cruickshank, and Goran Nenadic. Bias in the reporting of sex and age in biomedical research on mouse models. eLife, 5:e13615, 2016.

Earlier in his Ph.D., Oscar did some in-depth studies of the quality of methods reporting in parasitology experiments in research papers. I reported on this study in the blog post "being a credible virtual witness" – the gist is that, for a research paper to act as a credible witness for a scientific experiment, enough of that experiment must be reported for it to be reproducible. Oscar found that the overwhelming majority of the papers in his study failed to report the minimal features required for reproducibility. We did another study, led by Michael Bramhall during his Ph.D., with similar findings, "quality of methods reporting in animal models of colitis", published in Inflammatory Bowel Diseases.

In both studies, the reporting of experimental method was found to be wanting.

Two of the important factors to report in the experiments in these and other studies are the age and the sex of the mice; both have a significant impact on many aspects of an organism's biology and hence influence the outcome of experiments. Oscar's original studies went into depth on many factors in a relatively small area of biology, over a smallish number of papers captured by following a systematic review; this time we wanted a broad survey across just these two factors.

We used text analytics on all papers in the PMC full-text collection that had mouse as the focus of their study; this amounted to 15,311 papers published between 1994 and 2014. I won’t report the details of the study here, but we got good recovery of these two factors and were able to report the following observations:

  • The reporting of both sex and age of mice has increased over time, but by 2014 only 50% of papers using mice as the focus of their study reported both the age and sex of those mice.
  • There is a distinct bias towards using female mice in studies.
  • There are sex biases between six pre-clinical research areas; there’s a strong bias to male mice in cardiovascular disease studies and a bias towards female mice in studies of infectious disease.
  • The reporting of age and sex has steadily increased; this change started before the US Institute of Medicine report in 2001 and the ARRIVE guidelines, which called for better reporting of method.
  • There were differences in the reporting of sex in the research areas we tested (cardiovascular diseases; cancer; diabetes mellitus; lung diseases; infectious diseases; and neurological disorders). Diabetes had the best reporting of sex and cancer the worst. Age was also reported the least well in cancer studies. Taking both sex and age into account, neurological disorders had the best reporting.
  • We also looked at reporting of sex in four sub-groups of study type (genetics, immunology, physiopathology and therapy): male mice were preferred in genetics studies and female mice preferred in immunological studies.

The age and sex of the mice used are important in experiments, as both are important factors in the biology being studied. It is difficult to understand exactly why these factors are not better reported: reporting sex and age takes only about 40 characters of text, so it is not a space issue.

Previous studies in both human and animal models concluded that males were studied more than females; our study contradicts these studies. The bias towards female mice may be down to practical factors: they are smaller (so they need less drug, inoculum, etc. to be administered), are less aggressive to each other and to experimenters, and are cheaper to house. Our study did have a large sample size and focused on only one model (mouse), and this may be a factor in why our study has different outcomes to others. Nevertheless, there appear to be biases in the choice of mouse sex to be used in experiments. The profound effects of sex on an organism's biology have influenced the creation of the journal Biology of Sex Differences. As sex influences so many aspects of biology, one would suppose that balancing the sex of the mice used would be a good thing; in this regard, the NIH is engaging the scientific community to improve the sex balance in research.

We have used some straightforward text analytics to undertake this study. It has enabled some very interesting questions to be asked and has highlighted some very interesting issues that may affect the certainty with which we interpret the results reported in papers and their broader applicability. It should be entirely possible to use text analytics in a similar way for other experimental factors, both pre- and post-publication.
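To give a flavour of the kind of pattern-based extraction involved (the pipeline in the paper is much more careful than this), a minimal Python sketch for spotting reported mouse sex and age might look like the following; the patterns and the example sentence are mine, invented for illustration, and not taken from the study:

    import re

    # Illustrative patterns only; the published pipeline is far more thorough
    SEX_PATTERN = re.compile(r"\b(male|female)\b\s+(?:\S+\s+){0,3}mice\b", re.IGNORECASE)
    AGE_PATTERN = re.compile(r"\b(\d+)\s*[- ]?\s*(day|week|month)s?[- ]old\b", re.IGNORECASE)

    def extract_sex_and_age(methods_text):
        """Return any sex and age mentions found in a methods section."""
        sexes = sorted({m.group(1).lower() for m in SEX_PATTERN.finditer(methods_text)})
        ages = [(int(m.group(1)), m.group(2).lower()) for m in AGE_PATTERN.finditer(methods_text)]
        return {"sex": sexes, "age": ages}

    # Invented example sentence
    print(extract_sex_and_age("8-week-old female C57BL/6 mice were used in all experiments."))

Run over many thousands of full-text papers, even patterns as crude as these begin to show how often the two factors are reported at all.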

JOB ADVERT: Knowledge Transfer Partnership Associate – Semantic Systems Application Architect

May 26, 2016

This is an exciting opportunity for an ambitious doctoral graduate, or someone with equivalent experience, with the ability and confidence to manage a strategic industrial project with Telematicus Limited.  This opportunity is intended to extend beyond the initial 21-month project with a view to establishing the person as a recognised leading expert in semantic web technologies applied to automotive and related applications on a global basis.

 

This joint project between the University of Manchester and Telematicus has the overall aim of developing ontology authoring tools to support the use of semantic technologies in information management.  By doing so, you will have a unique opportunity to provide a vital role in introducing and applying semantic web technologies such as OWL and RDF to transform the development and delivery of industrial software products.  You will work with a dedicated team of developers to establish new software tools based on the application of state-of-the-art semantic web technologies.  The primary industrial application space for the project is automotive (parts, service, diagnostics) and is expected to expand to include vehicle insurance based on telematics.

 

The position will provide you with an excellent opportunity to be part of collaborative development and knowledge transfer between the University and Telematicus, providers of enterprise software solutions that simplify, improve and enhance business operations.  You will not only receive formal management training but will also have access to and manage a £4,000 personal and professional development budget.

 

This is a fixed term, full time post for 21 months. Salary £30,738 to £37,768 per annum

 

Closing date : 20/06/16

 

Enquiries about the vacancy, shortlisting and interviews:

Sean Bechhofer

Email:  sean.bechhofer@manchester.ac.uk

 

General enquiries:

Email:  hrservices@manchester.ac.uk

Telephone:  0161 275 4499

 

Technical support:

Email:  universityofmanchester@helpmeapply.co.uk

Telephone:  01565 818 234

 

More information: https://www.jobs.manchester.ac.uk/displayjob.aspx?jobid=11513

Call for Input on a Proposal for the Minimal Reporting on an Ontology

April 11, 2016

If you were reading or reviewing a paper reporting on an ontology's development, what information would you like to see reported? The survey below is a proposal for some guidelines for the Minimal Information for Reporting an Ontology (MIRO).

 

The survey asks you to rate the importance of each guideline and optionally comment on each guideline – on any aspect including wording. There’s also an opportunity to say what you believe is missing.

 

The survey may be found at the link:

 

https://jamesmalone.typeform.com/to/uJIhzR

 

Participants so far report completion times of around ten minutes.

 

As we read and review papers describing an ontology, we often find that various aspects of the ontology itself, and of how it was made, are not well reported. As authors we have views on what to report in a given space, but these views may or may not coincide with those of reviewers and readers. So, we would like to find out what as wide a collection of people as we can reasonably reach consider to be the minimal information for reporting on an ontology. These guidelines will then be available to the community of authors and reviewers, to help make the reporting of an ontology more consistent and to ensure it contains what readers need to see; they can act as guidelines for both reviewers and authors of papers. To do this we would like to have as much ontology community input as possible.

 

Once we have input we will review what we get as feedback and revise the MIRO guidelines appropriately. We will also publish a summary of the responses to the survey and what we plan to do in response.

 

We appreciate your co-operation in reviewing our proposal.

 

Robert Stevens (1), James Malone (2) and Chris Mungall (3)

 

(1) University of Manchester, UK

(2) FactBio, UK

(3) Lawrence Berkeley Laboratory, USA

 

Funded Attendance at Introduction to Implementing Ontologies in the Web Ontology Language (OWL) Tutorial

February 6, 2016

 

The School of Computer Science at The University of Manchester is looking for individuals to participate in a funded OWL tutorial that covers the basic language concepts in OWL using our well-known "Pizza Tutorial". We will cover reasonable expenses for travel in the UK, subsistence and up to 2 nights of accommodation to a maximum of £500 (travel from outside the UK will be funded up to this limit). The tutorial is gratis and will take place at the University of Manchester on the 3rd and 4th March 2016.

 

From those who attend the tutorial we will seek volunteers to take part in two studies. In the first, attenders' interaction with Protégé will be logged; in doing so, we will learn how people go about authoring ontologies. In the second study, attenders will use a prototype ontology authoring environment that allows ontologists to test their ontology against a series of "authoring tests" based on competency questions for the ontology. Taking part in a study will not interfere significantly with your learning objectives, as no additional tasks have to be carried out and data will be collected silently. We will award a £20 Amazon voucher to volunteers who take part in the two studies. Not taking part in a study, or opting out of it, will not be detrimental to your funding and participation in the tutorial. If you decide to participate and change your mind later on, there will be no consequences with regard to your participation in the tutorial, other than (1) losing the entitlement to the voucher and (2) removal of your log files.

 

This two-day introductory 'hands-on' workshop aims to provide attenders with both the theoretical foundations and practical experience to begin building OWL ontologies using the Protégé-OWL tools. Attenders will take Manchester's well-known "Pizza tutorial" (see http://owl.cs.manchester.ac.uk/publications/talks-and-tutorials/protg-owl-tutorial/). This tutorial covers the main conceptual parts of the Web Ontology Language (OWL) through the hands-on building of an ontology focusing on pizzas and their ingredients. A series of practical exercises takes attenders through the process of forming competency questions that the ontology should support; conceptualising the toppings found on a pizza; entering this classification into the Protégé environment; and describing many types of pizza. All this is set in the context of using automated reasoning to check the consistency of the growing ontology and to answer queries about pizzas.

 

Aims

The aims of this tutorial are to:

  • Understand the use of ontologies.
  • Gain experience in basic ontology engineering techniques such as knowledge elicitation and use of competency questions.
  • Understand statements written in OWL.
  • Understand the role of automated reasoning in ontology building.
  • Build an ontology and use a reasoner to draw inferences from that ontology.
  • Gain experience in the Protégé ontology environment.
  • Gain experience in using competency questions to drive ontology building and assessment.

 

Registration and Further Information

To register, please check https://www.eventbrite.co.uk/e/funded-owl-protege-tutorial-tickets-21384739331. There are 15 places available on this tutorial. Registrants will be given places on a first-come, first-served basis. A reserve list will be formed from those people who express an interest in attending the tutorial but are not automatically assigned a place. Attenders will be reimbursed for travel within the UK (or the price equivalent), subsistence and accommodation for up to 2 nights, in compliance with the University of Manchester expenses policy; we will reimburse up to £500. The tutorial is gratis.

 

For further enquiries about the tutorial and funding email Robert Stevens (robert.stevens@manchester.ac.uk). For enquiries about the study email Markel Vigo (markel.vigo@manchester.ac.uk).

 


 

Questions for a panel at Semantic Web Applications and Tools for Life Sciences

September 1, 2015

James Malone and I are the Scientific Chairs for Semantic Web Applications and Tools for Life Sciences (SWAT4LS) 2015, in Cambridge, UK, in December 2015. As we come to the end of the paper submission period, we will be reviewing and forming the programme. Rather than wall-to-wall talks, James and I want to break up the day a bit; panel sessions are a standard way of adding a little variety to a programme and can be both lively and informative – if the questions are right, etc.

One can either form a panel and choose questions appropriate to that panel; choose questions and form a panel to suit; or some combination of both. For SWAT4LS 2015 we want to open up the questions or theme for the panel(s) to the SWAT4LS audience, gather a corpus of good panel questions and then form a panel around those questions, with some moderation depending on who we can get to be on the panel.

So, this is a "call for panel questions", or a CFPQ. Add your proposed questions to the short CFPQ survey at https://www.surveymonkey.com/r/RLF3VPG. James and I, along with the rest of the SWAT4LS organising committee, will take a look and choose some panel questions. Do send along anything you wish to ask about Semantic Web technologies and the life sciences; we'll use these questions to prime the panel sessions, which will then be opened up to the audience. Those people who have their question used for a panel will receive:

  1. An acknowledgement of their contribution (unless they prefer not to be acknowledged).
  2. A £10 Amazon voucher (or the equivalent in another country's currency).

This CFPQ will close on 1st October 2015.

Where a Pizza Ontology may help

August 13, 2015

A few people have pointed me at a recent news story on the BBC Web site about valid and invalid types of pizza and suggested that the Pizza Ontology should have helped. The story is entitled "the day I ordered pizza that doesn't exist" and is written by Dany Mitzman in Bologna. The nub of the story, as far as this blog is concerned, is that she ordered a marinara pizza, which should be a simple pizza of just tomato and garlic. In Pizza Ontology terms this would be:

 

Class: MarinaraPizza

    SubClassOf: NamedPizza,

    hasTopping some TomatoSauceTopping,

    hasTopping some GarlicTopping,

    hasTopping only (TomatoSauceTopping or GarlicTopping)

 

The NamedPizza class supplies the pizza base; MarinaraPizza is disjoint with all other named pizzas, and the restrictions on the MarinaraPizza class say that there is a tomato sauce topping, a garlic topping, and that those are the only toppings that appear on this type of pizza.

 

What Dany, the journalist, did was to ask for a marinara pizza with mozzarella; that is:

 

Individual: “Dany’s very own marinara pizza”

 

    Types:

        MarinaraPizza,

        hasTopping some MozzarellaTopping

 

The article reports that the pizza maker found this request inconsistent, as would the Pizza Ontology if the class of pizza existed as above and an automated reasoner were used. The MarinaraPizza class says that only tomato sauce and garlic toppings occur on this type of pizza, so adding another topping, such as the mozzarella (as long as it cannot be inferred to be the same as either the tomato sauce or the garlic topping), means that the stated constraints are broken and we all descend into a maelstrom of sin and corruption (or, rather, an inconsistency is reported by the reasoner). To quote the quote of the pizza maker in question: "You can't have a marinara with mozzarella," she says. "It doesn't exist." The marinara with mozzarella cannot exist as described in our ontology.
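To give a flavour of how a reasoner flags this, here is a hedged sketch in Python with owlready2 rather than the Pizza Ontology itself – the classes and property are re-declared locally just for the example, and owlready2's bundled HermiT reasoner (which needs Java) does the checking:

    from owlready2 import (Thing, ObjectProperty, AllDisjoint, get_ontology,
                           sync_reasoner, OwlReadyInconsistentOntologyError)

    onto = get_ontology("http://example.org/pizza-sketch.owl")  # placeholder IRI

    with onto:
        class PizzaTopping(Thing): pass
        class TomatoSauceTopping(PizzaTopping): pass
        class GarlicTopping(PizzaTopping): pass
        class MozzarellaTopping(PizzaTopping): pass
        AllDisjoint([TomatoSauceTopping, GarlicTopping, MozzarellaTopping])

        class hasTopping(ObjectProperty):
            range = [PizzaTopping]

        class MarinaraPizza(Thing): pass
        MarinaraPizza.is_a.append(hasTopping.some(TomatoSauceTopping))
        MarinaraPizza.is_a.append(hasTopping.some(GarlicTopping))
        MarinaraPizza.is_a.append(hasTopping.only(TomatoSauceTopping | GarlicTopping))

        # Dany's pizza: asserted to be a marinara, but given a mozzarella topping
        danys_pizza = MarinaraPizza("danys_very_own_marinara")
        danys_pizza.hasTopping = [MozzarellaTopping("mozzarella_1")]

    try:
        sync_reasoner()  # runs HermiT; requires Java
        print("Consistent")
    except OwlReadyInconsistentOntologyError:
        print("Inconsistent: a marinara with mozzarella cannot exist")

The disjointness between the toppings matters here: without it, the reasoner could simply conclude that the mozzarella topping is also a garlic or tomato sauce topping and stay consistent.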

 

Of course, describing a class of pizza that has tomato sauce, garlic and mozzarella as its toppings would be fine, just as long as one doesn’t claim it’s a marinara pizza. Creating such a new pizza in the Pizza Ontology is possible; it’s just not a marinara according to the ontology.

 

This ability to make a bespoke class or individual pizza may also have been on offer from the pizza maker Dany encountered. It may all be down to the name; asking "may I have a pizza with tomato sauce, garlic and mozzarella?" may have elicited a different response, unless it is believed that only the specified types of pizza exist. If this were the case, one would need a covering axiom in the ontology. Such an axiom would look like:

 

Class: NamedPizza

    EquivalentTo:

        MargheritaPizza or MarinaraPizza or NapolitanoPizza

 

This is what one would add if one wanted a world in which only these three pizzas existed (which I don't, and neither would any sensible person). The axiom asserts that if something is a named pizza then it must be one of the three pizzas covering the NamedPizza class. In the Pizza Ontology itself there are many named pizzas, and should one wish to construct such a covering axiom, keeping it up to date with the disjoint named pizzas in a tool such as Protégé would be tedious – Tawny-OWL allows covering axioms and so on to be generated programmatically with ease.
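Tawny-OWL does this in Clojure; purely as a hedged illustration of the same idea in Python with owlready2 (the IRI and the handful of pizza names are invented for the example), a covering axiom and the disjointness can be generated from a list of names like this:

    import types
    from functools import reduce
    from operator import or_
    from owlready2 import Thing, AllDisjoint, get_ontology

    onto = get_ontology("http://example.org/pizza-cover-sketch.owl")  # placeholder IRI

    with onto:
        class NamedPizza(Thing): pass

        # A handful of named pizzas; the real Pizza Ontology has many more
        pizza_names = ["MargheritaPizza", "MarinaraPizza", "NapolitanoPizza"]
        named = [types.new_class(name, (NamedPizza,)) for name in pizza_names]

        # Pairwise disjointness between all the named pizzas
        AllDisjoint(named)

        # Covering axiom: NamedPizza is equivalent to the union of its named subclasses
        NamedPizza.equivalent_to.append(reduce(or_, named))

Adding a new pizza then only means adding a name to the list and regenerating; the tedium of maintaining the axiom by hand disappears.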

 

So, what have we learnt? From the original BBC article we find that, for at least one Italian pizza maker, a pizza's name is as good as its definition; a marinara pizza with another ingredient is no longer a marinara pizza. The Pizza Ontology plus a reasoner can convey the same kind of stance. For the pizza maker, it seems that the name of the pizza is its definition; in an OWL ontology we make such definitions explicit. Of course, even though the Pizza Tutorial adopts the use case of the "intelligent pizza finder", which uses the Pizza Ontology to allow diners to pick toppings to include and exclude, forms a DL query behind the scenes and selects pizzas that fulfil that query, the Pizza Ontology is not really going to help in such cross-cultural circumstances as described in the original article. One may perhaps imagine an ontology-driven app on a mobile device where one describes one's pizza of choice and is told what it is and how to order it in a particular cultural setting, but the phrase "hammer to crack a nut" comes to mind.

Open data and the need for ontologies

July 24, 2015

This is an abstract for "Digital Scholarship and Open Science in Psychology and the Behavioural Sciences", a Dagstuhl Perspectives Seminar (15302) held in the week commencing 20 July 2015. The workshop brought together computer scientists, computational biologists and people from the behavioural sciences, and explored eScience, data, data standards and ontologies in psychology and other behavioural sciences. This abstract gives my view on the advent of eScience in parts of biology and the role that open data and metadata supplied by ontologies played in this change.

There is a path that can be traced with the use of open data in the biological domain and the rise in the use of ontologies for describing those data. Biology has had open repositories for its nucleic acid and protein sequence data, and controlled vocabularies were used to describe those data. These sequence data are core, ground truth in biology; all else comes from nucleic acids and, these days, the environment. As whole genome sequences became available, different organism communities found that the common vocabulary used to represent sequences facilitated their comparison at that level, but a lack of a common vocabulary for what was known about those sequences blocked the comparison of the knowledge of those sequences. Thus we could tell that sequence A and sequence B were very similar, but finding the function, the processes in which they were involved, where they were to be found and so on was much more difficult, especially for computers. Thus biologists created common vocabularies, delivered by ontologies, for describing the knowledge held about sequences. This has spread to many types of data and many types of biological phenomenon, from genotype to phenotype and beyond, so that there is now a rich, common language for describing what we know about biological entities of many types.

At roughly the same time came the advent of eScience. The availability of data and tools, open and available via the Web, together with sufficient network infrastructure to use them, led to systems that co-ordinated distributed resources to achieve some scientific goal, often in the form of workflows. Open tools, open data, open standards and open, common metadata all contribute to this working, but it can be done in stages; not everything has to be perfect for something to happen – just the availability of data will help, irrespective of its metadata. Open data will, however, provoke the advent of common data and metadata standards, as people wish to do more and do it more easily.

In summary, we can use the FAIR principles (Findable, Accessible, Interoperable and Reusable) to chart this story. First we need data and tools to be accessible, and this means openness. Metadata, via ontologies, also has a role to play in this accessibility – do we know what those data are, and so on? Metadata has an obvious role in making tools and data findable – calling the same things by the same term and knowing what those terms mean makes things findable. The same argument works for interoperable tools and data.

OBOPedia: An Encyclopaedia made from Open Biomedical Ontologies

March 31, 2015

A little while ago I wrote a blog about using an ontology or ontologies as background knowledge about a field of interest in order to learn about that domain, rather than simply annotating data or building some kind of hierarchical search interface. The idea is that an ontology captures knowledge about a field of interest; I should be able to look at that ontology and gain some understanding of that domain by examining the terms used to name each class and the definition of how to recognise objects in that class (both the natural language definition and the axioms that describe that class's objects in terms of relationships to other objects). In that blog article I conjectured that an encyclopaedia-style presentation of many ontology entries could work as a way of presenting the large amount of background knowledge captured in the ontologies the community has built. My feeling is that the standard graphical presentation of blobs and lines isn't necessarily a good way of doing this, especially when there are several ontologies at which to look. Encyclopaedias are also meant for "looking up things" and finding out about them – but we can exploit Web technologies and the structure of an ontology to get the best of both worlds. The OBO ontologies are particularly attractive for an encyclopaedia because:

  • They cover a broad spectrum of biology – from sequence, through proteins, processes, functions, cells, cellular components to gross anatomy.
  • Each entry in an ontology has a human readable label, synonyms and a natural language definition, all of which are "standard" parts of an encyclopaedia entry.
  • The relationships between entries in the ontology can provide the "see also" links for an encyclopaedia entry (a sketch of pulling out such entry data follows this list).
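As a hedged sketch (this is not OBOPedia's actual implementation) of how such entry data can be pulled from an OBO ontology file with Python and rdflib – using the usual OBO annotation properties, rdfs:label, the IAO textual definition and oboInOwl exact synonyms; the filename is a placeholder:

    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import RDF, RDFS, OWL

    OBOINOWL = Namespace("http://www.geneontology.org/formats/oboInOwl#")
    DEFINITION = URIRef("http://purl.obolibrary.org/obo/IAO_0000115")  # textual definition

    g = Graph()
    g.parse("an-obo-ontology.owl", format="xml")  # placeholder file, RDF/XML form

    def entry(cls):
        """Collect encyclopaedia-style data for one ontology class."""
        return {
            "label": str(g.value(cls, RDFS.label)),
            "definition": str(g.value(cls, DEFINITION) or ""),
            "synonyms": [str(s) for s in g.objects(cls, OBOINOWL.hasExactSynonym)],
            # named superclasses give the simplest "see also" links
            "see_also": [str(g.value(sup, RDFS.label))
                         for sup in g.objects(cls, RDFS.subClassOf)
                         if isinstance(sup, URIRef) and g.value(sup, RDFS.label)],
        }

    for cls in g.subjects(RDF.type, OWL.Class):
        if isinstance(cls, URIRef) and g.value(cls, RDFS.label):
            print(entry(cls))
            break  # just show the first labelled class

What then remains (and it is the bulk of the work) is indexing the entries alphabetically and rendering them as pages.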

 

One of my undergraduate project students for this year, Adam Nogradi, has built OBOPedia for me as an example of this kind of presentation for a group of ontologies. OBOPedia may be found via http://www.OBOPedia.org.uk. The current version of OBOPedia has nine ontologies, including the OBO Foundry ontologies plus a few more and has over 210,000 entries; the ontologies currently available are:

  • The Gene Ontology.
  • The Protein Ontology.
  • The Chemical Entities of Biological Interest ontology.
  • The Ontology of Biomedical Investigation.
  • The Phenotypic Quality Ontology.
  • The Zebra Fish Anatomy and Development Ontology.
  • The Xenopus Anatomy and Development Ontology.
  • The Human Disease Ontology.
  • The Human Phenotype Ontology.
  • The Plant Ontology.

     

An example of an entries page for OBOPedia can be seen in the picture below:

 

This shows that entries are arranged alphabetically. The screen here shows some entries from "E", after some scrolling; on view are "ether metabolic process" from GO and "ethmoid cartilage" from the Zebrafish Anatomy and Development Ontology. Each entry has the main label as the entry's title, the various synonyms, the natural language definition and some "see also" links. The letters down the left-hand side take one to the beginning of the entries starting with that letter. Entries are shown 50 at a time.

One nice aspect of this style of presentation can be the serendipity of looking at entries surrounding the target entry and seeing something of interest; a typical hierarchical display automatically puts entries that are semantically related more or less in the same place – this encyclopaedia presentation doesn't, but it preserves the hierarchy via the "see also" links (though what those link to is rather hidden until arrival at the end of the link, which isn't the case in most graphical presentations). Each entry shows the ontology whence it came – there are several anatomies containing the entry "lung", and knowing whence an entry comes is just a good thing. The picture also shows the (possible) exact, broader, narrower and related synonyms taken from the entries in the ontologies. At the moment OBOPedia only uses the subsumption links for "see also"s, but the aim is to expand this to other relationships in the fullness of time. I'd also like to include the ability to use DL queries to search and filter the encyclopaedia, but time in this project has not permitted.

 

The picture below shows OBOPedia’s search in action.

The search was for "lung" and entries were retrieved from the Gene Ontology and the Human Disease Ontology; some of the entries brought back and available for viewing were "lung disease", "lung leiomyoma", "lung induction", "lung lymphoma", "hyperlucent lung" and many others…

Along with each OBOPedia entry there is also a “rate this definition” feature. This definition rating uses a simple five point scale that allows people to rate the natural language definition (capturing comments will come at some point). The idea here is that feedback can be gathered about the natural language definitions and eventually this will form an evaluation of the definitions.

 

OBOPedia is an encyclopaedia of biology drawn from a subset of the OBO ontologies (there’s no reason not to include more than the nine currently on show, except for resources), exploiting their metadata, especially their natural language definitions. OBOPedia is not a text-book and it’s not a typical blob and line presentation of an ontology. It’s an encyclopaedia that presents many ontologies at once, but without the reader necessarily knowing that he or she is using an ontology. It’s an attempt to give an alternative view on the knowledge captured by the community in a range of ontologies in a way that gives easy access to that knowledge. OBOPedia may be a good thing or a bad thing. Send comments to Robert.Stevens@manchester.ac.uk or add comments to this blog post.

Patterns of authoring activity when using Protégé for building an ontology

February 9, 2015

We’ve continued our work investigating the human-computer interaction of authoring an ontology. We had a couple of papers last year looking at some qualitative aspects of ontology authoring through interviews with experienced ontologists. We wanted to follow this up with quantitative work looking at the activities during the addition of axioms when authoring an ontology. I’m pleased to say we’ve just had a long paper accepted for CHI 2015 with the following details:

 

Markel Vigo, Caroline Jay and Robert Stevens. Constructing Conceptual Knowledge Artefacts: Activity Patterns in the Ontology Authoring Process. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems: CHI 2015; 18 Apr 2015-24 Apr 2015; Seoul, Korea.

 

I reported some early work in this quantitative study. In this latest work we’ve taken the following approach:

  • We've instrumented a version of Protégé 4.3 (P4) to record every keystroke, mouse click and so on in a time-stamped log file (it's called Protégé4US – the "US" is for "user studies"). We divided the events into interaction events (interacting with the ontology and its axioms via the class and property hierarchies and the axiom description window), authoring events (typing an axiom, a class declaration, etc.) and environment events (invoking the reasoner, getting an explanation, etc.).
  • We had experienced ontology authors perform a series of tasks to build an ontology of potatoes: three tasks of increasing difficulty, involving making various defined classes over descriptions of some 15 potato varieties, the creation of which was also part of the tasks.
  • Whilst this happened we recorded what was happening on the screen.
  • Finally, we recorded eye-tracking data as to where the author’s gaze fell during the ontology authoring.

 

In capturing eye-tracking data, the screen of Protégé4US is divided up into areas of interest (AOIs) as shown below. This picture shows the main view as an area of interest; other views involve classes, properties and individuals, and these have their own areas of interest defined. These AOIs are used to determine the dwell time of eye gaze during the tasks.

 

 

The patterns of ontology authoring activity we found were:

 

  1. An exploration cycle. The asserted class hierarchy is expanded after ontology loading – over 31% of the time an expansion is followed by another expansion, as users appear to familiarise themselves with the structure of the ontology. Eventually this behaviour appears to become directed, as an author chooses a class to edit. In contrast, the expansion of the inferred class hierarchy appears to be more exploratory, as authors check what has happened post reasoning, perhaps answering the question "have I found all the changes?".
  2. An editing cycle. Here an entity is selected, followed by selection of another entity 37% of the time or selection of the description area 29% of the time. Once selected, a description will be modified 63% of the time and followed by selection of another entity 59% of the time. This looks like selecting an entity, inspecting its description and then either editing it or moving on to another entity, each decision based on the content of the description.
  3. A reasoning cycle. Just prior to the reasoner being invoked, the ontology is saved 40% of the time or a defined class is created (17%). After the reasoner is run, 41% of the time participants observe the change in the asserted class hierarchy and then look at a description where the effects of reasoning can be seen. The inferred class hierarchy is inspected post-reasoning 30% of the time, which is again followed by the expansion of that hierarchy 43% of the time.

These activity patterns are shown in the following pictures.

 

Overall, we can see the following flow of events:

  • Initial exploration of the ontology.
  • A burst of exploration coupled with editing.
  • Reasoning followed by exploration.

 

An activity pattern is a common sequence of events. The details of our analysis that led to these activity patterns are in the paper, but some of the pretty pictures and the basic analysis steps that gave us these patterns are below.

 

 

This is a simple log plot of the number of each type of event recorded across all participants. The top three events – entity selected, description selected and edit entity:start – account for 54% of events. Interaction events account for 65% of events, authoring events for 30% and environment events for 5%. In short, a few event types dominate, and interacting with P4 accounts for most of what happens.

 

This picture shows the N-grams of consecutive events. We can see lots of events like expanding the class hierarchy (either asserted or inferred) occurring many times one after the other, indicating people moving down through the hierarchy – the class hierarchy seems to be a centre of interaction – looking for classes to edit and checking for the effects of reasoning.
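To make the N-gram idea concrete, a hedged sketch in Python of counting consecutive event sequences from a flat event log might look like this (the event names are invented; this is not the analysis code used in the paper):

    from collections import Counter

    # A toy event stream; real Protégé4US logs are time-stamped and far longer
    events = ["hierarchy_expanded", "hierarchy_expanded", "entity_selected",
              "description_selected", "edit_entity_start", "entity_selected",
              "hierarchy_expanded", "hierarchy_expanded", "entity_selected"]

    def ngrams(seq, n):
        """All consecutive sub-sequences of length n."""
        return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

    # Count bigrams and trigrams to find frequently repeated event sequences
    for n in (2, 3):
        print(n, Counter(ngrams(events, n)).most_common(3))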

Those are the events themselves, but what happens after each event? Below is a plot of transitions from event to event (note the loops back to the same event, and the thickness of the lines indicating the likelihood of each transition). A matrix of the number of transitions from event to event gives a fingerprint for each user. We see that the fingerprints revealed by these transitions from state to state are the same within individuals for each task; that is, each task is operationalised in P4 in the same way.

The inter-user similarity is also high, suggesting common patterns of events (though there is also evidence of some different styles here too). Below is a 16×16 matrix showing the correlation of the fingerprints (i.e. the transition matrices) of all participants.
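A hedged sketch of how such a fingerprint, and the correlation between two users' fingerprints, might be computed with numpy (again with invented event names; this is not the paper's own analysis code):

    import numpy as np

    EVENT_TYPES = ["entity_selected", "description_selected", "edit_entity_start",
                   "hierarchy_expanded", "reasoner_invoked"]
    INDEX = {e: i for i, e in enumerate(EVENT_TYPES)}

    def fingerprint(events):
        """Counts of transitions from one event type (row) to the next (column)."""
        m = np.zeros((len(EVENT_TYPES), len(EVENT_TYPES)))
        for a, b in zip(events, events[1:]):
            m[INDEX[a], INDEX[b]] += 1
        return m

    user_a = ["hierarchy_expanded", "hierarchy_expanded", "entity_selected",
              "description_selected", "edit_entity_start", "reasoner_invoked"]
    user_b = ["hierarchy_expanded", "entity_selected", "description_selected",
              "edit_entity_start", "entity_selected", "reasoner_invoked"]

    # Correlate the flattened fingerprints of two users to measure their similarity
    fa, fb = fingerprint(user_a).ravel(), fingerprint(user_b).ravel()
    print("similarity:", np.corrcoef(fa, fb)[0, 1])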

 


 

The eye-tracking data showed that the class hierarchy received by far the most fixations (43%) and held the participants' attention 45% of the time. The edit entity dialogue had 26% of the fixations and the same share of attention, and the description area 17% of the fixations and 15% of attention. If we look at events over time we begin to see patterns, but with gaps. Some of these gaps can be filled by looking at where the eye gaze dwells – e.g., a user is looking at the description area without interacting via events. The picture below shows the distribution of dwell times on each area of interest in the P4 user interface – note that these numbers tell the same sort of story as the P4US event logging.

 


Each cell of the following matrix conveys the number of fixation transitions between areas of interest. In other words, it indicates where users will look at time t based on where they looked at t-1 (the x-axis indicates the origin while the y-axis is the destination). The darker the cell, the more transitions there are between the two areas. We find that, given a fixation on a given area, the most likely next fixation is on the same area.


We also see other transitions:

  • From class hierarchy to description area (and vice versa).
  • From the class addition pop-up to class hierarchy.
  • From the edit entity dialogue to the class hierarchy.
  • From the edit entity dialogue to the description area.

 

Again, we see the class hierarchy being central to the interactions.

 

To find the activity patterns themselves, we next merged the eye-tracking and N-gram analyses. First we collapsed consecutive events and fixations of the same type. We then took the resulting N-grams of size > 3 and extended them one event at a time until doing so yielded only repeated, smaller N-grams of the merged data. This analysis resulted in the editing, reasoning and exploration activities outlined at the top.
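The first of those steps, collapsing runs of identical consecutive events and fixations, is simple enough to sketch (illustrative only, with invented event names):

    from itertools import groupby

    merged_stream = ["fix_class_hierarchy", "fix_class_hierarchy", "entity_selected",
                     "fix_description", "fix_description", "fix_description",
                     "edit_entity_start"]

    # Collapse runs of identical consecutive items into a single occurrence
    collapsed = [event for event, _run in groupby(merged_stream)]
    print(collapsed)
    # ['fix_class_hierarchy', 'entity_selected', 'fix_description', 'edit_entity_start']

The repeated, extended N-grams are then mined from sequences like this one.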

 

So what do we now know?

 

It appears that the class hierarchy is the centre of activity in Protégé. Authors look at the asserted class hierarchy to find the entity they wish to edit and then edit it in the description window. The inferred class hierarchy is used to check that what is expected to have happened as a result of reasoning has indeed happened. While activity in each of these windows involves a certain amount of "poking about", the activity in the asserted class hierarchy looks more directed than that in the inferred class hierarchy. Design work that eases navigation and orientation within the ontology, and checking on the results of reasoning – all results, not just those the author expects – would be a good thing.

 

Adding lots of axioms by hand is hard work and presumably error prone. Bulk addition of axioms is desirable, along with the means of checking that it’s worked OK.

 

There is a lot of eye-gaze transition between the class hierarchy and the editing area. In P4 these are, by default, top-left and bottom-right. Defaulting to adjacency of these areas could make authoring a little more efficient.

 

We see experienced authors tend to save the ontology before reasoning. An autosave feature would seem like a good thing – even if the reasoners/Protégé never fell over.

 

Finally, rather than letting the author hunt for changes in the inferred class hierarchy, changes should be made explicit in the display, in effect showing what has happened. This would be a role for semantic diff. Knowing what has changed in the ontology between before and after a round of editing and reasoning could help – authors look for changes in the inferred class hierarchy, presumably the ones they are expecting to have happened; there may be other, unforeseen consequences of changing axioms, and showing these semantic differences to users could be a boon. To do this we'll be looking to exploit the work at Manchester on the Ecco semantic diff tool.

 

The paper has more detailed design recommendations. However, what this work does show is that we can gain insight into how ontologists operationalise their work by extracting patterns from log data. The task we used here (everyone built the same ontology and did the same tasks) is not especially ecologically valid, but it does provide a baseline and allowed us to develop the analysis methods. One pleasing aspect is that the findings of this quantitative work largely supported those of our qualitative work. The next thing is to use Protégé4US while people are doing their everyday authoring jobs (do contact me if you'd be willing to take part, do a couple of hours of work in Protégé4US and receive our gratitude and an Amazon voucher). I expect to see the same patterns, but perhaps with variations, even if only in timings, frequency and regularity.