Wednesday 11 April 2012

Assessing autism the ADTree classifier way

The Autism Diagnostic Observation Schedule (ADOS) has a hallowed place in autism research and practice. At 23 years of age, ADOS and its counterpart, the Autism Diagnostic Interview (ADI), have pretty much cornered the gold-standard autism assessment market over the years. ADOS in particular, with its sliding scale of modules covering the spectrum of language and developmental ability and its generous assortment of 'props', resides in quite a few cupboards of child development centres and research institutes worldwide.

One of the nice things about ADOS, aside from the hands-on, interactive tasks particularly in the more 'early-years' modules, is the way that it valiantly attempts to standardise the coding of behaviour and classifies autism, autism spectrum or not, via the use of an algorithm. This paper headed by one of the primary architects of ADOS, Cathy Lord, summarises the development of that algorithm based on the ADOS-G (generic). Said algorithm has fairly recently been revised by Katherine Gotham and colleagues to amalgamate some features (communication + social interaction = social affect), potentially tied into the proposed revisions for autism diagnosis in DSM-V.

Perfect, you might say - so if it ain't broke, why fix it? Well, gold standard or not, ADOS can be quite time-consuming to deliver (though not as time-consuming as the ADI, I might add). Assessors also need to be properly trained and kept up-to-date with their ADOS training, and despite all its standardised prowess, people are people and sometimes mistakes are made - which is partly why ADOS is an assessment tool and not an all-encompassing diagnostic tool. Of course there are other 'issues' that have been raised with ADOS in mind (e.g. comorbid ADHD interfering, assessing Asperger syndrome) but I'm not here to start that critique.

Enter then a study by Dennis Wall and colleagues* (full-text) and their suggestion that, based on some clever machine-learning algorithms, 8 out of the 29 items normally coded in the delivery of a module 1 ADOS (no speech present) might just have the ability 'to classify autism' with 100% or near-100% accuracy.

  • Wall and colleagues constructed 16 possible machine-learning algorithms based on the 29 coding items of the ADOS module 1. For anyone really interested, these 29 items are spread across several domains including: (a) language and communication (9 items including vocalisations, response to name, pointing and use of gestures), (b) reciprocal social interaction (11 items including eye contact, giving, requesting, showing and response to joint attention), (c) play (2 items - functional object play and imagination), (d) restricted, repetitive behaviours and interests (4 items including unusual sensory interests and hand / finger mannerisms) and (e) 'other abnormal behaviours' (3 items including overactivity and anxiety). I should point out that when it comes to the final algorithm, only 17 of these 29 items are actually used (at least according to the original pre-Gotham algorithm - the revised algorithm uses 14 items, I think?) and even then, only 12 items, drawn from the communication and social domains, contribute to the autism / autism spectrum cut-off scores.
  • Based on module 1 ADOS score data from the AGRE dataset (autism: n=612; non spectrum: n=11), 90% of participants were used as a training set and the remaining 10% as a test set for the proposed algorithms (much the same way that other studies have done) - see the sketch after this list for a rough illustration of this kind of split.
  • Further validation of the best classifiers was undertaken on independent samples reaching module 1 ADOS cut-offs, including data from the Boston Autism Consortium (autism: n=110; non spectrum: n=4) and the Simons Simplex Collection (autism: n=336; non spectrum: n=0).
  • Although 2 out of the 16 algorithms "operated with perfect sensitivity, specificity and accuracy", one of them, the ADTree classifier, was selected as the best option because it relied on only 8 items of the module 1 ADOS to produce such results (the alternative used a whopping 9 items). Validation using the Autism Consortium and Simons Simplex Collection data correctly classified all but 2 participants (from the Simons collection), who exhibited "... the potential presence of non spectrum behaviours".
  • Drum-roll for the 8 distinguishing module 1 items isolated: (i) frequency of vocalisation directed to others, (ii) unusual eye contact, (iii) responsive social smile, (iv) shared enjoyment in interaction, (v) showing, (vi) spontaneous initiation of joint attention, (vii) functional play with objects and (viii) imagination/creativity. Use of these 8 items would also reduce the number of activities needed to elicit behaviours; so goodbye 'response to name' and 'response to joint attention'. I'm glad to say that the 'birthday party' activity remained!
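
For the technically minded, here is a minimal, hypothetical sketch of the general approach described above: a 90/10 train/test split over the 29 module 1 item codes, scored for sensitivity and specificity. ADTree (an alternating decision tree classifier, found in the Weka toolkit) has no scikit-learn equivalent, so AdaBoost over decision stumps stands in as a rough analogue, and the data are randomly generated placeholders rather than AGRE scores - this is an illustration of the idea, not the authors' code.

```python
# A minimal sketch, not the authors' code: 90/10 train/test split over the
# 29 module 1 item codes, with AdaBoost over decision stumps standing in for
# the ADTree classifier (ADTree lives in the Weka toolkit, not scikit-learn).
# All data below are randomly generated placeholders, not AGRE scores.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# 623 hypothetical children x 29 item codes (0-3); label 1 = autism, 0 = non spectrum
X = rng.integers(0, 4, size=(623, 29))
y = np.array([1] * 612 + [0] * 11)

# 90% training / 10% test, echoing the split described in the paper
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0)

# Boosted depth-1 decision trees as a rough analogue of an alternating decision tree
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
sensitivity = recall_score(y_test, pred, pos_label=1)  # autism cases correctly flagged
specificity = recall_score(y_test, pred, pos_label=0)  # non spectrum correctly identified
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

On placeholder data the printed numbers mean nothing; the point is simply to show where sensitivity and specificity come from and how thin a 10% test set of roughly 60 children (including, on this split, perhaps a single non spectrum control) really is.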

Despite the obvious issues concerning a reliance on pre-ADOSed children and the use of only a handful of non spectrum controls, I find myself very interested in these results and the performance of the ADTree classifier. As per the study conclusions: "The ADTree classifier consisted of eight questions, 72.4% less than the complete ADOS Module 1" (that is, 8 items retained out of 29, so (29 - 8)/29 ≈ 72.4% dropped). Purely from a practical point of view, given the quite significant levels of concentration needed to manage tasks and elicit scores from the original module 1 ADOS, there is an obvious advantage to focusing on 8 items as opposed to 29. I do wonder exactly how much time will actually be shaved off an assessment, bearing in mind that the majority of module 1 activities still need to be covered in order to elicit proper responses to the items. Activities like 'free play', 'functional and symbolic imitation' and 'birthday party' are still required and can often take up the lion's share of assessment time.
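
On that 'handful of non spectrum controls' point, a quick back-of-envelope calculation (my own, not the paper's) shows how little a control group of 11 can actually pin down specificity, even when every control is classified correctly. The sketch below assumes a standard exact (Clopper-Pearson) binomial interval; the helper function specificity_interval is mine, purely for illustration.

```python
# A minimal sketch (my own illustration, not from the paper): how tightly a
# small non spectrum control group constrains specificity when all controls
# are classified correctly. Uses the exact Clopper-Pearson binomial interval.
from scipy.stats import beta

def specificity_interval(correct, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a proportion correct / n."""
    alpha = 1.0 - conf
    lower = beta.ppf(alpha / 2, correct, n - correct + 1) if correct > 0 else 0.0
    upper = beta.isf(alpha / 2, correct + 1, n - correct) if correct < n else 1.0
    return lower, upper

# 11 of 11 AGRE non spectrum controls correctly classified: lower bound ~0.72
print(specificity_interval(11, 11))
# A hypothetical 100-control sample, all correct: lower bound rises to ~0.96
print(specificity_interval(100, 100))
```

In other words, 'perfect' observed specificity on 11 controls is still consistent with a true specificity anywhere above roughly 70%, which goes some way to explaining why more non spectrum controls would be very welcome in any follow-up.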

Whilst the 8 selected items are interesting, there are some notable absences from the classifier, covering things like issues with pointing, the use of gestures and the use of facial expressions directed to others. I was always under the impression that these were important facets of autism, or at least of autism in the early years, but this study suggests not so in terms of classification. What exactly this might say for other results linking pointing, proto-declarative pointing for example, to autism is a source of speculation.

Homing in on those classifier items also opens a number of doors to things like quick and early screening, and even to where attention perhaps needs to be focused when measuring outcome - that is, how maturation and/or intervention may impact on core symptoms (at least for young and/or non-verbal people on the autism spectrum). Whether this type of machine analysis is also applicable to other schedules related to autism screening and assessment is another question.

* Wall DP et al. Use of machine learning to shorten observation-based screening and diagnosis of autism. Translational Psychiatry. April 2012. DOI: 10.1038/tp.2012.10

6 comments:

  1. A few pretty major limitations:

    1. Only 15 non-autistic controls, so they "simulated" more controls by randomly sampling items from the 15 to make 1000 more controls.

    2. They only looked at kids meeting criteria for "autism" (ADOS scores of 12 or above), not with "autism spectrum" (scores of 7 to 11). So (a) they only looked at really clear-cut cases and (b) they could have chopped out *any* four items and still got perfect results.

    3. The ADOS Module 1 has 29 items spread across 10 activities. The classifier reduces this to 8 items, but these are still spread across 8 activities. So at best, we can scrap 2 activities, which obviously isn't the dramatic reduction in assessment time that is being claimed.

    So the conclusion should really have been "ADOS assessment time can be reduced by about 20% for cases that are already clear cut".

  2. Thanks Jon, indeed important details.

    There is a bit of a misnomer going on here with regards to the saving of time. Those 2 activities are fairly unremarkable parts of the ADOS in terms of props and the behaviours they are trying to elicit (e.g. response to name). It still leaves quite a chunk of activities to get through.

    I note also that the media reporting of this study seems to be extending into areas outside of this paper with regards to some online work trying to replace ADI:

    http://healthland.time.com/2012/04/11/can-autism-really-be-diagnosed-in-minutes/?iid=hl-main-lede?xid=gonewsedit

    Cathy Lord comments: “Arguing you should do this via a five-minute video and a seven-minute questionnaire is ridiculous,” she says. “Even if you do identify a child with autism, it’s not an adequate diagnosis. You still are going to have to talk to parents and interact with the child.”

  3. This looks spurious to me - they say that some algorithms "operated with perfect sensitivity, specificity and accuracy" - which just means that they didn't have a good enough testing regime (sample size etc.) to estimate the true sensitivity and specificity. You can often fit any data set perfectly or near perfectly with eight degrees of freedom, but you need to show that it generalizes beyond the training set to be useful in practice. Perfect accuracy/sensitivity is trivial to achieve, but getting high specificity (i.e., few false positives) for a complex diagnosis is almost impossible in practice.

  4. Thanks for the comment Thom. What is perhaps needed from this work is some independent validation with (a) larger participant numbers and (b) more controls - non spectrum controls.

  5. We are looking for an alternative to the ADOS that is public domain... do you know of any? I'm sure something is out there, but we are having a hard time tracking down options. Thanks in advance!

  6. Thanks for dropping by Nikki.

    Open access alternatives to ADOS - carrying the standardisation, reliability and validity - are a little sparse at the current time. The strength of ADOS lies in the involvement of a trained external rater using standardised measures and appropriate cues to facilitate responses to the required elements. I'm not sure anything on the market comes close to this at the moment.

    I am becoming a fan of the ATEC (Autism Treatment Evaluation Checklist)
    http://www.autism.com/index.php/ind_atec
    which is open-access but still has some way to go to catch ADOS and is probably not best suited to assessing autism:
    http://www.ncbi.nlm.nih.gov/pubmed/21199043

    Other than that...

