Using the highly regarded ADOS (Autism Diagnostic Observation Schedule-Generic) exam as their starting point, Wall et al. meticulously pared the potentially 90-minute exam down to a segment no longer than the previews before a major motion picture, kept the reported classification accuracy above 99%, and even proposed YouTube videos as a valuable diagnostic tool.
What makes this research stand out among the other advancements, abbreviations, and fine-tunings of the ADOS exam is not necessarily the content but how it was derived. In a game of virtual evolution, Wall et al. employed a series of 16 machine-learning algorithms to break down and compare the 29 items of Module 1 against the results of 1,073 previously administered exams from the Autism Genetic Resource Exchange (AGRE, 623), the Boston Autism Consortium (AC, 114), and the Simons Simplex Collection (SSC, 336). As each algorithm played out its own solution, it was evaluated on the sensitivity, specificity, and accuracy with which its chosen items distinguished individuals with and without autism. Ultimately, one algorithm (ADTree) came out ahead, using only eight of the original 29 items and misclassifying only 2 cases.
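To make that selection process concrete, here is a minimal sketch of this kind of classifier bake-off. It assumes a table of per-item Module 1 scores with a binary diagnosis column; the study itself ran its 16 algorithms (including ADTree) in Weka, so the scikit-learn models, column names, and file path below are illustrative stand-ins, not the authors' actual pipeline.

```python
# Illustrative sketch only: scikit-learn has no ADTree, so a boosted-tree
# stand-in is used. The CSV path and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

df = pd.read_csv("ados_module1_scores.csv")  # hypothetical: 29 item columns + "diagnosis"
X = df.drop(columns=["diagnosis"])           # the 29 Module 1 item scores
y = df["diagnosis"]                          # 1 = autism, 0 = non-spectrum

classifiers = {
    "AdaBoost (stand-in for ADTree)": AdaBoostClassifier(n_estimators=50),
    "DecisionTree": DecisionTreeClassifier(max_depth=4),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}

for name, clf in classifiers.items():
    # Cross-validated predictions, then the three metrics the study tracked.
    pred = cross_val_predict(clf, X, y, cv=10)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"{name}: sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f}")
```

Boosted decision trees like ADTree are a natural fit for this task because each boosting round adds a single decision rule, so the winning model directly reads off a small subset of discriminative items; that is how 29 items can collapse to eight.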
This revised ADOS examination is by no means complete and should not be considered a catch-all test. Due to the limited number of non-autistic results in the source data (only 15 actual; the rest simulated), this iteration cannot yet be sufficiently tested to distinguish between autism, Asperger's syndrome, and other developmental disorders. Rather, Wall et al. focus on the potential of the machine-learning approach used in this study to revise and create more malleable and accurate testing structures, as well as to offer a more efficient and accessible means of testing for autism spectrum disorders during those crucial early developmental stages.