Accuracy is not enough — technologies with great promise can fail when real-world issues are not factored in
When Google Health researchers conducted a study in Thailand on the effectiveness of the company’s medical imaging technology, the most interesting results had little to do with how well the algorithms performed.
Instead, they discovered the human dimension that so often undermines the potential of such technology — real-world problems that can hamper the use of AI.
The study, which examined the use of a deep learning system to identify a diabetic eye disease, was the first “human-centred observational study” of its kind, according to Google Health. The research team had good reason to be hopeful about the underlying technology: when examining images of patients in a lab setting, the software missed far fewer cases than specialists did, reducing the rate of false negatives by 23 per cent at the cost of increasing false positives by 2 per cent.
In the real world, however, things went awry. In some clinics the lighting was poor, or internet connections were too slow to upload the images. Some clinics did not administer eye drops to patients, further degrading image quality. The result was a large number of images that the system could not analyse.