Many of the thorny ethical issues surrounding AI arise when it is used to make predictions about people, particularly predictions that affect life chances. The overarching narrative surrounding AI technology is that if we have lots of data and the ability to process it with machine learning, we can derive accurate predictions. We already know that AI-enabled systems have problems with inaccuracy when it comes to ensuring that AI works for everyone. But does this crystal ball, this predictive capability, actually deliver on its promise at all? Is AI an accurate predictive tool?
A study entitled "Measuring the predictability of life outcomes with a scientific mass collaboration" asked hundreds of researchers to predict life outcomes for vulnerable families using big data from a 15-year longitudinal study (Hao, 2020).
"Despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model." (Salganik, 2020)
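To make "only slightly better" concrete: the mass collaboration scored every submission by R-squared on held-out families. A minimal sketch of that kind of comparison, using made-up numbers purely for illustration (the toy outcomes and predictions below are not from the study):

```python
# Toy illustration of scoring predictions by R^2 on held-out data,
# the metric the mass collaboration used. All numbers are invented.

def r_squared(y_true, y_pred):
    """1 minus (residual sum of squares / total sum of squares)."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical held-out outcomes and two sets of predictions
y = [3.0, 1.5, 2.5, 4.0, 2.0]
benchmark = [2.65, 2.45, 2.6, 2.75, 2.55]  # simple model on a few variables
ml_model = [2.68, 2.43, 2.6, 2.80, 2.52]   # tuned machine-learning model

print(round(r_squared(y, benchmark), 3))  # 0.216
print(round(r_squared(y, ml_model), 3))   # 0.274
```

Both scores sit far below a perfect 1.0, and the gap between the tuned model and the simple benchmark is small: that is the shape of the study's headline finding.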
In another study, recently published in Nature Machine Intelligence, researchers reviewed hundreds of machine learning models for COVID-19 and found little if any clinical predictive capability.
"In their current reported form, none of the machine learning models included in this review are likely candidates for clinical translation for the diagnosis/prognosis of COVID-19," the paper reads. "Despite the huge efforts of researchers to develop machine learning models for COVID-19 diagnosis and prognosis, we found methodological flaws and many biases throughout the literature, leading to highly optimistic reported performance." (Johnson, 2021)
Have we over-hyped the predictive crystal ball narrative for AI?
Social media algorithms correct for the messiness of our unpredictability not by getting better at prediction, but by making us more predictable. From filter bubbles to radicalizing recommendation engines (the subject of the excellent podcast Rabbit Hole), the variability issue is addressed by an exercise in reshaping our preferences to the will of the algorithm.
"Now people have become targets for remote control, as surveillance capitalists discovered that the most predictive data come from intervening in behavior to tune, herd and modify action in the direction of commercial objectives." (Zuboff, 2020)
Moving into an Internet of Things enabled smart environment, where the world around us is served up as a curated platform, we may be laying the groundwork to run this experiment at a much larger scale. In that scenario we would be fulfilling the promise of the crystal ball, just not quite in the way we originally conceived.
-- Katrina Ingram
I highly recommend Rabbit Hole! Tech reporter Kevin Roose explores the question: what is the internet doing to us?
Johnson, K. (2021, March 23). Major flaws found in machine learning for COVID-19 diagnostics. VentureBeat. Retrieved from -
Roberts, M., Driggs, D., Thorpe, M. et al. (2021). Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nature Machine Intelligence, 3, 199–217. Retrieved from - https://www.nature.com/articles/s42256-021-00307-0
Salganik, M., Lundberg, I., Alexander, T. et al. (2020, April). Measuring the predictability of life outcomes with a scientific mass collaboration. Proceedings of the National Academy of Sciences, 117(15). Retrieved from - https://www.pnas.org/content/117/15/8398
Zuboff, S. (2020, January 24). You are now remotely controlled. The New York Times. Retrieved from - https://www.nytimes.com/2020/01/24/opinion/sunday/surveillance-capitalism.html