On selling AI fever dreams to gullible publics
The Guardian recently ran an op-ed by John Naughton on how the media is complicit in selling us AI fantasies at the behest of the technology industry. This chimes with my long-held belief that (a) AI is poorly understood and, as a result, (b) completely oversold (at least when it comes to anything more consequential than optimising consumption patterns)1. In his op-ed Naughton primarily looks at how industry-serving narratives have come to dominate media coverage of AI, which he mainly attributes to journalists doing a fairly shoddy job:
The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.” […]
Why do people believe so much nonsense about AI? The obvious answer is that they are influenced by what they see, hear and read in mainstream media. But until now that was just an anecdotal conjecture. The good news is that we now have some empirical support for it, in the shape of a remarkable investigation by the Reuters Institute for the Study of Journalism at Oxford University into how UK media cover artificial intelligence. […] The main conclusion of the study is that media coverage of AI is dominated by the industry itself. Nearly 60% of articles were focused on new products, announcements and initiatives supposedly involving AI; a third were based on industry sources; and 12% explicitly mentioned Elon Musk, the would-be colonist of Mars.
Critically, AI products were often portrayed as relevant and competent solutions to a range of public problems. Journalists rarely questioned whether AI was likely to be the best answer to these problems, nor did they acknowledge debates about the technology’s public effects. […]
In essence this observation is neither new nor specific to media coverage of AI. Similar dynamics can be observed across the whole gamut of technology journalism, where the media breathlessly amplifies thinly veiled sales pitches from technology companies. A couple of years ago, Adam Greenfield did an excellent job of dissecting these dynamics for the “smart cities” narrative. Adam’s post went one step further by focussing on how these media narratives find their way into public policies via bureaucrats who are ill-equipped to question them critically.
Even if we assume that the current capacities and impacts of AI systems are massively oversold, it is still clear that the widespread deployment of Artificial Intelligence has the potential to further wreck the social fabric of our societies in pursuit of optimising the extraction of value. Given this, it is not entirely surprising that the purveyors of the AI-driven future are anticipating the inevitable backlash:
Another plank in the industry’s strategy is to pretend that all the important issues about AI are about ethics and accordingly the companies have banded together to finance numerous initiatives to study ethical issues in the hope of earning brownie points from gullible politicians and potential regulators. This is what is known in rugby circles as “getting your retaliation in first” and the result is what can only be described as “ethics theatre”, much like the security theatre that goes on at airports.
The term “ethics theatre” seems spot on in this context. So far the whole discussion about AI ethics does indeed resemble theatre more than anything else2. On multiple occasions I have seen otherwise critical people become almost deferential to some imagined higher order of discourse as soon as a discussion was framed as being about the “ethics of…”. Having unmasked the abundance of ethics talk as an attempt to proactively deflect regulation, Naughton points out that what we really need is indeed regulation:
…in the end it’s law, not ethics, that should decide what happens, as Paul Nemitz, principal adviser to the European commission, points out in a terrific article just published by the Royal Society. Just as architects have to think about building codes when designing a house, he writes, tech companies “will have to think from the outset… about how their future program could affect democracy, fundamental rights and the rule of law and how to ensure that the program does not undermine or disregard… these basic tenets of constitutional democracy”.
This is an idea that we should take very seriously. Now that our public spaces are more and more defined by code and data, it is high time to realise that ideas like “moving fast and breaking things” are the equivalent of ignoring building codes when constructing schools in earthquake-prone areas.
1. That being said, I would totally be in the market for an AI-powered app that can reliably tell me if an avocado is indeed ripe to eat. I would imagine that it can’t be that hard to train a neural network to do so by feeding it thousands of avocado images labeled according to ripeness (a minimal sketch of what such a classifier might look like follows below). ↩︎
2. The notable exception is the 2017 MIT experiment about who should be killed by autonomous vehicles, which probably kick-started this entire AI ethics trope. Although in retrospect that was not so much about AI ethics as about the personal ethics of the participants. In my case that is the strongly held belief that any “self driving” car must always attempt to minimise harm done to anyone not riding in a car, even if that comes at the cost of maximising deaths among vehicle passengers. ↩︎
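For what it is worth, here is a minimal sketch of what the avocado classifier imagined in the first footnote might look like: a small convolutional network trained on photos sorted into per-ripeness folders. The “avocados” directory, the image size and the network shape are all assumptions made purely for illustration; this is a sketch of the general approach, not a claim about any particular product.

```python
# A minimal sketch (assuming TensorFlow/Keras) of the avocado-ripeness
# classifier imagined in the first footnote. The "avocados" directory with
# one subfolder per ripeness label (e.g. avocados/ripe, avocados/unripe)
# is hypothetical.
import tensorflow as tf

# Load labelled avocado photos; labels are inferred from the subfolder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "avocados",
    image_size=(128, 128),
    batch_size=32,
)

# A small convolutional network: enough to illustrate the idea,
# not a production model.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),  # two classes: ripe / not ripe
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=10)
```

Once trained, running a single photo through the model and applying a softmax over the two logits would give the “is it ripe yet?” answer the footnote is after; whether it would do so reliably is, of course, exactly the kind of claim this post argues we should treat with suspicion.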