The piece showcases the recent rise of deepfake vocals (vocals produced by computers that closely resemble the voices of well-known artists) in new songs that are generally produced without the involvement of the artists whose voices are reproduced1. It is a fascinating introduction to a corner of music production full of delightful weirdness. I particularly like this track featuring a recreation of Travis Scott’s voice, which combines glitchy aesthetics with what the agency producing the song “considered to be cohesive lyrics”:
Sadly the main thesis of the piece (“Faux songs created from the original voices of star artists are becoming more popular (and more convincing), leading to murky questions of morality and legality”) does not do this phenomenon much justice. The idea that this type of artistic production, enabled by digital manipulation techniques, can somehow be understood through concepts like “faux” vs “original” strikes me as outright silly. Quite obviously the “faux” works are just as original as any of the “originals” that the voices are derived from.
Even more problematic are continuing attempts to fit this type of creativity into existing concepts like authorship and copyright. The realities of creative production today are so far removed from the realities of the late 19th century (when modern ideas about copyright and authorship were codified) that it really does not make much sense to try to analyse them through the lens of these concepts.
We urgently need new concepts that do a better job at recognising artistic innovation while at the same time ensuring that the value created in the process is fairly distributed.
Fortunately the author does not fall into the trap of pretending that these songs have been produced by “artificial intelligence”. While he mentions the term once, the article makes clear that these songs have been produced by people who are using machine learning models to generate the deepfake voices. In the examples discussed in the piece, the actual lyrics, beats and compositions (i.e. the copyrighted works) are all written by individuals or teams. ↩︎
A couple of days ago, I came across the website generated.photos (via the Verge), a new service that offers 100.000 computer-generated portrait images and positions them as an alternative to traditional stock photos. The Verge article highlights the fact that the pictures can be used “royalty free” and the generated.photos website claims that “Copyrights … will be a thing of the past”. This made me postulate on Twitter that we might very well be witnessing the emergence of an “auto-generative public domain”.
It has since become clear that the creators of generated.photos do not intend to contribute the output of their algorithms (or for that matter the algorithms themselves) to the public domain: by now the website has been updated to note that the images are available for non-commercial use only. A new terms and conditions page states that “Legal usage rights for content produced by artificial intelligence is a new, largely unknown domain”, only to go on to list a number of restrictions on the use of the “materials and software” made available on generated.photos.
As noted in the terms and conditions, the copyright status of images (and other types of artworks) that are autonomously created by AI-powered software is largely unsettled. As Andres Guadamuz notes in his excellent overview post on the topic, there are generally two schools of thought on the question of whether computer-generated artworks are (or should be) protected by copyright. One school argues that copyright protection only attaches to works that have been created by humans, and as a result computer-generated artworks can by definition not be copyrighted. The other school points out that such works are not created without any human intervention (someone needs to start up the software and set basic parameters) and that whoever initiated the generation of these works should be considered the creator and receive at least a minimum level of (copyright) protection as a reward for their investment.
In the case of generated.photos it is evident that the people behind the project have made a considerable investment in it. The website states that they have shot more than 29.000 photos of 69 models, which were subsequently used as a training set for the software. Judging by notes on their website, the 100.000 images made available on the website have been created using the open source generative adversarial network StyleGAN, which is freely available via GitHub. It remains to be seen whether creating photos (which are copyright protected) that are then used to train an out-of-the-box GAN does indeed mean that the output of the network is (a) protected by copyright and (b) that the copyright belongs to the entity that trained the GAN.
While it seems at least possible that the creators of generated.photos have a legitimate copyright claim in their output, that does not necessarily invalidate the idea that we are witnessing the emergence of an auto-generative public domain, i.e. circumstances in which computer algorithms produce a (possibly endless) stream of artworks that are indistinguishable from human-created works, are free from copyright, and can be used by anyone for any purpose.
In terms of quality, the images provided by generated.photos are still far from indistinguishable from human-made stock photos, but it is clear that it is only a matter of time before the technology becomes good enough to produce sufficiently high-quality output at scale. Projects like the Next Rembrandt illustrate that this development is not limited to stock photography but will likely happen across the full breadth of human creative expression.
Such a development would dramatically upend a large number of creative professions. It seems only a matter of time before stock photography and other forms of creative work where the primary draw is not the specific style of a particular creator are replaced by AI-generated output that costs almost nothing to create. Once AI-powered systems can deliver high-quality creative output at zero marginal cost, the question of whether these outputs are protected by copyright will be largely meaningless (a single system releasing its output into the public domain would render any attempt to enforce copyright futile).
From the perspective of those making a living by creating stock photos, background music and other forms of creative work that is about to be eaten up by AI, the emergence of this “auto-generative public domain” must feel dystopian. Under these conditions the primary question that we must ask ourselves is not how we can fit works created by computer algorithms within the framework of copyright law. Instead we should ask ourselves how we can create the conditions for human creators to leverage these technologies as tools for their own creative expression. Instead of mourning a future in which humans are no longer employed to shoot endless variations of the same stock photos, we should look out for entirely new forms of creative expression enabled by these tools.
The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.” […]
Why do people believe so much nonsense about AI? The obvious answer is that they are influenced by what they see, hear and read in mainstream media. But until now that was just an anecdotal conjecture. The good news is that we now have some empirical support for it, in the shape of a remarkable investigation by the Reuters Institute for the Study of Journalism at Oxford University into how UK media cover artificial intelligence. […] The main conclusion of the study is that media coverage of AI is dominated by the industry itself. Nearly 60% of articles were focused on new products, announcements and initiatives supposedly involving AI; a third were based on industry sources; and 12% explicitly mentioned Elon Musk, the would-be colonist of Mars.
Critically, AI products were often portrayed as relevant and competent solutions to a range of public problems. Journalists rarely questioned whether AI was likely to be the best answer to these problems, nor did they acknowledge debates about the technology’s public effects.[…]
In essence this observation is neither new nor specific to media coverage of AI. Similar dynamics can be observed across the whole gamut of technology journalism, where the media breathlessly amplifies thinly veiled sales pitches of technology companies. A couple of years ago, Adam Greenfield did an excellent job of dissecting these dynamics for the “smart cities” narrative. Adam’s post went one step further by focussing on how these media narratives find their way into public policies via bureaucrats who are ill-equipped to critically question them.
Even if we assume that the current capacities and impacts of AI systems are massively oversold, it is still clear that widespread deployment of artificial intelligence has the potential to further wreck the social fabric of our societies in pursuit of optimising the extraction of value. Given this, it is not entirely surprising that the purveyors of the AI-driven future are anticipating the inevitable backlash:
Another plank in the industry’s strategy is to pretend that all the important issues about AI are about ethics and accordingly the companies have banded together to finance numerous initiatives to study ethical issues in the hope of earning brownie points from gullible politicians and potential regulators. This is what is known in rugby circles as “getting your retaliation in first” and the result is what can only be described as “ethics theatre”, much like the security theatre that goes on at airports.
The term “ethics theatre” seems spot on in this context. So far the whole discussion about AI ethics does indeed resemble a theatre more than anything else2. On multiple occasions I have seen otherwise critical people become almost deferential to some imagined higher order of discourse as soon as discussions were framed as being about the “ethics of…”. Having unmasked the abundance of ethics talk as an attempt to proactively deflect regulation, Naughton points out that what we really need is indeed regulation:
…in the end it’s law, not ethics, that should decide what happens, as Paul Nemitz, principal adviser to the European commission, points out in a terrific article just published by the Royal Society. Just as architects have to think about building codes when designing a house, he writes, tech companies “will have to think from the outset… about how their future program could affect democracy, fundamental rights and the rule of law and how to ensure that the program does not undermine or disregard… these basic tenets of constitutional democracy”.
That being said, I would totally be in the market for an AI-powered app that can reliably tell me if an avocado is indeed ripe to eat. I would imagine that it can’t be that hard to train a neural network to do so by feeding it thousands of avocado images labeled according to ripeness. ↩︎
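For what it’s worth, the basic recipe behind such an app really is simple in outline. The sketch below is a toy illustration only: it uses synthetic 8×8 “images” in which ripe fruit is assumed to be darker on average (a made-up proxy, not a real dataset), and a plain logistic regression in place of a neural network, just to show the train-on-labeled-images loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip to avoid overflow warnings in np.exp for large |z|.
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

def make_dataset(n):
    """Synthetic 8x8 grayscale 'avocado' images.

    Assumption (for illustration only): ripe avocados (label 1) are
    darker on average than unripe ones (label 0).
    """
    labels = rng.integers(0, 2, size=n)
    base = np.where(labels == 1, 0.3, 0.7)          # mean brightness per class
    images = rng.normal(base[:, None], 0.1, size=(n, 64))
    return images, labels

def train_classifier(X, y, lr=0.5, epochs=200):
    """Logistic regression fitted by batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)                      # predicted ripeness probability
        w -= lr * (X.T @ (p - y)) / len(y)          # gradient of log-loss w.r.t. w
        b -= lr * np.mean(p - y)                    # ...and w.r.t. b
    return w, b

X, y = make_dataset(2000)
w, b = train_classifier(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real app would swap the synthetic data for thousands of labeled photos and the logistic regression for a convolutional network, but the overall shape — labeled examples in, fitted decision function out — stays the same.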
The notable exception is the 2017 MIT experiment about who should be killed by autonomous vehicles, which probably kick-started this entire AI-ethics trope. Although in retrospect that was not so much about AI ethics as about the personal ethics of the participants. In my case that meant the strongly held belief that any “self-driving” car must always attempt to minimise harm done to anyone not driving in a car, even if that comes at the cost of maximising deaths among vehicle passengers. ↩︎