
Machines vs Elections

08 Aug 2023 | 500 words | artificial intelligence democracy elections

So over the weekend we learned that Sam Altman is apparently nervous about the impact of AI on democratic elections:

i am nervous about the impact AI is going to have on future elections (at least until everyone gets used to it). personalized 1:1 persuasion, combined with high-quality generated media, is going to be a powerful force. (@sama on x, 04-08-2023)1

And that, even though he is the CEO of the company that has arguably done more than anyone else to get generative ML tools into the hands of the public, he cannot come up with anything better than “raising awareness”:

although not a complete solution, raising awareness of it is better than nothing. we are curious to hear ideas, and will have some events soon to discuss more. (@sama on x, 04-08-2023)

It seems that Altman’s “concern” has already had its impact on OpenAI’s public policy narrative. A few hours after Altman’s tweet, OpenAI’s new European head of policy and partnerships had translated it into a call to action:

Who are the best thinkers/builders at the intersection of generative AI and elections in Europe? Ideas welcome! (@sGianella on x, 04-08-2023)

Now, there are plenty of reasons to suspect that Altman’s “nervousness” is nothing more than self-serving criti-hype (see here for such an interpretation), but in this case it is worth dwelling a bit on the particular connection being made between AI and elections (especially since both the US and Europe have upcoming elections).

There are indeed reasons to be concerned about the impact of generative ML models on elections and other democratic processes2, but it is beyond absurd that such concerns are being articulated by OpenAI and its representatives:

The way in which OpenAI introduces its models into the public sphere stands in stark contrast to the very norms of openness, transparency, and equality on which democratic elections are based. Instead of embracing these values, OpenAI deliberately hides how its models are trained, making it impossible for researchers, policymakers, and the general public to understand “the impact AI is going to have on future elections.”

If Sam Altman and his surrogates are truly interested in limiting undue interference with upcoming elections by the technology they are peddling to the public, then they should apply the same level of transparency to their publicly available models. Until that happens, we should keep ML systems as far away from the electoral process as possible.


  1. Given the unstable nature of the platform formerly known as Twitter, I have made a screenshot of the posts quoted in this conversation available here.↩︎

  2. Although the impact is probably overstated, the underlying problem is not so much “AI” but rather disinformation that may or may not be aided by “AI” systems. As Sayash Kapoor & Arvind Narayanan have convincingly argued, the real bottleneck for disinformation campaigns is not generation but distribution, and it is therefore at the distribution level that this problem should be addressed. ↩︎

Faux songs and old concepts

24 Jul 2021 | 373 words | music artificial intelligence copyright

Billboard magazine has a fascinating piece on the growing use of deepfake vocals to create what it terms “faux songs”: ‘It’s Fan Fiction For Music’: Why Deepfake Vocals of Music Legends Are on the Rise.

The piece showcases the recent rise of deepfake vocals (vocals produced by computers that closely resemble the voices of well-known artists) in new songs that are generally produced without the involvement of the artists whose voices are reproduced1. It is a super interesting introduction to a world of music production full of delightful weirdness. I particularly like this track featuring a recreation of Travis Scott’s voice, which combines glitchy aesthetics with what the agency producing the song “considered to be cohesive lyrics”:

Sadly the main thesis of the piece (“Faux songs created from the original voices of star artists are becoming more popular (and more convincing), leading to murky questions of morality and legality”) does not do this phenomenon much justice. The idea that this type of artistic production, enabled by digital manipulation techniques, can somehow be understood through concepts like “faux” vs “original” strikes me as outright silly. Quite obviously the “faux” works are just as original as any of the “originals” that the voices are derived from.

Even more problematic are continuing attempts to fit this type of creativity into existing concepts like authorship and copyright. The realities of creative production today are so far removed from those of the late 19th century (when modern ideas about copyright and authorship were codified) that it really does not make much sense to try to analyse them through the lens of these concepts.

We urgently need new concepts that do a better job at recognising artistic innovation while at the same time ensuring that the value created in the process is fairly distributed.


  1. Fortunately the author does not fall into the trap of pretending that these songs have been produced by “artificial intelligence”. While he mentions the term once, the article makes it clear that these songs have been produced by people who use machine learning models to generate the deepfake voices. In the examples discussed in the piece, the actual lyrics, beats and compositions (i.e. the copyrighted works) are all written by individuals or teams. ↩︎

Towards an auto-generative Public Domain?

A couple of days ago, I came across the website generated.photos (via The Verge), a new service that offers 100,000 computer-generated portrait images and positions them as an alternative to traditional stock photos. The Verge article highlights the fact that the pictures can be used “royalty free” and the generated.photos website claims that “Copyrights … will be a thing of the past”. This made me postulate on Twitter that we might very well be witnessing the emergence of an “auto-generative public domain”.

It has since become clear that the creators of generated.photos do not intend to contribute the output of their algorithms (or, for that matter, the algorithms themselves) to the public domain: by now the website has been updated to note that the images are available for non-commercial use only. A new terms and conditions page states that “Legal usage rights for content produced by artificial intelligence is a new, largely unknown domain”, only to go on to list a number of restrictions on the use of the “materials and software” made available on generated.photos.

As noted in the terms and conditions, the copyright status of images (and other types of artworks) that are autonomously created by AI-powered software is largely unsettled. As Andres Guadamuz notes in his excellent overview post on the topic, there are generally two schools of thought on the question of whether computer-generated artworks are (or should be) protected by copyright. One school argues that copyright protection only attaches to works that have been created by humans, and that computer-generated artworks can therefore by definition not be copyrighted. The other school points out that such works are not created without any human intervention (someone needs to start up the software and set basic parameters) and that whoever initiated the generation of these works should be considered the creator and receive at least a minimum level of (copyright) protection as a reward for their investment.

In the case of generated.photos it is evident that the people behind the project have made a considerable investment in it. The website states that they have shot more than 29,000 photos of 69 models, which were subsequently used as a training set for the software. Judging by notes on their website, the 100,000 images made available have been created using the open source generative adversarial network StyleGAN, which is freely available via GitHub. It remains to be seen whether creating photos (which are copyright protected) that are then used to train an out-of-the-box GAN does indeed mean that the output of the network is (a) protected by copyright and (b) that the copyright belongs to the entity that trained the GAN.
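
To make the mechanics concrete: once such a network has been trained, producing “new” portraits no longer involves the original photos at all; fresh images are sampled by pushing random vectors through the trained generator. The sketch below illustrates this under stated assumptions: it is not generated.photos’ actual code, and the checkpoint file name is a hypothetical stand-in for a generator trained on the licensed portrait photos.

```python
import torch
from torchvision.utils import save_image

# "stylegan_portraits.pt" is a hypothetical checkpoint standing in for a
# generator network saved after a StyleGAN-style training run.
generator = torch.load("stylegan_portraits.pt")
generator.eval()

with torch.no_grad():
    # A StyleGAN-style generator maps a 512-dimensional random latent
    # vector to one image, so an "endless stream" of portraits is simply
    # an endless stream of random vectors fed through the same network.
    latents = torch.randn(16, 512)
    images = generator(latents)  # tensor of shape (16, 3, H, W)
    save_image(images, "portraits.png", normalize=True)
```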

While it seems at least possible that the creators of generated.photos have a legitimate copyright claim to their output, that does not necessarily invalidate the idea that we are witnessing the emergence of an auto-generative public domain, i.e. circumstances in which computer algorithms produce a (possibly endless) stream of artworks that are indistinguishable from human-created works, are free from copyright, and can be used by anyone for any purpose.

In terms of quality, the images provided by generated.photos are still far from indistinguishable from human-made stock photos, but it is clearly only a matter of time before the technology is good enough to produce high-quality output at scale. Projects like The Next Rembrandt illustrate that this development is not limited to stock photography but will likely happen across the full breadth of human creative expression.

The future: AI-driven, on-demand creation of visual assets

Such a development would dramatically upend a large number of creative professions. It seems like only a matter of time before stock photography and other forms of creative work where the primary draw is not the specific style of a particular creator are replaced by AI-generated output that costs almost nothing to create. Once AI-powered systems are able to deliver high-quality creative output at zero marginal cost, the question of whether these outputs are protected by copyright will be largely meaningless (a single system releasing its output into the public domain will render any attempts to enforce copyright futile).

From the perspective of those making a living by creating stock photos, background music and other forms of creative work that is about to be eaten up by AI, the emergence of this “auto-generative public domain” must feel dystopian. Under these conditions, the primary question we must ask ourselves is not how to fit works created by computer algorithms within the framework of copyright law. Instead, we should ask how we can create the conditions for human creators to leverage these technologies as tools for their own creative expression. And rather than mourning a future in which humans are no longer employed to shoot endless variations of the same stock photos, we should look out for entirely new forms of creative expression enabled by these tools.

On selling AI fever dreams to gullible publics

The Guardian recently had an op-ed by John Naughton on how the media is guilty of selling us AI fantasies at the behest of the technology industry. This speaks to my long-held belief that (a) AI is poorly understood and, as a result, (b) completely oversold (at least when it comes to non-trivial problems beyond optimising consumption patterns)1. In his op-ed Naughton primarily looks at how industry-serving narratives about AI have come to dominate media coverage, which he mainly attributes to journalists doing a fairly shoddy job:

The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.” […]

Why do people believe so much nonsense about AI? The obvious answer is that they are influenced by what they see, hear and read in mainstream media. But until now that was just an anecdotal conjecture. The good news is that we now have some empirical support for it, in the shape of a remarkable investigation by the Reuters Institute for the Study of Journalism at Oxford University into how UK media cover artificial intelligence. […] The main conclusion of the study is that media coverage of AI is dominated by the industry itself. Nearly 60% of articles were focused on new products, announcements and initiatives supposedly involving AI; a third were based on industry sources; and 12% explicitly mentioned Elon Musk, the would-be colonist of Mars.

Critically, AI products were often portrayed as relevant and competent solutions to a range of public problems. Journalists rarely questioned whether AI was likely to be the best answer to these problems, nor did they acknowledge debates about the technology’s public effects.[…]

In essence this observation is neither new nor specific to media coverage of AI. Similar dynamics can be observed across the whole gamut of technology journalism, where the media breathlessly amplifies thinly veiled sales pitches from technology companies. A couple of years ago, Adam Greenfield did an excellent job of dissecting these dynamics for the “smart cities” narrative. Adam’s post went one step further by focussing on how these media narratives find their way into public policies via bureaucrats who are ill-equipped to question them critically.

Even if we assume that the current capacities and impacts of AI systems are massively oversold, it is still clear that widespread deployment of artificial intelligence has the potential to further wreck the social fabric of our societies in pursuit of optimising the extraction of value. Given this, it is not entirely surprising that the purveyors of the AI-driven future are anticipating the inevitable backlash:

Another plank in the industry’s strategy is to pretend that all the important issues about AI are about ethics and accordingly the companies have banded together to finance numerous initiatives to study ethical issues in the hope of earning brownie points from gullible politicians and potential regulators. This is what is known in rugby circles as “getting your retaliation in first” and the result is what can only be described as “ethics theatre”, much like the security theatre that goes on at airports.

The term “ethics theatre” seems spot on in this context. So far the whole discussion about AI ethics does indeed resemble theatre more than anything else2. On multiple occasions I have seen otherwise critical people become almost deferential to some imagined higher order of discourse as soon as discussions were framed as being about the “ethics of…”. Having unmasked the abundance of ethics talk as an attempt to proactively deflect regulation, Naughton points out that what we really need is indeed regulation:

…in the end it’s law, not ethics, that should decide what happens, as Paul Nemitz, principal adviser to the European commission, points out in a terrific article just published by the Royal Society. Just as architects have to think about building codes when designing a house, he writes, tech companies “will have to think from the outset… about how their future program could affect democracy, fundamental rights and the rule of law and how to ensure that the program does not undermine or disregard… these basic tenets of constitutional democracy”.

This is an idea that we should take very seriously. Now that our public spaces are more and more defined by code and data, it is high time to realise that ideas like “moving fast and breaking things” are the equivalent of ignoring building codes when constructing schools in earthquake-prone areas.



  1. That being said, I would totally be in the market for an AI-powered app that can reliably tell me if an avocado is indeed ripe to eat. I would imagine that it can’t be that hard to train a neural network to do so by feeding it thousands of avocado images labeled according to ripeness (a minimal sketch of what I have in mind follows after these notes). ↩︎

  2. The notable exception is the 2017 MIT experiment on who should be killed by autonomous vehicles, which probably kick-started this entire AI ethics trope. Although in retrospect that was not so much about AI ethics as about the personal ethics of the participants. In my case it confirmed the strongly held belief that any “self-driving” car must always attempt to minimise harm done to anyone not travelling in a car, even if that comes at the cost of maximising deaths among vehicle passengers. ↩︎
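
For what it is worth, the avocado idea from the first note really would be a fairly standard image-classification exercise. Below is a minimal sketch, assuming a hypothetical folder of photos sorted into avocados/ripe and avocados/unripe; the dataset, paths and network size are invented for illustration.

```python
# Minimal sketch of the avocado ripeness classifier imagined in note 1.
# Assumes a hypothetical directory layout: avocados/ripe/*.jpg and
# avocados/unripe/*.jpg with a few thousand labeled photos.
import tensorflow as tf

train = tf.keras.utils.image_dataset_from_directory(
    "avocados", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),              # scale pixels to [0, 1]
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # probability of "ripe"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=10)
```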

meanwhile... is the personal weblog of Paul Keller. I am currently policy director at Open Future and President of the COMMUNIA Association for the Public Domain. This weblog is largely inactive but contains an archive of posts (mixing both work and personal) going back to 2005.

I also maintain a collection of cards from African mediums (which is the reason for the domain name), a collection of photos on Flickr and a website collecting my professional writings and appearances.
