Salmon in the river
An archaeological excavation of AI-generated salmon fillets, and what these say about the narrative circulation of algorithmic folklore
You've probably seen a version of this image before. Salmon fillets floating in water, realistic yet uncanny. The caption spells out why this is funny: AI was asked to generate images of salmon in a river, and it did - just not in the way one would expect.
This compact multimodal narrative is a textbook example of algorithmic folklore - the genres and repertoires that emerge when everyday creative practices encounter automated systems - and embodies some of its key features. Algorithmic folklore is often created with or alongside technologies of automation as much as it is about the uneasy interaction between people and computation. In this case, the salmon fillets were prompted by a person, generated by a text-to-image model, and compiled into a humorous collage by a person who wanted to highlight a specific shortcoming of the model.
Like other kinds of folklore, algorithmic folklore is also often anonymous or of unmarked authorship: someone prompted the model, some model generated the images, someone compiled the images, someone added the caption, someone posted it online, someone screenshotted and reposted it - but these actions and their authors leave little to no trace in the image. And just like folklore, this image affords different kinds of reproduction: it is accessible enough to be understood and circulated by social media users (and/or algorithms) as a funny dig at the vagaries of generative artificial intelligence, but it can also function as a blueprint for its own recreation as a test, benchmark or experiment - does that prompt really generate those images? Do newer models also fail in this way? What other versions of the joke can be produced?
When analyzed on its own, this image is already quite a rich palimpsest of human-AI collaboration: it is likely a screenshot of a tweet (without an author or date), it includes four synthetic images of unclear provenance and a line of text making a claim about their origin, and it exists as a meme circulating across platforms, often accompanied by critical commentary about artificial intelligence. But a core argument of the ALGOFOLK project is that this sort of object needs to be studied not only through deep, interpretive readings, but also by following its circulation in larger ecosystems of content, as one among many versions of a joke that changes and branches out in time and across places. For example, the image in question was posted on the iFunny website by user InterStellaLoaf5555 in October 2022, and received around 3,000 likes and 50 comments. Among these, a couple of more critical ones already highlight something interesting:
I don’t believe that’s what the prompt was. If it really was, they wouldn’t have cropped it out, then told us. (1996CutlassCiera_2016)
The prompt was actually “salmon filet in River, HD photography” smh normies just mad a computer can do better art than them (memeboy67)
For the first commenter, the lack of a prompt text box is suspicious; according to the second, the caption is partly misleading, as they seem to know that the original prompt explicitly asked for a salmon fillet - and yet, neither claim presents any evidence, and suspicion becomes part of the image's reception: did the AI model really fail, or was it set up to?
The analysis could stop here, as iFunny doesn't keep track of where or how much this image was reshared, but luckily reverse image search tools allow one to dig a bit deeper. By isolating and searching for each of the four salmon fillet images, I find a tweet posted by DeltyThe73rd two days before the iFunny upload. The tweet contains three of the four images, and explains them as such:
Delty does not explicitly mention a prompt, and claims that the images were sent to them by someone else. The tweet is quite popular, with around 36,000 retweets, 273,000 likes and more than 600 replies, to the point that the author themselves recognizes it as a "hit tweet" in a reply on the same day. Many of the replies are jokes about the ways different models hallucinate, and some share the same critical angle as the iFunny post, showcasing their own proofs that models can in fact distinguish between live salmon and salmon fillets. Delty's tweet is shared widely enough to reach across social media publics and capture the attention of AI experts; the next day, computer scientist François Chollet quote-tweets it with a detailed commentary:
There's a meme floating around that says "AI" interprets prompts overly literally, to comedic effect. In reality, these models interpret your prompts *statistically*, not *literally*. They show you the most likely output consistent with your query, in terms of the training data. "salmon in the river" will get you the visual most likely to be associated with that caption on the Internet (i.e. the data the model was trained on). To get something that's a statistical outlier, you have to go with a prompt that's also an outlier. So these models do possess a fair bit of common sense -- the kind that can be obtained via statistical mining of web data. (Which is only a subset of human common sense.)
Chollet's explanation is interesting not only because of its technical nuance, but also because it sets up a rhetorical boundary between "memes" that "float around" for "comedic effect" and the "reality" of how generative AI models work - or, in more technical terms, between algorithmic folklore and expert knowledge. By distinguishing between literal and statistical interpretations of the output, the computer scientist seems to imply that the images are the result of a skewed representation of salmon in the training data, which leads the model to default to images of fillets rather than live fish. But who sent these images to Delty? And who created them in the first place?
The first question is difficult to answer (DeltyThe73rd please get in touch if you want to help out), but the second one leads to an interesting turn of events. In a reply to the tweet, Japanese user favorite_Bonsai links back to one of their own tweets from late August 2022, which contains one of the images and the following explanation:
I asked an AI to create an image of "Salmon Run" (sāmonran) for me, and the AI pretended to know everything.
As evident from their Twitter profile, favorite_Bonsai is a fan of the Splatoon videogame series, and "Salmon Run" is the name of a game mode introduced with the 2017 release of Splatoon 2. This is confirmed by other members of the Japanese Splatoon 2 community replying to the tweet with other versions of AI-generated salmon runs, or asking favorite_Bonsai for instructions to create their own. In response, the original creator shares a screenshot of their conversation with an automated account on the LINE messaging app called DrawingVeryWell-kun, which I discover to be a chatbot-based implementation of Stable Diffusion. In the screenshot, favorite_Bonsai's prompt is captured alongside the output image: sāmonran (written in Japanese kana), and a long strip of vibrant pink salmon fillet floating on the foamy surface of a body of water.
This brief account of an archaeological dig helps answer a few questions. Yes, the salmon in a river images are AI-generated - more specifically, generated by a Japanese gamer trying to reproduce a Splatoon 2 level with Stable Diffusion via a LINE chatbot. And no, they were not prompted with "salmon in a river" nor with "salmon fillet", but rather with the Japanese transliteration of "salmon run". They were not generated to trick or critique AI, but the creator found the algorithmic failure funny enough to be worth sharing. These images circulated online (likely within Japanese Twitter) for a few months before reaching DeltyThe73rd, who reposted them and added the "salmon in a river" interpretation; from here, they were recreated and reposted in different combinations until someone took a screenshot of the tweet "The AI prompt was salmon in the river. So majestic" which in turn became the most widely shared version of this narrative.
Over the following weeks, months and years, versions of the salmon in a river meme pop up in a Spanish tweet (with an emphasis on the dangers of AI translation), a Mastodon post (with users attributing it to DALL-E), a Reddit thread (which identifies it as a "Peak Artificial Intelligence Moment"), and even a Shanghai Biennale review (in which it functions as a critical lynchpin). Some versions have new captions that explicitly make fun of dystopian visions of AI domination, others update the joke to newer generative models; some discussions circle around doubts about the original prompt, while others identify this as a nostalgic image from years past that keeps being regularly reposted to farm engagement. The salmon in the river becomes algorithmic folklore.
In terms of methodology, this pilot study demonstrates why it is important to go beyond the interpretive analysis of individual pieces of content or commentary. Taken at face value, the most widely shared version of this object presents a clear narrative and lends itself to a specific interpretation; when contextualized in a longer, more complex chain of versionings, translations and recreations, the narrative is challenged and interpretations multiply. Tracing the circulation of algorithmic folklore is not easy: social media posts disappear, multimodal content is difficult to search across platforms, and establishing clear links requires capturing a wide variety of data. For this reason, the ALGOFOLK project has set up the Black Box, an Obsidian-based archive in which we can collect examples of algorithmic folklore as individual entries connected by both relationships of derivation and thematic tags. At the moment, our salmon cluster consists of eleven entries, branching out from the original Sāmonran image, through the Delty tweet, into a variety of other interconnected versions, each with its own screenshot, transcriptions and metadata - ready to capture the next stage of the salmon's algorithmic migration.
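The Black Box itself lives in Obsidian, but its underlying structure - entries linked by derivation relationships and thematic tags - can be sketched as a small graph model. The following is an illustrative Python sketch, not the project's actual code; all entry names, platforms and tags are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

# Each archive entry records where a version appeared, its thematic tags,
# and (when known) which earlier version it derives from.
@dataclass
class Entry:
    id: str
    platform: str
    tags: set = field(default_factory=set)
    derived_from: Optional[str] = None  # id of the parent version, if known

def lineage(entries: dict, entry_id: str) -> list:
    """Walk derivation links back from a version to its earliest known root."""
    chain = []
    current = entries.get(entry_id)
    while current is not None:
        chain.append(current.id)
        current = entries.get(current.derived_from) if current.derived_from else None
    return chain

# A hypothetical three-entry slice of the salmon cluster.
entries = {e.id: e for e in [
    Entry("samonran", "twitter-jp", {"salmon", "splatoon"}),
    Entry("delty-tweet", "twitter", {"salmon", "prompt-joke"},
          derived_from="samonran"),
    Entry("ifunny-post", "ifunny", {"salmon", "screenshot"},
          derived_from="delty-tweet"),
]}

print(lineage(entries, "ifunny-post"))  # ['ifunny-post', 'delty-tweet', 'samonran']
```

Modeling derivation as explicit parent links keeps the archive queryable in both directions: one can trace any reposted screenshot back to its root, or enumerate all known descendants of an original image.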