The Chatbots May Poison Themselves

The Atlantic

Generative-AI programs may eventually consume material that was created by other machines, with disastrous consequences.

In the beginning, the chatbots and their ilk fed on the human-made internet. Various generative-AI models of the sort that power ChatGPT got their start by devouring data from sites including Wikipedia, Getty, and Scribd. They consumed text, images, and other content, learning through algorithmic digestion their flavors and texture, which ingredients go well together and which do not, in order to concoct their own art and writing. But this feast only whetted their appetite.

Generative AI is utterly reliant on the sustenance it gets from the web: Computers mime intelligence by processing almost unfathomable amounts of data and deriving patterns from them. ChatGPT can write a passable high-school essay because it has read libraries' worth of digitized books and articles, while DALL-E 2 can produce Picasso-esque images because it has analyzed something like the entire trajectory of art history. The more they train on, the smarter they appear.

Eventually, these programs will have ingested almost every human-made bit of digital material. And they are already being used to engorge the web with their own machine-made content, which will only continue to proliferate: across TikTok and Instagram, on the sites of media outlets and retailers, and even in academic experiments. To develop ever more advanced AI products, Big Tech might have no choice but to feed its programs AI-generated content, or might simply not be able to sift the human fodder from the synthetic, a potentially disastrous change in diet for both the models and the internet, according to researchers.

Read: AI doomerism is a decoy

The problem with using AI output to train future AI is straightforward. Despite stunning advances, chatbots and other generative tools such as the image-making Midjourney and Stable Diffusion remain sometimes shockingly dysfunctional, their outputs filled with biases, falsehoods, and absurdities. Those mistakes will migrate into future iterations of the programs, Ilia Shumailov, a machine-learning researcher at Oxford University, told me. "If you imagine this happening over and over again, you will amplify errors over time."

In a recent study on this phenomenon, which has not been peer-reviewed, Shumailov and his co-authors describe the conclusion of those amplified errors as model collapse: a degenerative process whereby, over time, models "forget," almost as if they were growing senile. (The authors originally called the phenomenon "model dementia," but renamed it after receiving criticism for trivializing human dementia.)

Generative AI produces outputs that, based on its training data, are most probable. (For instance, ChatGPT will predict that, in a greeting, "doing?" is likely to follow "how are you.") That means events that seem to be less probable, whether because of flaws in an algorithm or a training sample that doesn't adequately reflect the real world (unconventional word choices, strange shapes, images of people with darker skin, since melanin is often scant in image datasets), will not show up as much in the model's outputs, or will show up with deep flaws. Each successive AI trained on past AI would lose information on improbable events and compound those errors, Aditi Raghunathan, a computer scientist at Carnegie Mellon University, told me. You are what you eat.
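To see that dynamic in miniature, consider a toy simulation, offered here as an illustration of the general idea rather than as anything from the study itself. Each "generation" is simply a bell curve fitted to a finite sample drawn from the previous generation's curve; the sample size, the number of generations, and the use of a bell curve at all are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data, a standard normal distribution.
mu, sigma = 0.0, 1.0
sample_size = 100      # finite training set per generation (arbitrary choice)
generations = 2000

for gen in range(1, generations + 1):
    # Each new generation trains only on samples produced by the previous one.
    data = rng.normal(mu, sigma, size=sample_size)
    mu, sigma = data.mean(), data.std()   # refit the model to its own output
    if gen % 500 == 0:
        print(f"generation {gen:4d}: mean = {mu:+.4f}, spread = {sigma:.4f}")
```

Because the rare values out in the tails are the easiest to miss in a small sample, each refit tends to narrow the curve a little further; run long enough, the spread dwindles toward zero. That is the forgetting of improbable events, compounded generation after generation, that Raghunathan describes.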
Recursive training could magnify bias and error, as previous research also suggests: chatbots trained on the writings of a racist chatbot, such as early versions of ChatGPT that racially profiled Muslim men as terrorists, would only become more prejudiced. And if taken to an extreme, such recursion would also degrade an AI model's most basic functions. As each generation of AI misunderstands or forgets underrepresented concepts, it will become overconfident about what it does know. Eventually, what the machine deems "probable" will begin to look incoherent to humans, Nicolas Papernot, a computer scientist at the University of Toronto and one of Shumailov's co-authors, told me.

The study tested how model collapse would play out in various AI programs: think GPT-2 trained on the outputs of GPT-1, GPT-3 on the outputs of GPT-2, GPT-4 on the outputs of GPT-3, and so on, until the nth generation. A model that started out producing a grid of numbers displayed an array of blurry zeroes after 20 generations; a model meant to sort data into two groups eventually lost the ability to distinguish between them at all, producing a single dot after 2,000 generations. The study provides a nice, concrete way of demonstrating what happens with such a data feedback loop, said Raghunathan, who was not involved with the research. The AIs gobbled up one another's outputs, and in turn one another, a sort of recursive cannibalism that left nothing of use or substance behind; these are not Shakespeare's "anthropophagi," or human-eaters, so much as mechanophagi of Silicon Valley's design.

The language model they tested, too, completely broke down. The program at first fluently finished a sentence about English Gothic architecture, but after nine generations of learning from AI-generated data, it responded to the same prompt by spewing gibberish: "architecture. In addition to being home to some of the world's largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-." For a machine to create a functional map of a language and its meanings, it must plot every possible word, regardless of how common it is. "In language, you have to model the distribution of all possible words that may make up a sentence," Papernot said. "Because there is a failure [to do so] over multiple generations of models, it converges to outputting nonsensical sequences."

In other words, the programs could only spit back out a meaningless average, like a cassette that, after being copied enough times on a tape deck, sounds like static. As the science-fiction author Ted Chiang has written, if ChatGPT is a condensed version of the internet, akin to how a JPEG file compresses a photograph, then training future chatbots on ChatGPT's output is the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.
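The two-group experiment can be imitated, loosely, with off-the-shelf tools. The sketch below is a hypothetical stand-in, not the researchers' code: each generation fits a two-part mixture model (scikit-learn's GaussianMixture) to points sampled from the previous generation's fit, and the printout tracks the balance between the two groups and the gap between their centers. The cluster positions, sample sizes, and number of generations are arbitrary.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Generation 0: "real" data drawn from two well-separated 2-D clusters.
data = np.vstack([
    rng.normal(loc=[-3.0, 0.0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[+3.0, 0.0], scale=0.5, size=(100, 2)),
])

for gen in range(1, 1001):
    model = GaussianMixture(n_components=2, random_state=gen).fit(data)
    data, _ = model.sample(200)   # the next generation sees only model output
    if gen % 250 == 0:
        gap = np.linalg.norm(model.means_[0] - model.means_[1])
        print(f"generation {gen:4d}: group weights = {np.round(model.weights_, 2)}, "
              f"gap between centers = {gap:.2f}")
```

In runs like this, sampling noise gradually tilts the balance between the groups and shrinks their spread; carried far enough, the fit tends to end up describing something closer to a single blob than the two clusters it began with, a toy echo of the classifier in the study that was reduced to producing a single dot.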
The risk of eventual model collapse does not mean the technology is worthless or fated to poison itself. Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the National AI Institute for Foundations of Machine Learning, which is sponsored by the National Science Foundation, pointed to privacy and copyright concerns as potential reasons to train AI on synthetic data.

Consider medical applications: Using real patients' medical information to train AI poses huge privacy risks that using representative synthetic records could bypass, say, by taking a collection of people's records and using a computer program to generate a new dataset that, in the aggregate, contains the same information (a bare-bones sketch of this idea appears at the end of this article). To take another example, limited training material is available in rare languages, but a machine-learning program could produce permutations of what is available to augment the dataset.

Read: ChatGPT is already obsolete

The potential for AI-generated data to result in model collapse, then, emphasizes the need to curate training datasets. "Filtering is a whole research area right now," Dimakis told me. "And we see it has a huge impact on the quality of the models." Given enough data, a program trained on a smaller amount of high-quality inputs can outperform a bloated one. Just as synthetic data aren't inherently bad, "human-generated data is not a gold standard," Shumailov said. "We need data that represents the underlying distribution well." Human and machine outputs are just as likely to be misaligned with reality (many existing discriminatory AI products were trained on human creations). Researchers could potentially curate AI-generated data to alleviate bias and other problems by training their models on more representative data. Using AI to generate text or images that counterbalance prejudice in existing datasets and computer programs, for instance, could provide a way to potentially debias systems by using this controlled generation of data, Raghunathan said.

A model that collapsed as dramatically as the ones Shumailov and Papernot documented would never be released as a product, anyway. Of greater concern is the compounding of smaller, hard-to-detect biases and misperceptions, especially as machine-made content becomes harder, if not impossible, to distinguish from human creations. "I think the danger is really more when you train on the synthetic data and as a result have some flaws that are so subtle that our current evaluation pipelines do not capture them," Raghunathan said. Gender bias in a résumé-screening tool, for instance, could, in a subsequent generation of the program, morph into more insidious forms. The chatbots might not eat themselves so much as leach undetectable traces of cybernetic lead that accumulate across the internet with time, poisoning not just their own food and water supply but humanity's.
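A coda for the technically curious: the aggregate-preserving synthetic records mentioned above can be sketched in a few lines of Python. What follows is a naive, hypothetical illustration, not how any production system (or anything Dimakis described) actually works. It invents a toy patient table, keeps only its overall statistics, and samples entirely new rows from them; real synthetic-data pipelines go much further, adding formal protections such as differential privacy, because matching aggregates alone does not make data safe to release.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Toy stand-in for real patient records (entirely made-up data).
real = pd.DataFrame({
    "age": rng.integers(20, 90, size=1000),
    "systolic_bp": rng.normal(125, 15, size=1000),
    "cholesterol": rng.normal(200, 30, size=1000),
})

# Keep only aggregate statistics (means and covariances), then sample new
# "patients" from that fitted distribution instead of releasing real rows.
# (A real system would also handle data types, categorical fields, and
# formal privacy guarantees; this is purely illustrative.)
mean = real.mean().to_numpy()
cov = real.cov().to_numpy()
synthetic = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=1000), columns=real.columns
)

print(real.describe().loc[["mean", "std"]].round(1))
print(synthetic.describe().loc[["mean", "std"]].round(1))
```

The two printouts should report similar means and spreads, even though no synthetic row corresponds to any real person.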