New AI fake text generator may be too dangerous to release, say creators (the Guardian)

From a research standpoint, GPT2 is groundbreaking in two ways. One is its size, says Dario Amodei, OpenAI’s research director. The models “were 12 times bigger, and the dataset was 15 times bigger and much broader” than the previous state-of-the-art AI model. It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes. The vast collection of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.
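The selection rule described above — keep only outbound links whose Reddit submissions cleared a small vote threshold — can be sketched in a few lines. This is an illustrative reconstruction, not OpenAI's actual pipeline; the data layout, field names, and the exclusion of internal Reddit links are assumptions based on the article's description.

```python
# Illustrative sketch of the link-filtering step described above.
# Not OpenAI's actual pipeline: the submission structure and the
# threshold are assumptions based on the article ("links with more
# than three votes").

VOTE_THRESHOLD = 3

def select_links(submissions):
    """Keep outbound URLs from submissions with more than VOTE_THRESHOLD votes."""
    return [
        s["url"]
        for s in submissions
        if s["score"] > VOTE_THRESHOLD
        and not s["url"].startswith("https://www.reddit.com")  # skip internal links
    ]

submissions = [
    {"url": "https://example.com/article-a", "score": 5},    # kept
    {"url": "https://example.com/article-b", "score": 2},    # too few votes, dropped
    {"url": "https://www.reddit.com/r/news", "score": 90},   # internal link, dropped
]

print(select_links(submissions))  # → ['https://example.com/article-a']
```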

The amount of data GPT2 was trained on directly affected its quality, giving it a deeper grasp of written text. It also led to the second breakthrough: GPT2 is far more general-purpose than previous text models. By structuring the input text, it can perform tasks including translation and summarisation, and pass simple reading comprehension tests, often performing as well as or better than other AIs built specifically for those tasks.
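The "structuring the input" trick can be made concrete. The GPT-2 paper induced summarisation by appending "TL;DR:" to a document, and translation by priming the model with paired sentences; the sketch below shows only the prompt construction (no model call), with templates that are illustrative rather than OpenAI's exact formats.

```python
# Sketch of how task behaviour is induced purely by structuring the
# input text, in the spirit of the GPT-2 work (which used "TL;DR:" to
# trigger summarisation). A real run would feed these prompts to a
# language model and sample a continuation; the templates here are
# illustrative assumptions.

def summarisation_prompt(document: str) -> str:
    # Seeing "TL;DR:", the model tends to continue with a summary.
    return f"{document}\nTL;DR:"

def translation_prompt(examples, sentence: str) -> str:
    # A few "english = french" pairs prime the model to translate
    # the final, unpaired sentence.
    primed = "\n".join(f"{en} = {fr}" for en, fr in examples)
    return f"{primed}\n{sentence} ="

print(summarisation_prompt("GPT2 was trained on 40 GB of web text."))
print(translation_prompt([("the cat", "le chat")], "the dog"))
```

The point is that no task-specific architecture is involved: the same model performs different tasks depending only on how the input is framed.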

Tip of the hat to Chris Aldrich.


The more far-out treatments of AI tend to explore the potential for programmed entities to develop sentience or even consciousness. But the effects play out both ways. In an interview to promote her new book The Age of Surveillance Capitalism, Shoshana Zuboff argues that it “is no longer enough to automate information flows about us; the goal now is to automate us.” And indeed, one of the core tenets of the Dumbularity is that even as machines take on more functions once reserved for humans, humans are being programmed and behaving as if they are machines.

It’s difficult to imagine these developments leading to a future with more authentic, varied and deeply felt expressions of human experience.

I’m starting to think the people who think AI will usher in a golden age of human endeavor and freedom from drudgery are actually not people but AI. How would we know?