Posts about AI

Journalism is lossy compression

There has been much praise in human chat — Twitter — about Ted Chiang’s New Yorker piece on machine chat — ChatGPT. Because New Yorker; because Ted Chiang. He makes a clever comparison between lossy compression — how JPEGs or MP3s save a good-enough artifact of a thing, with some pieces missing and fudged to save space — and large-language models, which learn from and spit back but do not record the entire web. “Think of ChatGPT as a blurry JPEG of all the text on the Web,” he instructs.
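
To see what the analogy borrows, consider what lossy compression actually does. Here is a minimal sketch in Python, assuming the Pillow library and a hypothetical local image file: the aggressively compressed copy is far smaller and still recognizably the same picture, but the discarded detail is gone for good.

```python
# A minimal sketch of lossy compression, assuming Pillow is installed;
# "vermeer.jpg" is a hypothetical local file used for illustration.
from PIL import Image

img = Image.open("vermeer.jpg")
img.save("vermeer_lossy.jpg", quality=5)  # aggressive JPEG quantization

# The copy is a fraction of the original's size and still looks like
# the original, but the detail thrown away cannot be recovered.
```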

What strikes me about the piece is how little self-awareness media display when covering technology.

For what is journalism itself but lossy compression of the world? To save space, the journalist cannot and does not save or report everything known about an issue or event, compressing what is learned into so many available inches of type. For that matter, what is a library or a museum or a curriculum but lossy compression — that which fits? What is culture but lossy compression of creativity? As Umberto Eco said, “Now more than ever, we realize that culture is made up of what remains after everything else has been forgotten.”

Chiang analogizes ChatGPT et al. to a computational Xerox machine that made errors because it substituted one set of bits for another. Matthew Kirschenbaum quibbles:

Agreed. This reminds me of the sometimes rancorous debate between Elizabeth Eisenstein, credited as the founder of the discipline of book history, and her chief critic, Adrian Johns. Eisenstein valued fixity as a key attribute of print, the root of its authority and thus its culture. “Typographical fixity,” she said, “is a basic prerequisite for the rapid advancement of learning.” Johns dismissed her idea of print culture, arguing that early books were not fixed and authoritative but often sloppy and wrong (which Eisenstein also said). They were both right. Early books were filled with errors and, as Eisenstein pointed out, spread disinformation. “But new forms of scurrilous gossip, erotic fantasy, idle pleasure-seeking, and freethinking were also linked” to printing, she wrote. “Like piety, pornography assumed new forms.” It took time for print to earn its reputation for uniformity, accuracy, and quality, and for new institutions — editing and publishing — to imbue the form with authority.

That is precisely the process we are witnessing now with the new technologies of the day. The problem, often, is that we — especially journalists — make assumptions and set expectations about the new based on analogies to, and presumptions about, the old.

Media have been making quite the fuss about ChatGPT, declaring in many a headline that Google had better watch out because ChatGPT could replace its search engine. As we all know by now, Microsoft is adding ChatGPT to Bing, and Google is said to have stumbled in its own announcements about large-language models and search last week.

But it’s evident that the large-language models we have seen so far are not yet good for search or for factual divination; see the Stochastic Parrots paper that got Timnit Gebru fired from Google; see also her coauthor Emily Bender’s continuing and cautionary writing on the topic. Then read David Weinberger’s Everyday Chaos, an excellent and slightly ahead-of-its-moment explanation of what artificial intelligence, machine learning, and large-language models do. They predict. They take their learnings — whether from the web or some other large set of data — and predict what might happen next or what should come next in a sequence of, say, words. (I wrote about his book here.)

Said Weinberger: “Our new engines of prediction are able to make more accurate predictions and to make predictions in domains that we used to think were impervious to them because this new technology can handle far more data, constrained by fewer human expectations about how that data fits together, with more complex rules, more complex interdependencies, and more sensitivity to starting points.”
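
A toy example may make the point concrete. The following sketch, a simple bigram counter and nothing like the neural networks inside ChatGPT, performs the same elemental task: given what came before, predict what should come next in a sequence of words.

```python
# A toy next-word predictor: count which word follows which in a text,
# then predict the most frequent successor. A deliberately crude
# stand-in for what large-language models do at vast scale.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept"
words = text.split()

successors = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word`, if any.
    if not successors[word]:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice after "the")
```

The prediction is plausible, not verified, which is the heart of the distinction Weinberger draws.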

To predict the next, best word in a sequence is a different task from finding the correct answer to a math problem or verifying a factual assertion or searching for the best match to a query. This is not to say that these functions cannot be added onto large-language models as rhetorical machines. As Google and Microsoft are about to learn, these functions damned well better be bolted together before LLMs are unleashed on the world with the promise of accuracy. 

When media report on these new technologies, they too often ignore the underlying lessons about what those technologies say about us. They too often set high expectations — ChatGPT can replace search! — and then delight in shooting down those expectations — ChatGPT made mistakes!

Chiang wishes ChatGPT could search and calculate and compose, and when it is not good at those tasks, he all but dismisses the utility of LLMs. As a writer, he just might be engaging in wishful thinking. Here I speculate about how ChatGPT might help expand literacy and also devalue the special status of the writer in society. In my upcoming book, The Gutenberg Parenthesis (preorder here /plug), I note that it was not until a century and a half after Gutenberg that major innovation occurred with print: the invention of the essay (Montaigne), the modern novel (Cervantes), and the newspaper. We are early in our progression of learning what we can do with new technologies such as large-language models. It may be too early to use them in certain circumstances (e.g., search), but it is also too early to dismiss them.

It is equally important to recognize the faults in these technologies — and the faults they expose in us — and to understand the source of each. Large-language models such as ChatGPT and Google’s LaMDA are trained on, among other things, the web, which is to say society’s sooty exhaust, carrying all the errors, mistakes, conspiracies, biases, bigotries, presumptions, and stupidities — as well as genius — of humanity online. When we blame an algorithm for exhibiting bias, we should start with the realization that it is reflecting our own biases. We must fix both: the data it learns from and the underlying corruption in society’s soul.

Chiang’s story is itself lossy: he quotes and cites none of the many scientists, researchers, and philosophers working in the field, making it as difficult to track down the sources of his logic and conclusions as ChatGPT makes it to track down its own.

The lossiest algorithm of all is the form of story. Said Weinberger:

Why have we so insisted on turning complex histories into simple stories? Marshall McLuhan was right: the medium is the message. We shrank our ideas to fit on pages sewn in a sequence that we then glued between cardboard stops. Books are good at telling stories and bad at guiding us through knowledge that bursts out in every conceivable direction, as all knowledge does when we let it.
But now the medium of our daily experiences — the internet — has the capacity, the connections, and the engine needed to express the richly chaotic nature of the world.

In the end, Chiang prefers the web to an algorithm’s rephrasing of it. Hurrah for the web. 

We are only beginning to learn what the net can and cannot do, what is good and bad from it, what we should or should not make of it, what it reflects in us. The institutions created to grant print fixity and authority — editing and publishing — are proving inadequate to cope with the scale of speech (aka content) online. The current, temporary proprietors of the net, the platforms, are also so far not up to the task. We will need to overhaul or invent new institutions to grapple with issues of credibility and quality, to discover and recommend and nurture talent and authority. As with print, that will take time, more time than journalists have to file their next story.


Original painting by Johannes Vermeer; transformed (pixelated) by acagastya, CC0, via Wikimedia Commons

Writing as exclusion

DALL-E image of a quill, ink pot, and paper with writing on it.

In The Gutenberg Parenthesis (my upcoming book), I ask whether, “in bringing his inner debates to print, Montaigne raised the stakes for joining the public conversation, requiring that one be a writer to be heard. That is, to share one’s thoughts, even about oneself, necessitated the talent of writing as qualification. How many people today say they are intimidated setting fingers to keys for any written form — letter, email, memo, blog, social-media post, school assignment, story, book, anything — because they claim not to be writers, while all the internet asks them to be is a speaker? What voices were left out of the conversation because they did not believe they were qualified to write? … The greatest means of control of speech might not have been censorship or copyright or publishing but instead the intimidation of writing.”

Thus I am struck by the opportunity presented by generative AI — lately and specifically ChatGPT — to help people better express themselves, to act as a Cyrano at their ear. Fellow educators everywhere are freaking out, wondering how they can teach writing and assign essays when they cannot tell whether they are grading student or machine. I, on the other hand, look for opportunity — to open up the public conversation to more people in more ways, which I will explore here.

Let me first be clear that I do not advocate an end to writing or teaching it — especially as I work in a journalism school. It is said by some that a journalism degree is the new English degree, for we teach the value of research and the skill of clear expression. In our Engagement Journalism program, we teach that rather than always extracting and exploiting others’ stories, we should help people tell their own. Perhaps now we have more tools to aid in the effort.

I have for some time argued that we must expand the boundaries of literacy to include more people and to value more means of expression. Audio in the form of podcasts, video on YouTube or TikTok, visual expression in photography and memes, and the new alphabets of emoji enable people to speak and be understood as they wish, without writing. I have contended to faculty in communications schools (besides just my own) that we must value the languages (by that I mean especially dialects) and skills (including in social media) that our students bring.

Having said all that, let us examine the opportunities presented by generative AI. When some professors were freaking out on Mastodon about ChatGPT, one prof — sorry I can’t recall who — suggested creating different assignments with it: Provide students with the product of AI and ask them to critique it for accuracy, logic, expression — that is, make the students teachers of the machines.

This is also an opportunity to teach students the limitations and biases of AI and large-language models, as laid out by Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major in their Stochastic Parrots paper. Users must understand when they are listening to a machine that is trained merely to predict the next most sensible word, not to deliver and verify facts; the machine does not understand meaning. They also must realize when the data used to train a language model reflects the biases and exclusions of the web as its source — when it reflects society’s existing inequities — or when it has been trained with curated content and rules to present a different worldview. The creators of these models need to be transparent about how the models are built, and users must be made aware of their limitations.

It occurs to me that we will probably soon be teaching the skill of prompt writing: how to get what you want out of a machine. We started exercising this new muscle with DALL-E and other generative image AI — and we learned it’s not easy to guide the machine to draw exactly what we have in mind. At the same time, lots of folks are already using ChatGPT to write code. That is profound, for it means that we can tell the machine how to tell itself how to do what we want it to do. Coders should be more immediately worried about their career prospects than writers. Illustrators should also sweat more than scribblers.
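
For illustration, prompting a model programmatically looks something like this minimal sketch, which uses OpenAI's Python library as it stood at this writing; the model name and placeholder key are assumptions, and the API will surely change.

```python
# A minimal sketch of prompt writing against OpenAI's chat API
# (openai-python circa early 2023); model name and key are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

prompt = (
    "Write a Python function that returns the three most frequent "
    "words in a text, ignoring case."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The craft is in the prompt itself: the clearer the specification, the better the machine's draft, which is why prompt writing may become a skill worth teaching.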

In the end, writing a prompt for the machine — being able to exactly and clearly communicate one’s desires for the text, image, or code to be produced — is itself a new way to teach self-expression.

Generative AI also brings the reverse potential: helping to prompt the writer. This morning on Mastodon, I empathized with a writer who lamented, “I’m at the ‘(BETTER WORDS TK)’ stage,” and I suggested that he try ChatGPT just to break the logjam. It could act like a super-powered thesaurus. Even now, of course, Google often anticipates where I’m headed with a sentence and offers a suggested next word. That still feels like cheating — I usually try to prove Google wrong by avoiding what I now sense as a cliché — but is it so bad to have a friend who can finish your sentences for you?

For years, AI has been able to take simple, structured data — sports scores, financial results — and turn that into stories for wire services and news organizations. Text, after all, is just another form of data visualization. Long ago, I sat in a small newsroom for an advisory board meeting and when the topic of using such AI came up, I asked the eavesdropping, young sports writer a few desks over whether this worried him. Not at all, he said: He would have the machine write all the damned high-school game stories the paper wanted so he could concentrate on more interesting tales. ChatGPT is also proving to be good at churning out dull but necessary manuals and documentation. One might argue, then, that if the machine takes over the most drudgerous forms of writing, we humans would be left with brainpower to write more creative, thoughtful, interesting work. Maybe the machine could help improve writing overall.
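
That kind of story generation from structured data can be surprisingly simple. Here is a toy sketch of the template-driven approach the wire services automated; the game data, field names, and template are invented for illustration.

```python
# A toy sketch of turning structured data into a story; the data and
# template are invented, but the technique is the one wire services
# have used for years: slot facts into prose.
game = {
    "home": "Lincoln High",
    "away": "Washington Prep",
    "home_score": 21,
    "away_score": 14,
}

home_won = game["home_score"] > game["away_score"]
winner = game["home"] if home_won else game["away"]
loser = game["away"] if home_won else game["home"]
high = max(game["home_score"], game["away_score"])
low = min(game["home_score"], game["away_score"])

print(f"{winner} defeated {loser}, {high}-{low}, a {high - low}-point margin.")
```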

A decade ago, I met a professor from INSEAD, Philip Parker, who insisted that contrary to popular belief, there is not too much content in the world; there is too little. After our conversation, I blogged: “Parker’s system has written tens of thousands of books and is even creating fully automated radio shows in many languages…. He used his software to create a directory of tropical plants that didn’t exist. And he has radio beaming out to farmers in poor third-world nations.”

By turning text into radio, Parker’s project, too, redefines literacy, making listening, rather than reading or writing, the necessary skill for becoming informed. As it happens, in that post from 2011, I started musing about the theory Tom Pettitt had brought to the U.S. from the University of Southern Denmark: the Gutenberg Parenthesis. In my book, which that theory inspired, I explore the idea that we might be returning to an age of orality — and aurality — past the age of text. Could we be leaving the era of the writer?

And that is perhaps the real challenge presented by ChatGPT: Writers are no longer so special. Writing is no longer a privilege. Content is a commodity. Everyone will have more means to express themselves, bringing more voices to public discourse — further threatening those who once held a monopoly on it. What “content creators” — as erstwhile writers and illustrators are now known — must come to realize is that value will reside not only in creation but also in conversation, in the experiences people bring and the conversations they join.

Montaigne’s time, too, was marked by a new abundance of speech, of writing, of content. “Montaigne was acutely aware that printing, far from simplifying knowledge, had multiplied it, creating a flood of increasingly specialized information without furnishing uniform procedures for organizing it,” wrote Barry Lydgate. “Montaigne laments the chaotic proliferation of books in his time and singles out in his jeremiad a new race of ‘escrivains ineptes et inutiles’ (‘inept and useless writers’) on whose indiscriminate scribbling he diagnoses a society in decay…. ‘Scribbling seems to be a sort of symptom of an unruly age.’”

Today, the machine, too, scribbles.
