What could be more post-truth than this?

I think we’re about to find out.

Tynan   ·   4 min read

I’ve been thinking a lot about the first Trump administration and its relationship to the news media. In the early years, I basically accepted the truism that Trump’s combative relationship with the press represented a genuine animosity on both sides — in part because I was studying journalism at the time and genuinely wanted to believe I could oppose Trump by working hard to help people better understand reality. I was always skeptical of “objectivity” as a principle, but I could still draw a clear distinction between what journalistic institutions represented and Trump’s sustained assault on, well, reality.

It feels almost quaint to mention this now, but the Washington Post actually tallied and charted all of Trump’s lies (err, “false or misleading claims”) from his first term. Remember that?

Per the Post:

“The Trump claims database was nominated by the Arthur L. Carter Journalism Institute at New York University for inclusion in a list of the Top Ten Works of Journalism of the Decade. ‘The project is a sterling example of what journalists should do — holding the powerful accountable by using reporting and facts,’ the nomination said.”

Hmm. Not so sure about that, buddy! Trump certainly feels like he was held to account with reporting and facts, doesn’t he?

I don’t mean to be glib about facts or the importance of journalism. I like being a part of the reality-based community. This is a pro-fact, pro-reality, pro-journalism blog.

But it’s much harder to respect the tattered remnants of the (national) press when newsroom leaders are actively embracing and supporting malignant forces that are polluting what’s left of our collective information ecosystem. For example, every time a news organization decides to rely on generative AI, it uses and lends further legitimacy to technology that actively harms the very thing journalists are supposed to care about. Truth! Facts! Words that aren’t total bullshit!

The BBC actually put generative AI assistants to the test recently, asking them questions based on the BBC’s own stories, and the results were really good. No problems found.

Just kidding the results were fucking horrifying:

“New BBC research published today provides a warning around the use of AI assistants to answer questions about news, with factual errors and the misrepresentation of source material affecting AI assistants.

The findings are concerning, and show:

  • 51% of all AI answers to questions about the news were judged to have significant issues of some form
  • 19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates
  • 13% of the quotes sourced from BBC articles were either altered or didn’t actually exist in that article

The study, conducted over a month, saw the BBC test four prominent, publicly available AI assistants – OpenAI’s ChatGPT; Microsoft’s Copilot; Google’s Gemini; and Perplexity. These AI assistants were given access to the BBC’s website and asked questions about the news, prompting them to use BBC News articles as sources where possible. AI answers were reviewed by BBC journalists, all experts in the question topics, on criteria including accuracy, impartiality and how they represented BBC content.”

What’s even more horrifying is that the BBC didn’t look at these results and immediately chuck AI assistants into the bin. The research was carried out by their Responsible AI Team, the existence of which assumes that there IS a responsible way to use Large Language Models within journalism. And here’s Pete Archer, Programme Director for Generative AI at the BBC, a position I’m sure comes with a salary that could pay for two or three good journalists, talking about how these results actually show that newsrooms should double down on their use of AI?

“Publishers, like the BBC, should have control over whether and how their content is used and AI companies should show how assistants process news along with the scale and scope of errors and inaccuracies they produce. This will require strong partnerships between AI and media companies and new ways of working that put the audience first and maximise value for all. The BBC is open and willing to work closely with partners to do this.”

This happened before Trump even took office, but I still remember quite vividly when Oxford Dictionaries named “post-truth” its word of the year in 2016. That was the era of fake news and alternative facts, and it was all very bad, don’t get me wrong.

But today, the people whose entire job is to point out and correct these lies are being asked to subcontract their work out to demented stochastic parrots that vomit up falsehoods at least twenty percent of the time.

I’m not sure what comes after post-truth, but I think we’re about to find out.

++++++

Thanks for reading. If you like my writing and want to help me sustain this website, you can support me on Patreon or throw me a few bucks on Ko-Fi. Any amount helps!

++++++

Tags: #Journalism