
How AI is reshaping truth and quality in the professional world

Faisal Hoque

[Image: Jakob Berg/Getty Images; Vaceslav Romanov/Adobe Stock]

Stories about AI-generated fabrications in the professional world have become part of the background hum of life since generative AI hit the mainstream three years ago. Invented quotes, fake figures, and citations that lead to nonexistent research have shown up in academic publications, legal briefs, government reports, and media articles.

We can often understand these events as technical failures: the AI hallucinated, someone forgot to fact-check, and an embarrassing but honest mistake became a national news story.

But in some cases, they represent the tip of a much bigger iceberg—the visible portion of a much more insidious phenomenon that predates AI but that will be supercharged by it.

Because in some industries, the question of whether a statement is true or false doesn’t matter much at all—what counts is whether it is persuasive.

While talking heads have tended to focus on the “post-truth moment” in politics, consultants and other “knowledge producers” have been happily treating the truth as a malleable construct for decades. If it is better for the bottom line for the data to point in one direction rather than another, someone out there will happily conduct research that has the sole goal of finding the “right” answer.

Information is commonly packaged in decks and reports with the intention of supporting a client narrative or a firm’s own goals while inconvenient facts are either minimised or ignored entirely. Generative AI provides an incredibly powerful tool for supporting this kind of misdirection: Even if it is not pulling data out of thin air and inventing claims from the ground up, it can provide a dozen ways to hide the truth or to make “alternative facts” sound convincing. Wherever the appearance of rigor matters more than rigor itself, AI becomes not a liability but a competitive advantage. 

Not to put too fine a point on it, many “knowledge workers” spend much of their time producing what the philosopher Harry Frankfurt calls “bullshit.” And what is “bullshit” according to Frankfurt? Its essence, he says, “is not that it is false, but that it is phony.” The liar, Frankfurt explains, cares about truth, even if only negatively, since he or she wants to conceal it. The bullshitter, however, does not care at all.

Bullshitters may even tell the truth by accident. What matters to them isn’t accuracy but effect: how their words work on an audience, what impression they create, what their words allow them to get away with. For many individuals and firms in these industries, words in reports and slide decks are not there to describe reality or to conduct honest argumentation; they are there to do the work of the persuasive bullshitter.

Knowledge work is one of the leading providers of what the anthropologist David Graeber famously called “bullshit jobs”—jobs that involve work that even those doing it quietly suspect serves no real purpose.

For decades, product vendors, analysts, and consultants have been rewarded for producing material that looks rigorous, authoritative, and data-driven—the 30-page slide deck, the glossy report, snazzy frameworks, and slick 2-by-2s. The material did not need to be good. It simply needed to look good.

And if that is the goal, if words are meant to perform rather than inform, if the aim is to produce effective bullshit rather than tell the truth, then it makes perfect sense to use AI. AI can produce bullshit better, more quickly, and in greater volume than any human being.

So, when consultants and analysts turn to generative AI to help them with their reports and presentations, they are obeying the underlying logic and fundamental goals of the system in which they operate.

The problem here is not that AI produces bullshit—the problem is that many in this business are willing to say whatever needs to be said to pad the bottom line.

Bullshit versus quality

The answer here is neither new policies nor training programs. These things have their places, but at best they address symptoms rather than underlying causes.

If we want to address causes rather than apply Band-Aids, we have to understand what we have lost in the move to bullshit, because then we can begin figuring out how to recover it.

In Zen and the Art of Motorcycle Maintenance, Robert Pirsig uses the term “quality” to name the property that makes a good thing good. This is an intangible characteristic: It cannot be defined, but everyone knows it when they see it.

You know quality when you run your hand along a well-made table and feel the seamless join between two pieces of wood; you know quality when you see that every line and curve is just as it should be. There is a quiet rightness to something that has this character, and when you see it, you glimpse what it means for something to be genuinely good.

If the institutions that are responsible for creating knowledge—not just consulting firms but universities, corporations, governments, and media platforms—were animated by a genuine sense of quality, it would be far harder for bullshit to take root.

Institutions teach values through what they reward, and we have spent decades rewarding the production of bullshit. Consultants simply do in excelsis what we have all learned to do to some degree: produce something that looks good without caring whether it really is good.

First you wear the mask, they say, and then the mask wears you. Initially, perhaps, we can produce bullshit while at least retaining our capacity to see it as bullshit. But the longer we operate in the bullshit-industrial complex, and the more bullshit we produce, the more we tend to lose even that capacity. We drink the Kool-Aid and start thinking that bullshit is quality. AI does not cause that blindness. It simply reveals it.

What leaders can do

Make life hard. Bullshit flourishes because it is easy. If we want to produce quality work, we need to take the harder road.

AI isn’t going away, nor should we wish it away. It is an incredible tool for enhancing productivity and allowing us to do more with our time. But it often does so by encouraging us to produce bullshit, because that is the quickest and easiest path in a world that has given up on quality. The challenge is to harness AI without allowing ourselves to be beguiled into shortcuts that ultimately pull us down into the mire. To avoid that trap, leaders must take deliberate steps at both the individual and organisational levels.

At the individual level: Never accept anything that AI outputs without making it your own first. For every sentence, every fact, every claim, every reference, ask yourself: Do I stand by that? If you don’t know, you need to check the claims and think through the arguments until they truly become your own. Often, this will mean rewriting, revising, reassessing, and even flat-out rejecting. And this is hard when there is an easier path available. But the fact that it is hard is what makes it necessary.

At the organisational level: Yes, we must trust our people to use AI responsibly. But—if we choose not to keep company with the bullshitters of the world—we must also commit and recommit our organisations to producing work of real quality. That means instituting real, rigorous quality checks. Leaders need to stand behind everything their team produces. They need to take responsibility and affirm that they are allowing it to pass out of the door not because it sounds good but because it really is good. Again, this is hard. It takes time and effort. It means not accepting a throwaway glance across the text but settling down to read and understand in detail. It means being prepared to challenge ourselves and to challenge our teams, not just periodically, but every day. 

The path forward is not to resist AI or to romanticise slowness and inefficiency. It is to be ruthlessly honest about what we are producing and why. Every time we are tempted to let AI-generated material slide because it looks good enough, we should ask: Are we creating something of quality, or are we just adding to the pile of bullshit? That question—and our willingness to answer it honestly—will determine whether AI becomes a tool for excellence or just another engine that trades insight for appearance.

ABOUT THE AUTHOR

Faisal Hoque is on a mission to humanise organisational transformation. Founder of Shadoka, NextChapter, and other award-winning ventures—and the #1 Wall Street Journal bestselling author of REINVENT, TRANSCEND, Reimagining Government, and more—he has guided some of the world’s leading organisations, including MasterCard, Northrop Grumman, French Social Security, PepsiCo, GE, and the U.S.
