
That study on pseudo-profound bullshit is pseudo-profound bullshit

One of the things I like to quote over and over when discussing culture and philosophy is the Sokal affair. To make it simple, Alan Sokal, a physicist, submitted a fake paper to Social Text, an academic journal of cultural studies. The paper was a hoax: it was basically a nonsensical collection of buzzwords and vague statements, crafted to appear “deep” without conveying any meaningful information (when it wasn’t making blatantly false “scientific” claims). Social Text swallowed the bait hook, line and sinker, and published it. The very day it was published, Sokal revealed the hoax in another magazine, Lingua Franca. The whole affair became the seed of a book, Fashionable Nonsense, denouncing how a significant part of philosophical discourse has become prey to its own penchant for obscurity and disregard for intellectual rigor, becoming little more than a self-referential discussion about nonsense.

This was a key moment of the so-called “science wars”, and it still resonates today. In everyday discourse we may not engage with Derrida or Baudrillard (and luckily so, might I add!), but we do deal with more mundane and more socially significant deep-sounding meaninglessness – every time you hear someone babbling about “quantum healing”, homeopathy, astrology and the like. We can condense such quasi-concepts into a word: bullshit.

It is an interesting topic to investigate why we believe in bullshit – that is, why and how collections of words that sound like concepts, but are not, are actually judged to be concepts worth listening to and repeating. A step in this direction was taken by Pennycook et al., who published “On the reception and detection of pseudo-profound bullshit” in the latest issue of Judgment and Decision Making. The study resonated across lots of media, and for a quite obvious reason: it tells us something many of us – those of us who like to think of ourselves as rational, clear-minded individuals – have always thought to be true. That is: people who judge bullshit to be actually profound are dumber, more gullible and less coherent in their reasoning. “Ha ha, these people I thought were morons are, indeed, morons!”

It is thus a sad misfortune that such a potentially important study is so deeply and tragically flawed. The problem lies at its very foundation: what is bullshit? How can we decide clearly what is bullshit and what is not?

Pennycook et al. completely ignore this issue. They give a definition, following Harry Frankfurt’s On Bullshit (which I haven’t read):

[Bullshit is] something that is designed to impress but that was constructed absent direct concern for the truth

They take care to clarify that a bullshit statement has to be syntactically correct and deceptive. Bullshit, in this strict sense, does not merely mean nonsensical statements – so Lewis Carroll’s Jabberwocky is not bullshit, since it is blatantly nonsensical and makes no attempt to deceive. Bullshit has to be like a cheap fake banknote: realistic from a distance, false under scrutiny.

Yet language and thought are not like banknotes: there is no “gold standard” of clarity, unless you want to reduce all language to formal logic and the utterance of tautologies. This is tragically evident when you read the first sentence that they give as a clear – in their eyes – example of bullshit:

Hidden meaning transforms unparalleled abstract beauty

I nearly jumped out of my chair when I read this sentence as an example because, hey, it sounds like bullshit but it is not! It is actually a fairly clear sentence, even if it is presumably taken out of a larger context and clumsily worded. It informs us that the existence of a hidden meaning in a work of abstract art can change the perception of the aesthetic value of such a work. This is far from being a controversial concept, let alone a “bullshit” one: the aesthetics of much modern art actually reveals itself only once its background and intentions have been explained to you. The only problematic word in the sentence is “unparalleled”, which is odd because whether the work of art in question is great or not shouldn’t be relevant, so why is it there? But still, this makes it more of a “uhm, could’ve been written better, and I do not have the context for that ‘unparalleled’” than a nonsensical phrase.

Before you think that, well, I just fell for bullshit, let’s see how changing just one word turns it into genuine bullshit:

Hidden meaning transforms unparalleled abstract energy

And here everything falls to pieces: what the heck is “abstract energy”? Does such a thing even make sense, let alone exist? How would you “transform” it? Now, yes, we have a deceptive collection of words inserted into a syntactic structure which, however, means nothing.

The authors take care to explain that the “hidden meaning” example sentence, and others they used in the study, were actually randomly generated. This might lead someone to say “a-ha! you fell for randomly generated bullshit!”, but the fact that something has been randomly generated has no bearing whatsoever on whether it is bullshit, especially in the experiments performed in the study, which ask subjects to evaluate isolated, context-free sentences. This simple generator, for example, manages to return random but often perfectly meaningful (even if pretty boring) sentences. Would you say that “Those taxi drivers made him some coffee” is not a clear English sentence just because a computer generated it?
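To make the mechanism concrete, here is a minimal sketch of a template-based generator of this kind (my own sketch in Python; the word lists and the single subject–verb–object template are invented for illustration, they are not those of the linked generator):

    import random

    # Hypothetical word lists; any vocabulary that fits the slots will do.
    SUBJECTS = ["Those taxi drivers", "My neighbours", "The two students"]
    VERBS = ["made", "brought", "offered"]
    OBJECTS = ["him some coffee", "her a sandwich", "us a map of the city"]

    def random_sentence():
        # Fill a fixed subject-verb-object template with randomly chosen items.
        # The output is random, yet grammatical and perfectly meaningful (if
        # boring): randomness alone does not make a sentence bullshit.
        return "{} {} {}.".format(
            random.choice(SUBJECTS),
            random.choice(VERBS),
            random.choice(OBJECTS),
        )

    for _ in range(3):
        print(random_sentence())

Swap those word lists for Deepak Chopra’s vocabulary and the very same trivial mechanism starts producing sentences that sound profound instead of mundane: the generation procedure is indifferent to meaning, which is exactly why it cannot be what makes a sentence bullshit.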

I had a look at the randomly generated “bullshit” sentences they used – you can find them in Table S1 here. I am seriously perplexed. Some of these sentences are indeed genuine bullshit: non-concepts dressed as concepts, for example “Good health imparts reality to subtle creativity.” But what about “Today, science tells us that the essence of nature is joy”? This is clearly false, and clearly silly, but it is not a nonsensical non-concept.

 

It looks like bullshit, because you don't know what it means. (Actually, it was explicitly meant to be bullshit indeed. Oh the recursion.)

It seems to me that Pennycook et al. are conflating a rigorous definition of bullshit, as “text which gives the impression of meaning but is actually devoid of meaning”, with “generally stupid stuff”. Almost all the sentences used in their tests belong to the “generally stupid” category, but not all belong to the bullshit category – I would roughly say it’s 50:50. Pennycook et al. didn’t really investigate bullshit: they just demonstrated that people who believe stupid statements are (on average) more stupid: hardly a groundbreaking result.

Another problem with Pennycook et al. is that they asked subjects to rate the profundity of a sentence, defined as follows:

Profound means “of deep meaning; of great and broadly inclusive significance.”

The problem is, something can be both profound and false. “Today, science tells us that the essence of nature is joy” is a profound sentence: its meaning is indeed deep and broadly inclusive (what is deeper and more inclusive than the “essence of nature”?). Yet it is also ridiculously false: the essence of nature, whatever it is, is not joy. A better test would perhaps have asked: “Do you believe this is true or false? If true, do you think it is also profound? Would you endorse (quote, retweet) this sentence to your friends on social networks?” – Also, I would have run the examples past a few linguists, to see if you can indeed extract meaning from them (even if it is not immediately apparent) or if they are truly devoid of meaning.

There is more. While amusing, the fact that the study focused on the pseudoscientific quantum woo of Deepak Chopra (in both its randomly generated and real examples) actually tells us more about who believes that specific woo than about the concept of bullshit itself. What really is or isn’t bullshit also depends on context. Using the same automatic-generator trick, I wonder how the authors of the study would fare in this little game, where you are asked to identify whether a high-energy physics paper title is real or randomly generated. Would you expect that there is, indeed, a real physics paper titled Magnetized static black Saturn? It sounds like gibberish to me, but in the context of the paper (the realm of 5-dimensional charged black holes, apparently) it actually seems to have a clear, rational meaning.

Language is a code that depends on context. The “hidden meaning transforms abstract beauty” example is probably revealing in this sense: to the authors of the study, an interpretation of the sentence as referring to art simply didn’t come to mind, and so they considered it nonsensical. Other “bullshit” sentences can make sense if our cultural and personal background allows us to give a definite meaning to the terms and their combination. The problem is that a single sentence is too ambiguous to be unequivocally judged meaningful or meaningless. “A rose has teeth in the mouth of a beast” seems like bullshit, but it actually has a clear, well-defined meaning when Wittgenstein utters it. A decent “bullshit” study would have used paragraph-length texts, so that context and the response to non sequiturs in reasoning could be assessed. Alan Sokal didn’t send a single sentence, but a full, carefully constructed hoax paper, to prove that Social Text was dabbling in nonsense. When we say that a certain philosopher or charlatan is writing bullshit, it is because we know the context, we know what is going on, and we can judge accordingly.

In conclusion, the Pennycook et al. study is more of an interesting exercise in how we react to comforting narratives than a real study of bullshit. It makes us “rationalists” cheer up, but it falls apart upon close scrutiny – indeed, whoever shared it without analyzing it fell for bullshit. It has the appearance and structure of meaning, but meaning, it has not. A curious irony.


5 Comments

  1. Fundamental lemma of information theory:
    Meaning is irrelevant.

    By the time humanists understand that, it will already be too late.

  2. Hi, I made the Deepak Chopra quote generator referenced in the paper. I can tell you that anything it generates automatically qualifies as bullshit (by any measure) because this is how it works:

    The generator has four arrays of words and phrases. Roughly speaking, the first array contains nouns, the second verbs, the third adjectives, and the fourth all nouns. The words and phrases have all been taken from real Deepak Chopra tweets.

    When the words and phrases are put together they are all grammatically correct but they are bullshit because they were never intended to have any context. If they could somehow have context I would be very surprised!

    • Hi Tom, glad to meet you. I am aware of how random sentence generators work (indeed, when I was in high school I wrote a few myself!) but – as also explained in the post – the algorithm used to generate a sentence has no bearing whatsoever on whether the sentence is “bullshit” or not, nor on whether it lacks context. It is the reader who gives context to a sentence. Words themselves, after all, are just jumbles of squiggles on paper or screens: it is we readers who internally have a set of rules and contexts that give them meaning.

      In other words, you cannot say “this sentence never has any context”, because the context of the sentence is not a property of the sentence itself: it is a property of the system of the sentence plus the reader.

      Now, of course for an enormous number of possible sentences it is indeed practically impossible to find someone who gives them a reasonable, meaningful context. It is also easier to produce sentences like this if we start from Deepak Chopra as a vocabulary source, since there are very few combinations of such words that can reasonably be recognized as meaningful. But this has nothing to do with the way the sentence was produced.

      This naive misunderstanding of the nature of meaning is what makes the study by Pennycook et al. flawed. I am in no way defending the ridiculous pseudoscience of Chopra and similar scammers, but if you want to study how the appeal of “pseudoconcepts” works, a better and deeper investigation is needed.

  3. Miles

    This is the most ironic thing I’ve ever read

  4. Lucho

    You are missing the point. It was a very good study to demonstrate that people with some kind of mystic or religious bias, and with lesser cognitive skills, are more easily impressed by combinations of words that sound profound but don’t make any sense. And that is how many people may actually fall for the writings of someone like D. Chopra without really understanding their meaning… if there is one. Personally I would include Ronald Hubbard there.
