Neutralizing misinformation through inoculation: selecting 33 answers in just ONE minute?

Looking at the data of the second experiment of the Neutralizing misinformation through inoculation paper, I came across something rather strange. The time to complete the survey was also recorded, and some of the participants finished the survey in an incredibly short timespan.

Let me first explain how I got there. I stumbled on it by accident, while looking at something else that initially puzzled me. This is how the population of experiment 2 was selected, as explained in the section “Participants”:

Participants (N = 400) were a representative U.S. sample, recruited through Qualtrics.com, based on U.S. demographic data on gender, age, and income in the same fashion as for Experiment 1 (49.2% female, average age M ≈ 43 years, SD ≈ 15 years). The sample delivered by Qualtrics comprised only participants who had successfully answered all attention filter items. None of the participants had participated in Experiment 1. Outliers in the time taken to complete the survey (n = 8) were eliminated according to the outlier labelling rule as in Experiment 1. The final sample of participants (N = 392) were randomly allocated to the four experimental conditions: control (n = 98), inoculation (n = 98), misinformation (n = 99), and inoculation+misinformation (n = 97).

As explained here, I understood that there were 400 participants and that 8 of them were outliers (in the sense that it took them too long to complete the survey). Subtracting those eight outliers left 392 final participants, who were randomly allocated to the four experimental conditions.

In the order in which it is explained in that paragraph, this didn’t make much sense to me.

First, how could they allocate participants to groups on the basis of something (the time needed to complete the survey) that wasn’t known yet at the time the groups were formed? Secondly, 392 is perfectly divisible by 4, so why would they randomly allocate those 392 participants into groups of 98 – 98 – 99 – 97 instead of 98 – 98 – 98 – 98?

Time to look at their data to see what they had done exactly. I quickly found those eight outliers. Sorting the table by the “Total Duration” column in descending order made it clear that those participants took considerably longer to complete the survey than the others. There was a rather large gap between the times of these eight participants (minimum 2017) and the rest (maximum 1761).
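For readers who want to follow along, this is roughly what that exploration looks like in Python. The column name TotalDuration comes from the readme of the data set; the file name and the multiplier g = 2.2 are my own assumptions, since the paper only refers to “the outlier labelling rule” without stating which value was used.

```python
import pandas as pd

df = pd.read_csv("experiment2_data.csv")        # hypothetical file name
dur = df["TotalDuration"].sort_values(ascending=False)
print(dur.head(10))                             # slowest first: eight values of 2017 and above, then a gap

# Outlier labelling rule: flag values beyond the quartiles plus/minus a
# multiple (g) of the interquartile range. g = 2.2 is the value suggested
# by Hoaglin & Iglewicz; whether the authors used it is an assumption.
q1, q3 = dur.quantile(0.25), dur.quantile(0.75)
g = 2.2
upper_fence = q3 + g * (q3 - q1)
lower_fence = q1 - g * (q3 - q1)
print(dur[dur > upper_fence])                   # should flag the eight slow participants
print(dur[dur < lower_fence])                   # whether anything is flagged at the fast end depends on the fences
```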

Looking at the groups those outliers had been allocated to, it became clear where that odd group allocation came from. Of the eight participants who were eliminated, two came from the control group, two from the inoculation group, one from the misinformation group and three from the inoculation + misinformation group.

Therefore I think this is how the experiment was done:

  1. there were 400 participants selected by Qualtrics
  2. they were randomly allocated to 4 groups of 100
  3. after the survey was done, the 8 outliers were eliminated and 392 participants remained for the analysis:
    • control group: 98 (100 – 2)
    • inoculation group: 98 (100 – 2)
    • misinformation group: 99 (100 – 1)
    • inoculation + misinformation group: 97 (100 – 3)

Now that allocation made sense.

That is where I bumped into the subject of this post. Until now I had only looked at the longest durations, but I was curious about the other end. What was the fastest time in which participants finished the test? So I sorted the “Total Duration” column in ascending order and found that the fastest time to complete the survey was “60”. The second fastest participant came in at “63”. The five quickest participants all finished the survey below “100”.

This raised the question: what is the unit of measurement? My first assumption (when I saw the outliers at the high end) was that the measurement was done in “seconds”. This would put the average and the median somewhere between seven and nine minutes, which seemed reasonable considering the number of items in the survey. But then this didn’t fit those fast participants: 60 seconds would be just one minute to complete the survey…

It couldn’t be minutes either: that would mean a range from 1 hour to 29 hours to complete the survey, which seems very unlikely.
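A quick back-of-the-envelope check, using the fastest value (60) and the slowest non-outlier value (1761) found above, makes the two readings explicit:

```python
fastest, slowest = 60, 1761          # fastest and slowest (non-outlier) "Total Duration" values

print(fastest / 60, slowest / 60)    # 1.0  29.35
# read as hours   (if the unit were minutes): a 1 to 29 hour survey -> very unlikely
# read as minutes (if the unit is seconds):   a 1 to 29 minute survey -> plausible,
#                                             except for that one-minute fast end
```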

I then found the unit of the total duration in the readme file: the duration was indeed measured in “seconds” after all.

But this would mean that the fastest participant finished a 36-item survey in one minute flat. That is under two seconds per item!

That is lightning fast: reading the question, understanding its meaning, reading the options and selecting the one that suits best, all in less than two seconds, and keeping up that same pace for the remaining 35 items of the survey. Strange that such very low values weren’t eliminated as outliers too.

I couldn’t imagine doing such a survey in one minute. Becoming rather curious about how long it would take to complete such a 36-item survey, I decided to create a survey myself with the same questions from the materials download files. Easier said than done. For example, the paper claimed that the survey had 36 items, but the table with the description of the survey items listed 39 items…

Two of those could be explained by the two attention filters mentioned in the quote above; there was also a description of those two questions in the table. Since participants had to successfully answer these two questions, that makes it a survey with 38 items (of which 36 were used in the analysis).

There was also one question that was mentioned twice in the table. Both entries had the same name, the same values and the same description. I assume this question was only asked once in the actual survey and that this is an error in the documentation. Looking at the structure of the document, my guess is that this is a copy-paste issue.

In the table there was also another item that was mentioned twice. It was a question in the group “Policy support”. Since there were five items in that group and there was also data from five policy support items, I assume this is also an error in the documentation: one of those two items was probably different from what is documented (unless they actually asked that question twice in the same group, which seems rather unlikely).

From all this, I guess that there were 32 questions with radio button choices and 6 questions with a slider, and that the questions were divided into 13 question groups. Since I had no idea what the actual survey looked like, I decided to go for the option that let me advance in the fastest possible way. My assumption was that it would not be possible to complete a 38-item survey in one minute if the items were spread over multiple pages. Therefore I made the survey on a single page, so I only needed to scroll down to navigate through the questions. That also allowed me to easily write a timer to record the time needed to complete the survey.

Since I was getting used to the order of the questions, I also wrote a function to reshuffle the question groups so they came in a different order every time. I also noticed that the four fastest participants didn’t answer five specific questions (all from the group asking about the “third person effect”), so I skipped those as well. This means that I would answer only 33 items of the 38-item survey.
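For what it is worth, the idea behind the reshuffling and timing is simple. The sketch below is only a console mock-up (my actual version was a single HTML page with radio buttons, sliders and a JavaScript timer), and the group names and questions are placeholders, not the real survey items:

```python
import random
import time

# Placeholder question groups -- the real survey had 13 groups and 38 items.
question_groups = {
    "group_01": ["question 1", "question 2", "question 3"],
    "group_02": ["question 4", "question 5"],
    # ...
    "third_person_effect": ["question 34", "question 35", "question 36",
                            "question 37", "question 38"],
}

SKIP = {"third_person_effect"}       # the group the fastest participants left unanswered

def run_survey():
    order = [g for g in question_groups if g not in SKIP]
    random.shuffle(order)            # different group order on every run
    start = time.monotonic()
    for group in order:
        for question in question_groups[group]:
            input(question + " -> answer: ")   # stand-in for a radio button or slider
    return time.monotonic() - start

print(f"completed in {run_survey():.0f} seconds")
```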

In general, I tended to overestimate the time needed to complete such a survey. In most cases, I was rather surprised that the time required to complete the test seemed longer than what was actually measured (at first I even thought that the timer was not written well). But even when my perception differed from the actual measured time, I couldn’t complete the survey in one minute. Not even close.

Completing the survey took me at least 5 minutes. I think it would take even longer in reality: although the question groups were reshuffled every time, I knew which ones to expect, so in a real-life situation it would definitely take me longer.

I also did some other tests. How long would it take to complete the survey by just clicking around and skipping the items in the “third person effect” group? That took me around 45 seconds.

That is just randomly clicking answers for 33 of the 38 items, without reading the different options…

I then tested how long it would take to just read the questions and clarifying texts. That took me around 3 minutes.

That is just reading all the questions without thinking about their meaning or possible nuances, without looking at the different options, without even selecting one…

If the data in this “total duration” column really is the time needed to complete the survey, then I wonder how on Earth those participants managed to read a question, comprehend it (including possible nuances), read the options, select the appropriate one, and do that 33 times in one minute. I also wonder why the investigators would think that the answers given by participants who selected 33 options in one minute could somehow be relied upon.


3 thoughts on “Neutralizing misinformation through inoculation: selecting 33 answers in just ONE minute?”

  1. poitsplace

    I seem to recall their study linking climate change “deniers” to people that thought the moon landing was faked…they actually let through some impossible ages for the participants. I think one stated that they were thousands of years old and one stated they were 7.

    So many prominent “scientists” pushing climate alarmism are perfectly described by the comments of Dean Yeager in Ghostbusters (when he throws them out)

    Doctor… Venkman. We believe that the purpose of science is to serve mankind. You, however, seem to regard science as some kind of dodge… or hustle. Your theories are the worst kind of popular tripe, your methods are sloppy, and your conclusions are highly questionable! You are a poor scientist, Dr. Venkman!

    1. trustyetverify Post author

      This reminded me of that paper too. There are some differences, however. Age is a rather straightforward variable. If one sees participants with an age of 5 years and an age of 32,757 years, then it is clear that something is wrong. If this obviously faulty data is actually used in their analysis, then it is clear that something is seriously wrong.

      It is not so clear cut with the “total duration” values. It can mean many things. As defined in the readme file:

      TotalDuration: time to complete survey (seconds).

      What is their definition of “total” and “survey”? Do they mean all items of the survey as documented in the table (38 items)? Or the definition used in their paper (36 items)? What did they measure exactly, and how? Maybe they only timed a part of the survey. For example, maybe they already had some info (gender, age, income) and only measured the time to complete the other items? Okay, that is still 25 or 28 selected answers in one minute (somewhat more than 2 seconds per answer). Maybe there was some issue with the timer or the form itself? But then, they didn’t mention anything of that kind. And so on.

      Basically, there are a lot more ifs and buts involved compared to straightforward age data. I still wonder how they managed to complete the survey in such a small timespan, though.

      What is clear is that the documentation is rather sloppy. Not only the rather strange wording of the methodology of the survey of experiment 2 in the paper (which led me to these super fast survey takers), but also the table with the description of the items. It doesn’t catch the eye at first glance: I went through the table several times, but only found these inconsistencies when I tried to recreate the survey with the data from that table.

  2. Hunter

    Do you recall a few years ago that the climate hypesters kept pushing “communication” of the “climate risk” to “educate” us poor deplorables? We laughed at how much time they wasted on conferences about communication instead of actual science.
    Well these worthless deceptive surveys and articles flooding media are the result.
    It is always amazing to see just how relentless the climate hype machine really is and how corrupt they are. They seemingly compete to come up with scarier stories that are consistently deceptive.
