
August 19, 2012

Please Fill out these Forms in Triplicate, Sir.

Heisenberg's uncertainty principle belongs to the world of physics, the gist of it being that the act of measuring a physical quantity disturbs the quantity itself (strictly speaking, the "measurement disturbs the system" idea is the observer effect, which tends to get lumped in with Heisenberg's principle, but the point stands). You can ignore the disturbance when you're measuring stuff in micrometres, but it becomes significant when you measure tiny things like atoms.

I think we can extrapolate this idea to sociological studies: making subjects aware that they are part of a study can prejudice the results. Very few people consent to be part of a study purely for the 'greater good of science'; most have some sort of vested interest, or none at all. Some are being paid, and so may want to give the 'right' answers instead of being objective, in the hope of pleasing the hands that feed them. Others are effectively 'forced' to become lab rats. They may be part of an organization, say students at a university, who are required to fill in feedback forms at the end of lectures, rating their lecturers. Made to do this after every lecture, they naturally get tired of it and stop, or if they do fill the forms in, it's a formality and they put in random numbers.

Whichever white-collar yuppie is analyzing the forms will see lots of people rating professor X 5/5 and conclude, falsely, that he or she is a great lecturer, when in reality most of the kids are filling in the numbers at random. The data gathered is not truly representative. What actually happens is that people only fill in the forms properly when the lecturer is brilliant or terrible; for the so-so ones, the forms come back blank or random. So in the subsequent analysis, the faculty will see a university staffed by teachers at opposite ends of the spectrum, whereas MOST of the teachers are in-betweeners (or what statisticians like to call a normal distribution).
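To make that selection effect concrete, here's a minimal Python sketch (my own illustration; the numbers and the response rule are assumptions, not data from any actual survey): true lecturer quality is drawn from a normal distribution, but only extreme experiences trigger a response, so the ratings that come back look polarised even though most lecturers sit in the middle.

```python
import random
import statistics

random.seed(0)

# Hypothetical numbers for illustration only.
# True lecturer quality on a 1-5 scale is roughly normal: most are in-betweeners.
true_quality = [random.gauss(3.0, 0.7) for _ in range(1000)]

def bothers_to_respond(quality):
    # Assumed response rule: only brilliant or terrible lecturers
    # provoke students into filling the form in properly.
    return quality <= 2.0 or quality >= 4.0

observed = [round(min(5, max(1, q))) for q in true_quality if bothers_to_respond(q)]

print("true mean quality :", round(statistics.mean(true_quality), 2))
print("response rate     :", len(observed) / len(true_quality))
print("observed ratings  :", {r: observed.count(r) for r in range(1, 6)})
```

Run it and almost every rating that comes back is a 1, 2, 4 or 5; the middle of the distribution is simply invisible in the data.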
So what do we do about this? Beats me; I'm too lazy to think of ideas. Besides, I'll falsely pass this off as trying to engage my readers. Your thoughts?
Come to think of it, I'm sure there's a sociological term for the content of this post. Again, I'm pulling the lazy card.

3 comments:

Lubaina E. said...

Whoa! :O
All that Physics stuff in the beginning: I didn't get it :p

Buuuuut we apply to unis after checking the parent recommendations and student satisfaction stats and bla bla. So all of that could be fake? :/

This sucks.

Unknown said...

Demand characteristics and social desirability bias, mate. Now that you can put a name to the ideas, wiki and google as you deem fit :D

There are a number of options available to reduce these, depending on the type of methodology. In observation, you can skirt these issues by going covert so as to observe the subjects in their natural environment. No researcher around, no tension! Obviously, this means the subjects did not consent to the research and a plethora of ethical issues arise.

More relevant to the methodology in your example is the device of having a "reversed question" (I too have forgotten the scientific name for this one) in the lecture evaluation questionnaire.

IBA uses this device in our mandatory faculty evaluations.

Say the survey consists of 20 statements, all answered on a 5-point Likert scale (Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree). Statement 2 is "The instructor is readily available to discuss questions after class". If statement 17 is "The instructor is not readily available to discuss questions after class", then a correctly answered survey should have directly opposing answers for the two: 1 and 5, 2 and 4, or 3 and 3.
Hence, the reversed questions.

That way, if someone straight-lines the survey by rating 5 everywhere or 1 everywhere, the two questions will not be answered consistently and the survey gets trashed. And if someone assigns numbers at random, the probability of the pair being answered consistently is still quite low.
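For what it's worth, here's a rough Python sketch of how that consistency check could be automated. The statement numbers and the second pair are made up for illustration, not IBA's actual survey. With two independent reversed pairs and uniformly random answers, only 5 of the 25 possible combinations per pair sum to 6, so the chance of a random form slipping through is (1/5)^2 = 4%.

```python
# Sketch of the reversed-question consistency check described above.
# The statement numbers and the second pair are hypothetical examples.
REVERSED_PAIRS = [(2, 17), (5, 12)]  # (statement, its reversed counterpart)

def is_consistent(answers, pairs=REVERSED_PAIRS):
    """answers maps statement number -> 1..5 on the Likert scale.

    A statement and its reversed twin should give opposing answers,
    i.e. sum to 6: (1,5), (2,4) or (3,3).
    """
    return all(answers[a] + answers[b] == 6 for a, b in pairs)

# A straight-liner who rates 5 everywhere fails the check:
straight_liner = {q: 5 for q in range(1, 21)}
print(is_consistent(straight_liner))   # False -> survey gets trashed

# A genuine respondent disagrees with the reversed statements:
genuine = {q: 4 for q in range(1, 21)}
genuine[17] = 2
genuine[12] = 2
print(is_consistent(genuine))          # True
```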

Obviously, this device is not absolutely perfect, and IBA's way of implementing it gets quite a lot wrong. For one, they use the same survey every time, so after three years we all know which questions are the "reverse questions". You can then rate everything 1 or 5 except those specific questions, and skew the results either way.

Ideally, the reverse questions should change with each survey. And to reduce the probability of random numbers turning out consistent, include a couple of such question pairs.

I'm no researcher, but I hope some of the remnants of A Level Psych and IBA Methods in Business Research finally came in handy!

Usman said...

shutup!
