What makes you complete an online survey for research?

Yeah, I'm all for helping students (having been through it myself!). On the other hand, when you get students advertising a web survey that is poorly thought through and poorly designed, it can be infuriating... because you wonder whether they were just doing an online survey to get an easy (easier?) path through their research component. I always give the benefit of the doubt though - and have at times emailed the student to discuss the issues - they take well to that, usually!

The topic is important to me - when I put on the hat of respondent, rather than designer. I'm similar to FractalDancer; it's interesting to make you think differently about an aspect of your life (be it drug use, other things) and also see what people are researching. E.g. the recent surveys re how drug use affects circadian rhythms I found interesting. I often thought that the adverse effects of stimulant drug use were a lot to do with the lost sleep and messed-up life rhythms associated with the lifestyle!
 
Well, I like doing surveys if they're for Aussies.
If I'm high and bored I do surveys; I like knowing that my drug use is being used for research on drugs.
And of course the incentive of a reward will always make me do a survey.
A $20 gift voucher for half an hour of my time always goes down well.
 
If I'm high and bored I do surveys, I like knowing that my drug use is being used for research on drugs

Thanks blode. This particular comment is interesting, as I think most researchers are hoping that participants aren't high when they complete surveys, especially those that require a fair bit of mental effort! =D
 
I must admit, it's usually out of boredom or not having anything else to do.

And I only do surveys that interest and apply to me, and don't take too long to complete!
 
Good topic. We should make quite visible our own selection biases in our research. I choose by:

1. Topic is interesting/I am interested in what answers it seeks.
2. I have qualities that speak to the topic of the study.
3. Study fits within how much leisure time I wish to donate.
4. If there's compensation! (particularly that which is not probabilistic. Having a tiny chance of winning something does little for me.)

ebola
 
Apparently these days, you can design your survey with a line from 'totally disagree' to 'totally agree'. On that line is a cursor which the respondent can drag to any point, to indicate their level of agreement. That way you can measure someone's agreement in a very fine-grained way.

I'd be skeptical about whether this could truly be treated as a proper interval or ratio datum though. How one approaches such scaling (likely distortingly) would likely depend on characteristics of the participant, aspects of the probe questions, context in the survey at large, etc.

ebola
 
In 2007 I looked into whether there were major advantages to using these visual scales, and no evidence favouring them had emerged. Mostly, people found them more difficult to use, and they took longer to load. I think that will change as they are used more often and people's internet connections and computers become faster.

I'd be skeptical about whether this could truly be treated as a proper interval or ratio datum though. How one approaches such scaling (likely distortingly) would likely depend on characteristics of the participant, aspects of the probe questions, context in the survey at large, etc.

I agree. These are all sources of error that would arise, but also from Likert scales that get used a lot for attitudinal survey research. That doesn't make them 'right', but 5 or 7-point agree-disagree scales are fairly entrenched in some of these areas of inquiry...
 
Thanks blode. This particular comment is interesting, as I think most researchers are hoping that participants aren't high when they complete surveys, especially those that require a fair bit of mental effort! =D

oh wow, I feel embarrassed, sorry about that.
 
I complete drug surveys with hopes that it could be used to take the stigma off of drug users.
 
I agree. These are all sources of error that would arise, but also from Likert scales that get used a lot for attitudinal survey research.

Indeed (I shoulda noted so)... I'm not even sure if treating them quantitatively as a single ordinal variable makes sense... although in the general linear model, Likert scales are often coded into dichotomous dummy vars, right?

ebola
 
oh wow, I feel embarrassed, sorry about that.

Nah, no need to be embarrassed. I think this thread is interesting because some of the reasons people give are not the reasons researchers think about when they design surveys and analyse results. Survey designers usually try and get an idea in their head of the different types of people who might complete the survey and the different motivations for doing so; as it helps get the design right!

I complete drug surveys with hopes that it could be used to take the stigma off of drug users.

That's an interesting reason. Does that mean you will scan a survey first to see whether it might portray drug users in a good or bad light before completing? (Not that this is ever immediately obvious, but there are usually hints in the way the survey is written, I find.)

Indeed (I shoulda noted so)... I'm not even sure if treating them quantitatively as a single ordinal variable makes sense... although in the general linear model, Likert scales are often coded into dichotomous dummy vars, right?

I think the debate about how to treat Likert scales revolves around ordinal versus interval, but yes, in linear models, people can get around it by dichotomising and creating dummy variables... both of which decrease power of models. I've also used non-parametric tests with Likert items for testing differences between groups and relationships between ordinal items. I'm no expert though... yet ;)
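The dummy-coding being discussed can be sketched in a few lines. This is just an illustrative sketch, not anyone's actual analysis code - the Likert labels and responses below are invented for the example, and dropping the first level as the reference category is one common convention among several:

```python
# Hypothetical sketch: dichotomising a 5-point Likert item into 0/1 dummy
# variables for use in a linear model. Labels and data are invented.

def dummy_code(responses, levels):
    """Return one dummy (0/1) column per level except the first,
    which serves as the reference category (dropped to avoid
    perfect collinearity with the model intercept)."""
    return {
        level: [1 if r == level else 0 for r in responses]
        for level in levels[1:]  # drop the reference level
    }

levels = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]
responses = ["agree", "neutral", "strongly agree", "agree", "disagree"]

dummies = dummy_code(responses, levels)
# Each respondent becomes a row of 0/1 indicators; "strongly disagree"
# is the omitted reference category, so its effect is absorbed by the
# intercept in a linear model.
```

The trade-off mentioned above shows up here directly: the single ordinal item becomes four separate columns, each estimated with its own coefficient, which is what costs the model power relative to treating the scale as one variable.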
 
That's an interesting reason. Does that mean you will scan a survey first to see whether it might portray drug users in a good or bad light before completing? (Not that this is ever immediately obvious, but there are usually hints in the way the survey is written, I find.)

If I feel that it is obvious the research/survey is being done with the aim of ridiculing or making drug users personally look bad, I will stay away from it. Being a pretty responsible drug user myself, I would be glad to provide even a slightly biased survey with honest answers, so long as those answers can't be twisted to mean or represent something else.
 
I used to do surveys over the phone. These people did not get paid to take them. Some did far better than others.

Bad:
---Too long, unless absolutely fascinating. Overly long surveys were either terminated early, or as respondents grew weary they'd give any old answer just to move it along.
---While I do understand that when gathering data, you must ideally be able to pool answers in a way that allows you to deduce themes, multiple-choice answers can be hard for the respondent to choose should they not quite like any of them totally; this can be somewhat mitigated by sub-questions. So, the initial multiple-choice question gets the general idea, and the sub/follow-up clarifies the reasons. The what and the why.
---Overly personal questions, UNLESS FOR PURPOSES OTHER THAN SAMPLE GATHERING. In other words, I've actually been instructed to, at the end of surveys, gather their demographics like age, gender, income etc. Done anonymously, this simply ensures a balanced sample. However, some would ask about fucking sexual preference! And for no obvious reason to the respondent, who was often disgusted at such prying. "Thanks... so, what do you like to stick your dick in, sir?" Ummm, no. Unless the study is trying to, for example, understand how specific populations are impacted or affected, and THIS IS CLEARLY EXPLAINED, then hell no. Mind your business! I'll admit, I'd often profusely apologize, and got in trouble at times. Worth it.
---Asking their name or anything that could identify them specifically. I don't care if the study claims to discard that info. Might not be an issue here, but at my job they'd ask for people by name, which understandably made some worry about their anonymity even if told identifying info was discarded.
---My most hated surveys to give were when the questions were so obviously worded in a manipulative fashion, and designed to twist responses. Some clever respondents would catch on and I'd smile on the other end of the phone, happy for them. Other poor souls - not even necessarily dim, just tired or busy or too young or idealistic or trusting - wouldn't notice. And I'm there knowing they want to get one view across but, given their answers, the survey was designed to twist them into something else entirely, and that's just shady. God, I hated that job lol.
If I think of others I'll edit.

Good:
---When at the end respondents could give their opinions, just written, no multiple choice, whether about their feelings on the survey itself in some way, or to add something they felt important to cover that wasn't covered in the survey.
---Respectful wording on more personal subject matters. People respond better despite being anonymous respondents when something is worded in a way that is neutral and not judgemental or biased. But this ties in with manipulatively worded surveys that totally skew your actual answers.
---A space to note anything that should have been included or left out, to improve future surveys.
---Letting people know what the fruits of the survey results are... What are they trying to discern, and WHY? What's the goal?
---Wording questions in a way normal people speak. I cannot tell you how many questions were clearly written by people who were just not good communicators. They were either too long-winded, boring, unclear, or cold. A skilled writer can still achieve the end goal without boring, confusing, or offending.
---Don't sound judgemental
---Remind respondents of the important work you are striving for and thank them for their help. People aren't inclined, if not being paid or compensated, to take tons of surveys that don't even express gratitude or emphasize the importance of gathering this info.
---Don't disguise surveys that are hateful of drug takers as otherwise. Fuck that. I am careful when I read exactly how things are written and whether the multiple choices are fair. If I see any fake, lying, manipulative bull, I'm out. I'm not willing to participate in a study that wants to demonize us, or even most drugs, and especially not all drugs, and I promise you, if you're trying to hide those intentions in a survey, I will notice and voice my disgust after refusing to complete it. Be genuine, interested, and fascinated by your research, with good intentions based on facts, and I'm all good.
---Keep it truly anonymous. Don't lie.
 
@ABetterWay - I love your post. So many things that researchers should keep in mind.

One query: when you say that asking about sexual orientation is too personal - I agree it is personal. I always include an 'I don't want to answer' option, or at least ensure that people can skip questions, so if someone chooses not to respond, no big deal, and they are not forced to do so.

Having said that, I've been criticised by people who are GLTBI for not including the sexual orientation or identity questions - because people want to be able to see whether sexual orientation matters when it comes to drug use and issues. So I lean towards including it but ensuring it isn't a 'forced' question.
 
- because I design them and feel sorry for others
- to see the bias in questions (more from the "writing my own" viewpoint)
- learn new tricks (internet v. paper, providing date format dd/mm/yyyy for internationally distributed surveys)
- learn what not to do (ambiguous wording, no choice I agree with)
:) That's about it!
 
Thanks, Tronica. I enjoyed yours as well. Unfortunately I had one of the better speaking voices and more professionalism, so was a monitor - the client listened in on you reading their survey. Oh God, I hated those sessions. So much wanting to scream, I'm so sorry!!! Hoping the subtle nuance of my voice conveyed my empathy and that I felt awful, ugh! But when not being monitored, I'd skip or ease the wording and choices depending :)
 
1. Incentive (you already knew that). 2. Knowing the aim/purpose of the survey. 3. Whether the results are geared towards change.

Things that make me stop halfway through: any uncomfortable question (sorry, information is worth too much nowadays), phrasing that is fairly glaringly leading the question, any obvious bias. But mainly, if you are SAMSHA I will never complete your survey.
 