What are the best methods for this type of survey?
October 14, 2011 1:38 AM Subscribe
What sort of survey are my colleagues planning? It would involve questionnaires dropped off at a variety of businesses (tour agencies and hotels, mostly), aimed at capturing details about their clients. This doesn't seem to fit the examples I see discussed in online resources on survey methods. To top it off, both the time frame and budget are limited. Where can I find best practices for this sort of survey--and how can I determine which best practices should be given highest priority?
I'm helping some colleagues design a survey looking at visitors to a particular geographical region. We all have scientific training, but not necessarily in survey methods (or social science).
The plan seems to be to write a questionnaire, drop off packets of these questionnaires with (hopefully cooperative) business owners for their clients to fill out, then follow up with these business owners a couple of weeks later.
I can see all sorts of problems with this: how to tell if we have picked the right, representative businesses, how to tell what our response rate is, how to know if the sample is representative (let alone random).
When I look up survey methods resources online, I mostly find tutorials or lessons that assume the possibility of constructing a sample frame from a pre-existing index of some sort (a phone book, a directory) or using demographic information to gauge the representativeness of a sample (ie knowing that X% of households are female-headed and then stratifying the sample accordingly). This sort of information is likely not available for the study in question, although I will certainly look. The survey method--dropping off surveys to be self-administered by an unknown proportion of an unknown number of tourists--also doesn't match most of the standard cases (face-to-face, drop-off with introduction, mail, internet).
Where can I find guides/best practices for similar studies? And given limited resources, which best practices (a rock solid sample frame? allocating enumerators to increase response rate?) should be given the highest priority?
Best answer: Sociology major here.
Surveys need to be administered by people familiar with/trained in the particular instrument. Participants need to be able to ask questions -
- what does this mean? (respondents ask this about some of the strangest items on surveys.)
- how is this information going to be used? ("to sell people things!" is different from "to learn how to better serve visitors to this area." or "to investigate how matters of public policy impact visitors to our area.")
- who will have access to my answers and my personal information? (confidentiality impacts response rate and actual answers)

Pre-testing of an instrument is very important (I wish I had known that before I collected 10,000 nearly worthless online surveys as an undergrad), so you shouldn't just have 1,000 of these printed and dropped off.

While survey response rate is important to know, I've been involved in a project where the PI just wanted the response rate to look good. So, our first week of survey-taking, we got 6 or 7 out of 10 across the class. The second week, the number magically became 9 or 10 out of 10 for almost all group members.

Time and money are the ultimate constraints in research methods. Sometimes you have lots of money and no time; other times, lots of time and no money. Both of those are better than having no money and no time.

My advice for your group going forward is this: hammer out exactly what you want to know from your survey. Do you need to know how much money people spent in restaurants while on vacation? Which cultural attractions visitors visited? Whether people spent money at retail stores? Do you want visitors to compare this trip to their last vacation? Having everybody on board about what needs to be known will make designing the instrument better.

Good luck. Feel free to drop me a line if you have any more questions. Survey design and analysis (large scale) is actually my career goal.
posted by bilabial at 6:15 AM on October 14, 2011
Best answer: There is a huge body of literature dealing with survey research methodology. Key names to look for are Robert Groves (currently head of the Census Bureau) and Don Dillman, among many others. There's also the American Association for Public Opinion Research (AAPOR), which publishes a regular journal (Public Opinion Quarterly) and an e-journal called Survey Practice.
The methodology you're describing is clearly a convenience sample, which, while it might be useful to you, is not at all a random sample. You'll have no way of determining representativeness, and calculating a response rate won't really be meaningful. If you want to do a random survey of visitors to a geographical area, you need to think about the places those visitors are most likely to go. Surveying only hotels, for example, makes sense if most of the visitors of interest will be staying in one. If many visitors go to particular touristy areas, you could focus on surveying those areas (e.g., Fisherman's Wharf in San Francisco, Times Square in NYC), or you could target a particular convention or event if that's what draws the visitors of interest.
If you're looking for a random sampling approach, you need to consider having people who can hand out the surveys (so they can count who and how many accept it and complete it), stationing them at randomly chosen locations, at randomly chosen times. You should probably consider the use of incentives as well, if response rate is important to you.
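To make the "randomly chosen locations, at randomly chosen times" idea concrete, here's a rough Python sketch; the location names and time blocks are invented placeholders:

```python
import random

# Invented example locations and time blocks -- substitute your own.
locations = ["hotel_lobby_A", "tour_office_B", "waterfront_kiosk", "visitor_center"]
time_blocks = ["morning", "midday", "afternoon", "evening"]

def draw_schedule(n_slots, seed=42):
    """Randomly assign enumerators to (location, time block) slots.

    Sampling location-time PAIRS, rather than locations alone, avoids
    accidentally surveying only the morning crowd at one spot.
    """
    rng = random.Random(seed)  # fixed seed makes the schedule reproducible
    all_slots = [(loc, t) for loc in locations for t in time_blocks]
    return rng.sample(all_slots, n_slots)  # n_slots distinct slots, no repeats

schedule = draw_schedule(6)
for loc, block in schedule:
    print(loc, block)
```

Drop the fixed seed if you want a fresh draw each time; keeping it lets you regenerate the same enumerator schedule later.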
posted by jasper411 at 8:44 AM on October 14, 2011
Best answer: What is the population you are attempting to describe with the survey, and what is the subject? Right now you have laid out a classic convenience/opportunity/availability sampling model, which by its nature is non-random and non-representative of most populations. This isn't a deal breaker - it just changes the scope of what you can describe with any statistical power. The subject is also a key issue, because if it is salient (important/interesting), respondents will be more willing to provide information.
Here's what I would do, assuming time and cash are of limited supply:
First, let's redefine the population you are aiming for down to something more manageable: I would only aim to describe the clients of the specific tour agencies and hotels being surveyed, who are fluent in the language the survey is in and who were visiting during the survey period. This means you could conclude things like "80% of respondents love orange helicopters" and NOT "80% of Canadians love blue tug boats". If you need to speak about a large/diverse population (i.e., Canadians, or tourists, or female snowboarders), you'll need to re-evaluate the scope of your project, and I would suggest seeking professional assistance. jasper411's suggestions* are good directions to head in.
Back to a more focused survey:
The second thing I would focus on is getting a random sample of these clients to fill out your survey. The most effective method would be to field the survey yourself; relying on others will significantly reduce survey response rates, as jasper411 noted. If you have to rely on others, I would set up a reward for the folks actually putting the survey in people's hands - do NOT reward each filled-out survey; reward adequate participation through some sort of random drawing. Transparency is key; Apple gear is a perennial fav. Describe this in a special cover letter for the business staff. I would be pleased with a 15% response rate to a paper survey handed out by business staff on my behalf on a random subject, so plan accordingly. Survey every fourth client, perhaps, or every third. Do not overburden the "volunteers" who are fielding your survey. This also means you supply the pens to fill out the survey and clearly labeled envelopes to return the data. You may also want to encourage client participation with a discount voucher or some other small token of appreciation.
Why should this step be random if the survey isn't going to be statistically significant? Because there can still be big differences between the ten folks surveyed by eager-beaver staff in the morning and the ten folks not surveyed by the lazy evening staff.
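For what it's worth, the "every fourth client" idea is classic systematic sampling, and adding a random starting point keeps staff from choosing who gets a questionnaire. A minimal sketch, with a made-up day of 20 walk-in clients:

```python
import random

def select_clients(client_ids, interval=4, seed=1):
    """Systematic sample: random start, then every `interval`-th client."""
    start = random.Random(seed).randrange(interval)  # random offset in [0, interval)
    return client_ids[start::interval]

# Hypothetical day of 20 clients, numbered in arrival order.
clients = list(range(1, 21))
chosen = select_clients(clients, interval=4)
print(chosen)  # five clients, each four arrivals apart
```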
Third:
Test the survey. Seriously. Find someone who didn't write it to go over it. Here are the issues to look for:
1. Confusing questions. The language has to be clear to some random client off the street. You have to estimate the target clients' education level - Grade 9 would be the high end, Grade 4 the low - and use the appropriate level of language. If the person testing it is very smart, remind them that you are aiming for the general public - I have drafted questions that knowingly contained poor grammar in the service of question clarity.
2. The type of question dictates the type of answers: exclusive answers have to be exclusive - NO OVERLAP (overlapping options are so common) - and rank like things with like things, whether items or feelings or whathaveyou. People like to answer in the middle if it is given as an option. Answers do not necessarily have to be an exhaustive list: it can be the three things you are interested in plus an "other". NOTE: the answers you provide will bias the clients! Asking if they are Democratic OR Republican gets you different results from allowing "independent" or "other" or including "leaners". Remember that "other" is an easy out, and often not useful information.
3. Time is money. How long did your tester take? More than 15 minutes? Not gonna happen in real life. Aim for 2 minutes. TWO. Pros aim for no more than 6-8 minutes. Basic demographic questions are quickly answered - for an average topic I would suggest 6 to 10 questions. Remember: there is lots of info already floating around out there. The government does censuses. Businesses know who their clients are. Studying tourists? The local tourist board (or equivalent) will be able, and possibly willing, to tell you the average tourist demographic info for your region. Rely on these sources for data.
Things you should also do with the survey instrument:
4. Closed ended questions only. Avoid fishing for a short story whenever possible.
5. Every question needs to be evaluated for whether it actually provides useful information. A question everyone agrees with is useless. Is it important that we know the client's gender?
6. Number the questions and ID the answers; e.g., Q6B is easy to code.
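Numbering questions and labeling answer options pays off when you code the data later; a tiny sketch of what coded records might look like (the question IDs and answer codes are invented):

```python
import csv
import io

# Each row is one returned questionnaire; columns are question IDs,
# values are the circled answer codes (e.g. Q6 answer B -> "B").
raw = """respondent,Q1,Q2,Q6
1,A,C,B
2,B,C,A
3,A,B,B
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Tally answers to Q6 -- trivial once every question and answer has an ID.
q6_counts = {}
for row in rows:
    q6_counts[row["Q6"]] = q6_counts.get(row["Q6"], 0) + 1
print(q6_counts)  # -> {'B': 2, 'A': 1}
```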
Last: the client cover letter/blurb. It should clearly outline who you are, what you are going on about, and how to contact you. You should include your privacy policy and general comments about how the data will be handled. Letterhead is good - professional. It can be a box at the top of page one, or a full page. This section counts towards the overall time of the survey, so make it brief.
* Yay Dillman!
posted by zenon at 9:23 AM on October 14, 2011
Also, if you're not doing the survey yourself and you're providing incentives, you'll probably want to get contact information from respondents and randomly audit a certain number of surveys (just call them back and ask them if they participated in a survey on such and such a date).
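A minimal sketch of what that random audit draw could look like (the survey IDs and phone numbers are invented):

```python
import random

# Invented list of returned surveys with respondent contact info.
returned = [{"survey_id": i, "phone": f"555-01{i:02d}"} for i in range(1, 31)]

def pick_audit_sample(surveys, fraction=0.2, seed=7):
    """Randomly pick a fraction of completed surveys to verify by callback."""
    rng = random.Random(seed)
    k = max(1, round(len(surveys) * fraction))  # always audit at least one
    return rng.sample(surveys, k)

audit = pick_audit_sample(returned)
print(sorted(s["survey_id"] for s in audit))
```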
posted by empath at 9:43 AM on October 14, 2011
Response by poster: Thanks! These are some amazing answers. I'm off to read up on convenience sampling and, with any luck, what it means for the analysis stage of the project. All these answers give very helpful advice, though.
posted by col_pogo at 12:02 AM on October 15, 2011
I used to do market research, but it was pre-internet, so I don't know how much it's changed. At the time, we'd either have gone directly on-site ourselves and approached people (with the business's permission), or we'd have cold-called their customers from a contact list provided by the business. I guess these days you can probably just email them and screen online.
posted by empath at 5:49 AM on October 14, 2011