Getting Good Responses



  •   Choose rating scales over dichotomous (e.g., yes/no) items whenever possible to elicit more information and to increase reliability.

    Knowing the degree to which respondents agree or disagree with a statement provides more information to guide decision-making and program improvement than simply knowing the percent of respondents who agree or disagree with it. More often than not, respondents hold views that fall somewhere on a continuum rather than into distinct categories.

      When a numerical (i.e., Likert) scale is used, provide an adjective to describe each point on the scale so that every respondent assigns the same definition to each number (a short code sketch following the corrected example below shows one way to keep each number paired with its label).

    For example:

    To what degree do you agree with the following statement? (Circle the response that best applies.)
    My child's school is safe.

    Instead of:

      1 = Strongly Disagree
      2
      3
      4
      5 = Strongly Agree

    Use:

      1 = Strongly Disagree
      2 = Disagree
      3 = Neither Agree Nor Disagree
      4 = Agree
      5 = Strongly Agree
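
    Where survey items are generated or stored electronically, a minimal sketch like the following (Python; the statement and labels come from the example above, and everything else is a hypothetical illustration) shows one way to keep every scale point paired with its adjective.

      # Hypothetical sketch: a fully labeled 5-point agreement scale,
      # so every respondent reads each number the same way.
      AGREEMENT_SCALE = {
          1: "Strongly Disagree",
          2: "Disagree",
          3: "Neither Agree Nor Disagree",
          4: "Agree",
          5: "Strongly Agree",
      }

      def render_item(statement: str) -> str:
          """Return a plain-text survey item with every scale point labeled."""
          lines = [statement, "(Circle the response that best applies.)"]
          lines += [f"  {value} = {label}" for value, label in AGREEMENT_SCALE.items()]
          return "\n".join(lines)

      print(render_item("My child's school is safe."))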

  •   Balance the response categories.

    Keep in mind that if there is a 'strongly disagree' option, there needs to be a 'strongly agree' option, too. For example:

    How would you rate the class overall?

    Instead of:

      Outstanding
      Excellent
      Very Good
      Good
      Poor

     

    Use:

      Very Good
      Good
      Average
      Bad
      Very Bad

     

  •   Offer response options that are clearly defined and mutually exclusive (non-overlapping).

    As with double-barreled questions, it is best to avoid words like and, or, but, with, except, if, and when in the list of response options. To minimize confusion, do not list the same response option in more than one category, and make sure there are clear distinctions among the options. When offering response options that represent ranges, make sure that no value falls into more than one range. Here are two examples; a short code sketch after the second illustrates one way to check numeric ranges for overlaps or gaps:

    Which leadership style best describes your building administrator or principal?

    Instead of:

      Charismatic and Cooperative
      Contemplative and Collaborative
      Collaborative and Charismatic
      Contemplative and Charismatic
      Controlling and Critical

     

    Use:

      Charismatic
      Critical
      Contemplative
      Collaborative
      Controlling

     


    Which of the following categories best describes your total family income in 2008?

    Instead of:

      $0 to $5,000
      $5,000 to $10,000
      $10,000 to $20,000
      $20,000 or more

     

    Use:

      Less than $5,000
      $5,000 to $9,999
      $10,000 to $19,999
      $20,000 or more
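
    For surveys that are assembled or checked electronically, a small sketch along the lines of the following (Python; the dollar ranges mirror the corrected example above, and the check itself is a hypothetical illustration) can flag overlapping or gapped numeric categories before the survey is distributed.

      # Hypothetical sketch: confirm that income categories are mutually exclusive
      # (no dollar amount falls into two ranges) and leave no gaps.
      # Ranges are (low, high) pairs in whole dollars; only the last category is open-ended.
      INCOME_RANGES = [
          (0, 4_999),        # Less than $5,000
          (5_000, 9_999),    # $5,000 to $9,999
          (10_000, 19_999),  # $10,000 to $19,999
          (20_000, None),    # $20,000 or more
      ]

      def check_ranges(ranges):
          """Raise ValueError if adjacent categories overlap or leave a gap."""
          for (_, high), (next_low, _) in zip(ranges, ranges[1:]):
              if high is None:
                  raise ValueError("Only the last category may be open-ended.")
              if next_low <= high:
                  raise ValueError(f"Overlap: ${next_low:,} falls into two categories.")
              if next_low > high + 1:
                  raise ValueError(f"Gap: amounts between ${high:,} and ${next_low:,} have no category.")

      check_ranges(INCOME_RANGES)   # Passes silently for the corrected ranges above.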

     

  •   On attitude or opinion questions, limit the use of "N/A" (not applicable or not available), "no opinion," and "other" options. However, make sure to provide one of these options when a question legitimately does not apply to every respondent.

    Do not make it too easy for respondents to opt out of answering the questions. You will get very limited information from surveys that are returned with "no opinion" marked throughout. On the other hand, if a question is truly not applicable to a segment of the surveyed population, it is important that respondents are able to indicate this on the survey; otherwise they will be forced to guess or provide a less-than-truthful response. For example, if you ask parents to rate their site council meetings, those parents who have never attended a meeting should be able to check "N/A" rather than rate something with which they have had no direct experience.

  •   On factual or behavior questions, include an "other (please specify): _________" choice unless you are certain that you have provided a comprehensive list of response options.

    Survey results are much easier to summarize when "other" is not used, but there are times when it is necessary to provide it as an option. Fill-in-the-blank options are especially useful during the pilot testing phase of survey development in order to identify all possible categories of responses.
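
    If pilot-test responses are captured electronically, a short tally along the lines of the sketch below (Python; the write-in answers shown are invented purely for illustration) can surface write-ins common enough to deserve their own fixed response option.

      # Hypothetical sketch: tally "other (please specify)" write-ins from a pilot
      # test to find answers that should become fixed response options.
      from collections import Counter

      # Invented pilot write-ins, for illustration only.
      pilot_write_ins = [
          "email newsletter", "Email newsletter", "school website",
          "email newsletter", "text message", "school website",
      ]

      counts = Counter(answer.strip().lower() for answer in pilot_write_ins)

      # Any answer written in by several respondents is a candidate response option.
      for answer, n in counts.most_common():
          if n >= 2:
              print(f"Consider a fixed response option for: {answer!r} ({n} mentions)")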

  •   Keep open-ended questions to a minimum.

    Open-ended questions are burdensome for the respondent, and overusing them will decrease the response rate. They are also difficult to code and analyze; consider the time and effort required to analyze even one open-ended question from 500 respondents! At the same time, always include at least one open-ended question at the end of the survey so that respondents can provide information they were not given an opportunity to share in the closed-response portion of the survey.
