1999-2003 Evaluation Handbook - Section 2

Tips on Preparing Evaluation Instruments

If you want people to take time to answer your questions, keep two things in mind - simplicity and good appearance. The instrument must be easy to understand, answer, administer, tabulate and summarize. It should be arranged on good paper with large print and wide spaces. Specifically:

  • Make instructions clear and concise.
  • Use a format that is easy to read and answer.
  • Leave plenty of space.
  • Cut verbiage to the bone. Design statements or questions that are no longer than ten words.
  • Take out any unneeded questions and shorten, shorten, shorten.
  • Make first questions easy and non-threatening.
  • Organize items in a logical manner, anticipating the thought pattern of respondents.
  • Limit the number of questions that require written responses. Have places for checks instead.
  • Make sure questions can be interpreted in only one way. (How do you put your pickles up?)
  • Avoid indicating the type of answer you expect to get. (Most people use a shopping list, do you?)
  • Ask only personal characteristics needed to help explain answers.
  • Limit length of questionnaire to one or two pages. However, length is not as important as being well spaced and easy to read and answer.
  • Place spaces for checks in vertical columns along one edge of the sheet to make tabulation easier.
  • Use the same scale for all questions; people find it easier to respond.
  • Scales with five or more points allow more insights into participants' responses than yes or no answers.
  • When rating people, especially if the purpose is to help people improve, words are less threatening and more encouraging than numbers.
  • If a question asks about two things, divide it into two questions.

Case Study

1. Things to keep in mind:

A case study is an approach to organizing information to get a more complete picture of a small number of individuals or groups.

  • Case study is used in conjunction with other methods of collecting information.
  • For program evaluation, one to 12 individuals or groups may be an appropriate number of cases to include in the study.
  • This small number of cases is studied in-depth, with as many different types of information collected as is appropriate.
  • The cases are usually studied over a period of time (a week, month, year).

A key to a successful case study is to conduct the study in an organized and systematic fashion. Cases to be included, types of information to be collected, time frame and uses for the study information should be determined before the study begins. A danger in conducting a case study is the subjectivity of the person conducting the study. Care must be taken to remain objective, to avoid speculating or interpreting for the cases, and not to generalize the findings to inappropriate groups.

2. When a case study is appropriate:

  • To get an example or examples of specific program impact (success story).
  • When a description of long-term, far-reaching impacts is needed.
  • When a story of individual impacts of a program is the most understandable type of results for the user of the evaluation.
  • To explain the results of a larger study (such as a survey) which has findings in terms of group characteristics.
  • To broaden the insights or experience of the person conducting the study for further program planning and development.

  • To get a preliminary indication of critical factors related to program impacts, which can then be applied to a broader, more generalized type of study.

The case study is an evaluation method that can provide in-depth information on a single program or thrust. Through personal discussion, mutual interaction, observation or review of existing documents, the reporter captures the total sense of the situation. The outcome should tell a story or convey a picture about what is occurring.

Collecting a variety of information gives you the advantage of being able to report thoroughly on the program. It also gives you a good chance of finding indicators of significant program impacts that you may not have anticipated. Case studies can provide greater understanding of program depth and complexity than can more impersonal, formal methods. This method is generally utilized with a small number of individuals or groups.

Advantages

  • Procedure evolves as work progresses; therefore, no confining categories or classifications.
  • Allows depth of insight into causal relationships and personal feelings.
  • Applied effectively in combination with other methods, such as survey or participant observation.
  • Offers unique opportunity to study an organization, a business, an agency, particular types of families, individual differences, ideas or principles.
  • Can be tailored to specific situations.

Disadvantages

  • Requires absolute accuracy: "improvements" on facts spoil the study.
  • Can be very subjective; temptation to tell more than the facts.
  • Can be time consuming; extensive amounts of data needed.
  • Focus is on a limited number of cases; cannot necessarily be generalized to larger community.
  • Not suitable as a method in isolation; best for background or as a guide to further study.
  • For best analysis, several cases are needed.

Participant Observation

Observation is the systematic gathering of information about behavioral actions and reactions through the use of specific instruments or professional impressions. Participant observation requires the evaluator to immerse him / herself in the program being studied. The aim is to see the world through the eyes of the subjects. Information can be collected in a number of ways - by watching, listening and documenting what is seen and heard; by asking questions; by sharing activities and noting comments, behaviors and reactions; or a combination of these. It is essential, however, that observation and documentation be carried out systematically so that the information obtained is as free of bias as possible and relevant to the focus of the study. When conducted with care, the observations of others also can provide valuable additional information. These insights can then be communicated to the principal observer.

Advantages

  • Setting is natural, unstructured and flexible.
  • Evaluator may make his / her identity known to the subjects or researcher may keep identity a secret.
  • Evaluator may choose to actively participate in activities of those being observed or may observe passively.
  • Can be combined with other techniques such as survey or testing, and secondary information, thus adding to data quality.
  • Useful for studying a "small unit" such as a neighborhood, an Extension Council, a classroom, etc.

Disadvantages

  • Natural environment means the investigator has less control over the situation.
  • If the group is aware they are being observed, resulting behavior may be affected.
  • Observations may not be valid for entire populations unless a plan for representativeness is developed.
  • Observer may lose objectivity as a result of becoming involved.
  • Not realistic with a large group.

Collecting Observational Information

  1. Field notes. Observer records almost anything he / she wants at any time. Anecdotal or organized in categories. Written or dictated. Can be reorganized later.
  2. Field experience log or diary. Written after the fact (at the end of the day). Can be used in conjunction with field notes.
  3. Category notes. Important categories of behavior or information are determined before observation. Then more detailed notes are taken that fit into categories.
  4. Episodes. Subject's or group's behavior recorded in time sequence.
  5. Context maps. Sketches or diagrams of the context within which the observation takes place, such as room or field layout. Used as a reference supplement.
  6. Relational diagrams. Notes or diagrams about which individuals or groups interact with other individuals or groups.
  7. Panel. Observing the same person or group at different points over time, such as once a week, once a month, etc.
  8. Debriefing questionnaires. Questionnaires for observers, not subjects. Important questions determined before observation, then observer fills out questionnaire after observation.
  9. Rating scales. Predetermined behaviors are given a rating during observation.
  10. Checklists. Lists of things to look for and check off as they are found or observed.

Group Assessment

The group assessment method consists of a systematic discussion of a program area by a group of persons who are particularly knowledgeable about the subject. This group could be an advisory council, committee, group of leaders, commodity group, program participants, community leaders or any combination of these. A discussion leader needs to have in mind the type of input the group can provide and an idea of the type of decision the group needs to reach.

Group assessment allows an in-depth consideration of a subject. It is a relatively quick way to analyze a topic with persons who can provide a variety of insights. It also provides an opportunity to obtain program planning suggestions based on the evaluation of the group. The key to this method is an effective discussion leader who is prepared before the meeting begins.

Advantages

  • Provides an in-depth analysis.
  • Takes advantage of experts on the topic.
  • Is relatively efficient in terms of time and other costs.
  • Can be incorporated with an ongoing group or meeting.
  • Can provide insight within the context of the local situation.
  • May be seen as credible to local observers.

Disadvantages

  • Those not represented in the group have no input.
  • Minority opinions may not be expressed in the group.
  • May need more supplemental information of other types.
  • Hard to implement with a large group.
  • Can be a biased expression depending upon the makeup of the group.
  • Assumes an ability to bring all appropriate individuals together in one location.

Ways to Conduct a Group Assessment

(These types of group assessments may be used separately or in combination.)

  1. General, open-ended discussion questions. The discussion leader prepares a few general questions for discussion. This can work well in a smaller, relatively uninhibited group. This may not work so well in a larger group or with a group that feels less free to discuss opinions openly. The discussion leader must know and tell the group what kinds of decisions are to be reached as a result of the discussion.
  2. Questionnaire for discussion. A questionnaire with category responses about analyzing the situation or rating a program is given individually and privately at the beginning of the meeting. Results are then tabulated, not as the final evaluation by the group, but as a means around which to structure the discussion. Does the group agree or disagree with the results and why?
  3. Voting on or ranking a list. The discussion leader prepares a list of relevant parts of a program or important factors affecting the degree of success of a program. The group ranks the importance of this list, either by individual ballot or by a show of hands. This can be used either at the beginning of the meeting to generate discussion, or at the end of the meeting to represent the final group decision.
  4. Small group discussion. Especially for larger groups, it is often helpful to divide the group into two or more smaller groups. Each small group discusses and evaluates the total program, or the groups can each be assigned a different topic to discuss and evaluate. After the small group meetings, results are reported back to the total group. Questions, comments and suggestions can be made by the total group after each small group report.
  5. Individual reactions. At the beginning of the meeting, each person writes down his / her own thoughts, reactions, evaluations, or answers to the discussion leader's questions. Each person reports to the group, with no criticism allowed and no discussion until everyone has reported. An alternative is that the individual reports can be handed in anonymously to the discussion leader. This provides a wide range of initial reactions about the program being evaluated, and thus provides a basis for total group discussion.
  6. Project or program report. Begin the meeting with a report about the project or program by a person or committee who has been closely involved or responsible for the project or program. After the report, the total group asks questions, makes comments, suggestions and evaluates the status of the project or programs. The key to this method is to make sure that the total group is the appropriate group to react to and evaluate the report.
  7. Presentation of available information. Decide what other types of information should be collected to complete the evaluation. The discussion leader or someone appointed by the discussion leader presents all the information already available about the project or program being evaluated. This could be secondary data, enrollment or participation figures or letters to the agent. The group then evaluates the status of the program based on this information.

Surveys / Questionnaires

The survey technique asks individuals to supply attitudes, beliefs, behaviors, reactions and attributes in response to specific questions. It is a relatively inexpensive way to gather information from a large number of people.

The survey is often based on information collected from a sample of the population or of specific subgroups. On the other hand, a survey can be administered to all people in a community or organization to provide everyone with an equal opportunity to express themselves. The most commonly used survey methods are personal interviews, group-administered surveys, drop-off and pick-up questionnaires, mail questionnaires and telephone interviews. While each approach is somewhat different, the format is similar.

Survey design offers flexibility in the types of questions that may be asked, ranging from structured yes-no-undecided responses to unstructured, open-ended responses. Therefore, the survey can be sensitive to psychological barriers, such as length of survey, wording, type of person administering it and confidentiality that might affect response.

Advantages

  • Can be inexpensive - especially if volunteers are available to conduct the survey.
  • A sample can provide much information about a population.
  • Can be used to survey an entire population and provide an opportunity for many persons to feel involved in the decision-making process.
  • Can be used to record behaviors as well as opinions, attitudes, beliefs and attributes.
  • Useful if combined with other methods, such as participant observation or case study, that will provide an interpretive perspective.

Disadvantages

  • To assure statistical meaning, samples must be carefully selected.
  • Limited by the insight that went into the content of the survey instrument.
  • Subject to misinterpretation depending on how questions and response categories are designed.
  • Tendency for scope of data to be limited; omission of underlying reasons and actual behavioral patterns.
  • Time consuming compared with less formal methods.
  • Requires skill in construction of questions.

Survey Methods

Mail Surveys

Efficient for the volume of information collected but becoming more expensive.

Low response rates can be a problem. This is less a problem with captive audiences (Extension program participants). Good follow-up management can produce responses in excess of 70%.

Good planning can lower the cost and enable us to conduct a survey of clientele even during our busiest times. (The instruments can be designed when the program is being planned, the envelopes addressed as clientele register for the program, etc.) The survey can then be placed in the mail at the most appropriate time for the audience to respond.

People are more likely to provide frank, thoughtful responses to mail questionnaires. The mail survey gives them more time to complete the questionnaire.

Questionnaires must be simple and easy to understand.

Telephone Surveys

  • Response rate is generally high.
  • Can be time consuming, but often existing staff, student workers or volunteers can be trained to handle it.
  • Cost per response is competitive with mail surveys.
  • Telephone numbers are needed. This need not be a problem with Extension program participants if we incorporate telephone numbers into our regular program registration procedures.
  • Telephone directories are a good source of numbers for the general public. The proportion of non-published numbers (unlisted, new numbers, etc.) may present a problem but this problem can be minimized through random digit dialing to access all working numbers.
  • Some limitations exist on the kinds of questions; generally there should be no more than five response categories per question.
  • More complex skip patterns can be used in a telephone interview than with a mail questionnaire. For example, you might have two separate sets of questions in part of the interview - one set for those who own their own home and one set for those who rent.
  • The telephone interviewer's voice or identity may lead to some biasing.
  • The telephone survey can provide a speedy and efficient source of data. With a mail survey, it may take a month to six weeks or more to get the results.
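The random digit dialing idea mentioned above can be sketched in a few lines. This is a hypothetical illustration, not part of the handbook; the prefixes and the Python form are our assumptions. The point is that a random four-digit suffix within known local prefixes gives unlisted numbers the same chance of being reached as listed ones.

```python
import random

def random_digit_numbers(prefixes, count, seed=None):
    """Generate telephone numbers by random digit dialing:
    pick a known local prefix, then a random 4-digit suffix,
    so unlisted numbers are reachable as well as listed ones."""
    rng = random.Random(seed)  # seed only to make the sketch repeatable
    numbers = []
    for _ in range(count):
        prefix = rng.choice(prefixes)      # hypothetical local prefix, e.g. "555"
        suffix = rng.randrange(0, 10000)   # 0000-9999, all equally likely
        numbers.append(f"{prefix}-{suffix:04d}")
    return numbers

sample = random_digit_numbers(["555", "556"], count=5, seed=42)
```

In practice, wrong and disconnected numbers generated this way are simply logged and skipped during calling.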

Personal Interviews

  • Likely to be the most expensive method.
  • Can frequently reach the unreachable (the poor, mobile, high status individuals, etc.)
  • It may be easier to interview specific individuals (i.e. community leaders, small farmers, teachers, etc.).
  • Response rate is likely to be very high.
  • Slowest method of data collection.
  • Responses may be less frank and less thoughtful.
  • Easier to ask open-ended questions, use probes and pick up on non-verbal cues.
  • Interviewer's presence and personal characteristics may be biasing.
  • It is generally necessary to go to the location of the respondent. Therefore, addresses or directions are necessary.

Group Administered Surveys

  • Can collect a lot of data very cheaply by having everyone at a meeting, class, etc., complete a survey form.
  • May be easier to clarify items which present difficulty.
  • May require the cooperation of others (school administrators, etc.).
  • You only reach those who are present, may be very biased group.
  • Group tenor or setting may affect individual responses.
  • Provides the greatest sense of respondent anonymity.
  • Very high response rate.
  • Quick way to reach large numbers of people.

Survey Rules of Thumb

Before you begin

Decide exactly what you want to evaluate or to learn. Decide why you want to evaluate the program or project, and what you will do with the information (reports, plans, etc.). Consider other methods of gathering the information.

Writing the Questions

  • Ask exactly what you need to know, and no more.
  • Include only one idea per question.
  • Ask about attitudes, beliefs, behavior or attributes; avoid testing knowledge. There should be no "correct" answer.
  • Avoid unclear terms and abbreviations.
  • Avoid leading questions which may bias the response.
  • Give answer categories which are spelled out; avoid using numbers such as 1-5.
  • Make sure the answer categories are mutually exclusive.
  • Make sure the answer categories follow from the question.
  • Include directions for answering questions.
  • Avoid open-ended questions whenever possible.
  • Avoid having the respondents rank items.
  • Line up answer categories on the same side of the page for all questions so respondent doesn't have to find the answers and move his / her pencil back and forth across the page.
  • Put easiest questions to answer at the beginning.
  • Place personal questions (age, sex, education, income, etc.) at the end, if needed.
  • State the purpose to respondents by letter or with a verbal explanation.
  • Take care in keeping results confidential. You don't need to ask names, but you can use ID numbers, if needed.
  • Keep it short.
  • Have the questionnaire reviewed or pre-tested by someone else before it is used.

Sampling and Administration

  • Decide who is included as relevant and appropriate respondents.
  • Can administer by mail, telephone or in person (example: at a meeting).
  • If you use mail or phone, try to get 70% to respond. Be prepared to do follow-up work.
  • Sample size is not a fixed number but varies with relevant population size and degree of accuracy desired.

Desired sample size is a function of statistical accuracy and credibility with the ultimate users of the results. Fewer than 100 responses makes analysis difficult and threatens confidentiality.
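The point that sample size varies with population size and desired accuracy can be made concrete with Yamane's simplified formula, n = N / (1 + N e^2). The formula and its Python form are our addition, offered as one common rule of thumb rather than the handbook's own method:

```python
import math

def yamane_sample_size(population, margin_of_error):
    """Yamane's simplified formula: n = N / (1 + N * e^2),
    where N is the population size and e the desired margin of
    error (e.g. 0.05 for +/- 5%). Rounded up to a whole person."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)

# A population of 1,000 at +/- 5% needs about 286 responses;
# note how slowly the requirement grows with population size.
print(yamane_sample_size(1000, 0.05))
```

The required sample levels off near 400 even for very large populations at the 5% level, which is why "sample size is not a fixed number" but is also not proportional to population size.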

Using the Results

  • Tabulate the results on a blank survey form or use a computer, if available.
  • Use percentages instead of numbers as much as possible.
  • Compare different groups if relevant such as male-female, rural-urban, leader-non leader or different project or program groups.
  • Let the respondents know the overall results.
  • Include results (statistics) in your Plan of Work and reports.
  • Write a short narrative for use by appropriate Extension personnel, council, committees, leaders, etc.
  • Summarize "newsworthy" findings for use with popular press.
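The tabulation advice above (report percentages rather than raw counts) can be sketched as follows. The question wording and response categories are invented for illustration:

```python
from collections import Counter

def tabulate_percent(responses):
    """Tabulate checked responses into percentages of all answers."""
    counts = Counter(responses)
    total = len(responses)
    return {category: round(100 * n / total, 1)
            for category, n in counts.items()}

# Hypothetical answers to "How useful was the program?"
answers = ["very", "very", "somewhat", "very", "not at all",
           "somewhat", "very", "somewhat", "very", "very"]
print(tabulate_percent(answers))
```

Running the same tabulation separately for subgroups (male-female, rural-urban, etc.) gives the group comparisons suggested above.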

Personal Interviews

As an evaluation technique, personal interviews are a qualitative method. They permit you to probe in depth into the "whys" behind responses; you can ask follow-up questions to clarify a response, and you can establish a rapport with respondents that emphasizes the importance of the evaluation process. Disadvantages of personal interviews are that they are time consuming and that the "acquiescence syndrome" can develop, i.e., the person being interviewed tries to say what they think you want to hear. This is particularly true when you are interviewing clientele who have been long-time CES program participants.

As a precaution, if you are interviewing others to determine their REACTION to a program, they may be reluctant to tell you anything negative. The same may be true of PRACTICES. If they think you want information that says they have implemented recommended practices, they will say that whether they have implemented the practice or not.

The purpose of interviewing is to find out what is in and on someone else's mind. You can choose one of three styles for interviewing:

  1. Informal conversational - unstructured and spontaneous. You have a topic in mind, but haven't worked out any specific questions about it.

  2. General interview guide - a basic outline is developed of questions or areas that you want to inquire about. However, depending on the responses, you still have the latitude to probe.

  3. Standardized Open-ended - the interview is conducted with a set of questions, carefully worded and arranged. Each respondent is asked the same questions in the same way.

Types of interview questions, by time frame:

  • Experience / Behavior - Past: what a person has done. Present: what a person is doing. Future: what a person plans to do.
  • Opinion / Values - Past: what a person thought about a program. Present: what a person's opinion of the program is now. Future: what a person would like to see happen.
  • Feeling - Past: how a person felt about an experience or thought. Present: feelings in the present. Future: what a person thinks they will feel about an experience in the future.
  • Knowledge - Past: factual information from the past. Present: current knowledge. Future: what a person would like to learn in the future.
  • Sensory - Past: what a person saw when they entered a program. Present: what a person hears or sees others do. Future: what a person would like to see, hear or experience.
  • Background / Demographics

Wording Questions

Interview questions should be truly open-ended and have only one idea per question. They should also be clear in terms of what is being asked. As the interviewer you should know what terms respondents usually use for programs, and what language respondents use among themselves when referring to programs and activities. Avoid "why" questions that assume a clear-cut causal relationship; instead ask, for example, "What was it about the program that attracted you to it?"

One of the difficult tasks of interviewing is establishing a neutral or non-judgmental position. You have to establish a climate whereby whatever the respondent said is not judged to be either good or bad. Illustrative examples of extremes may help you to accomplish this.

An advantage of personal interviews is the ability to probe. Probes should be detail oriented ("Can you tell me more about how you implemented that practice?") and provide both elaboration and clarification ("Can you clarify that response by telling me what you mean by...?"). In the opening statement, the interviewer should tell the respondent why the information is important. You should be willing to explain the purpose of the interview. Further, you should be in control of the interview by knowing what you want to find out, asking the right questions to get the right answers, and giving appropriate verbal and non-verbal feedback to the person being interviewed.

Always keep track of questions asked and answers received. An advantage of the personal interview is that you can assess non-verbal feedback of the respondent, but it also applies to you. Non-verbal feedback can very quickly express a judgmental response to a question, and destroy all neutrality that has been established. Be aware of non-verbal feedback for both the respondent and interviewer.

Note Taking

Note taking can be accomplished by carefully writing down responses at the time, by waiting and writing down responses afterwards (best accomplished if the interview is short and the questions very specific) or by using a tape recorder. All of the above have obvious advantages and disadvantages. One of the strengths of a personal interview is the direct statement from the respondent. If you are taking notes, use quotation marks only to indicate full and actual quotes. (If you plan to use these quotes in a report or other documentation, be sure to ask permission from the respondent.)

The major advantage of a personal interview as an evaluation process is that it can provide you with qualitative information - information that is not quantifiable, yet provides feeling and substance. It is best used in situations when experiences and / or outcomes of a program effort are highly individualized; when it is important to understand some phenomena in great depth; or when you are not sure what you want to find out and you want to explore more about what's happening. In this sense, a personal interview is an ideal tool to use in a needs assessment as you determine some of the topics or programs that you should be addressing in your county extension program.

Telephone Surveys

Many of the same rules of question development that apply to mail questionnaires also apply to telephone surveys. However, telephone surveys can gather more information than a mail survey, because the interviewer can probe and gather more qualitative data. The telephone survey is also more flexible because it can include "skip outs" for a heterogeneous population. This is limited with a mail questionnaire and can be confusing to the respondent. (Skip outs are questions that are bypassed if a respondent did not meet some criteria. For example: "Did you attend the October 10th meeting on Agricultural Profitability?" If yes, then the interviewer proceeds to more questions about that meeting. If no, then the interviewer skips to the next major question.)
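A skip out of this kind reduces to a simple branch in the interview script. The sketch below is our illustration (the meeting, the follow-up questions and the Python form are assumptions): the follow-ups are asked only when the screening answer is yes.

```python
def interview(attended_meeting, ask):
    """Run a screened section of a telephone interview.
    `ask` is a function that poses a question and returns the answer;
    the follow-up questions are bypassed unless the respondent
    attended the meeting (the 'skip out')."""
    answers = {"attended": attended_meeting}
    if attended_meeting:  # skip out: non-attendees bypass this block
        answers["useful"] = ask("Was the meeting useful to you?")
        answers["practices"] = ask("Have you tried any practice discussed there?")
    return answers

# Simulated respondent who did not attend: the follow-ups are skipped.
result = interview(False, ask=lambda q: "yes")
```

On paper the same logic requires "If no, go to question 12" instructions, which is why complex skip patterns confuse mail respondents but are easy for a trained interviewer.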

Telephone surveys are especially appropriate when:

  • Questions don't require visuals.
  • Only short lists of response categories will be included.
  • Respondents can give a quick reaction without contemplative thought.
  • Privacy is not necessary.
  • Responses are needed from a specific individual.

Advantages and disadvantages of the telephone survey include:

  • Costs include telephone charges (if any), interview time, some printing. Interviewers require special training in interviewing skills, purpose of the survey, etc.
  • Depending on the number of interviews, may take several days or weeks. (One interviewer can do approximately 15 interviews per week. Beyond that, boredom becomes a factor.)
  • Clear, simple questions are needed as the respondent can't "see" what's being asked. There is usually a high rate of response, although several calls may be needed.
  • Complete answers to all questions can usually be obtained, contributing to statistical accuracy. May be biased in favor of households with telephones.
  • Interviewer bias as well as questionnaire bias could be a factor.
  • Establishes a personal contact with the respondent and may make him / her feel important to be included.
  • One strong advantage of the telephone survey is that you can utilize trained volunteers to conduct it. As long as the questions are not personal in nature, a volunteer can readily gather information to follow-up the impact of a meeting. They can ask if a practice has been implemented as a result of a meeting, or quantify how much or how many in terms of production of a product.
  • If there are a number of long distance telephone exchanges in the county, volunteers in those communities can make the calls and cut down on the expense of the survey.

Guidelines For Implementing Telephone Surveys

  1. Make a list of program participants.
  2. Draw sample (if used).
  3. Develop questionnaire.
  4. Pilot test questionnaire and modify.
  5. Develop introduction
    • Identify caller by full name
    • Identify organization the caller represents. (CES)
    • Establish that the correct respondent is on the phone.
    • Assure confidentiality
  6. Duplicate questionnaires.
  7. Write telephone numbers on questionnaires.
  8. Train interviewers.
    • Consistent
    • Receptive
    • Nonjudgmental
    • Persistent
    • Knows what to say "if"
  9. Collect Data - Best times: between 4 and 9 p.m. weekdays, 10 a.m. and 4 p.m. Saturdays. (No Sundays.) Avoid popular sports events, blockbuster TV shows and other local events, including church and school activities.
  10. Summarize response rate: no answers, busy signals, wrong numbers, disconnected numbers, refusals, incomplete interviews, completed interviews.
  11. Code and tabulate questionnaires.
  12. Analyze and interpret data.
  13. Report the findings.

Make every question count.

(Adapted from: Sawer, Barbara. EVALUATING FOR ACCOUNTABILITY: A Practical Guide for the Inexperienced Evaluator. Oregon State University Extension Service, Corvallis, July 1984.)

Pre / Post Tests: Testing Information and Knowledge

One of the indicators of success of an educational program is the increase in information and knowledge of a particular subject matter. In other words, did the person learn anything? A distinction needs to be made between disseminating information and increasing knowledge. Information gain registers the success of the communication of facts and data; whereas, knowledge gain suggests that the person has understood the information and can utilize it in a problem-solving situation. The measurement of changes in information and knowledge can be accomplished through the use of tests or examinations.

Testing is an appropriate procedure for measuring knowledge, but it cannot be used for indicating beliefs, attitudes and behavior. Testing is based on the assumption that there are right and wrong answers on a subject.

Advantages

  • Can provide an indication of level of information and / or knowledge.
  • Can be used to indicate changes in information and knowledge related to a particular program.
  • Relatively easy to implement.
  • Can be carried out in a group setting, especially in classrooms.
  • Can test the accomplishment of certain learning objectives.

Disadvantages

  • Adults often resist attempts to test their knowledge.
  • Knowledge gain may be unrelated to behavior.
  • May measure information gain but not knowledge gain.
  • Changes in behavior may depend more upon attitudes and beliefs than on knowledge.

Methods

  1. Formal setting and method

    a. Construct written questions that test the subject matter given.

    b. Have each person respond to the questions.

    c. Grade the responses against the right answers.

    d. Summarize the degree of comprehension achieved in written form.

  2. Semi-formal method

    a. Construct a written list of questions for your use.

    b. Informally solicit responses to the questions in a discussion or verbal way.

    c. Phrase questions in a non-threatening way, for example:


    • What problems would you have doing your own project?

    • What is the easiest part to understand?

      d. Summarize your responses in writing.

      e. Be careful to elicit the full range of responses, especially if conducted in a group.

  3. Practical use of information / knowledge

    a. The test is the use of the newly acquired knowledge.

    b. An appropriate exercise needs to be selected to provide an adequate test.

    c. Observe the performance in conducting the task.

    d. Examine the final product.

    Examples:
    • Can the 4-Her give a good demonstration?
    • Can the homemaker use a pressure canner?
    • Can the person operate a computer?
    • Are committees used effectively?

Focus Group Interview

What is it?

The basic idea of a Focus Group Interview (FGI) is simple. A group discusses certain topics, commonly for one to two hours. The interviewer / leader / moderator raises various issues, focusing the discussion on matters of interest to the researcher according to an outline or general guide. Analysis of the gathered information attempts to discern patterns and trends that develop among participants, both within and across focus groups.

When Should the Focus Group Method Be Used?

  • When exploring what existing clients or potential clients think about a new proposal or program.
  • When examining the strengths / weaknesses of a proposed program.
  • When wondering if a new plan or program will work.
  • When wondering how a new program should be promoted.
  • When wondering how well a current program is working.

Focus Group interviewing uncovers information on human perceptions, feelings, opinions and thoughts. It is not effective for discovering technical solutions, but it is often forgotten that what seems to be a technical problem is really both a technical and a human problem.

Advantages

  1. Fast and relatively cheap.
  2. Great for generating hypotheses when little is known.
  3. Drastically reduces the distance between the respondent who produces information and the stakeholder who uses it.
  4. Flexibility.
  5. Great ability to handle contingencies.
  6. Group interview respondents stimulate dialogue among the group.
  7. Findings emerge in a form that most users fully understand.

Disadvantages

  1. Focus groups are very easily misused. They are easy to set up, but they require skill to moderate, and systematic, often tedious procedures are required to interpret the data.
  2. Projecting results to a wider group requires much caution. Focus groups are not intended to obtain numbers that represent a population.
  3. Groups can vary considerably. Therefore, many focus groups might be needed to balance the idiosyncrasies of individual sessions.

Steps In Conducting Focus Group Interviews

  1. Consider your purpose and begin by writing it down. Determine whom to study.
  2. Consider the information users: know who they are, what they want, and why they want the information.
  3. Develop a tentative plan and estimate resources needed. Make both a chronological and fiscal plan.
  4. Identify the questions to be asked in the interview. Identify potential questions. Highlight key questions. Establish the context of the questions. Arrange the questions in a logical sequence. Begin with unstructured questions; carefully use structured and semi-structured questions.
  5. There is an art to moderating the group interview. Moderators must be mentally alert and free from distractions.
  6. Small talk just before the focus group helps create a warm and friendly environment, puts the participants at ease and purposefully avoids the key issues to be discussed later.
  7. Individuals who talk a great deal and might dominate the discussion should be seated to the moderator's side; shy and quiet participants are best placed immediately across from the moderator.
  8. The location should be easy to find, relatively free from outside distractions, neutral and have tables and chairs arranged with participants facing each other.
  9. The moderator should direct and keep the discussion flowing, and take few notes. The notes of the moderator are not so much to capture the total interview, but to identify future questions that need to be asked.
    • Two essential techniques for the moderator are: (1) The Five Second Pause and (2) The Probe. The short pause often prompts additional points of view or agreement with previously mentioned positions, especially when coupled with eye contact from the moderator.
    • The probe is a request for additional information. Examples of probes are: "Would you give me an example of what you mean?"; "Would you say more?"; "Is there anything else?"; "Please describe what you mean."; or "I don't understand."
  10. Focus groups should be recorded in two ways: by an audio tape recorder and with written notes taken by an assistant moderator.
  11. The moderator must create a thoughtful, permissive atmosphere.

Tips For Your Focus Group Interview

Equipment / Tapes

  1. Use a pressure zone microphone (PZM) - Anything that can be accurately picked up by the human ear can be accurately picked up by the PZM. There is no need for concern over the angle between the sound source and the PZM, and the distance between the PZM and the sound source has no effect on the quality of sound reproduction.
  2. Use 90-minute tapes. Sixty-minute tapes are too short and 120-minute tapes are prone to break and jam. Before the session, use fast forward and rewind on new tapes to ensure that they do not stick or jam.

Systematic Notification Procedure

After meeting times for the focus groups are set, the following steps are in order.

  1. Approximately 10 to 14 days before the meeting, begin making telephone invitations to the session. It is best to over-recruit and later cancel some invitations if it appears too many will attend.
  2. One week before the meeting, send a personalized follow-up letter of invitation to those who have consented to participate.
  3. The day before the meeting, make another phone call to each person as a reminder of the session and ask about intentions to attend.

Optimum Moderator Characteristics

  1. Well-rested and alert for the focus group session.
  2. Prepared to give a standard introduction without referring to notes.
  3. Can remember questions / questioning route without referring to notes.
  4. Responsible for advance arrangement of the meeting room, extra tapes, batteries, and an extension cord.
  5. Avoids head nodding or other responsive body language.
  6. Avoids comments that signal approval.
  7. Avoids stating own opinions.

Duties of Assistant Moderator

  1. Maintains tape recorder.
  2. Takes detailed notes.
  3. Handles unexpected interruptions.
  4. Asks questions where important and relevant.
  5. Leads the analysis process.

Focus Group Questions

  1. All questions have a "stimulus" and a "response." The stimulus is the topic of discussion and the response provides clues to how people are expected to answer.
  2. An unstructured question is free of both stimulus and response. Semi-structured questions narrow the inquiry by limiting either the stimulus or the response.
  3. The "focus" of a focus group is achieved by careful use of unstructured to semi-structured questions.

What About Analysis

  1. Immediately following the focus group, the moderator and assistant moderator should check the tape to see that it worked and briefly identify the common experiences and perceptions that surfaced during the interview.
  2. Within 24 hours following the focus group meeting, and definitely before another focus group is conducted, the moderator and assistant moderator should review the tape, capturing comments using the questioning route as an outline.
  3. In the final summary, state the questions. Describe responses in one or two paragraphs. Add three to eight relevant quotations to illustrate. Following the quotations, include a paragraph interpreting the findings.


Sample Introduction

Good evening and welcome to our session. Thank you for taking the time to join our discussion of XXXXXXXXXXX. My name is XXXXXX and assisting me tonight is XXXXX. We are gathering information about XXXXXXXXXXXXX.

We have invited people with similar experiences to share their perceptions and ideas on this topic. You were selected because you have certain things in common that interest us. You are all XXXXXX. We are particularly interested in your views because you are representative of others in (the county, committee, etc.).

Tonight we will be discussing XXXXXXX. This includes _______________________.

There are no right or wrong answers, but rather differing points of view. Please feel free to share your point of view, even if it differs from what others have said.

Before we begin, let me remind you of some ground rules. (This is a research project and there are no sales involved. You will not be requested to volunteer or attend any future events or programs.) Please speak up, and let's have only one person speak at a time.

Mail Questionnaires

We each receive numerous surveys and questionnaires in the mail, and may answer some and toss others. What has been your personal reaction to some of these questionnaires? Keep these reactions in mind as you prepare questionnaires for collecting evidence of program impact, but don't automatically decide that mailed questionnaires are all bad.

Properly constructed, a mail questionnaire is a good tool for collecting accountability data. Individuals are generally more cooperative if the questionnaire (1) deals with something they're keenly interested in, (2) helps them review what they know or express what they feel, and (3) offers potential benefits to them - for example, gives them confidence that you'll really consider their opinions and use them to improve programs.

The major principles to consider when designing a questionnaire include question construction, the "look" of the questionnaire, questionnaire distribution and follow-up. A good book on survey design can help you with these details.

Question Design

No matter what kind of evaluation you are designing, question construction is the most important and most difficult step. You need to ask: what is the purpose of the evaluation, and what is the most important thing to ask? Know exactly what you want and how you are going to use it. Also consider your respondents' viewpoint. How well do you know the people you are sending the questionnaire to, and what will they think of the questions being asked? Will they consider it an invasion of privacy? From your original ideas, eliminate the "nice to knows," the inappropriate items and the ambiguous items.

  • Make sure questions can only be interpreted one way.
  • Avoid indicating type of answer you expect to get. Ex: "Most farmers use X, do you?" or "You use a shopping list, don't you?"
  • Don't back a respondent into a corner; give the choice of saying "I don't know" or "I can't say."
  • Check for consistency in answers when it's logical.
  • Make the questionnaire easy to tabulate.
  • Keep the survey small.

You can make your questionnaire easier to answer by doing the following:

  • Make directions clear and concise.
  • Cut verbiage to the bare bones.
  • Use format that's easy to read and answer.
  • Get right to the topic of the questionnaire.
  • Make the first questions easy and unprovoking.
  • Organize items in a logical manner. (Both in content and question format.)
  • Limit the number of questions requiring a written response.

Always pre-test the questionnaire - use your colleagues as well as several people who are as much like your respondents as possible. Have them react to the questionnaire itself, as well as provide answers that you can examine. Also find out how long it took to complete each pre-test.

The pre-test can also provide you with some data for tabulation, analysis, and interpretation. Is the questionnaire really going to provide you with the information you need, or should changes be made?

One of the limiting factors to mail questionnaires has always been the response rate. A specific technique for design of the questionnaire and cover letter, mailing and follow-up has been developed by Don Dillman of Washington State University. Use of his technique has produced an average response rate of 74% in 48 different surveys.

Mail questionnaires can be costly, particularly if you are sampling a large group of program participants. By the time you add in postage for the first mailing, a stamped return envelope (maybe two), a follow-up postcard reminder, and a second and third reminder with enclosures, you have spent a minimum of $2.00 - $3.00 per person. However, if you only do one good evaluation a year with impact data, this is still worth the investment.
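The budget arithmetic above is easy to put in a short script. This sketch uses the handbook's $2.00 - $3.00 per-person figure and the 74% average response rate reported for the Dillman technique; the sample size of 200 is an illustrative assumption:

```python
# Rough mail-survey budget, using the handbook's estimate of $2.00-$3.00
# per person for postage, return envelopes, and reminder mailings.
sample_size = 200                # illustrative assumption
cost_low, cost_high = 2.00, 3.00
expected_response_rate = 0.74    # average reported for the Dillman technique

low, high = sample_size * cost_low, sample_size * cost_high
expected_returns = round(sample_size * expected_response_rate)

print(f"Budget range: ${low:.2f} - ${high:.2f}")
print(f"Expected completed questionnaires: {expected_returns}")
```

Running the numbers before mailing makes it easier to decide whether a sample, rather than the full participant list, will give you enough completed questionnaires for the cost.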

End-of-Meeting Questionnaires

The end-of-meeting questionnaire is most frequently used to collect evidence of participants' reactions to activities. The evidence collected can be used to determine if you are on the right track with subject matter content, and if you have adequately thought through the conditions of the environment for good teaching and learning. It can also be used for program planning purposes to determine content for future meetings.

The end-of-meeting questionnaire should be short, one to two pages maximum, and include some close-ended questions with ordered choice answers, and some open-ended questions. The first question should focus on the most important thing you want to know. Did the activity meet the expectations of the clientele? Was the subject matter adequate in quantity and quality? What changes could be made for future meetings? What additional information would clientele like? Was the activity organized in a logical manner? Did the environment take into consideration the physical needs of the learners?

If you set up an end-of-meeting questionnaire that you like, you can use it for a number of activities in a program or subject area and aggregate the results. By using the same instrument, you can also determine why clientele liked one program but not another.

Question Construction

Designing questions that are understood the same way by everyone is probably the most difficult task when developing a questionnaire. For an end-of-meeting questionnaire, you probably only want a few open-ended questions and close-ended questions with ordered choice answers.

Open-Ended Questions

Open-ended questions stimulate free thought, solicit creative suggestions, or recall information learned. They can be a source of meaningful quotations to use in reporting. They do require time, thought and effort on the part of clientele to answer, and tabulating and analyzing responses can be difficult.

Close-Ended Questions

Close-ended questions with ordered choice answers are not as demanding and are easier to tabulate and analyze. The response categories are usually intended to measure degree or intensity and are part of a gradient or continuum. Be sure the categories reflect balance, i.e., the same number of positive as negative choices, and measure the same thing. You need to decide whether or not to include a neutral or middle position such as "uncertain." Instead of descriptive word choices for responding, you can develop a numerical continuum. Five to seven positions in the continuum are usually enough choices.
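Tabulating a close-ended question with ordered choices is straightforward. A minimal sketch, assuming a balanced five-point scale with a neutral midpoint; the category labels and responses below are made up for illustration:

```python
from collections import Counter

# Hypothetical responses to one close-ended question on a balanced
# five-point scale (3 is the neutral midpoint).
scale = {1: "strongly disagree", 2: "disagree", 3: "uncertain",
         4: "agree", 5: "strongly agree"}
responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]

counts = Counter(responses)
mean = sum(responses) / len(responses)

# Print a count for every scale point, including any with zero responses.
for point in sorted(scale):
    print(f"{point} ({scale[point]}): {counts.get(point, 0)}")
print(f"Mean rating: {mean:.1f}")
```

Reporting the full distribution alongside the mean shows whether responses cluster at one end or split between the extremes, which a mean alone can hide.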


If you have a group of over 30 people, you can randomly give the questionnaire to people when they arrive and tell them that they are one of a select group being asked to respond, to encourage them to complete the questionnaire. If questionnaires are put in program packets, participants need to be reminded to complete and turn them in at the end of the meeting. For convenience, have a box or two at the meeting room door to put the questionnaires in, or instruct participants to leave them in the middle of tables. Examples of questions you can use in end-of-meeting questionnaires follow.

Sample Questions

  1. Overall, do you feel this meeting contributed to the improvement of your skills (or knowledge) in (Subject)?

    4. NOT AT ALL
  2. In terms of learning more about ____________, do you feel this meeting was:

  3. What did you like best about the workshop?

  4. What did you like least about the workshop?

  5. What main idea did you learn or relearn during the workshop?

  6. I'd like to learn more about_________________

  7. Rate today's meeting for each of the following items. (Circle the appropriate number)

Physical arrangements 1 2 3 4 5

Orientation 1 2 3 4 5

Group atmosphere 1 2 3 4 5

Group participation 1 2 3 4 5

Choice of methods 1 2 3 4 5

  8. Did you feel that the length of the meeting was: - TOO LONG - ABOUT RIGHT - TOO SHORT

  9. Did you feel that the content of the subject matter was: - TOO ELEMENTARY - ABOUT RIGHT - TOO COMPLICATED

  10. How satisfied were you with today's meeting?
    3. UNCERTAIN

Tips For Conducting Questionnaires

  • Use the same scale for all questions; people find it easier to respond.
  • Scales with five or more points allow more insights into participants' responses than yes or no answers.
  • When rating people, especially if the purpose is to help people improve, words are less threatening and more encouraging than numbers.
  • Put questions about age, gender, and income at the end. A first question needs to be more interesting than "How old are you?"
  • Take out any unneeded questions and shorten, shorten, shorten.
  • If a question is asking about two things, divide it into two questions.
  • Choose words or phrases with very clear meanings - for example, "Did you increase your net income?" instead of "Did you make more money?"