Analysts may develop a series of questions for which they seek answers (or opinions) from participants before the game, as it unfolds, at the conclusion of a particular scenario or vignette, or at the conclusion of the whole process.
Substantial guidance already exists for the development of questionnaires. Of course, each questionnaire needs to be tailored to its purpose, so it is difficult to give guidance for developing a specific one.
Multiple-choice questions can be used to reduce the time required to complete a questionnaire. However, given that games often steer into unanticipated areas, many such questions can become irrelevant soon after the game has begun. For example, before gaming begins the Study Team may anticipate that logistics will be a significant issue and include many multiple-choice questions on it. But as the game and discussion unfold, it may turn out that logistics is only a passing concern for most players. At this point the questions or choices may seem almost nonsensical in the context of what has transpired in the game. If players answer such questions anyway, their replies may be contradictory and may even seem whimsical -- and not at all what the analyst expected.
For many aspects of gaming, open-ended questions that are framed in very general terms may be best. If the players (and other respondents) are small in number, the efficiencies usually associated with multiple-choice questions may not pertain in any case. So free-text fields may be preferred to give respondents maximum latitude in framing their responses.
A form of questionnaire that has been quite successful in games is one that captures some contextual information by tick box or alpha-numeric code, and then has free-format text boxes for observation and recommendation (see top boxes in the figure).
The contextual information may be demographic information on the respondent (rank, military specialty, experience). Or it could specify which scenario, what staff branch (e.g., G1 through G6), and so on. In this example, from Canadian Army Experiment 6B on brigade-level command and control, the observer could put in reference codes for such elements as 'Critical Information Requirements' (CIR); 'Tactics, Techniques and Procedures' (TTP); and 'Unit Standing Operating Procedures' (USOP). Codes for all of these had been specified prior to the start of the game. If the respondent had an observation to make on some specific Unit Standing Operating Procedure, say on the use of the C2 system to call for air support, he or she had only to put the relevant paragraph number of the SOP in the grey area at the top and then write out an observation (what is the problem or issue?) and the recommendation (what do you think should change?).
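The coded-context-plus-free-text form described above lends itself to simple automated collation. A minimal sketch, with invented field names and example entries (the actual codes and sheet layout were game-specific):

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record mirroring the observer data sheet described above:
# coded context fields plus free-text observation and recommendation.
@dataclass
class Observation:
    ref_code: str        # e.g. "CIR", "TTP", "USOP" -- codes fixed before the game
    paragraph: str       # e.g. the SOP paragraph number, if applicable
    observation: str     # what is the problem or issue?
    recommendation: str  # what do you think should change?

def group_by_code(records):
    """Collate free-text entries under their pre-assigned reference codes."""
    grouped = defaultdict(list)
    for r in records:
        grouped[r.ref_code].append(r)
    return grouped

# Invented example entries for illustration only.
records = [
    Observation("USOP", "4.2", "C2 system slow for air support requests",
                "Streamline the request format"),
    Observation("TTP", "", "Handover between shifts loses context",
                "Introduce a written handover checklist"),
    Observation("USOP", "4.7", "Duplicate logging of air tasking",
                "Assign a single log owner per shift"),
]
by_code = group_by_code(records)
print(sorted(by_code))       # ['TTP', 'USOP']
print(len(by_code["USOP"]))  # 2
```

Because the codes were fixed before the game, every free-text entry arrives already sorted into an analysis category, which is what makes this form of questionnaire fast to process afterwards.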
Electronic Questionnaires and Surveys. Questionnaires and surveys can be administered over a network using facilities like SurveyMonkey or LimeSurvey. Such tools make it easy to develop and administer elegant and effective surveys. LimeSurvey is attractive when the game is being conducted inside a company or government firewall, or on a local network that is not connected to the Internet; it is an open-source software package that can be installed on the interior network, where it has no contact with the Internet. A potential downside of using only electronic surveys is that each respondent needs an email account and a web browser: they receive the questionnaire by email and go to a web page to respond. This means that common arrangements in something like a command post exercise, which may have shared accounts (e.g., for shift workers in an ops room), may be a problem. Also, some vital respondents can be missed if they are not issued email accounts for the game network, e.g., high-ranking visitors or support personnel who have no need for a personal email account within the game (other than to respond to the survey).
Questionnaires go directly to the source: the participants provide the information.
Some forms of questionnaire can be quickly analyzed, meaning results may be ready by the time a "quick look report" is needed (Step 13).
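The kind of rapid tally that makes multiple-choice results available in time for a quick-look report can be sketched as follows (question identifiers and options are invented for illustration):

```python
from collections import Counter

def tally(responses):
    """Count how often each option was chosen, per question."""
    counts = {}
    for r in responses:
        for question, choice in r.items():
            counts.setdefault(question, Counter())[choice] += 1
    return counts

# Hypothetical responses: one dict per respondent,
# mapping question id to the chosen option.
responses = [
    {"Q1": "a", "Q2": "c"},
    {"Q1": "b", "Q2": "c"},
    {"Q1": "a", "Q2": "d"},
]

t = tally(responses)
print(t["Q1"]["a"])  # 2 respondents chose option 'a' for Q1
print(t["Q2"]["c"])  # 2 respondents chose option 'c' for Q2
```

Free-text responses, by contrast, require human reading and coding, which is one reason they rarely feed a quick-look report directly.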
Some common problems include:
Getting direct observations from participants has considerable value. There are many modalities for this. The "Observer Data Sheet" above is one means. Another is to have a facilitator conduct a brainstorming session with a flip chart or whiteboard.
It may be appealing to designate participants whose sole role during the game (Step 12) is to observe the conduct of the game and record their findings. However, some participants are likely to look on these observers with some suspicion -- "are they here to evaluate us?"
If there is an independent group of observers, players can devote all of their energy to the game itself.
When interviewing game participants, two objectives should be adopted: to obtain the subject's special knowledge about the topic, and to obtain the subject's opinion about the topic. The interviewer should remain aware of which objective is the main focus at any time. Sometimes it will be best to draw out special knowledge and, after covering that ground, ask for opinions. Other times an interviewee may express an opinion and should then be obliged to provide the special knowledge that led to it.
A report consisting only of opinions will rarely be useful. A report consisting only of special knowledge may lack the conclusions or hypotheses that incorporate the subject's opinions into a useful product.
Be prepared. The interviewer should have a basic knowledge of the subject, including popular jargon terms. A lack of basic knowledge has several disadvantages. First, the interviewee may not be at ease if constantly interrupted to explain basic issues. Second, the interviewer's credibility may decline, and with it the interviewee's willingness to open up.
The news reporter's five W's (who, what, when, where, why), with frequent support from "how", may suffice as a framework for many interviews. Of course, the interviewer will want to couch the actual questions in more eloquent terms than simply and repeatedly asking: "But why?"
Have a list of questions prepared in advance. It seems obvious, but some people don't think of it. While you should be prepared to improvise and adapt, it makes sense to have a firm list of questions which need to be asked. However, not all questions on the list will have to be asked in every interview -- it will depend on context.
Providing a subject with a list of questions in advance may help them prepare. In interviewing game participants, we generally are not trying to trick interviewees by asking unexpected questions. However it is perfectly acceptable to ask about failures or other potentially embarrassing points -- as long as the discussion remains professional and the objective is to have others avoid similar mistakes in future.
Whether providing the text of the questions in advance is a good idea depends on the situation. For example, if you will be asking technical questions that might need a researched answer, it helps to give the subject some warning. On the other hand, if you are looking for spontaneous answers, it may be best to wait until the interview to put the question to the subject.
Try to avoid being restricted to a preset list of questions as this could inhibit the interviewer from improvising on some emergent topic. However, if you do agree to such a list before the interview (say, for consistency between subjects), stick to it.
Ask the subject if there are any particular questions he or she would like you to ask them. Use the subject as a collaborator; they may have a topic not on your list of questions but that is critical to the game. Then allow them to answer their own proposed question (and consider using it for other subjects too).
Listen. A common mistake is to be thinking about the next question while the subject is answering the current one, to the point that the interviewer misses some important information.
Interviews can be more free-ranging than a questionnaire set in advance; the topics can be improvised in response to the game results.
Bias of interviewer. If an interviewer has biases, these may take the interview in inappropriate directions. They may also result in important topics being missed (if the interviewer thinks he or she already knows the right answer). Many biases are inadvertent, and the interviewer may be unaware they are affecting the quality of the interview; training and experience with interview techniques should help overcome this.
Dishonesty from subject. Sometimes the subject will provide a dishonest response. When this happens, it is not likely to be malicious. Rather the subject may think the answer is true, although, for example, it could really be some myth from his or her culture. Or, a subject might give the interviewer an answer that the subject thinks the interviewer wants to hear ("no harm done if I am just trying to be nice").
Failure to pursue critical issues. From time to time an interviewer may not tackle critical issues to the extent they need to be covered. For example, the interviewer might defer to rank -- "If a general officer has made a blunder, who am I to second-guess him?"
Failure to record properly or promptly. Interviews should be recorded in real time if possible. If this is not possible (say due to trying to maintain spontaneity with the subject), the interviewer needs to record the notes from the interview immediately upon conclusion. Having a team of two conduct the interview -- interviewer and scribe -- is also valuable.
Intrusion of electronic means. If an interview is recorded, the recording means (audio or video) may become a distraction. For example, a subject may be intimidated -- e.g., some subjects may be unwilling to cover some topic knowing that a recording could come back to haunt them. If the equipment needs attention -- sound checks, battery replacement -- this could interrupt the natural flow of the interview discussion.
AARs have been effective in both military and civilian applications. AARs are being used in civilian organizations, and the procedures that have developed codify the best practices of their military counterparts. The primary purpose of an AAR, however, is to review with "trainees" what they have learned from the game, so feedback in the context of an OR study should never be limited to an AAR.
After Action Reviews have come to be associated with following up on training or on an actual operation, with a view to improving in the future. However, as a familiar means of reviewing some activity, they are of considerable benefit in the analysis of a game.
Some participants may use an AAR as a soapbox for favorite topics. Rather than focusing on the findings of the game, they may go off on tangents into issues that were never part of the game itself.
The Study Team may fail to record the AAR material (including critical aspects of context). In AARs conducted for training, it may suffice for the participants to depart with a good understanding in their own minds of what transpired, with no other record of the proceedings. In a game within an OR study, however, everything needs to be recorded. Simply having the players understand what happened is not enough; that knowledge will disperse when the players disperse.