Midwives magazine: May 2005
This second article in a two-part series focuses on designing and conducting successful questionnaire surveys in maternity care. Different aspects of this method are discussed, including planning, design, layout and other practical issues to be considered at the start of any survey.
Decide on information to collect
After ensuring that your research question is answerable (Hundley and van Teijlingen, 2002a), the next crucial stage in planning your questionnaire is to decide what information you want to collect to answer that question. Start by finding out what has already been written – a literature review should help you to develop the aims and objectives of your study. It should also help to identify existing questionnaires – maternity research has a wealth of questionnaires that are readily available. For advice on systematic searching of the literature, see Bruce and Mollison (2004).
It is common practice to use existing instruments, as long as they suit your purpose, can be understood by the study population and you acknowledge that you have done so. A number of validated questionnaires already exist: for example, the SF-36 and EuroQol questionnaires are used widely in healthcare research, while more specific maternity questionnaires include the survey manual by Mason (1989) and national audit tools (Audit Commission and Institute of Child Health, 1996; Hundley et al, 2000). A useful resource folder was published by the College of Health (Craig, 1998). In most cases it is not necessary to use the whole tool; part of an existing questionnaire may suffice (Punch, 2003). An important reason for using previously validated, standard instruments such as the SF-36 is that they provide information about the study sample that can be compared with other groups, giving an idea of the representativeness of the sample (Frank-Stromborg and Olsen, 2004). You could also consider contacting researchers who work in your area. Most will be happy to share the results of their work, and may be willing to provide you with their questionnaires. Researchers’ contact details are usually included on published papers.
It might also be worth asking people on an electronic discussion list whether anybody is aware of an existing questionnaire in your area of research. Useful email addresses for this purpose are: email@example.com and firstname.lastname@example.org (more sociology focused). Another useful resource is the International Confederation of Midwives’ research advisory network at: www.internationalmidwives.org/index.php?module=pnAddressBook. If researching a topic about which very little is known, you may wish to conduct some focus groups with a small number of the intended respondents beforehand.
This process should help to identify relevant lines of enquiry, and appropriate language for use in the questionnaire. It may also help to alert you to any possible confounding factors that could invalidate the results of your study. For example, maternal height and birthweight are related, but a confounding factor that has an influence on both birthweight and maternal height would be social class.
Select key topics
It is important to remember that the study aim and objectives should determine what questions are used, and questions should not be included just because they appear to be interesting. Compiling a list of items should help determine which to include and which to leave out. It may also help to think about how the results will be presented at the end of the study. Think in terms of background, knowledge, attitude and behaviour questions. Background refers to personal characteristics of the respondents, for example, age, gender, social class, ethnicity and parity. Knowledge, attitude and behavioural questions need to be clearly separated, as they can sometimes be contradictory. For example, some pregnant women may continue to smoke during pregnancy, despite indicating they would like to give up and knowing that smoking can harm their health, because they find it difficult to behave differently from their friends and family.
Design individual questions
Having decided what issues to cover, you must consider the actual questions to be used in the questionnaire. Closed or multiple-choice questions offer the respondent a set of predetermined responses. The advantages of closed questions are that they are more likely to be completed by the respondent, as they require less effort to answer; they are easy to code and analyse; and they give everyone the same response options. However, they restrict the choice of options, and it is common for respondents to pick neutral options. Respondents can also miss out some options when completing the questionnaire, or tick more than the one option asked for. Open-ended questions encourage free-text comments from respondents. They are good to use in pilot studies to identify key issues. However, responses to open questions are more difficult to code and quantify, and the results are harder to analyse. Respondents are also less likely to complete open-ended questions, because they are more time-consuming.
Most questionnaires use a mixture of open and closed questions. Rating scales are used to explore attitudes, opinions and beliefs. Rating scales (known as Likert-type scales) generally ask the participant to indicate their strength of feeling about an issue (see Box 2). Rating scales are challenging to use for a number of reasons. Bowling (2000) argues that respondents tend to avoid extreme categories, and instead pick responses that reflect the middle ground – the so-called error of central tendency. Therefore, if rating scales are used, the number of scale points to include must be decided. An odd number gives a neutral category or midpoint, while an even number forces a direction of attitude. An alternative is to use ranking responses, which are considered a slightly more skilful way of getting people to express opinions. Ranking is based on the idea of getting respondents to order their preferences from a list of items (see Box 3).
The major advantage of this technique is that it gives a better idea of the relative qualities the respondents attribute to each item. However, respondents can find ranking questions difficult to complete. Lastly, visual analogue or ordinal scales are widely used in health care to assess pain, mood and functional capacity. The basic feature of this type of scale is that it has clearly defined and fixed end-points (see Box 4). However, problems can arise when using this type of question. In a trial of a midwife-managed delivery unit, Hundley et al (1995) asked midwives to rate their satisfaction with providing care using an ordinal scale where zero was: ‘Thoroughly unsatisfactory, nothing good to be said about it’ and ten was: ‘An absolutely wonderful experience that could not have been better.’ They found that midwives in the midwife-managed arm of the study had a higher mean satisfaction score (7.69) than those in the labour ward arm (7.52). Although this difference of 0.17 on the scale was statistically significant (p < 0.01), it was meaningless in real terms and had no clinical significance.
Decide on wording
Try to keep the questionnaire short and concise, and use language that respondents will recognise and feel comfortable with. Avoid using ambiguous terms (for example: ‘Did you feel down or depressed after the baby was born?’), jargon (for example, gestation), acronyms (such as VBAC), abbreviations or technical terms, which are often used in healthcare circles (Oppenheim, 1992). Be aware that people can be put off completing the questionnaire by questions that reflect values or assumptions that may offend them (Boynton et al, 2004). It is also important to avoid asking about two issues within the same question, for example: ‘Would you like to speak to someone about contraception before you have your baby, and would you like to speak about it with your midwife?’.
The layout and general appearance of the questionnaire play a vital role in determining whether or not a potential respondent will fill it out (McColl et al, 2001; Boynton, 2004). It is generally considered best to place neutral questions at the start, keeping more sensitive questions towards the end. At the same time, it is good practice to adopt the so-called ‘funnel approach’ to question presentation, with general questions placed ahead of any specific ones. Respondents should also be directed around irrelevant questions through the use of filter questions. For example, women having a first baby should be told to skip questions about previous experience of childbirth (see Box 5). You also need to think about the visual appearance of the questionnaire: response rates can be enhanced if the questionnaire does not appear too crowded (McColl et al, 2001).
Make sure that the tick boxes are in the right place and consistently spaced (down and along the page) to help give the appearance of a tidy instrument. It is also essential to think about those with visual impairment. It is important to think about how the responses will be analysed while still at the design stage. Will they be analysed by hand (this may be appropriate for small numbers of questionnaires) or using a statistical package? There are a number of specialist statistical software packages available, for example the Statistical Package for the Social Sciences (SPSS) or Epi Info (used for handling epidemiological data and freely available to download from the internet). On the other hand, if the questionnaire is relatively straightforward and you do not think complex analyses are required, Microsoft Office software such as Access or Excel can be used.
Having decided which system to use, it is necessary to think about pre-coding the questionnaire; this is essential not only for analysis, but also for data entry. Coding of closed questions and rating scales involves listing all possible responses and giving each one a numerical value. It is important to include an option for missing responses. For example, in Box 2 the response ‘strongly agree’ would be coded as ‘one’, the response ‘agree’ as ‘two’, and so on through to ‘strongly disagree’, coded as ‘five’. If the respondent did not answer the question at all, then it is normal to enter a code of ‘nine’.
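If the data are to be entered electronically, a coding scheme like this can be written down once and applied consistently. The sketch below is purely illustrative (the function name, the exact response wording and the choice of Python are our assumptions, not part of any published coding frame):

```python
# Illustrative pre-coding of a five-point Likert item.
# Codes 1-5 follow the scheme described in the text; 9 marks a missing answer.
LIKERT_CODES = {
    "strongly agree": 1,
    "agree": 2,
    "neither agree nor disagree": 3,
    "disagree": 4,
    "strongly disagree": 5,
}
MISSING = 9  # conventional code for an unanswered question

def code_response(answer):
    """Return the numeric code for a ticked option, or 9 if unanswered."""
    if answer is None:
        return MISSING
    return LIKERT_CODES[answer.strip().lower()]

responses = ["Agree", None, "strongly disagree"]
print([code_response(r) for r in responses])  # [2, 9, 5]
```

Writing the scheme down in one place like this also makes it easy for a second person to enter data in exactly the same way.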
You could also think about pre-coding open-ended responses by establishing some meaningful categories identified through the literature review, pilot study or qualitative research. For example, other responses to the question in Box 3 might include:
The discussion about pain relief
Knowing that an epidural service was available
Hearing that I could ask for an epidural if I wanted
Seeing that gas and air were available in the delivery room.
Rather than have a separate category for each of these, you might choose to code them together under one category: ‘information about options for pain relief’. Some questions may elicit more than one answer, and each answer needs to be coded separately. For more details on coding, Hardy and Bryman (2004) is recommended.
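The grouping described above can be implemented as a simple look-up at data entry. In this sketch the category code ‘4’, the helper names and the use of Python are hypothetical, invented purely to illustrate the idea:

```python
# Hypothetical coding frame: several verbatim open-ended answers collapse
# into one pre-coded category, and each answer a respondent gives is coded
# separately.
CATEGORY_CODES = {
    "information about options for pain relief": 4,  # code 4 is illustrative
}
VERBATIM_TO_CATEGORY = {
    "the discussion about pain relief": "information about options for pain relief",
    "knowing that an epidural service was available": "information about options for pain relief",
    "hearing that i could ask for an epidural if i wanted": "information about options for pain relief",
    "seeing that gas and air were available in the delivery room": "information about options for pain relief",
}

def code_open_response(text):
    """Map one free-text answer to its numeric category code (or None)."""
    category = VERBATIM_TO_CATEGORY.get(text.strip().lower().rstrip("."))
    return CATEGORY_CODES.get(category)

# A respondent who gave two answers gets two separate codes:
answers = ["The discussion about pain relief",
           "Seeing that gas and air were available in the delivery room."]
print([code_open_response(a) for a in answers])  # [4, 4]
```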
Piloting is a crucial stage in the preparation of your questionnaire (Hundley and van Teijlingen, 2002b). Despite paying close attention to the design of the questionnaire, there will invariably be flaws that are only highlighted by a pilot study. It is important to use a pilot group as similar as possible to the main study population, and to make the necessary changes following the pilot, altering the questions accordingly.
Validity and reliability
In order to be scientifically sound, a questionnaire needs to be valid and reliable (Bowling, 1997; Boynton and Greenhalgh, 2004). Validity is usually defined as the extent to which a questionnaire measures what it intends to measure. For example, does the questionnaire measure women’s attitudes towards pain relief, or does it really measure their fear of hospitals? Reliability refers to the consistency of the questionnaire: if you use the same questionnaire on the same people, do you get the same answers? One way of assessing reproducibility is test-retest – using the same questionnaire twice on the same population. If smoking prevalence in 400 pregnant women was 14.7% at the time of the first questionnaire survey, and 35.8% three weeks later in the same group, one would question the quality of the questionnaire. If an existing questionnaire is used but with some changes to the questions, then its validity is potentially compromised. In such cases it should be re-piloted to make sure that the questionnaire as a whole has retained its internal validity.
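A crude first look at test-retest reliability is simply the percentage of respondents who give the same answer on both occasions. The sketch below illustrates this; the data, the function name and the use of Python are invented for illustration, and a real study would use a formal statistic such as Cohen's kappa:

```python
# Illustrative test-retest check: compare answers from the same
# respondents at two administrations of the same question.
def percent_agreement(first, second):
    """Share (%) of respondents giving the same answer both times."""
    if len(first) != len(second):
        raise ValueError("Both administrations must cover the same respondents")
    same = sum(a == b for a, b in zip(first, second))
    return 100.0 * same / len(first)

# 1 = currently smoking, 0 = not smoking, for five women asked twice
time_1 = [1, 0, 0, 1, 0]
time_2 = [1, 0, 1, 1, 0]
print(percent_agreement(time_1, time_2))  # 80.0
```

A very low agreement, like the 14.7% versus 35.8% prevalence example above, would suggest the question is not being understood consistently.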
Other things to think about
This final section covers more practical aspects associated with questionnaire design. A crucial factor that often determines whether or not a survey is conducted is having the necessary resources to undertake it. Large postal questionnaire surveys may be expensive: there can be substantial financial costs associated with photocopying, postage and stationery, and time is required for tasks such as constructing mailing lists, mail merges and stuffing envelopes. Once the questionnaires are returned, the data are normally entered either by hand or by optical scanning directly into a computer database for analysis. Both options take time and money. It is normal practice to send at least one reminder to increase response rates, and this will have an impact on the time needed to conduct the survey. Telephone and internet-based surveys need some specific considerations regarding the questionnaire (Brindle et al, 2005).
The questionnaire for a telephone interview needs to be designed in such a way that the questions are clear and unambiguous to the respondent on the other end of the line. Internet-based questionnaires, used with appropriate groups (that is, those with access to a computer and with computer-literacy skills), appear to generate data of similar quality to mailed questionnaires, while requiring slightly less follow-up (Ritter et al, 2004). However, most first-time researchers would need help with the technical design. In order to send out reminders, it is necessary to know who has responded to the questionnaire. It is possible to keep track of respondents by assigning each questionnaire a unique study number. This will enable you to determine whether those who did not respond were different from those who did. In addition, by only sending a reminder to those who have yet to respond, you avoid irritating the respondents who have already completed the questionnaire, as well as helping to contain costs.
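If the study numbers are held electronically, identifying who still needs a reminder is a simple set difference between those mailed and those returned. The study numbers below are invented purely for illustration:

```python
# Illustrative reminder tracking using unique study numbers.
mailed = {101, 102, 103, 104, 105}    # study numbers sent out
returned = {101, 104}                  # study numbers of completed returns

# Only these respondents should receive a reminder:
non_responders = sorted(mailed - returned)
print(non_responders)  # [102, 103, 105]
```

Keeping the study-number list separate from the questionnaires themselves also helps preserve the anonymity promised in the covering letter.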
You will need to create a covering letter to provide concise details about the study (Brindle et al, 2005). This letter is crucial for a number of reasons. It will play a key role in ‘selling’ the questionnaire to potential respondents, but it will also act to communicate the credibility of your survey to the respondents. It should provide information about the agency responsible for the study (the hospital) and contact details of the researchers for the respondents. Also, it should explain how their contact details were obtained and provide respondents with assurances about confidentiality and anonymity.
It is important to give respondents reasons why their information is valued, to tell them how long the questionnaire is likely to take to complete, and to thank them for participating! You should also always ensure that postage is paid, either by using reply-paid or stamped addressed envelopes. It is worth noting that the use of real stamps has been found to increase response rates, possibly because respondents feel an obligation not to waste a stamp that has already been paid for (Moser and Kalton, 1971).
Stages of questionnaire design (Stone, 1993)
Decide what information you need to collect
Find out if it has been done before
Select a list of items to be collected
Design individual questions
Decide on wording
Prepare first draft
Flora Douglas is a lecturer
Edwin van Teijlingen is a reader, Dugald Baird Centre for Research in Women’s Health, University of Aberdeen
Steve Brindle is a teaching fellow
Vanora Hundley is an honorary senior lecturer, Department of Nursing and Midwifery, University of Stirling
Julie Bruce is an MRC research fellow
Nicola Torrance is a research fellow
References
Audit Commission and Institute of Child Health. (1996) Maternity care survey. Audit Commission and Institute of Child Health: Bristol.
Bowling A. (2000) Research methods in health. Open University Press: Buckingham.
Boynton PM. (2004) Administering, analysing and reporting on your questionnaire. British Medical Journal 328(7452): 1372-5.
Boynton PM, Greenhalgh T. (2004) Selecting, designing and developing your questionnaire. British Medical Journal 328(7451): 1312-5.
Boynton PM, Wood GW, Greenhalgh T. (2004) Reaching beyond the white middle classes. British Medical Journal 328(7453): 1433-6.
Brindle S, Douglas F, van Teijlingen E, Hundley V. (2005) Midwifery research: questionnaire surveys. RCM Midwives Journal 8(4): 156-8.
Bruce J, Mollison J. (2004) Reviewing the literature: adopting a systematic approach. Journal of Family Planning and Reproductive Health Care 30: 13-6.
Craig G. (1998) Women’s views count: building responsive maternity services. College of Health: London.
Frank-Stromborg M, Olsen S. (Eds.). (2004) Instruments for clinical healthcare research (third edition). Jones and Bartlett: Boston.
Hardy M, Bryman A. (2004) Handbook of data analysis. Sage: London.
Hundley V, Cruickshank FM, Milne JM, Glazener CM, Lang GD, Turner M, Blyth D, Mollison J. (1995) Satisfaction and continuity of care: staff views of care in a midwife-managed delivery unit. Midwifery 11(4): 163-73.
Hundley V, Rennie AM, Fitzmaurice A, Graham W, van Teijlingen E, Penney G. (2000) A national survey of women’s views of their maternity care in Scotland. Midwifery 16(4): 303-13.
Hundley V, van Teijlingen E. (2002a) Getting started in research. RCM Midwives Journal 5(10): 328-30.
Hundley V, van Teijlingen E. (2002b) The role of pilot studies in research. RCM Midwives Journal 5(11): 372-4.
Mason V. (1989) Women’s experiences of maternity care: a survey manual. Office of Population, Censuses and Surveys: London.
McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, Thomas R, Harvey E, Garratt A, Bond J. (2001) Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. See: www.ncchta.org/fullmono/mon531.pdf (accessed March 2005).
Moser CA, Kalton G. (1971) Survey methods in social investigation. Dartmouth Publishing Company Ltd: Aldershot.
Oppenheim AN. (1992) Questionnaire design, interviewing and attitude measurement. Continuum: London.
Punch KF. (2003) Survey research: the basics. Sage: London.
Ritter P, Lorig K, Laurent D, Matthews K. (2004) Internet versus mailed questionnaires: a randomised comparison. See: www.jmir.org/2004/3/e29 (accessed March 2005).
Stone DH. (1993) Design a questionnaire. British Medical Journal 307(6914): 1264-6.