A great deal of midwifery research is done using questionnaires. In the first of two papers, this method of gaining information is evaluated and various questionnaire formats are discussed.
Midwives magazine: April 2005
We offer two research papers on questionnaire surveys, probably the most popular method of investigation used in midwifery research. This first paper highlights the strengths and weaknesses of survey research. The second focuses on questionnaire design. Questionnaires typically ask questions based on a number of variables. These surveys frequently focus on individuals, but may also cover families, general practices or maternity hospitals.
Why use surveys?
Surveys are fairly cheap and fast, and they can incorporate a large sample (Punch, 2003). This is because they can be completed quickly and there is no need to employ interviewers. They can also be standardised, which means they can be repeated elsewhere or in the same population at a later date, allowing the researcher to make comparisons. For example, questionnaires have been used to compare women's experiences of maternity care in different areas of England and Wales (Audit Commission and Institute of Child Health, 1996) and within Scotland (Hundley et al, 2000), and to compare the preferences of women who have access to different types of maternity service (Hundley et al, 2001). Furthermore, because a large sample can be collected via survey research, generalisations about the wider population can be made (Bryman, 2001). The speed of survey research also means data analysis can begin sooner after the start of the study than with most other forms of research. Another advantage is that postal and web-based surveys allow respondents to complete the questionnaire when they are most receptive. For example, a recent study of university students on health promotion used a web-based questionnaire (Douglas et al, 2004). The anonymity offered in surveys may increase response rates and encourage more honest answers from the respondent (Oppenheim, 1992). In addition, because the data from surveys are normally of a 'tick box' nature, entering them on to a database is straightforward, as is the subsequent analysis. Spreadsheet packages like Excel have a range of analytical tools, while more sophisticated statistical analysis can be conducted using the Statistical Package for the Social Sciences (SPSS) (Field, 2000). However, it would be a mistake to think that survey methods are an easy option - a questionnaire needs to be carefully developed and therefore takes time and effort to construct (Aldridge et al, 2001).
What type of survey?
There are several types of survey design - some that have been used recently in midwifery research are described below.

Cross-sectional surveys
This type of survey provides a snapshot of a sample at a single point in time. The sample is randomly selected from a population chosen for its particular characteristics. For example, Hundley et al (2000) surveyed all women who delivered their baby in Scotland within a ten-day survey period. This method is relatively cheap and simple. However, the snapshot nature of this type of research can lead to sample bias when interpreting the results.

Comparative surveys
These allow responses from different groups to be examined. The groups are usually identified in advance of the study, but in some cases survey data may be used to enable the comparison. Gaffney et al (2004) used professional organisations to identify midwives and obstetricians in a questionnaire investigation of the use of, and attitudes towards, complementary therapies. Stahl et al (2003) sent questionnaires to antenatal women in Germany asking about the completion of the risk assessment score used in pregnancy. Women copied the score information from their record into the questionnaire and these data were then used to identify whether or not they had been labelled high risk. The effect of the labelling was examined by comparing the two groups on other variables within the questionnaire. In some cases, questionnaire surveys can be part of studies involving intervention and control groups. Biró et al (2003) evaluated team midwifery in Melbourne using a randomised controlled trial. As part of the study, women were sent questionnaires asking about their satisfaction with the care they received. Comparisons can also involve a randomised group and a self-selecting group (Peat, 2002). The strength of this method is that the psychological effects of randomisation and self-selection can be considered when analysing the results of the study.
Longitudinal surveys
A longitudinal survey follows the same population over a period of time. These surveys are less common in midwifery, but an example might be a ten-year follow-up of babies born in a certain maternity unit.

Time series surveys
These surveys involve studying different groups over a period of time, such as comparing groups using old and new treatments over an extended period. This would allow us to see if the new treatment led to a different outcome from the older one (Bowling, 2002). A midwifery example would be studying the attitudes of shoppers towards breastfeeding in public places, and measuring changes in views over time.
Who to survey?
A sampling frame - a list of people appropriate for the research - may help. This should be as inclusive as possible, but problems can arise. For example, if one uses an electoral register, some people are not registered to vote. Similarly, some people will have been missed off patients' lists, while others still on the list will have recently moved or died (Gomm et al, 2000). Midwifery research has frequently used such lists, for example antenatal clinic lists and education classes, to identify suitable research participants. However, there are also ethical issues relating to data protection that arise with this practice (van Teijlingen et al, 2004). There are two main methods of sampling - probability and non-probability - the essential difference being that probability sampling is based on the premise that all possible respondents have an equal chance of being selected, whereas with non-probability sampling this is not the case. Probability sampling is less prone to sample bias, but is more expensive and time-consuming to undertake (Aldridge et al, 2001). The necessary sample size will vary depending on the research parameters, time and resources. There is a mathematical formula for calculating the appropriate sample size (Sapsford, 1999); however, the general rule is that the bigger the sample, the lower the risk of sample bias.
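Sapsford (1999) gives the full treatment of sample size calculation; as an illustration only, one commonly used formula for estimating a proportion can be sketched as below. The 95% confidence level, 5% margin of error and conservative p = 0.5 are illustrative assumptions, not values taken from the text.

```python
import math

def sample_size(z=1.96, p=0.5, e=0.05):
    """Minimum sample size needed to estimate a population proportion p
    within a margin of error e, at the confidence level implied by z
    (1.96 corresponds to 95% confidence). p = 0.5 is the most
    conservative assumption, giving the largest required sample."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(sample_size())        # 385 respondents for +/-5% at 95% confidence
print(sample_size(e=0.03))  # 1068 - a tighter margin needs a larger sample
```

Note that this gives the number of completed questionnaires needed; given the response-rate problems discussed later, the number distributed would have to be considerably larger.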
Ways of administering a questionnaire
Questionnaires can be administered (that is, distributed and then collected) face to face, by post, by telephone or over the internet. Face-to-face questionnaires allow a trained person to explain ambiguous questions, ensure that all questions are answered and enable people with poor reading and writing skills to participate. However, training interviewers raises the cost of the study (Aldridge et al, 2001). Postal questionnaires have the advantage of being cheaper and allowing respondents to complete them at their convenience. However, they are impersonal and typically have a much lower response rate. Telephone questionnaires involve no travel costs or the need to arrange for people to be in the same place at the same time. Therefore, they can normally be conducted within a much shorter timeframe. Furthermore, interviewer bias is reduced as there is less body language (de Vaus, 2001). However, with long telephone questionnaire interviews, especially those using mobile phones, costs can rise sharply. Email and web-based questionnaires are both very cheap and fast methods, and often gain a higher response rate than postal surveys (Creative Research Systems, 2004). However, the risk of bias is high because of the need for the respondent to have an email address and some level of computer literacy. In the case of email, the format means that the questionnaire must be kept simple. For more complex questionnaire designs, the alternative e-based solution is a web-based survey. This comparatively recent development is still evolving, but the interactive nature is very appealing to respondents (Mentor, 2002). However, different web-browsers convert formatting differently, and while the questionnaire might function properly on one computer, it might not on another. Therefore, it needs to be piloted on as many different web-browsers as possible.
Including a good cover letter with the survey will increase the response rate. It should contain information on the research aims, why it is important, how respondents might benefit, instructions on how to complete the survey itself, any incentives, how to return the questionnaire and the name and telephone number of someone who can answer any questions.
One way to improve response rates is to follow up a questionnaire mailing after a couple of weeks with a card asking people to return it. Alternatively, a second questionnaire could be sent to those who have not yet responded. However, follow-up increases mailing costs and necessitates the inclusion of a unique identifier (number) to track respondents.
Data entry can be manual or electronic. The latter has the advantage of speed, especially where the answers are of a 'tick box' type. However, poorly marked questionnaires and questionnaires with text-based answers can create scanning problems. Once the data have been recorded, it is important to check for input errors. Often the best way to do this is to run frequencies to see if any obvious anomalies surface.
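Running frequencies to surface anomalies can be sketched as follows. This is a minimal illustration using made-up response codes, not data from any of the studies cited: the assumption here is a five-point tick-box scale coded 1-5.

```python
from collections import Counter

# Hypothetical coded responses to a tick-box question where
# valid codes are 1-5 (e.g. a five-point satisfaction scale).
responses = [3, 4, 5, 4, 44, 3, 2, 0, 5, 4, 3]

# A frequency count of every code entered.
freq = Counter(responses)
print(freq)

# Any code outside the valid range flags a likely input error,
# such as a double keystroke (44) or a mistyped code (0).
valid = set(range(1, 6))
anomalies = sorted(set(responses) - valid)
print("Out-of-range codes:", anomalies)  # [0, 44]
```

Spotting a code such as 44 in the frequencies prompts a check of the original questionnaire, so the error can be corrected rather than distorting the analysis.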
Once a questionnaire has been administered, it cannot be altered. Therefore, piloting is a crucial stage in survey preparation (Hundley et al, 2002). It is important to use a similar group of people for the pilot study as for the main study, but not to contaminate the sample. Remember to make the necessary changes following your pilot.
Limitations of surveys
The 'tick box' nature of surveys means they are inevitably simplistic, lacking the depth and quality of information of qualitative research. Human beings are reduced to numbers, since the results are examined via data analysis and statistics. There can also be problems with accuracy: data entry is repetitive and boring, and this can lead to mistakes. Surveys can suffer from low response rates. This can lead to biased results, particularly if the non-responders are from a particular ethnic group or age range. One solution is to make sure the respondents are in a particular place and are given the time to complete the questionnaire. Another way of gaining an improved response rate is to motivate the respondents. This might be done by convincing them of the importance of the research and therefore the relevance of the questionnaire - this reiterates the importance of a cover letter. Alternatively, one might be able to motivate the respondents by offering an incentive for returning a completed questionnaire (Creative Research Systems, 2004). Because most forms of survey are conducted out of sight of the researcher, issues arise about by whom and where the questionnaire is completed. In group settings, a generalised or 'acceptable' response, rather than that of the individual for whom the questionnaire is intended, is often obtained. It may also be that the respondent lies about their age or substance use. Because of the anonymity of questionnaires, responses cannot normally be checked for accuracy. There is also the risk that the questionnaire is filled in on someone's behalf. For example, women from ethnic minorities who have a limited understanding of English could have it completed by their partner or child. Another, more subtle, problem is that the respondent may not have interpreted the question in the way the researcher intended.
Additionally, because of the frequent 'tick box' nature of survey questionnaires, respondents are likely to 'respond' to a question even if they do not really have an answer. If this occurs regularly, there is a high risk of bias. No less serious is the problem that, because questionnaires are often sent to the respondent, there is an assumption that they will understand the questions. The written nature of questionnaires means that the words used have a major impact. The meaning of words is both plural and contextual, and respondents may understand the words in the questionnaire differently from the way they were intended. Although most questionnaires use as plain and simple a language as possible, there is still the risk that the respondent's literacy level will not be adequate. Also, most individuals find it more natural to express themselves verbally, but questionnaires do not allow this.
Other things to think about
Although surveys normally offer a relatively cheap research option per respondent, they can still be expensive. As with other methods of research, various 'experts' may be required. Entering data into a database for analysis is comparatively simple; however, manipulating, analysing and presenting data requires a range of skills. With a postal questionnaire there is the cost of paper-based products - photocopying, postage and envelopes. Similarly, telephone bills and the cost of hiring or training interviewers will affect the research budget for these types of surveys. While there is relatively little cost in email and web-based surveys, hiring an expert to design a web-based questionnaire can be expensive. In many ways, survey research is less time-intensive than other types. However, the time it takes to design and pilot a suitable questionnaire must be considered. A time period then needs to be allowed for the questionnaires to be completed. Data entry, cleaning and analysis can be relatively quick if done by experts; however, if done by non-experts, at least double the amount of time must be allowed. As with any other type of research, surveys must be ethical, in that the welfare and rights of the respondents come before the interests of the researcher (Peat, 2002). Care must be taken to ensure that the questions asked are not offensive, unnecessarily intrusive or likely to cause distress. Finally, one needs to allow time to apply for ethics permission (van Teijlingen et al, 2004).
Steve Brindle is a teaching fellow in the Department of Public Health, University of Aberdeen
Flora Douglas is a lecturer at the Department of Public Health, University of Aberdeen
Edwin van Teijlingen is a reader at the Dugald Baird Centre for Research in Women's Health and Department of Public Health, University of Aberdeen
Vanora Hundley is an honorary senior lecturer, Department of Nursing and Midwifery, University of Stirling
References
Aldridge A, Levine K. (2001) Surveying the social world. Open University Press: Buckingham.
Audit Commission and Institute of Child Health. (1996) Maternity care survey. Audit Commission and Institute of Child Health: Bristol.
Biró MA, Waldenstrom U, Brown S, Pannifex JH. (2003) Satisfaction with team midwifery care for low- and high-risk women: a randomised controlled trial. Birth 30(1): 1-10.
Bland M. (1996) An introduction to medical statistics (second edition). Oxford University Press: Oxford.
Bowling A. (2002) Research methods in health (second edition). Open University Press: Buckingham.
Bryman A. (2001) Social research methods. Oxford University Press: Oxford.
Burns N, Grove SK. (2002) Understanding nursing research (third edition). WB Saunders: London.
Creative Research Systems. (2004) The survey system's tutorial: survey design. See: www.surveysystem.com/sdesign.htm (accessed March 2005).
Creswell JW. (2002) Research design: qualitative, quantitative and mixed methods approaches (second edition). Sage: London.
David M, Sutton CD. (2004) Social research: the basics. Sage: London.
Douglas F, Brindle S, van Teijlingen E, Fearn P, MacKinnon D. (2004) An exploratory study investigating computer screen-based health promotion messages and Scottish university students. International Journal of Health Promotion and Education 42(4): 116-8.
Field A. (2000) Discovering statistics using SPSS for Windows. Sage: London.
Gaffney L, Smith CA. (2004) Use of complementary therapies in pregnancy: the perceptions of obstetricians and midwives in South Australia. Australian and New Zealand Journal of Obstetrics and Gynaecology 44: 24-9.
Gomm R, Needham G, Bullman A. (Eds.). (2000) Evaluating research in health and social care. Open University Press in association with Sage: London.
Hundley V, Rennie AM, Fitzmaurice A, Graham W, van Teijlingen E, Penny G. (2000) A national survey of women's views of their maternity care. Midwifery 16(4): 303-13.
Hundley V, Ryan M, Graham W. (2001) Assessing women's preferences for intrapartum care. Birth 28(4): 254-63.
Hundley V, van Teijlingen E. (2002) The role of pilot studies in research. RCM Midwives Journal 5(11): 372-4.
Mentor KW. (2002) Survey research and the internet. See: www.cjstudents.com/surveys_internet.htm (accessed March 2005).
Oppenheim AN. (1992) Questionnaire design, interviewing and attitude measurement. Continuum: London.
Peat J. (2002) Health science research. Sage: London.
Punch KF. (2003) Survey research: the basics. Sage: London.
Sapsford R. (1999) Survey research. Sage: London.
Stahl K, Hundley V. (2003) Risk and risk assessment in pregnancy - do we scare because we care? Midwifery 19: 298-309.
van Teijlingen E, Rennie AM, Hundley V, Graham W. (2001) The importance of conducting and reporting pilot studies: the example of the Scottish births survey. Journal of Advanced Nursing 34(3): 289-95.
van Teijlingen ER, Cheyne HL. (2004) Ethics in midwifery research. RCM Midwives Journal 7(5): 208-10.
de Vaus DA. (Ed.). (2001) Surveys in social research (fifth edition). Routledge: London.
Wilson N, McClean S. (1994) Questionnaire design: a practical guide. University of Ulster: Newtownabbey.