Chapter 2: Approaches to Research
Chapter Outline
Essential Questions
What research methods are used to understand behaviour?
How and why do psychologists use particular research methods?
How is research evaluated?
How are conclusions about human behaviour drawn?
After studying this chapter, you should be able to:
Myths and Misconceptions
Experiments are an effective way to 'prove' facts about human behaviour.
Psychology is a young science and though experiments help us to understand human behaviour, theories and conclusions are tentative (tentative = not definite or certain) in nature. The word 'prove' should never be written in connection with research in psychology. Theories and hypotheses are 'supported' or 'demonstrated'; results are 'shown' or 'obtained'.
Psychology is not a real science.
Psychology uses scientific methods to understand behaviour. Researchers form hypotheses, manipulate and control variables and use statistics to analyse results.
Psychology is just common sense.
This misconception stems from confusion about the nature of psychology and common sense. Common sense refers to practical knowledge and making sound decisions and judgements. It relies on experience and logical reasoning. On the other hand, psychology is based on theories and scientific methods.
While philosophers ponder life’s big questions and poets play with words to convey their ideas, psychologists carry out scientific research. They have many options: observe people, ask people questions or set up conditions to see how people react. In broad terms, psychologists generate answers to research questions through either quantitative or qualitative methods or by combining both methods.
1. Quantitative Methods
1.1 Experiments
To answer a research question, a psychologist may conduct an experiment. This is often called a 'laboratory' experiment (because of where it usually takes place) or a 'true' experiment. These terms are acceptable, but the IB Psychology guide simply calls this an 'experiment'. This setting allows the investigator to control the environment to determine whether a change in the Independent Variable (IV) causes a change in the Dependent Variable (DV). To be confident that a cause-and-effect relationship does exist, all other variables are controlled as well as possible. Variables that interfere with the action of the IV on the DV are called confounding variables.
To help you understand these terms, imagine a researcher is investigating if music (the IV) distracts people from learning (the DV). She manipulates the IV by playing music. She asks half the participants to spend 10 minutes learning 100 words in a classroom with music playing. The other half of the participants study in a quiet classroom. She then tests the participants to see how many words they can recall. The number of words correctly recalled is the DV. She controls possible confounding variables by giving both groups the same number of words to be learnt, by testing them at the same time of the day and giving them the same instructions. The mean scores from the music and no music conditions are then compared.
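The final step of this hypothetical experiment amounts to computing and comparing the mean recall score in each condition. A minimal sketch in Python, using invented scores purely for illustration:

```python
from statistics import mean

# Invented recall scores (words recalled out of 100) for illustration only
music_scores = [42, 38, 45, 40, 37, 44]   # studied with music playing
quiet_scores = [55, 60, 52, 58, 61, 54]   # studied in a quiet classroom

music_mean = mean(music_scores)
quiet_mean = mean(quiet_scores)
print(f"music: {music_mean}, no music: {quiet_mean}")
```

In a real study, the difference between the two means would then be tested for statistical significance before any conclusion is drawn.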
This is a simple experiment. Though the experiment by Pedersen et al. (2006) is more complex, the same basic principles apply.
Focus on Research
Pedersen et al. (2006) wanted to understand how the hormone oxytocin influences mothering behaviour. They used laboratory rats in their experiment, as they believed animal studies can inform our understanding of human behaviour. They randomly divided rat mothers and their offspring into three groups: one group of rat mothers received a dose of the hormone oxytocin; one group received a dose of an oxytocin antagonist, which blocked the action of oxytocin in the brain; and the control group received salt water.
Compared to the control group, the mothers with the reduced oxytocin did not spend as much time grooming their babies and instead spent more time grooming themselves. These mothers did not feed their babies and some mothers even lay on top of their offspring. The mothers with increased oxytocin spent more time grooming and feeding the babies, compared to the control group. The researchers concluded that oxytocin influences mothering behaviour in rats. They hypothesized that oxytocin has a similar influence on human mother-infant bonding, as it is released during childbirth and breastfeeding.
Applications of Experiments
Experiments are particularly useful for studying human brain processes, where highly technical and accurate measurements can be taken. They allow the researcher to test a hypothesis, support a theory and apply that theory to real life.
Strengths of Experiments
Experiments can show a cause-and-effect relationship between the IV and the DV. Statistical testing allows for thorough data analysis. The precise nature of the experiment allows for replication (replication = to repeat a study) by other researchers, which makes the findings more reliable.
Limitations of Experiments
Strict control over possible confounding variables can create an artificial environment. This leads to the criticism that a study lacks ecological validity. Another limitation is that participants can have expectations about the nature and purpose of the study, and these expectations can result in a change in behaviour. The way these expectations influence a participant's response is known as demand characteristics. Experiments may also lack internal validity if there are alternative explanations for the results.
1.2 Field Experiments
As their name suggests, these experiments are conducted in a natural setting 'in the field'. For example, if a researcher wanted to determine what factors (IV) cause students to experience stress (DV) when they are at school, then the school becomes the setting for the experiment. As aspects of the setting are harder to control, there is less control over possible confounding variables. As a consequence, the researcher may be less confident that there is a cause-and-effect relationship between an IV and a DV. The IV is still manipulated, so the researchers would vary factors like the time given for tests, or maybe the length of the breaks between lessons, for one randomly selected group of students and compare their stress (the DV) with that of another (control) group.
Focus on Research
Piliavin et al. (1969) were interested in why people do or do not offer help to a stranger in need. They decided that their field experiment would take place in the New York underground. Four hundred and fifty men and women were the participants as they travelled on subway trains between 11 am and 3 pm on weekdays from April 15th to June 26th, 1968.
The independent variables the experimenters manipulated were: the type of victim (drunk or ill); the ethnicity of the victim; the size of the bystander group and the presence or absence of a model (someone who offers help first). In each of the 136 trials, a confederate (a person acting a role in an experiment) staggered forward and collapsed shortly after boarding a subway train. He remained motionless on the floor, looking at the ceiling. After a fixed time period, another confederate acted as a model if no one else had offered help. The behaviour of the passengers was observed and recorded.
Four people played the person in need of help. They were all males, aged between 26 and 35. Three were Caucasian and one African American. In some trials, they pretended to be drunk.
The researchers used participant observation to measure some DVs including the speed of helping, the frequency of helping and the ethnicity of the helper.
Piliavin et al. found that an ill person is more likely to receive assistance than a drunk person. Men were more likely to help, and people tended to help more often if the person in need was of a similar race.
The results led the researchers to develop an Arousal: Cost-Reward model to interpret their findings. This model argues that when people see an emergency, they feel upset. They are motivated to act to reduce this unpleasant arousal. People then weigh the costs of helping versus not helping.
For more details, see Piliavin et al., 1969.
Application of Field Experiments
Field experiments are used to study behaviour in its natural setting. They have been used extensively by social psychologists to investigate behaviour such as comparing child and teenager aggression after watching violence on television or engaging in violent video games.
Strengths of Field Experiments
Field experiments are more ecologically valid than laboratory studies because there is less artificiality. Behaviours like street protests, littering, children’s behaviour in school, are best investigated in their natural setting.
Limitations of Field Experiments
The lack of control over variables is the main limitation of field experiments. This can lead to a loss of confidence in the results. Researchers should acknowledge the limitations of field experiments and take care not to make claims that are not supported by the evidence.
1.3 Natural Experiment and Quasi-Experiment
These two designs are discussed together here, as many textbooks group them because of their similarities; however, there are a few important differences.
Natural Experiments, like field experiments, take place under natural conditions. However, unlike field experiments, there is no manipulated IV. Instead the IV is a naturally occurring variable, like the introduction of a TV to a remote area. This gives psychology researchers the opportunity to compare a DV such as levels of playground aggression, or teenagers’ eating disorders both before and after the event. The effects of unemployment on mental health or of children attending kindergarten on the mother-child relationship can similarly be studied. All of these would be natural experiments. The IV was not specially manipulated by the researcher, but the effect of it is interesting to psychologists.
Quasi Experiments also often take place under natural conditions. However, this is not the main difference between them and experiments that take place in a laboratory. They have two essential differences from these ‘true’ experiments which means that a quasi-experiment cannot show a cause-and-effect relationship between the IV and DV, just a correlation.
Quasi-experiments do not randomly allocate their participants to groups. Instead, participants are self-selecting, often by gender, age or ethnicity. However, they can also be self-selecting by factors such as ability in maths (high/low, as determined by maths testing), their employment (taxi driver/non-taxi driver, Maguire et al., 2000) or whether or not they suffered childhood abuse (Suderman et al., 2014). These are called 'non-equivalent groups', as the researchers do not expect them to have the same qualities as each other.
The researcher does not always have full experimental control over the IV. Sometimes they do manipulate an IV and measure the effect on people according to group. For example, if trialling a new type of psychotherapy, it would be possible to try the psychotherapy on men and women (group 1 and group 2), also try a traditional treatment on men and women (group 3 and group 4), and compare the results. The type of treatment is the manipulated IV, and in this case it is controlled.
However, at other times a quasi-experiment is much more like a natural experiment, in that the IV is naturally occurring, but the effect is measured on particular groups. So, in the case of the introduction of TV to a remote region, a quasi-experiment would measure the effect on particular groups, already allocated by prior measures of aggression, or maybe by gender or age.
Focus on Research
Bronzaft and McCarthy (1975) were interested in the importance of quiet environments and whether noise makes learning more difficult. They located a New York City elementary school built close to an elevated train line. The train, which passed at regular intervals throughout the day, ran close by one side of the school building but not the other.
Teachers were assigned to classrooms and children to teachers in a somewhat random way at the start of each school year. This allocation of students to classes resulted in a strong natural experiment involving a treatment group of students on the noisy side of the school and a comparison group on the quiet side. The researchers found that the mean reading scores of classes on the noisy side tended to lag three to four months (based on a 10-month school year) behind those of classes on the quiet side. Educational officials used the study to justify the implementation of noise reduction initiatives.
Application of Natural and Quasi-Experiments
As shown above, natural experiments can be used to measure the effect of noise, light, location, poverty and many other factors on human behaviour. They are often used in educational and health psychology.
Strengths of Natural and Quasi-Experiments
Natural and quasi-experiments can be used in situations where it would be ethically unacceptable to manipulate the independent variable, for example, studying the impact of drug use. There is also less chance of experimenter bias or demand characteristics interfering with the results. This type of study allows researchers to take advantage of naturally occurring events to better understand their consequences.
Limitations of Natural and Quasi-Experiments
The independent variable is not controlled by the researcher and there is no control over the allocation of participants to groups. Therefore, replication is almost impossible, and reliability is lower than with experiments. In quasi-experiments, even if there is a manipulated IV, the groups are not equivalent and therefore, no cause-and-effect relationship can be established.
1.4 Correlational Studies
Correlational studies test the relationship (the correlation) between two variables of interest, such as self-esteem and exam results. This relationship is expressed as a number between -1 (a perfect negative correlation) and +1 (a perfect positive correlation), called a correlation coefficient. A correlation coefficient of 0 means there is no correlation between the two variables. Researchers often gather information through observation of what people already do. Correlation does not necessarily mean causation.
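The correlation coefficient described here is usually Pearson's r, which can be computed directly from paired data. A minimal Python sketch, using invented scores (the variables and values are illustrative, not from any real study):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: self-esteem score vs. exam mark for six students
self_esteem = [3, 5, 4, 6, 7, 8]
exam_marks = [52, 60, 55, 70, 68, 75]
r = pearson_r(self_esteem, exam_marks)  # close to +1: a strong positive correlation
```

A coefficient near +1 or -1 indicates a strong relationship, but, as the limitations section below stresses, even r = 1 would not show that one variable causes the other.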
Focus on Research
Lam et al. (2012) investigated the link between how much time parents spent with their children and the children's self-esteem. They were also interested in whether birth order had any impact on how much time parents spend with their children. They focused on children and teenagers aged between eight and 18. Mothers, fathers, first-born and second-born children from 188 white families participated. These participants were interviewed at home or by phone. The researchers found that social time between parents and their children declined across adolescence. Second-born children's social time decreased more slowly than first-born children's. They found that youths who spent more one-on-one time with their fathers, on average, had higher overall self-esteem scores.
Application of Correlational Studies
Examples of correlational studies are investigations into the number of cigarettes smoked and anxiety levels; recognition of letters at age four and ability to read at age seven; intelligence and inheritance (usually using either twin or adoption studies). Kinship studies (also called family or pedigree studies) are often used in psychology to investigate the heritability of behaviours. (Heritability: to what extent a trait or behaviour is inherited).
Strengths of Correlational Studies
These studies are conducted quite easily and produce data that allows for a comparison between two variables. These studies allow researchers to study variables that cannot be manipulated, such as gender and age differences.
Limitations of Correlational Studies
Correlational studies do not show cause-and-effect relationships. Often a correlation could be explained in several different ways. In the example given above, it could be that adolescent boys with high self-esteem chose to spend more time with their fathers and found it easier to interact socially with them. Or it might be that a third variable, such as a high family income that allows access to leisure activities the family can do together, is responsible for the behaviour.
2. Qualitative Methods
2.1 Naturalistic Observations
Observations can be used to collect data as a stand-alone method but they can also be used to gather additional data as part of an experiment or case study. Observers usually target a specific behaviour or set of behaviours, and may use a grid called a tally chart to record the data by making check marks in the chart. Observations can be any of the following combinations:
Type of Observation and Role of Researcher (each type may be conducted overtly, where participants are aware of being studied, or covertly, where participants are not aware of being studied):
Participant: the researcher joins in the activity of the participants while observing them.
Non-participant: the researcher watches but does not join in with the activity.
Naturalistic (usually non-participant): the observation takes place where the target behaviour normally occurs.
Controlled (usually non-participant): the researcher constructs and controls the situation.
Figure 2.6 Types of Observational Studies
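The tally chart mentioned above is, in effect, a frequency count of target behaviours. A minimal sketch of the same idea in code, with invented behaviour categories purely for illustration:

```python
from collections import Counter

# Invented observation record: each entry is one occurrence of a
# target behaviour noted by the observer during one session
observed = ["sharing", "hitting", "sharing", "talking", "sharing", "hitting"]

tally = Counter(observed)   # frequency of each target behaviour
```

Each check mark on a paper tally chart corresponds to incrementing one of these counts.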
Applications of Observations
One area of research where observations are often used is in the field of developmental psychology. They can be utilized easily in classrooms and playgrounds when researchers are interested in the natural behaviour of children. Children are used to having teachers and other adults present in classes and at play, so quickly become used to being observed. Observations may or may not include filming or audio taping of the children’s behaviour. However, observations are also valuable in other areas of psychology such as social psychology.
Focus on Research
Miranda et al. (2002) conducted a study to evaluate the effectiveness of a programme for treating attention-deficit/hyperactivity disorder (ADHD) carried out by teachers in a classroom context. One of the methods used was direct observation of behaviour in the classroom. Fifty children with ADHD participated in the study. The teachers of 29 of the 50 students were trained in ways to reduce the symptoms of ADHD. The other 21 students formed the control group, and their teachers had no special training. The results showed increased academic scores and better classroom behaviour in the group of children with trained teachers, attributed to the teachers' improved knowledge of how to respond to the children's educational needs.
Strengths of Observations
Naturalistic observations often give researchers ideas for further research. This type of research also allows the researcher to record and study behaviour in some detail and often in natural circumstances. As a result, observations have high ecological validity. Observations can also be used as part of a triangulation of methods (using more than one method), for example to confirm what has been said in interviews on the same topic. By using several different observers, the reliability and validity of the observations can be increased.
See Section 5.5 on Triangulation.
Limitations of Observations
There are three main problems with observations. The first is the issue of demand characteristics, as participants may try to act according to their own ideas about what the researcher wants. A second problem is researcher bias, when observers see what they want to see. One way to avoid this is to use observers who do not know the purpose of the research. Finally, observations run the risk of being unethical, because asking for informed consent might itself create demand characteristics. It is usually assumed that there are no ethical problems with observations in shopping malls, in the street and in other public places, where people who are not engaged in research also observe each other. However, it is hard to know whether people would be angry if they knew that they had been observed. Despite this, ethical guidelines for psychology research may specify that it is acceptable to record data from people who are in a public place, even without their consent.
2.2 Case Studies
Case studies are examples of research into a particular individual, group of people or organisation. In contrast to other research techniques such as experimentation and correlation, case studies aim to provide a more detailed and holistic analysis of the behaviour of the individual or group under investigation and, as a result, require lengthier periods of time to carry out. Historically, the case study has always been an important tool in medicine and therapy, but in modern psychology its use has expanded considerably, and it is now also used extensively in medical, educational and workplace psychology.
Focus on Research
Curtiss (1977) carried out a case study to assess the linguistic development of Genie, a girl who was discovered by the authorities in Los Angeles at the age of thirteen having been cruelly neglected by her parents and subjected to physical and verbal abuse by her father and brother. Genie was confined to a room from the age of one, where she was kept restrained at various times in either a potty chair or a crib. Consequently, upon discovery, Genie walked awkwardly and made very little sound, having been beaten for making a noise. The majority of information for the case study was gathered from observing Genie and working with her in regular sessions. Information about her early life was collected primarily from Genie’s behaviour itself and the few comments she would make. Daily doctor’s medical reports, as well as videotapes and tape recordings, were made and catalogued. Psychological testing was also used with observations and language tests.
Curtiss was one of the psychologists assigned to help Genie and was especially interested in seeing if Genie could learn language. Genie provided scientists with the chance to test the Critical Period Hypothesis, a theory proposed by linguist Eric Lenneberg (1967). This is the hypothesis that humans are unable to learn the use of grammar correctly after early childhood because of the lateralisation of the brain.
However, in spite of initial progress, Genie never recovered completely from her privation. Although Genie was able to show some modest progress in her language development, her seeming inability to develop normal language was seen as evidence that the critical period for learning language was from two years to puberty. If that critical time was missed, as it had been for Genie, then it was claimed that it was not possible to develop full use of language.
The research reports on Genie form a richly detailed case study with extensive quantitative and qualitative data. However, caution must be exercised in attempting to generalise from case studies. There was also a strong suggestion that Genie may have had developmental problems in infancy, so it was not possible to say that her subsequent failure to develop normal language was due solely to her experience of deprivation.
Strengths of Case Studies
A well-conducted case study can construct a full and detailed picture of an individual, a group or an organisation in a particular situation. Case studies may also be longitudinal (long-term) and therefore provide a developmental and historical perspective of the behaviour(s) being investigated. They also generate rich data using a triangulation of methods (more than one research method), which can lead to higher validity. Furthermore, case studies can also be invaluable in generating new psychological theories and also in providing evidence for existing psychological theories. For example, case studies on brain-damaged patients with memory deficits have helped to revolutionize theories about how our memories function. In addition, case studies often have high ecological validity and thus reflect more natural real-life behaviour.
Limitations of Case Studies
Case studies often lack academic rigour when compared to controlled experiments, where there are more specific guidelines for collecting and analysing data. They can also create an extensive amount of data, which can be unwieldy and time-consuming to analyse. Longitudinal case studies, in particular, can run the risk of over-involvement by the researcher, leading to bias when researching and when analysing data and reporting results. Moreover, case studies have limited generalisability, as they frequently involve a small purposive sample, and thus the results cannot be assumed to hold true for other populations. A further disadvantage of case studies is that, due to their unique nature, they are difficult to replicate, so the reliability of the results is harder to establish.
2.3 Interviews
Interviews are a widely used research technique in psychology and are classified as a self-report method because they usually rely on verbally communicated data from participants. The way an interview is conducted can vary depending on how the researcher wants to obtain their data. A researcher could choose an unstructured interview when they wish to have a conversational interview in which the topics of conversation guide the questions that are asked. In contrast, a researcher may decide to use a semi-structured interview, in which a list of pre-set questions is posed to the participants but the opportunity to ask further questions is built into the procedure. Finally, the focus group interview is an additional option for gaining self-report data in a group situation, when it is felt that one-to-one interviews may not be as productive in gathering information. These three interview techniques are discussed in more detail below.
Unstructured Interviews
These interviews do not use any pre-arranged questions but the researcher will have in mind which topics they would like to focus on and also what they want to achieve from the interview. As a result, the researcher will guide the conversation to some extent and will pursue aspects of the conversation with further questions if these aspects are relevant to the research hypothesis. Building up a rapport is important in unstructured interviews in order to promote an atmosphere in which the participant feels they can speak openly. It is recommended that such interviews are recorded as long as permission is obtained from participants. This enables the researcher to concentrate on the conversation without having to be distracted by note-taking.
Focus on Research
Pai and Kapur’s (1981) study aimed to construct a suitable interview technique to assess the burden placed on relatives of psychiatric outpatients at a clinic in India. The initial phase in this study involved conducting an unstructured interview with one relative of each of the 40 patients attending the clinic. The purpose of this phase was to identify different categories of burden. Some of the categories identified included financial burdens and family routine burdens. This categorisation was ultimately used for reliability and validity testing to devise an interview schedule that could be used in semi-structured interviews assessing burden levels across different settings and in different psychiatric illnesses.
Strengths of Unstructured Interviews
A major advantage of carrying out an unstructured interview is the flexibility with which questioning can be adapted to the situation. Furthermore, the data gathered can be considered more valid because it is more natural: the participant is given the opportunity to expand on their points and clarify them, so the researcher is less likely to misinterpret the findings. In addition, unstructured interviews give an opportunity for any ambiguities in participants' responses to be further clarified through additional questioning.
Limitations of Unstructured Interviews
Given that unstructured interviews are open-ended, they are unique in content and consequently very hard to replicate to test the reliability of the findings. A wealth of data is also generated, which is challenging to analyse in a simple way. Furthermore, the ability to steer conversations in certain directions is a considerable skill, so substantial training of interviewers is required. This interview method is also time-consuming, as only one participant can be interviewed at a time, which adds to the costs of a research study.
Semi-structured Interviews
Such interviews combine a structured and unstructured approach to the questioning of participants. With regard to the structured component of this type of interview, the researcher will follow an interview schedule which consists of two components. Firstly, the researcher will construct a set of pre-determined questions that they want to ask the interviewees. The format and wording of these questions will be replicated exactly during the questioning. Secondly, if the interviewer is not the researcher but has been trained to carry out the research, the interview schedule will contain instructions on how to conduct the interview. Semi-structured interviews also use open-ended questions that have not been pre-arranged and the choice of questions is guided by the conversation but still focused towards the research hypothesis.
Focus on Research
Rutten et al. (2007) aimed to assess how far organized youth sport was an influential factor in antisocial and prosocial behaviour. Using samples of adolescent soccer players and swimmers aged between 12 and 18 from 10 sports clubs in the Netherlands, the researchers used a variety of methods to assess prosocial and antisocial behaviour. These included two semi-structured interviews, the first of which was the Sociomoral Reflection Measure, a test designed to measure sociomoral reasoning competence. The second semi-structured interview was designed by Rutten et al. to measure participant scores on fair play orientation. The use of these interviews and other measures of prosocial and antisocial behaviour led to the conclusion that young athletes with high sociomoral reasoning skills were more likely to be prosocial.
Strengths of Semi-structured Interviews
Given that semi-structured interviews contain an open-ended aspect in terms of the opportunity to ask further questions, semi-structured interviews, like unstructured interviews, provide an opportunity for any ambiguities in participant responses to be pursued and clarified. In addition, there is the opportunity for participants to open up and talk in depth about their opinions. Moreover, the use of pre-arranged questions is useful in this context because the possibility of potential researcher bias in the choice of questions is reduced.
Limitations of Semi-structured Interviews
The limitations of semi-structured interviews mirror those of unstructured interviews as both have open-ended elements in the interview process.
Focus Group Interviews
In this type of interview, participants are given the chance to discuss their opinions and beliefs on the psychological issue under investigation. The ideal size is considered to be 6-10 participants, as this helps to ensure that participants are not overwhelmed or intimidated by being in a very large group. Researchers will use this method if they feel that a one-to-one interview may not generate enough spontaneous, detailed information. This technique enables participants to talk more freely and to generate questions as the conversation progresses.
Focus on Research
Schulze and Angermeyer (2003) used focus groups in their study on experiences of stigma by schizophrenic patients, their relatives and mental health professionals. The research was carried out in Germany in four different towns, and 12 focus group interviews in total were conducted, three at each centre. The focus groups consisted of either patients or relatives or mental health professionals, i.e., the participants were not mixed in the focus groups. All of the patients had received an ICD-10 diagnosis of schizophrenia and were out-patients during the period of the study. Relatives of the patients were contacted to see if they wanted to take part. The professionals' group included a wide range of participants, including psychiatrists, psychologists and nurses. The patient groups were asked to discuss what had changed in their lives since their diagnosis, and the relatives and professional groups were asked to discuss how they viewed the situation of the patients. The interviews were video- and audio-recorded.
The main themes that emerged from the focus group interviews were that stigma was experienced at an interpersonal level, at a social structural level, via negative public images and in terms of employment. The authors concluded that stigma surrounding schizophrenia should be tackled at these four levels and offered some strategies for doing so.
Strengths of Focus Group Interviews
As focus group interviews involve lengthy discursive procedures, the opportunity is more likely to arise for issues to be discussed that the researcher has not previously considered. This could therefore provide a valuable opportunity to gather additional data relevant to the researcher's theory. In addition, as in unstructured and semi-structured interviews, there is ample opportunity for participants to open up and expand on their ideas. A further strength is that focus group interviews are cheaper to conduct, in that a large number of participants can contribute to the study at once instead of in individual face-to-face interviews.
Limitations of Focus Group Interviews
Some participants may feel vulnerable in a focus group situation if the discussion is dominated by more extrovert and vocal members of the group. Such domination is a further limitation of this technique because valuable data may be lost if some participants feel they cannot contribute. Group discussions of this kind are also difficult to control, and the study may therefore not fully achieve what it set out to achieve.
Further Disadvantages of Interviews
The three types of interview described above rely on self-report data from participants and, as a result, are vulnerable to certain biases that distort the accuracy of the data. One type is social desirability bias, which is the consequence of participants wanting to present themselves in a positive light to the researcher; it can occur, for example, if a participant feels embarrassed about admitting something negative about their behaviour. Another type is participant bias, which occurs when participants try to respond in a way that they think the interviewer desires; a study is particularly vulnerable to this form of bias if the participant guesses its aim. Finally, bias can also occur in interviews as a consequence of interviewer effects, where characteristics of the interviewer such as age, gender or ethnicity influence the participant to respond in ways that they would not normally respond. Taken together, these biases ultimately threaten the internal validity of a study, i.e., the extent to which the results can be considered a true reflection of the behaviour under investigation.
Surveys
A survey is an alternative self-report technique that can be conducted on a large sample of people and can therefore gather more substantial amounts of data than both interviews and case studies. Surveys can take a number of forms and these include mailed surveys, phone surveys and online surveys. They are primarily oriented towards gathering large amounts of quantitative data via questionnaires. Surveys are useful in following up on the results of qualitative interviews and case studies because the results can be assessed on a wider sample of participants. For example, if a qualitative unstructured interview on a small sample of elderly people demonstrates some of the factors that influence positivity in this age group, a large scale survey could be designed to assess whether this observation holds true in the wider population. Similarly, if a case study on an older student going to university after a long break from education demonstrates the particular factors that influence their likelihood of success in their degree course, a large scale survey could then ascertain whether these findings reflect the experiences of older students in general.
Focus on Research
Leder and Forgasz (2004) conducted research into the obstacles and learning environment of mature students in Australian universities. Although the study used a wide range of methods to investigate their aims, the first part of the study involved the use of a large-scale survey given to undergraduates enrolled in mathematics courses at five Australian universities. Ultimately, a sample of 815 students completed the survey, of whom 61% were male, 12% were mature students and 37% were from a non-English speaking background. The survey consisted of both open and closed questions, providing the opportunity to gather quantitative data from the closed questions. Closed questions assessing mood and perceptions of the learning environment were included in the survey.
Analysis of the results indicated that mature students from overseas were more likely to feel lonely, were less likely to have the security of a support network and were less familiar with the processes of the academic environment than local mature students. The authors, therefore, concluded that more active support networks and information services would be beneficial to overseas mature students in helping them cope with university study.
Strengths and Limitations of Surveys
Surveys have a number of strengths and these include the ease with which they can be completed, because participants can complete them at their leisure. This technique is also useful in assessing hard-to-reach participants and there can be a quick turnaround in obtaining data. In addition, interviewer bias can be minimized, especially in postal and online surveys. However, without face-to-face contact the motivation to complete surveys may be low and hence response rates will also be low. There is also the potential for greater inaccuracy and bias in survey data as participants may rush completion. Finally, the use of a survey does not provide an opportunity for responses to be followed up by the researcher.
3. Elements of Researching Behaviour
3.1 Research Design
Researchers have many decisions to make before they gather data to answer a question about human behaviour. They must analyse behaviour and identify variables of interest. Are these variables related? Does a change in one variable cause a change in the other variable? They must decide on how best to get their sample of participants and how to organise them. Once a sample has been obtained, experimenters have two main options for how they design the experiment: independent measures design (sometimes referred to as between subjects design) or repeated measures design (sometimes referred to as within subjects design).
With a repeated measures design, the researchers deliberately want the same people to be in both conditions. For example, they might want to compare performance before and after a treatment.
With independent measures design, the experimenter randomly allocates participants into two groups. This is an effective design when your behaviour of interest is assumed to be the same for everyone in the general population. One group would receive the experimental condition while the other is the control group who do not receive the experimental condition.
Matched pairs design is important when the participants vary considerably in characteristics such as age or background, so that random allocation to the two conditions could introduce a confounding variable. For example, random allocation might result in a middle-aged person's memory being compared with that of an 18-year-old student; in this case, age and background differences are uncontrolled variables that might affect the results. The solution is to sort the sample of participants into matched pairs as far as possible and allocate one of each pair to each condition.
3.2 Hypotheses
A hypothesis is a statement that serves as a possible explanation for observed facts. A hypothesis is tested empirically, that is, through observation or experimentation that generates data.
Most of us create theories and test hypotheses every day of our lives. For example, my theory is that my oven light has gone off and the oven is failing to heat because there is no electricity getting to it. I generate a hypothesis: we have a failure of electrical power to the whole house. I test this hypothesis by switching on the light in the kitchen. It works, so I am forced to reject this hypothesis and develop another. I then hypothesise that we have a failure of power to the oven only. I test this revised hypothesis by walking to the fuse box, opening it and checking the fuse—which has indeed blown. I could have tested this hypothesis in several different ways— by inspecting the wiring or calling an electrician, for example, but I decided to start with what, from my experience, was the most likely explanation. In this example, the hypothesis was tested empirically by my switching on the light in the kitchen, which led me to reject my initial hypothesis and develop another, which was supported when I checked and found the fuse had blown.
In psychology, the process of moving from theory to a testable hypothesis is the same as in everyday life. Theory-testing is based on the principle of falsifiability, which was developed by the philosopher Popper (1959). This principle means that we do not prove theories; we merely stay with the one that we have so far been unable to falsify, and hypothesis-testing is how we test predictions that we make from theories. If we find no significant difference between the two or more conditions of the experiment, then we are unable to reject the null hypothesis. In other words, we have to accept the possibility that the IV has had no effect on the DV, or at least that any change in the DV is due to chance and not to the manipulation of the IV.
Research Hypothesis
The research hypothesis (sometimes called the experimental hypothesis or the alternative hypothesis) is written as H1. It is the researcher’s expectation regarding the results of the experiment, and it usually suggests that the IV will have an effect on the DV. An example would be if I supposed that listening to classical music would improve students’ scores on maths tests. (This is a similar effect to that suggested, albeit tentatively, by previous research, see Rauscher et al., 1993.) I am predicting that students who take the maths test while listening to classical music will do better, as measured by their scores, than those who have silence as the background condition.
Research hypothesis: students who listen to Mozart's piano sonata K448 while completing a standardised mathematics test will solve significantly more of the 50 maths problems than students who complete the test in a silent condition.
Null Hypothesis
The null hypothesis is denoted by H0 and is usually the hypothesis that the IV has had no significant effect on the DV and any observed differences in the conditions result purely from chance. The operationalized null hypothesis for the example given above would be:
Null hypothesis: there will be no difference in scores on a standardised mathematics test of 50 problems between students who complete the test while listening to Mozart's piano sonata K448 and students who complete the test in a silent condition; or any differences in the scores will result purely from chance.
One-tailed (Directional) and Two-tailed (Non-directional) Hypotheses
A one-tailed hypothesis is simply one that specifies the direction of a difference or correlation, while a two-tailed hypothesis is one that does not. A one-tailed hypothesis is therefore often called a directional hypothesis and a two-tailed hypothesis is also known as a non-directional hypothesis.
For example, if we look at the relationship between income and number of overseas holidays taken each year we might hypothesize that numbers of overseas holidays taken per year tend to increase with income. This is a one-tailed hypothesis because it specifies the direction of the correlation. A possible directional hypothesis would be:
H1 – There will be a positive correlation between personal disposable income after taxes and the number of overseas holidays that a person takes each year.
On the other hand, if we were correlating people's heights with their income, we might have no good reason for expecting that the correlation would be positive (income increasing with height) or negative (income decreasing with height). We might just want to find out if there was any relationship at all, and, therefore, we would develop a two-tailed hypothesis. A possible non-directional hypothesis would be:
H1 – There will be a correlation between personal disposable income after taxes and a person’s height in centimetres.
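The distinction can be illustrated with a short Python sketch (the income and holiday figures below are invented for illustration). SciPy's `pearsonr` returns a two-tailed p value by default; for a one-tailed test predicting a positive correlation, the convention is to halve that p value, provided the correlation is in the predicted direction.

```python
# Hypothetical illustration of directional vs non-directional testing.
from scipy.stats import pearsonr

income = [20, 25, 32, 40, 48, 55, 63, 70]    # disposable income, invented data
holidays = [0, 1, 1, 2, 2, 3, 3, 4]          # overseas holidays per year, invented

r, p_two_tailed = pearsonr(income, holidays)  # p is two-tailed by default

# One-tailed (directional) p value for the prediction of a POSITIVE correlation:
# halve the two-tailed p when r is in the predicted direction.
p_one_tailed = p_two_tailed / 2 if r > 0 else 1 - p_two_tailed / 2

print(f"r = {r:.2f}, two-tailed p = {p_two_tailed:.4f}, one-tailed p = {p_one_tailed:.4f}")
```

Because a one-tailed test concentrates the 5% rejection region in one direction, a directional prediction reaches significance more easily, which is why the direction must be specified before the data are collected.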
Levels of Significance
It is not possible to reject (refute) the null hypothesis unless we first set a threshold for how unlikely the results must be if the null hypothesis were true. The probability (p) value obtained from a statistical test estimates how often we would get the obtained result by chance if, in fact, the null hypothesis was true, and it determines whether or not we reject the null hypothesis. As a general rule, if the p value obtained from the statistical analysis of the data is smaller than the chosen significance level, then the researcher can reject the null hypothesis and accept that the samples are truly different with regard to the outcome. If the p value is larger than the significance level, then the researcher retains the null hypothesis and concludes that there is no evidence that the IV affected the DV.
The significance level is usually set in the social sciences at 5%. The null hypothesis is therefore only rejected if there is a 5% (1/20) or smaller probability that the observed difference would occur by chance when the null hypothesis is true. This is written as p ≤ 0.05, and the result is said to be significant at the 0.05 level. We would retain the null hypothesis if there was over a 5% probability that the difference happened by chance (p > 0.05).
So, if after applying inferential statistical testing to the data we come up with a result that is significant at the p≤ 0.05 level, then we can reject the null hypothesis and accept our experimental hypothesis that there was a significant difference between the groups that was not just due to chance.
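The decision rule described above can be sketched in Python. The maths scores below are invented, and `ttest_ind` from SciPy (an independent-samples t-test) stands in for whatever inferential test the design requires:

```python
# A minimal sketch, with invented scores, of testing the music/maths hypothesis.
from scipy.stats import ttest_ind

music_group = [42, 45, 44, 47, 43, 46]    # hypothetical scores out of 50
silent_group = [35, 33, 36, 34, 32, 35]   # hypothetical scores out of 50

t_stat, p_value = ttest_ind(music_group, silent_group)

alpha = 0.05  # conventional significance level in the social sciences
if p_value <= alpha:
    print(f"p = {p_value:.4f} <= {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} > {alpha}: retain the null hypothesis")
```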
Type I Error
A Type I error is known as a ‘false positive’. It occurs when the researcher rejects a null hypothesis when it is true. With a 0.05 level of significance, there is a 1/20 (5 in 100) chance of obtaining a significant result even when the IV does not affect the DV. If we then conclude that it does, we are committing a Type I error.
Example: You hear a fire alarm and react as if there is a fire and there is not. In this example, you are rejecting the null hypothesis (no fire) when it is true.
One of the main ways to check for Type I errors is to replicate studies. If a second researcher confirms the results of a previous study, then the likelihood that the original result was just chance is far lower. Another way is to reduce the p value to p ≤ 0.01. However, this comes with the increased risk of a Type II error.
Type II Error
A Type II error is a ‘false negative’, and it occurs when the researcher accepts a null hypothesis that is false. We are unconvinced by our data and say that the IV did not affect the DV when it did.
Example: You hear the fire alarm and think that there cannot possibly be a fire, so you ignore it. In this example, you are accepting the null hypothesis (no fire) when it is not true.
Type II errors are most common when the p value is reduced to a more stringent level. If the researcher wants to challenge a well-established theory, then the convention is to achieve results that are significant at the p ≤ 0.01 level before publishing. In this way, the likelihood that the results were due to chance is 1% or less.
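A short simulation (with invented population parameters) illustrates why the 0.05 level implies roughly a 5% Type I error rate: when both groups are drawn from the same population, the null hypothesis is true, yet about 1 in 20 t-tests still comes out "significant".

```python
# Simulation sketch: under a true null hypothesis, ~5% of tests at
# alpha = 0.05 are still significant - these are Type I errors.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)  # fixed seed for reproducibility
alpha = 0.05
n_experiments = 1000
false_positives = 0

for _ in range(n_experiments):
    group_a = rng.normal(loc=100, scale=15, size=30)  # same population...
    group_b = rng.normal(loc=100, scale=15, size=30)  # ...so H0 is true
    _, p = ttest_ind(group_a, group_b)
    if p <= alpha:
        false_positives += 1  # rejected a true null hypothesis

type_i_rate = false_positives / n_experiments
print(f"Type I error rate: {type_i_rate:.3f}")  # close to 0.05
```

Replication works as a check for exactly this reason: the probability that two independent studies both produce a chance-only significant result is far lower than 5%.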
3.3 Variables
A variable is the phenomenon that changes depending on the experimental circumstances. It is what varies.
Independent Variable
The independent variable (IV) is manipulated by the researcher in order to measure its effect on the dependent variable. In the example of the effect that listening to classical music has on scores in a maths test, the IV is the presence or absence of the music.
Dependent Variable
The effect of the IV on the dependent variable (DV) is measured, usually by comparing the results during the experimental condition with results from a control group that has not been subjected to the condition or comparing the results between two experimental groups. With our experiment into the effect of classical music, we could have introduced a group that listened to heavy metal music and then the IV would be the type of music, while the DV remains the scores on the maths tests.
Controlled Variable
If a researcher wants to determine a cause-and-effect relationship between the IV and the DV, then it is important to control all possible confounding (extraneous) variables. For example, in the experiment investigating the effect of classical music on maths scores, the researcher might want to ensure, through issuing headphones, that each participant in the experimental group hears the music at the same volume. However, to control for the effect of wearing headphones on the participants, and possibly ultimately on the maths scores, participants in the control group should also wear headphones, even though they are not listening to music.
Confounding variables can be participant variables such as age, gender, ethnicity, motivation. They can also be situational variables, such as time of day, temperature, background noise. Any of these might explain differences between the groups. Similarly, demand characteristics need to be controlled. An example of a demand characteristic is the experimenter effect. This effect is when an experimenter unconsciously, maybe through body language or tone of voice, gives cues to participants about how to behave. These participant expectations can affect the trustworthiness of the data.
As many confounding variables as possible should be controlled if the researcher wants to be sure that the observed change in the DV is caused by the manipulation of the IV. That is why, despite the loss of ecological validity (the confidence that the results are representative of what would happen in real life), experiments where confounding variables are controlled are often preferred over natural experiments, where such control is impossible.
The most difficult variables to control when using experimental research methods are those associated with demand characteristics. The methods of controlling these include:
Single blind technique—when the participants do not know which group or condition they are allocated to, the experimental group/condition or the control group/condition. This controls participant expectations.
Double blind technique—when neither the researchers nor the participants know which group or condition the participants are allocated to. This controls the experimenter effect and participant expectations.
3.4 Sampling Techniques
Probability-based Sampling Methods
There are many methods that use some form of random selection of participants. To have a random selection method, you must set up some process or procedure that ensures that the different units in your target population have equal probabilities of being chosen. The target population is the population from which the researcher is drawing the sample. For example, if we wish to discover Pamoja students’ attitudes towards online learning, then our target population is Pamoja students, and we would select our sample from this population. If we wished to compare the progress of Pamoja students in the second year of Psychology with those in the first year, then our target population becomes Pamoja Psychology students from both years and we select our sample from this population.
To select participants randomly, researchers have often used such methods as picking a name out of a hat or choosing the short straw. These days, computers are usually used as the mechanism for generating random numbers as the basis for random selection.
Simple Random Sampling
The purpose of simple random sampling is to select participants so that each has an equal chance of being selected. The procedure is to use a table of random numbers, a computer random number generator or a mechanical device to select the sample. If a researcher can use an Excel spreadsheet to generate random numbers from a list containing the whole of the target population, this is probably the easiest way.
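A minimal sketch of simple random sampling, using a hypothetical population of 200 students and Python's standard `random` module as the random number generator:

```python
# Simple random sampling sketch: every member of the (invented) target
# population has an equal chance of being selected.
import random

random.seed(1)  # fixed seed so the example is reproducible
target_population = [f"student_{i:03d}" for i in range(1, 201)]  # 200 students

sample = random.sample(target_population, k=20)  # 20 drawn without replacement

print(sample[:5])
```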
Stratified Sampling
This involves dividing your target population into sub-groups and then taking a simple random sample in each sub-group. In stratified sampling, non-overlapping variables such as age, income, ethnicity or gender are often used as the sub-groups.
Stratified sampling has an advantage in that it enables the researcher to represent important sub-groups of the population, especially small minority groups. In order to be even more precise, researchers often draw from each sub-group a number of participants proportional to that sub-group's share of the target population. For example, if people over 65 years old make up 40% of the target population, then 40% of the total participants in the final sample will be from the over-65 sub-group. This is called proportionate stratified sampling and it will generally have more statistical precision than simple random sampling.
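The over-65 example can be sketched as follows (the population lists and sizes are invented; each stratum contributes participants in proportion to its share of the population):

```python
# Proportionate stratified sampling sketch with an invented population.
import random

random.seed(7)
strata = {
    "over65": [f"over65_{i}" for i in range(400)],    # 40% of population
    "under65": [f"under65_{i}" for i in range(600)],  # 60% of population
}
population_size = sum(len(members) for members in strata.values())
sample_size = 50

sample = []
for name, members in strata.items():
    # Allocate this stratum's share of the sample proportionally,
    # then take a simple random sample within the stratum.
    n = round(sample_size * len(members) / population_size)
    sample.extend(random.sample(members, n))

print(len(sample))  # 20 from over65 + 30 from under65 = 50
```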
Cluster Sampling
Cluster sampling is a sampling technique where the entire population is divided into groups or clusters, and a random sample of these clusters is selected. It is typically used when the researcher cannot get a complete list of the members of a population they wish to study, but they can get a complete list of groups or 'clusters' of the population. It is also used when a random sample would produce a list of subjects so widely scattered that surveying them would prove to be far too expensive, for example—people who live in different postal districts in the UK. It is easier to take a random sample of the postal districts and draw participants from those.
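The postal-district example can be sketched in Python (district names and sizes are invented): clusters are sampled at random, and then every member of the chosen clusters is surveyed.

```python
# Cluster sampling sketch: randomly select whole clusters (invented
# postal districts), then include everyone in the chosen clusters.
import random

random.seed(3)
clusters = {
    f"district_{d}": [f"resident_{d}_{i}" for i in range(10)]
    for d in range(1, 21)   # 20 postal districts, 10 residents each
}

chosen_districts = random.sample(list(clusters), k=4)  # sample 4 of 20 clusters
participants = [person for d in chosen_districts for person in clusters[d]]

print(len(participants))  # 4 clusters x 10 residents = 40 participants
```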
Non-probability-based Sampling Methods
With non-probability-based sampling methods, samples are selected based on the subjective judgement of the researcher, rather than random selection. They are more likely to be used in qualitative research because the goal is not to achieve objectivity in the selection of samples or necessarily to attempt to make generalisations from the sample being studied to the target population. Instead, researchers tend to be interested in the details of the sample being studied. While making generalisations from the sample study may be desirable, it is more often a secondary consideration.
Opportunity (Convenience) Sampling
This is the most common form of sampling used by students conducting their internally-assessed experiments for their IB Diploma Psychology course. Opportunity sampling consists of taking the sample from people who are available at the time the study is carried out and fit the criteria the researcher is looking for. It is a popular sampling technique because it is easy in terms of time and cost. For example, the researcher may use friends, family or colleagues. It is adequate when investigating processes that are thought to work in similar ways for most individuals, such as memory processes. Opportunity sampling is the usual method with natural experiments because the researcher has no control over who is studied.
Purposive Sampling
There are several examples of purposive sampling, which, as its name suggests, is when researchers choose a sample on the basis of those who are most representative of the topic under research or from those with appropriate expertise.
One of the most common purposive sampling methods is snowball sampling. When information is needed from key people in an organisation or from people who have experience of the topic under research, then the researcher may select one or two people for an interview, and these may, in turn, suggest further relevant people who could be interviewed.
3.5 Ethics
All psychological research is bound by a code of practice and ethical principles. In the UK, the source of such principles is the British Psychological Society’s Code of Ethics and Conduct and in the USA it is the American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct.
Previous animal and human research would, if conducted now, break ethical guidelines. It is argued that sometimes a measure of deception in the case of people, or discomfort or worse, in the case of animals, is justified in the pursuit of knowledge that could improve the lives of many. It is difficult to conduct research without running into ethical arguments, but this is a necessary process to avoid thoughtless damage to participants, be they animal or human.
Research Involving Humans
Conducting research that will cause stress, humiliation or any harm whatsoever to the participants is not allowed. Participants need to be fully informed and to give written consent for the research. At any time, participants have to know that they can withdraw themselves or their data from the study, and they should be fully briefed and debriefed as to the aims, methods and (afterwards) the results and how such results will be used. Children, as defined by the laws of the country wherein the study is being conducted, need the signature of a parent or guardian (who, of course, must be informed of all aspects of the research) before they may participate.
If for the purposes of medical research, participants have a chance of being allocated to a placebo group, and, therefore, not receive the same treatment as the experimental group, they need to be fully informed of and give their permission for this before the experiment starts.
Ethics in Psychological Research
Informed Consent
No physical or psychological harm
Confidentiality
Avoidance of deception
Research Involving Animals
Animals cannot give informed consent, be debriefed or ask to withdraw from the research. However, they also should not be subjected to harm during research, wherever possible. The British Psychological Society’s guidelines recommend using the smallest possible number of animals, engaging in naturalistic studies as opposed to laboratory experiments, never using endangered species and making sure that the knowledge gained justifies the procedure. See Chapter 4 on the Biological Approach for more details of the ethics of animal research.
4. Evaluating Research
4.1 Reliability
The quality of the research data produced by psychologists varies. You cannot, therefore, trust all the conclusions that investigators draw when they have carried out research. Some research evidence is very sound, while some is weaker. It is important to challenge everything you read, including reports of research in the media. Once you start doing this, you begin to notice that newspaper headlines and their brief accounts of research are sometimes biased or misleading and that the research being (mis)described in the article may be seriously flawed and, therefore, unreliable. Conversely, the research itself may be fine, but the media report may be seriously flawed and inaccurate! Where possible, if the media report stimulates your interest, try to find the original study and make your own evaluation of it.
To conduct research, psychologists must find ways of measuring things. The measuring instrument must be both reliable and valid. It must measure what we want it to measure, and it must be consistent.
Reliability refers to consistency. For example, a person should measure the same height wherever and whenever they measure themselves. During adulthood (at least between the ages of about 20 and 60) our height stays the same, and a reliable tape measure should record it consistently. If a measurement is reliable it will be consistent and stable, and we can trust the results. A piece of research is reliable if we can replicate it and get similar results.
Reliability can be checked using correlational techniques, such as the test-retest method, where participants take a test twice, and if the test is reliable the two scores will be highly correlated, so if they score highly in one test they should score highly in the other one.
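The test-retest method can be sketched with a Pearson correlation on two hypothetical sets of scores from the same participants:

```python
# Test-retest reliability sketch: the same (invented) participants sit the
# same test twice; a high correlation between administrations indicates
# the measure is consistent.
import numpy as np

test_1 = np.array([55, 62, 70, 48, 80, 66, 74, 59])  # first administration
test_2 = np.array([57, 60, 72, 50, 78, 68, 71, 61])  # second administration

r = np.corrcoef(test_1, test_2)[0, 1]  # Pearson correlation coefficient

print(f"test-retest reliability r = {r:.2f}")  # high r suggests consistency
```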
Another way of checking the reliability of a piece of research is to use the measure of inter-rater reliability. This type of reliability is concerned with how closely different people who are marking a test or observing a behaviour agree with each other. If the behaviour or work has been measured (marked) reliably, then there will be high agreement.
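The simplest way to quantify inter-rater reliability is percentage agreement (more sophisticated indices, such as Cohen's kappa, also correct for chance agreement). The ratings below are invented:

```python
# Inter-rater reliability sketch as percentage agreement: two hypothetical
# raters categorise the same ten observed behaviours.
rater_1 = ["aggressive", "passive", "passive", "aggressive", "neutral",
           "neutral", "passive", "aggressive", "neutral", "passive"]
rater_2 = ["aggressive", "passive", "neutral", "aggressive", "neutral",
           "neutral", "passive", "aggressive", "neutral", "passive"]

agreements = sum(a == b for a, b in zip(rater_1, rater_2))
agreement_rate = agreements / len(rater_1)

print(f"inter-rater agreement: {agreement_rate:.0%}")  # 9 of 10 ratings agree
```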
4.2 Validity
It is possible for an experiment and the related method of data collection to be highly reliable and yet not be valid. The validity of a psychological measure is the extent to which it measures what it intended to measure. Sometimes it is possible to measure something else accidentally instead. The classic example is that of the early intelligence quotient (IQ) tests, which instead of measuring intelligence measured general knowledge and language skills, with children from the wealthiest families and with the most-educated parents doing better on the tests.
Internal validity means ‘Does this test accurately measure what it is supposed to?’ If a test is used to measure a particular behaviour and there is a difference in that behaviour between participants, but the test does not measure the difference, then the test has no internal validity. The case above of IQ testing is an example of a lack of internal validity.
External validity means ‘Can the results from this test be generalised to populations and situations beyond the situation or population being measured?’ There are two types of external validity:
Population validity refers to the extent to which the results can be generalized to groups of people other than the sample of participants used. Much psychological research uses university students as participants (for example, Bartlett, 1932) and it is difficult to say for sure that the results can be generalised to anyone other than university students.
Ecological validity refers to the extent to which the task used in a research study is representative of real life. Research into eyewitness testimony (see Loftus and Palmer, 1974) has generally lacked ecological validity because participants viewed videos of accidents rather than seeing them in real life, which would have been impossible to organise and also unethical.
4.3 Credibility
Credibility is a term used in qualitative research to assess whether the findings of a study are congruent with the participants’ perceptions and experiences. If so, the findings of the research gain more value and are more believable. Credibility in qualitative research is closely aligned with the concept of internal validity in quantitative research (see above), because both are concerned with the accuracy of the results obtained in research. Ultimately, only the study’s participants are fully in a position to say whether the results of a study are credible. This can be assessed by a process called member checking, where each participant reviews transcripts of what they have said to the researcher(s) to check that their statements have been transcribed accurately. However, credibility can also be assessed by peer debriefing, where a colleague of the researcher(s) or an expert carries out an analysis of a study’s findings. Furthermore, triangulation can be used: one research question is investigated by a number of different research methods to ascertain the consistency of the results, and if all of the methods show similar findings, this strengthens the credibility of the qualitative results. Credibility is also known as trustworthiness.
4.4 Bias
One of the factors that researchers need to be aware of in a study is bias. Bias refers to factors that may affect the results of the study. The following are common types of bias in research:
Researcher bias is when the researcher acts differently towards participants, which may influence or alter the participants’ behaviour. There are a number of different types of researcher bias, including confirmation bias, where the researcher seeks evidence to support his/her research hypothesis, and gender bias, where the researcher makes judgements about a participant based on their gender. Researchers should therefore be trained to minimise such biases in their studies. In qualitative research in particular, the researcher must also assess personal biases in relation to the study (for example, topic, choice of participants and method) and should apply reflexivity to control for this.
Participant bias, or demand characteristics, is when participants act according to how they think the researcher may want them to act. They may also want to impress the researcher and present themselves in a positive way, especially if they are being asked sensitive questions. To avoid any negative aspects of their behaviour becoming known to the researcher, they may therefore fabricate their responses in order to appear in a better light. This is known as the social desirability effect.
Sampling bias occurs when the sample is not representative of the target population, whether the sample is based on particular selection criteria in qualitative research or on probability sampling in quantitative research. The outcome of sampling bias therefore is that research can be restricted in how far it is generalisable to the wider population.
4.5 Reflexivity
Researchers conducting qualitative studies should be aware of how far their own actions within a study affect the results obtained. They should therefore reflect on their involvement in their research to determine whether they may have biased the study in some way. There are two types of reflexivity:
Personal reflexivity involves researchers assessing whether their personal values, beliefs, experiences and expectations have influenced how the study has been conducted and how the data has been interpreted.
Epistemological reflexivity relates to the knowledge gathered from a study. One aspect of this type of reflexivity is researchers considering whether the research methods used in the study have restricted the findings and assessing whether alternative methods may have been better. If a researcher used a case study, for example, he/she may consider on reflection that a focus group would have been more useful in investigating the research question.
5. Drawing Conclusions
5.1 Correlation and Causation
At first sight, studies that assess correlational relationships between pairs of variables may seem to imply that a strong relationship between two variables is evidence that one has a causal effect on the other. However, this is not the case, because such research investigates how variables behave naturally alongside other variables; the variables are not directly manipulated by a researcher. In order to establish causal links between variables, researchers must manipulate them to see their effect on other variables. For example, in an experiment, the researcher directly manipulates independent variables to see their effect on dependent variables. In this way, causation between variables can be scientifically explored.
5.2 Replication
Replication is the degree to which a study can be repeated by the same or different researchers with comparable results. In the field of quantitative research in psychology, experiments have become a dominant research method, and one reason for this is that their highly standardised procedures can be replicated by other researchers to test the reliability of the results across different locations, participants and time periods. If data is consistently reliable, this indicates the robustness of the replicated effect. This is desirable in research because results that are replicated a number of times, and therefore established as reliable, help to support and refine theories of the behaviours under investigation.
Other psychological research methods are less easy to replicate, particularly if they investigate one-off situations and measure qualitative data. This means that the reliability of naturalistic observations, interviews and case studies is harder to establish, with the result that such research may need to rely more on triangulation (see below). This process is naturally more time-consuming, and researchers therefore need to be more innovative in working out ways to establish the reliability of their findings.
5.3 Generalisation for Quantitative Research
One of the issues in quantitative research is how far the results obtained can be generalised to the target population if restricted samples are used. A large number of experimental quantitative studies around the world rely on volunteer samples of university students in a restricted age range. This leads to biased samples and restricts the extent to which findings can be generalised to other groups in the target population. Random sampling could help reduce this issue because it gives every member of the target population an equal chance of being selected, so the sample is more likely to be representative of that population. However, random sampling is time-consuming, especially if the target population is large. Added to this, people selected via random sampling may elect not to take part. These issues have contributed to the popularity of non-representative sampling techniques such as volunteer and opportunity sampling in psychology research. In order to address the lack of generalisability to the target population, however, studies could be repeated with a variety of different participant groups within a target population to establish how far the results reflect the behaviour of different target population members.
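The mechanics of random sampling can be sketched in a few lines. This is a minimal illustration with an invented target population of 500 named students; the key property is that every member has an equal chance of selection.

```python
import random

random.seed(1)

# Hypothetical target population of 500 students.
target_population = [f"student_{i}" for i in range(1, 501)]

# Draw a simple random sample of 20 without replacement:
# every member of the population is equally likely to be chosen.
sample = random.sample(target_population, 20)
print(sample[:5])
```

In practice, of course, the hard part is not drawing the sample but obtaining a complete list of the target population and persuading those selected to take part.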
5.4 Transferability for Qualitative Research
In qualitative research, the findings from a study can be transferred to settings and/or populations outside the study only if they are corroborated by the findings of similar studies (for example, in multiple case studies). This concept is highly similar to generalisation in quantitative research, as it seeks to establish how far the results of research reflect the wider population or other settings. It is those who read a study, such as other researchers in the same field, who make the initial decision regarding the degree of transferability. The researcher(s) involved in the original study must therefore ensure that readers are given as much information as possible so that they can judge whether the study is applicable to other contexts and populations. Readers can then use the results of other detailed case studies to compare the degree of transferability of the research findings.
5.5 Triangulation
Triangulation is an approach used to ensure that enough evidence is available to make a valid claim about the results of a study. There are a number of ways that triangulation can be utilised in research.
Methodological triangulation tests a theory or a psychological phenomenon using different methods of inquiry. Data from a variety of methods (survey, interview, case study, experiments) is used to help validate the results of a study. Both qualitative and quantitative data can be involved in methodological triangulation.
Theory triangulation is used to assess the results of a study from a range of theoretical perspectives. One way to achieve this is to bring together researchers from a number of disciplines and ask them to interpret the findings. If there is agreement among them, then the validity of the study gains ground.
Researcher triangulation can also be used to check how the data is being collected and interpreted in a study. For example, an observational study could use a number of the researcher’s colleagues as additional observers to assess how far the data collected is similar across all of them. The more similar the results, therefore, the more valid the findings become.
A final way of checking the validity of the findings of a study is to use data triangulation whereby the data collected is compared to other data collected on the same behaviour under investigation. For example, a case study collecting data about coping skills in an amnesic individual could compare this with interview data and observational data on coping behaviour in other amnesic patients to test the validity of the case study’s results.
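Researcher triangulation in observational studies is often quantified as inter-observer agreement: the proportion of observation intervals on which two observers assign the same code. The codings below are invented for illustration; the calculation itself is the standard percentage-agreement measure.

```python
# Hypothetical codings of the same 10 observation intervals
# by two independent observers in an observational study.
obs_a = ["play", "rest", "play", "play", "rest",
         "play", "rest", "rest", "play", "play"]
obs_b = ["play", "rest", "play", "rest", "rest",
         "play", "rest", "play", "play", "play"]

# Count the intervals where both observers agree.
agreements = sum(a == b for a, b in zip(obs_a, obs_b))
percent_agreement = 100 * agreements / len(obs_a)
print(f"inter-observer agreement: {percent_agreement:.0f}%")  # → 80%
```

The higher the agreement across observers, the more confident the researcher can be that the coded data reflects the behaviour itself rather than one observer's interpretation of it.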
Further Reading
The Pamoja Teachers Articles Collection has a range of articles relevant to your study of the approaches to research in psychology.
References
Bartlett, F.C. (1932). Remembering: A study in experimental and social psychology. Cambridge, England: Cambridge University Press.
Bronzaft, A. L., & McCarthy, D. (1975). The effect of elevated train noise on reading ability. Environment and Behavior, 5, 517–528.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings. Boston, MA: Houghton Mifflin.
Curtiss, S. (1977). Genie: A psycholinguistic study of a modern day 'wild child'. New York, NY: Academic Press.
International Baccalaureate (2017). Diploma Programme Psychology Guide. Cardiff, Wales: IBO.
Lam, C. B., McHale, S. M., & Crouter, A.C. (2012). Parent–child shared time from middle childhood to late adolescence: Developmental course and adjustment correlates. Child Development, 83, 2089–2103.
Leder, G.C., & Forgasz, H.J. (2004). Australian and international mature students: the daily challenges. Higher Education Research and Development, 23(2), 183–198.
Lenneberg, E. (1967). Biological foundations of language. New York, NY: John Wiley and Sons.
Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13, 585–589.
Miranda, A., Presentacion, M. J., & Soriano, M. (2002). Effectiveness of a school-based multicomponent program for the treatment of children with ADHD. Journal of Learning Disabilities, 35(6), 546–562.
Pai, S., & Kapur, R.L. (1981). The burden on the family of a psychiatric patient: Development of an interview schedule. British Journal of Psychiatry, 138, 332–335.
Pedersen, C. A., Vadlamudi, S. V., Boccia, M. L., & Amico, J. A. (2006). Maternal behavior deficits in nulliparous oxytocin knockout mice. Genes, Brain and Behavior, 5, 274–281.
Piliavin, I.M., Rodin, J.A., & Piliavin, J. (1969). Good Samaritanism: an underground phenomenon? Journal of Personality and Social Psychology, 13, 289–299.
Popper, K. (1959). The logic of scientific discovery. London, England: Hutchinson.
Rauscher, F. H., Shaw, G. L., & Ky, C.N. (1993). Music and spatial task performance. Nature, 365 (6447), 611.
Rutten, E.A., Stams, G.J.J.M., Biesta, G.J.J., Schuengel, C., Dirks, E., & Hocksma, J.B. (2007). The contribution of organized youth sport to antisocial and prosocial behaviour in adolescent athletes. Journal of Youth and Adolescence, 36, 255–264.
Schulze, B., & Angermeyer, M.C. (2003). Subjective experiences of stigma. A focus group study of schizophrenic patients, their relatives and mental health professionals. Social Science and Medicine, 56, 299–312.
Suderman, M., Borghol, N., Pappas, J.J., Pinto Pereira, S.M., Pembrey, M., Hertzman, C., Power, C., & Szyf, M. (2014). Childhood abuse is associated with methylation of multiple loci in adult DNA. BMC Medical Genomics, 7(13), 1–12.