
Description

School: University of Miami
Department: Psychology
Course: Introduction to Research Methods
Professor: Rick Stuetzle
Term: Spring 2015
Name: Exam 1 Study Guide
Description: Chapter notes and class lecture notes
Uploaded: 09/21/2015

PSY 290: Intro to Research Methods 9/21/15 1:42 PM





Approaching Psychology as a Science

• 3 Ways of “Knowing”  

o Authority  

▪ Reliance upon authority figures  

▪ Ex. Parents, teachers, government

o Use of Reason  

▪ A priori method  

▪ Result of discussion between people with different ideas leading to consensus

▪ Rationalism helps make science transparent

o Experience

▪ Empiricism: process of learning through direct observation or experience  

• The scientific approach to studying psychology combines all other ways of knowing but  puts emphasis on  





o Objective observation of phenomena

Characteristics of Scientific Thinking in Psychology  

• Assume Determinism (Knowing what determinism is in science)  

o All behaviors have causes  

▪ Therefore, human behavior is predictable

• Makes Systematic Observations  

o Requires precise definitions, appropriate measures and methodologies  

o Operational Definition: define a variable in terms of how it's being measured

▪ How everything is measured and how you are measuring it  

▪ Ex. Anxiety, depression





• Produces Public Knowledge

o Objectivity

▪ Eliminate human factors (e.g. expectation & bias); focus on observable

behaviors that can be publicly verified

▪ Sometimes you get results that feel wrong but are right but you have to  

accept the results  

▪ The data is the data; you must accept the pattern as is

• Produces Data-Based Conclusions

o Are there data to support a claim about behavior?  

o Theories are rigorously tested explanations for observed phenomena

▪ A theory is not a fact

▪ It is an explanation for the data we have on hand

• Produces Tentative Conclusions  

o Subject to revision based on future research  

• Ask Answerable Questions  

o Based on empirical questions regarding human behavior  

▪ Questions that can be answered through systematic observations &  

experiences

▪ Questions precise enough to allow for specific predictions  

▪ Empirical Questions: set up questions that can be tested; they should be precise and concise

• Develops Theories that can be disproven

o Falsification

o If you set up your study so that your theory can only be right, then you are doing it wrong

Pseudoscience  

• What is it?  

o Looks like science but it isn’t  

o It is more entertaining than it is science  

o On the surface it has the characteristics of science, but it doesn't meet the criteria for true science

o Desire to be scientifically-based

o Ex. astrology, graphology (inferring your personality from your handwriting), phrenology (figuring out who and what you are by measuring the bumps on your skull; founded by Franz Gall)

• Why does it continue to be so appealing?  

o It is fun, and one of its characteristics is that it is never wrong

o It will collect “data” but it will not collect it in a scientific manner

o What’s wrong with relying on anecdotal evidence?  

▪ It is biased  

▪ Ex. Vaccines and Autism

• Sidestep Disproof  

o How is disconfirming evidence handled?  

• Reduces complex phenomena to overly simplistic concepts  

o Why is this so appealing to people/ consumers?  

▪ People don’t want complexity

Four Goals for Research in Psychology (usually show up on exams)

• Describing Behavior  

o Characteristics of a good description?

▪ Simple data; a sufficient amount of data that will allow you to recognize patterns → descriptive data (averages, atypical scores, variability, standard deviation)

• Explaining Behavior  

o How can we infer causality?  

▪ You check how one variable relates to the other  

• Predicting Behavior  

• Application/Control  

o Apply principles of behavior learned through research

History of Unethical Research with Human Populations  

• World War II: Nuremberg Code (1948)  

o German physicians and administrators faced criminal charges for participation in war crimes and crimes against humanity

▪ Medical experiments on concentration camp prisoners without consent, resulting in death or permanent disability

o Result= Nuremberg Code was first international document advocating voluntary  participation and informed consent  

▪ People have to give consent, and that consent must be informed

▪ Informed consent: we have to disclose what you are going to do as a research participant, including risks (physical risks, etc.)

➢ We do not tell everything that is going to happen because of the nature of the study

• Late 1950s: Thalidomide  

o Approved as sedative in Europe but no FDA approval in USA

o Prescribed in US to control sleep and nausea during pregnancy…but later found that it caused severe deformities in the fetus

▪ Deformities: flipper feet and hands

▪ Many patients didn’t know they were taking an experimental drug nor did  they give informed consent  

o Result- new regulations from FDA requiring drug manufacturers to prove  effectiveness prior to marketing  

• Tuskegee Syphilis Study (1932-1972)

o US Public Health Service research study  

▪ 600 low-income African-American males in Alabama monitored for 40 years

➢ 400 of whom had already been infected with syphilis

▪ Told they were being treated for “bad blood”; free medical examinations  

but not told about syphilis diagnosis  

▪ In 1950s proven cure (penicillin) discovered but study continued until  

1972 with participants being denied treatment  

o Result= Beneficial treatments  

• Project MKUltra  

o Began in the 1950s, ran until 1973  

o Committed many illegal actions, including using unwitting US and Canadian  citizens as subjects  

o Studied process and effects of mind control and manipulation of mental states,  interrogation and torture  

▪ Administration of drugs, including LSD  

▪ Hypnosis, sensory deprivation, isolation  

▪ Verbal and sexual abuse, and torture

o Some evidence that they also used military personnel in some of the studies, both  willingly and otherwise  

o Some beneficial findings:

▪ Gathered evidence about hypnosis, behavior modification, subliminal  

programming, and effects of a number of drugs  

▪ Gave information about possible techniques captured military and  

intelligence personnel might be exposed to, as well as ways to combat  

them  

APA Code of Ethics: General Principles and Ethical Standards

• 5 General Principles (see Table 2.1)  

o Beneficence and Non-Maleficence

o Fidelity and Responsibility  

o Integrity  

o Justice  

o Respect for Peoples’ Rights and Dignity  

Role of the IRB  

• Institutional Review Board (know what IRB stands for and what it is)

o Housed within research facilities including most universities  

▪ Members include faculty members, community members, nonscientists  

o Evaluate risks/benefits and approve informed consent forms

o Controversial re:  

▪ Ability to critique area-specific research designs

Ethical Guidelines for Research with Humans  

• Planning the Study  

o Balance the need to discover the basic laws of behavior with the need to protect  participants  

▪ Defining the degree of “risk” for participants

➢ Are the situations similar to “those ordinarily encountered in daily life or during the performance of routine physical or psychological examination or tests”?

• i.e., acceptable risk stays within ordinary, daily experiences

➢ Especially important consideration with special populations

▪ Continual monitoring of “risk” throughout course of study

➢ Ex. HRT study conducted by NIH

• Ensuring that participants are volunteers

o Getting informed consent

▪ must give enough meaningful information for participants to volunteer

o Consent must be documented

▪ Exceptions= anonymous surveys, naturalistic observations

▪ See Fig 2.3 for sample consent form  

o Withholding information about the true purpose of a study at the beginning of the  experiment  

▪ Ex. Milgram’s obedience study  

➢ Shocking people

o Offering Inducements for Participants

▪ Targets the vulnerable?

• Treating participants well  

o Full debriefing, dehoaxing, desensitizing  

▪ Debriefing: giving the full information of the study at the end  

▪ Dehoaxing: when you lied to them and you want to explain the truth to  

them  

▪ Desensitizing: where you have taken someone from a resting state to an  uncomfortable place and bring them back to resting and comfort

o Provide appropriate feedback after the study

▪ Summary of results, follow-up contact, contact info for them to reach you

o Maintaining confidentiality

▪ Identity of participants not to be revealed

➢ Exception = when the researcher is compelled by law to report certain disclosures

• ex. child abuse, intent to harm self or others

• Animal Ethics

o Animals are treated very well  

o There is a code of ethics in place about what can be done to them and what cannot

o You are responsible for the animal from the minute you get it until it dies or is euthanized

Research Ethics: Scientific Fraud

• Plagiarism  

o Taking somebody else’s work and ideas and using it as your own  

o You don’t have to copy the words for it to be plagiarism but if it is not cited  correctly it is considered plagiarism  

o You must always cite your source

• Carelessness

o People will put things down without checking the accuracy of their work

o Ex. Misquoting, statistics

9/21/15 1:42 PM

Inter-relations among Basic and Applied Research  

• Theory of Behavior  

• Study of Basic Behavior

• Accumulate Body of Knowledge

• Accumulate Body of Applied Knowledge

• Generate Applied Research Question

o Today’s applied answer can be tomorrow’s question  

Laboratory Vs. Field Research  

• Laboratory 

o Does not have to be an actual lab

▪ Pros

➢ There is a lot of control in a lab (the biggest strength)

• You can control the situation

o Temperature, the size of the room, time of day

➢ It makes it easier for other people observing you as the experimenter and making sure that you are following protocol

▪ Cons

➢ Some people might act differently in a setting that they are not used to

• Lab settings are artificial

• People are not under observation in their everyday lives

• Does not give realistic data

➢ Mundane Realism

• Labs look and feel artificial, so the findings aren’t realistic enough

➢ The information participants give may be what they think the experimenter wants to see or hear

• Field  

o Pros

▪ You get more accurate data

▪ The mundane realism is higher  

o Cons

▪ It is hard to have a lot of control  

➢ You can still pick the time, people, etc. → you just cannot control what is going to happen

o If it is purely naturalistic observation, then the IRB does not have to be involved

o If you are going into a non-public naturalistic setting (ex. a school), you need approval from the IRB

• How do you decide which study to conduct?

o Depends on what your goal is  

o How much control do you need

• Dutton and Aron Study  

o The bridge study  

o They were testing the two factor theory of emotion  

▪ The two factors are physiological arousal (blood pressure goes up and  down, turn red, heart rate increase, pupils dilating, etc) and the  

interpretation of that arousal  

▪ Physiological arousal can be general  

o They originally conducted a field study  

▪ They had a female confederate approach men in a park setting on a bridge

➢ One bridge was higher and swayed, and the other bridge was lower

➢ She asked them if they wanted to be in a study, asked basic questions, and gave the TAT (see a picture and tell the story) before and after

➢ She told the participants that if they had any questions they could call her back afterward

▪ On the higher bridge the men are having more physiological arousal, and they are going to interpret that arousal as attraction to the woman

o More men from the higher bridge called the woman back

o They said that this supports the two factor theory of emotion

▪ They used men who were already crossing the bridge

▪ If you do not manipulate the variable, then you cannot infer cause and effect

▪ They couldn’t choose which men were on the higher bridge and which were on the lower bridge

➢ Maybe the men on the higher bridge were more confident and were willing to take risks more than the men who were on the lower bridge

o They then did it in a laboratory setting  

▪ They randomly assigned the men  

▪ They injected some of the men with adrenaline (high arousal)  

▪ They then found that the men with higher arousal were more likely to call  the women than the men with less arousal

Quantitative vs. Qualitative Research  

• Quantitative  

o Researchers are more likely to use quantitative because it includes numbers

▪ Ex. Make people fill out questionnaires and you get a score (such as the BDI; whatever score you get determines how depressed you are) → allows you to calculate averages, standard deviations, etc.

o Data Format

o Presentation of Results

• Qualitative  

o Many people have a bias towards qualitative research  

o The data just describe; they are not reduced to exact numerical scores

o Data Format

o Presentation of Results

Getting Started: Asking an Empirical Question

• A gradual process of narrowing down a broad topic to a specific question  

• Two essential features of an empirical question  

o Answerable with data  

o All terms must be precisely defined  

▪ Operational definition: when you define a construct or variable in terms of how it is being measured in that study

➢ Ex. depression as a total score on the BDI, or depression as whether someone answers yes or no to a question

▪ Makes your job easier AND is essential for replication  

• Does social media use increase depression in youth?

o Operational definitions  

▪ Population of interest: teenagers  

▪ Independent Variable: Social Media  

▪ Social media use

➢ Hours on Facebook per day

o Phrase as an empirical question  

• Does exposure to media violence damage children?

o Operational Definition

o Phrase as an empirical question
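
A minimal sketch (hypothetical variable names and scoring rules, not from the lecture) of how operational definitions turn these constructs into something you can actually compute:

import statistics  # not required here, but handy when you summarize the scores later

def social_media_use(hours_on_facebook_per_day):
    # Operational definition: social media use = self-reported hours on Facebook per day
    return hours_on_facebook_per_day

def depression_score(bdi_item_responses):
    # Operational definition: depression = total score on a BDI-style questionnaire
    # (sum of the item responses); the items and scoring here are invented for illustration
    return sum(bdi_item_responses)

use = social_media_use(3.5)                    # hypothetical participant: 3.5 hours per day
dep = depression_score([1, 0, 2, 1, 3, 0, 1])  # hypothetical responses; total = 8
print(use, dep)

Because both terms are defined by how they are measured, another researcher could replicate the study using exactly the same operations.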

9/21/15 1:42 PM

Getting Started: Sources of Research Questions  

• Observation of Behavior  

o Ideas you generate simply by watching and/ or knowing someone (ex. “What  causes XX to act like YY”)  

• Classic example of observations serving as foundation for research questions

o Kitty Genovese Case (1964)

▪ The bystander effect  

▪ A woman was attacked outside of her apartment; she screamed for help and no one intervened

➢ The attacker left, came back, and killed her

▪ People in New York got a reputation that they didn’t care about anyone but  

themselves

▪ People assumed that someone else had called the police, so they didn’t do anything

• Serendipity  

o Discovering something while looking for something else entirely  

o Example of a serendipitous observation of infant temperament

▪ Exuberant infants  

• Theory  

o Serves as a working truth subject to revision pending outcome of empirical  

research study

▪ Hypothesis= prediction about specific events based on theory  

➢ A testable prediction, NOT a guess

o A theory that explains every possible outcome is USELESS because it can never be wrong

• Characteristics of Good Theories

o Productivity- stimulate a lot of research

o Falsification- a good theory is open to testing and can be proved wrong (= is  falsifiable)  

▪ Therefore, you need to state your hypothesis and design your study in a  

way that allows for your hypothesis to be disproved  

o Parsimony- simplicity. Better theory is one that explains phenomenon with a  minimum number of constructs and assumptions  

▪ they explain everything in the simplest way possible  

▪ Getting from point A to point B with the simplest explanation possible  

o Everything is built on assumptions  

• Developing Research from Other Research

o Replication

▪ Direct Replication

➢ When you perform the exact same study as the one done originally

▪ Conceptual Replication

➢ When you run a study but with different methodology or materials

➢ Ex. Mozart Effect

• The first time they tried to replicate it, they used a different piece of music

o Extension

▪ Where you take the idea but extend it  

▪ You see a study and add more to it, ask what if or research something else  with that study  

o Extending previous research based on the “What’s Next?” Question

▪ Unanswered or new questions generated based on the results of completed studies

▪ A process that generates systematic programs of interconnected experiments

➢ Within a research group

➢ Among a community of researchers

o Some Examples of the “What’s Next?” Question and the Generation of New  Research Questions  

▪ It has been reported consistently that…

➢ “U.S.-born Latino high school graduates enroll in college at nearly the same rates as whites but are much less likely to earn college degrees”

• What’s next?

o What majors?  

o What is their socioeconomic class?

o Where do they live?

9/21/15 1:42 PM

From General Constructs to Specific Measures  

• Operational Definition= a definition of a concept or variable in terms of precisely described  operations, measures, or procedures

• Construct of interest → Operational Definition → Specific Behaviors to Measure

An Example of a Measure Used to Infer a “Non-observable” Construct

• The use of the habituation design in the study of infant cognition  

o Based on known fact that infants prefer to look at novelty  

▪ Show infant the same stimulus over and over and they will gradually  

decrease the amount of time they spend looking  

➢ = Habituation

▪ Show the infant something new and their looking time will go back up

➢ = Dishabituation

o Habituation design is used to infer infant “understanding”  

• Example:  

o Testing infants’ understanding of object permanence

▪ Object permanence: understanding that objects continue to exist when  

they are out of sight  

Evaluating Measures  

• Results are repeatable when behaviors are re-measured

• Is measured score close to the “true” score?

• Measurement error = difference between the measured score and the true score

o Measured Score = True Score + Error

o Measured Score – True Score = Error

• Reliability  

o Depends on the relative contribution of true score vs. measurement error to the measured score

▪ If measurement error is small, reliability is high

▪ If measurement error is large, reliability is low

o Ex. GRE Scores  

▪ What’s the problem if you have huge variability in an individual’s score from time to time?

➢ Test-Retest Reliability

• How similar is the score at time 1 to the score at time 2?

o Does the score at time 1 correlate with the score at time 2?

o High similarity = strong correlation (see the sketch below)
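
A minimal sketch of test-retest reliability as a correlation, using made-up scores and assuming NumPy is available:

import numpy as np

# Hypothetical GRE-style scores for the same six people, measured at two times
time1 = np.array([150, 162, 158, 170, 145, 165])
time2 = np.array([152, 160, 159, 168, 147, 166])

# Test-retest reliability: correlate score at time 1 with score at time 2
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # a value near 1.0 = high similarity = strong correlation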

Evaluating Measures: Validity  

• Validity  

o does the measure actually measure what you hope it does?

▪ Ex. Intelligence Testing  

➢ What other than intelligence might you be worried you are testing?

• Reliability is necessary for validity  

• Types of Validity  

o Content Validity  

▪ The measure makes sense…

➢ IQ—which has greater validity?

• a test of reasoning/thinking  

• a test of ability to ride bike between 2 white lines  

▪ but this isn’t enough to show that a measure is valid…

o Criterion Validity  

▪ Does measure relate to an outcome or criterion?

▪ Does the measure  

➢ Accurately predict future behavior?

➢ Meaningfully relate to some other measure of behavior?

o Construct validity  

▪ Is the construct measured by the instrument valid?

▪ Is this the best instrument for measuring it?

▪ Scores on a test measuring the construct

➢ Should relate to scores on tests of theoretically related constructs

• = Convergent validity

➢ Should not relate to scores on tests of theoretically unrelated constructs

• = Discriminant validity

o Validity assumes reliability (i.e., valid measures must be reliable)  

▪ BUT a measure can be reliable without being valid  

➢ Example (see the sketch below)
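
A minimal sketch of checking convergent and discriminant validity, using entirely made-up scores and assuming NumPy is available:

import numpy as np

# Hypothetical data: a new anxiety scale, an established anxiety scale, and shoe size
new_anxiety   = np.array([10, 14, 9, 20, 16, 7, 12, 18])
other_anxiety = np.array([11, 15, 8, 19, 17, 6, 13, 17])  # theoretically related construct
shoe_size     = np.array([8, 10, 9, 7, 11, 8, 10, 9])     # theoretically unrelated construct

convergent_r   = np.corrcoef(new_anxiety, other_anxiety)[0, 1]  # should be high
discriminant_r = np.corrcoef(new_anxiety, shoe_size)[0, 1]      # should be near zero
print(round(convergent_r, 2), round(discriminant_r, 2))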

Scales of Measurement  

• Nominal Scores  

o a category label  

• Ordinal Scales  

o Sets of rankings  

• Interval Scales  

o Like a rank ordering but an equal interval between events

• Ratio Scales

o Like an interval scale but has a true zero point (typical in studies using physical  measures)

▪ To tell the difference between interval and ratio scales ask yourself  questions like  

➢ Does 0 degrees F mean there is no temperature? No

➢ Does a dress size of 0 mean the dress does not exist? No

➢ If you could have a height of 0 cm, would that mean there is no height? Yes

▪ Includes anything you can count (ex. # of cars in parking lot, # of students in lecture hall)… in these cases 0 means there are no cars in the parking lot or no students in the lecture hall

9/21/15 1:42 PM

Types of Validity  

• BE SURE YOU CAN EXPLAIN THE LOGIC OF THIS STATEMENT!

o Validity assumes reliability (i.e. valid measures must be reliable)  

▪ BUT a measure can be reliable without being valid  

➢ Example

• Reliability must come first

• You have to get a consistent reading on whatever you are measuring

• There is no such thing as a measure that is valid sometimes

o If it is valid only sometimes, then it is not valid

Basic Statistical Analysis  

• Descriptive vs. Inferential Statistics  

o Descriptive = summary of data collected from the sample of participants in a study

▪ Function is to reduce a large amount of data to a smaller, clearer set

▪ Presented numerically and/or visually  

o Inferential= allows you to draw conclusions about data that can be applied to  broader population  

Some Common Descriptive Statistics  

• Central Tendency  

o What’s the most typical score in a set?

▪ Mean = average

➢ Sum of all scores / total number of scores

▪ Median = score in the exact middle of an ordered set of scores

➢ Median location = (N + 1) / 2

➢ Conditions under which the median is more representative than the mean?

▪ Mode = score occurring most frequently in a set of scores

▪ Mode= score occurring most frequently in a set of scores  

• Variability  

o Amount of spread/distance between scores in a set  

▪ Range= difference between highest and lowest scores  

▪ Standard Deviation= average amount by which scores deviate from the  

mean  

• Important to understand both central tendency AND variability in your data  
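
A minimal sketch of these descriptive statistics, using made-up scores and only Python’s standard library:

import statistics

scores = [72, 85, 91, 68, 85, 77, 94, 85, 60, 88]

mean = statistics.mean(scores)            # sum of all scores / total number of scores
median = statistics.median(scores)        # middle score of the ordered set; location = (N + 1) / 2
mode = statistics.mode(scores)            # most frequently occurring score
score_range = max(scores) - min(scores)   # range = highest score minus lowest score
sd = statistics.stdev(scores)             # sample standard deviation (spread around the mean)

print(mean, median, mode, score_range, sd)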

Describing Data Visually  

• Histogram  

o A plot of the frequency of each score

▪ X-axis: scores

▪ Y-axis: frequency  

o Positively skewed: tail points right

o Negatively skewed: tail points left

▪ On an exam you want your scores to be negatively skewed, because most of the scores are high

o Be able to describe a histogram for the exam  

• Stem and Leaf Display

o Easier for data with a lot of variability (ex. range= 0 to 100)  

o Cluster by intervals/ranges of scores  
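
A minimal sketch of a histogram for made-up exam scores, assuming matplotlib is installed; most scores are high, so the tail points left (negatively skewed):

import matplotlib.pyplot as plt

# Made-up exam scores: mostly high, with a few low stragglers
scores = [95, 92, 90, 88, 87, 85, 84, 83, 80, 78, 75, 70, 62, 55, 40]

plt.hist(scores, bins=8)   # x-axis: score intervals; y-axis: frequency
plt.xlabel("Score")
plt.ylabel("Frequency")
plt.title("Exam scores (negatively skewed)")
plt.show()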

Inferential Statistics  

• Do my results apply to the wider population of interest?

• Null Hypothesis (H0)  

o No difference in scores/ performance between different groups/ conditions you are  studying  

• Alternative Hypothesis (H1)=  

o Your research hypothesis  

• Goal of research is to try to disprove or reject H0

• As a pair they must have two properties

o Mutually exclusive (one is true, the other CAN’T be true)  

o Exhaustive (they exhaust possible outcomes)  

▪ There is no third possibility  

▪ One of them has to be true  

• Two possible research outcomes  

o Fail to reject Null Hypothesis (H0)

▪ Sample differences observed were most likely chance differences—not  

generalizable  

o Reject Null Hypothesis  

▪ Sample differences observed can be generalized to broad population  

➢ Different from “accepting” H1 (Alternative Hypothesis)

➢ You are specifying the degree of confidence with which H0 can be rejected

➢ Alpha level specifies the probability that the result is due to chance

• Errors in decisions regarding rejection of H0  

• The bigger the effect size, the higher the power  

Beyond Hypothesis Testing

• Effect Size  

o How big is the difference between the sets of scores?

▪ Provides a common metric across studies

▪ Small effect

➢ Low power

▪ Medium effect

➢ Moderate power

▪ Large effect

➢ Less overlap between distributions, more power
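
A minimal sketch tying these pieces together—null hypothesis test, alpha level, and effect size (Cohen’s d)—using made-up data and assuming NumPy and SciPy are installed:

import numpy as np
from scipy import stats

# Made-up scores for two groups/conditions
group1 = np.array([14, 16, 15, 18, 17, 15, 19, 16])
group2 = np.array([12, 13, 11, 14, 12, 15, 13, 12])

# H0: no difference between the groups; H1: the research hypothesis
t, p = stats.ttest_ind(group1, group2)
alpha = 0.05
decision = "reject H0" if p < alpha else "fail to reject H0"

# Cohen's d: difference between the means divided by the pooled standard deviation
pooled_sd = np.sqrt((group1.var(ddof=1) + group2.var(ddof=1)) / 2)
d = (group1.mean() - group2.mean()) / pooled_sd

print(decision, round(d, 2))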

Chapter 1: Scientific Thinking in Psychology 9/22/14 12:24 PM

Ways of Knowing  

• Authority:  

• Use of Reason  

o The point is that the value of a logically drawn conclusion depends on the truth of  

the premises, and it takes more than logic to determine whether or not the  

premises have merit  

o Peirce labeled the use of reason, and a developing consensus among those

debating the merits of one belief over another, the a priori method for acquiring  

knowledge  

• Experience 

o Empiricism—the process of learning things through direct observation or  

experience, and reflection on those experiences  

o Belief perseverance is motivated by a desire to be certain about one’s knowledge,  

it is a tendency to hold on doggedly to a belief, even in the face of evidence that  

would convince most people the belief is false  

o Confirmation bias: is a tendency to search out and pay special attention to  

information that supports one’s beliefs while ignoring information that contradicts  

a belief  

o Strongly held prejudices include both belief perseverance and confirmation bias  

o Availability heuristic: it occurs when we experience unusual or very memorable  

events and then overestimate how often such events typically occur

o “Go with your initial gut feel” a phenomenon that Kruger, Wirtz, and Miller called  

the first instinct fallacy  

• The Ways of Knowing and Science  

o The most reliable way to develop a belief, according to Charles Peirce, is through  

the method of science

▪ Its procedures allow us to know “real things, whose characters are entirely independent of our opinions about them”

Science as a Way of Knowing  

• Determinism simply means that events, including psychological ones, have causes

• Discoverability means that by using agreed-upon scientific methods, these causes can be  discovered with some degree of confidence  

• Science Assumes Determinism  

o Statistical Determinism: this approach argues that events can be predicted, but  

only with a probability greater than chance

o Whether the choices we make in life are freely made or not is a philosophical matter, and our personal belief about free will must be an individual decision arrived at through the use of reason

o The best psychologists can do is to examine scientifically such topics as

▪ A) the extent to which behavior is influenced by a strong belief in free will

▪ B) The degree to which some behaviors are more “free” than others (i.e. require more conscious decision making)

▪ C) What the limits might be on our “free choices”  

• Science Makes Systematic Observations  

o The scientist’s systematic observations include using  

▪ A) Precise definitions of the phenomena being measured  

▪ b) Reliable and valid measuring tools that yield useful and interpretable data

▪ C) Generally accepted research methodologies  

▪ D) Logic for drawing conclusions and fitting those conclusions into general  theories  

• Science Produces Public Knowledge  

o Objectivity

▪ For Peirce, being objective meant eliminating such human factors as  expectation and bias  

▪ Rather, an objective observation as the term is used in science, is simply  one that can be verified by more than one observer  

o This process of reproducing a study to determine if its results occur reliably is  called replication  

o Questions are raised when results cannot be replicated  

o Introspection: Method used in the early years of psychological science in which an  individual completed a task and then described the events occurring in  

consciousness while performing the task  

▪ The problem with introspection was that although introspectors underwent  rigorous training that attempted to eliminate the potential for bias in their  self-observations, the method was fundamentally subjective  

• Science Produces Data- Based Conclusions 

o Data Driven: to be supported by evidence gathered through a systematic  procedure  

• Science Produces Tentative Conclusions 

o Science is a self-correcting enterprise and its conclusions are not absolute

• Science Asks Answerable Questions  

o Empirical Questions: are those that can be answered through the systematic  observations and techniques that characterize scientific methodology  

▪ They are questions precise enough to allow specific predictions to be made  • Science Develops Theories That Can be Falsified  

o Hypothesis, which is a prediction about the study’s outcome

o Hypotheses often develop as logical deductions from a theory

▪ Which is a set of statements that summarize what is known about some  

phenomena and propose working explanations for those phenomena  

▪ A critically important attribute of a good theory is that it must be precise

enough so it can be disproven  

➢ = Falsification

Psychological Science and Pseudoscience  

• Pseudoscience: is applied to any field of inquiry that appears to use scientific methods and  tries hard to give that impression but is actually based on inadequate, unscientific  methods and makes claims that are generally false or, at best, simplistic  

• Associates with True Science 

o Proponents of pseudoscience do everything they can to give the appearance of  being scientific  

• Relies on Anecdotal Evidence 

o A second features of pseudoscience, and one that helps explain its popularity, is its  reliance on and uncritical acceptance of anecdotal evidence

▪ Specific instances that seem to provide evidence for some phenomenon  

o The difficulty is that anecdotal evidence is selective  

o Effort Justification: the idea is that after people expend significant effort they feel compelled to convince themselves the effort was worthwhile

• Sidesteps the Falsification Requirement 

o Pseudoscience, any contradictory outcome can be explained or, more accurately,  explained away  

▪ Yet a theory that explains all possible outcomes fails as a theory because it  can never make specific predictions  

o Another way falsification is sidestepped by pseudoscience is that research reports in pseudoscientific areas are notoriously vague and are never submitted to reputable journals with stringent peer review systems in place

• Reduces Complex Phenomena to Simplistic Concepts

o A final characteristic of pseudoscience worth noting is that these doctrines take  what is actually a complicated phenomenon (the nature of human personality) and  reduce it to simplistic concepts  

The Goals of Research in Psychology 

• Scientific research in psychology has four related goals  

o Researchers hope to develop complete descriptions of behaviors, to make predictions about future behavior, to provide reasonable explanations of behavior, and to apply what is learned (application)

• Description  

o Description: in psychology, to identify regularly occurring sequences of events, including both stimuli (environmental events) and responses (behavioral events)

o Description also involves classification, as when someone attempts to classify forms of aggressive behavior

• Prediction  

o Laws: to say that laws exist is to say that regular and predictable relationships exist for psychological phenomena

o The strength of these relationships allows predictions to be made with some  degree of confidence  

• Explanation  

o Explanation: to explain a behavior is to know what caused it  

▪ The concept of causality is immensely complex, and its nature has  

occupied philosophers for centuries  

• Application  

o Application: refers simply to the way so of applying principles of behavior learned  through research  

A Passion for Research in Psychology (Part 1)  

• Eleanor Gibson 

o “Visual cliff” studies  

o The visual cliff studies, showing the unwillingness of eight-month olds to cross the  “deep side” even with Mom on the other side, are now familiar to any student of  introductory psychology  

• B.F. Skinner (1904-1990) 

o His work on operant conditioning created an entire subculture within experimental  psychology called the experimental analysis of behavior

Chapter 2: Ethics in Psychological Research 9/21/15 1:42 PM

Preview & Chapter Objectives  

• A system of ethics is a set of “standards governing the conduct of a person or the members of a profession”

• Research psychologists must  

o A) treat human research participants with respect and in a way that maintains  

their rights and dignity  

o B) Care for the welfare of animals when they are subjects of research

o C) Be scrupulously honest in the treatment of data  

Developing the APA Code of Ethics  

• Using the critical incidents technique, the committee surveyed the entire membership of the APA, asking them to provide examples of “incidents” of unethical conduct they knew about firsthand and to “indicate what they perceived as being the ethical issue involved”

o Although most concerned the practice of psychology, some of the reported  

incidents involved the conduct of research  

• The five general principles reflect the philosophical basis for the code as a whole

o Beneficence and Nonmaleficence: establishes the principle that psychologists must constantly weigh the benefits and the costs of the research they conduct and seek to achieve the greatest good in their research

o Fidelity and Responsibility: obligates researchers to be constantly aware of their  responsibility to society and reminds them always to exemplify the highest  

standards of professional behavior in their role as researchers  

o Integrity: compels researchers to be scrupulously honest in all aspects of the  

research enterprise  

o Justice: obligates researchers to treat everyone involved in the research enterprise  with fairness and to maintain a level of expertise that reduces the chances of their  work showing any form of bias  

o Respect for People’s Rights and Dignity: translates into a special need for research  psychologists to be vigorous in their efforts to safeguard the welfare and protect  

the rights of those volunteering as research participants

Ethical Guidelines for Research with Humans  

• Weighing Benefits and Costs: The Role of the IRB 

o Research Participants (or subjects)  

o Stanley Milgram

▪ Milgram induced volunteers to obey commands from an authority figure, the experimenter

▪ Playing the role of teachers, participants were told to deliver what they  thought were high-voltage shocks (no shocks were actually given) to  another apparent volunteer  

▪ He was sharply criticized for exposing his volunteers to extreme levels of stress, for producing what could be long-term adverse effects on their self-esteem and dignity, and, because of the degree of deception involved, for destroying their trust in psychologists

o The experimenter always faces the conflicting requirements of

▪ A) producing meaningful research results that could ultimately increase our knowledge of behavior and add to the general good

o Institutional Review Board or IRB: In a university or college setting, this group  consists of at least five people, usually faculty members from several departments  and including at least one member of the outside community and a minimum of  one nonscientist  

o Proposals that are exempt from full review include studies conducted in an  educational setting for training purposes  

o Proposals receiving expedited review include many of the typical psychology laboratory experiments in basic processes such as memory, attention, or perception, in which participants will not experience uncomfortable levels of stress or have their behavior influenced in any significant fashion

o All other research usually requires full review by the IRB

o An important component of an IRB’s decision about a proposal involves determining the degree of risk to be encountered by participants

▪ When there is minimal or no risk, IRB approval is usually routinely granted through an expedited review, or the proposal will be judged exempt from review

▪ When participants are “at risk” a full IRB review will occur, and experimenters must convince the committee that

???? A) the value of the study outweighs the risk  

???? B) the study could not be completed in any other fashion

???? C) they will scrupulously follow the remaining ethical guidelines to  ensure those contributing data are informed and well treated  

o One issue is the extent to which IRBs should be judging the details of research  procedures and designs

o A second issue concerns the perception among some researchers that it is difficult to win IRB approval of “basic” research

o A third problem is that some researchers complain about IRBs being overzealous  in their concern about risk, weighing it more heavily than warranted, relative to  the scientific value of a study  

o One unsettling consequence of IRBs being overly conservative, according to  prominent social psychologist Roy Baumeister, is that psychology is rapidly  becoming the science of self-reports and finger movements instead of the science  of overt behavior  

o A final issue that concerns psychologists is that IRBs sometimes overemphasize a  biomedical research model to evaluate proposals  

▪ As a result, they might ask researchers to respond to requests that are not  relevant for most psychological research  

o One unfortunate consequence of these four issues is a lack of consistency among  IRBs

• Informed Consent and Deception in Research 

o Informed consent: the notion that in deciding whether to participate in  psychological research, human participants should be given enough information  about the study’s purpose and procedures to decide if they wish to volunteer  

o Deception: Participants might not be told the complete details of a study at its  outset, or they might be misled about some of the procedures or about the study’s  purpose as in the eyewitness memory example you just read  

o Naturalistic and qualitative interview procedures  

o There is evidence that participants who are fully informed ahead of time about the purpose of an experiment behave differently from those who aren’t informed

o Although subjects might not be told everything about the study during the consent procedure, it needs to be made clear to them that they can discontinue their participation at any time

o It is important to note that consent is not required for research that is exempt  from full review  

o The key is whether the setting is a public one—if the study occurs in a place where  anyone could be observed by anyone else, consent is not needed  

• Informed Consent and Special Populations 

o Parents or legal guardians are the ones who give consent  

o Assent: researchers give the child as much information as possible to gauge whether the child is willing to participate

▪ Assent occurs when the “child shows some form of agreement to  

participate without necessarily comprehending the full significance of the  research, necessary to give informed consent”  

▪ Assent also means the researcher has a responsibility to monitor  

experiments with children and to stop them if it appears that undue stress is being experienced

o Legal guardians must give truly informed consent for research with people who are  confined to institutions

o It is imperative to ensure that participants do not feel coerced into volunteering for a study

• Treating Participants Well 

o Debriefing: during which the experimenter answers questions the participants might have and fills them in about the purpose of the study

▪ It is not absolutely essential that participants be informed about ALL  aspects of the study immediately after their participation  

o Participant Crosstalk: A tendency for people who have participated in a research  study to inform future participants about the true purpose of the study  

▪ There is evidence that participant crosstalk occurs especially in situations  where participants can easily interact with each

other  

o Dehoaxing: means revealing to participants the purpose of the experiment and the  hypotheses being tested (or some portion of them)  

o Desensitizing: refers to the process of reducing stress or other negative feelings  that might have been experienced in the session  

▪ Subjects are also informed that, if they wish, they may have their data  removed from the data set  

o Confidentiality: Research participants should be confident their identities will not  be known by anyone other than the experimenter and that only group or disguised  (coded) data will be reported  

▪ The only exceptions to this occur in cases when researchers might be compelled by law to report certain things disclosed by participants (ex. child abuse, clear intent to harm oneself or another)

• Research Ethics and the Internet 

o First, some websites are designed to collect data from those logging into sites

▪ this happens most frequently in the form of online surveys and

questionnaires but can involve other forms of data collection as well

o The second form of e-research involves a researcher studying the behavior of  Internet users  

▪ This research ranges from examining the frequency of usage of selected  websites to analyses of the content of web-based interactions (monitoring  the activity of a Twitter feed)  

o For e-research in which computer users contribute data, problems relating to  informed consent and debriefing exist  

o Confidentiality, researchers using internet surveys, responded to by those using  their own personal computer, must take steps to ensure the protection of the  user’s identity  

o Also, users must be assured that if their computer’s identity is returned with the  survey, the researcher will discard the information  

• Ethical Guidelines for Research with Animals  

o Animals are used in psychological research for reasons  

▪ Methodologically, their environmental, genetic and developmental histories  can be easily controlled  

➢ Genetic and lifespan developmental studies can take place quickly

o Ethically, most experimental psychologists take the position that, with certain  safeguards in place, animals can be subjected to procedures that could not be  used with humans  

• The Issue of Animal Rights 

o Some argue that humans have no right to consider themselves superior to any other sentient species—that is, any species capable of experiencing pain

▪ Sentient animals are said to have the same basic rights to privacy, autonomy, and freedom from harm as humans and therefore cannot be subjugated by humans in any way, including participating in any form of research

o Others argue that humans may have dominion over animals, but they also have a responsibility to protect them

o Critics have suggested that instead of using animals in the laboratory, researchers  could discover all they need to know about animal behavior by observing animals  in their natural habitats, by substituting nonsentient for sentient animals or by  using computer simulations  

• Using Animals in Psychological Research  

o Neal Miller

▪ Argued that

➢ A) Animal activists sometimes overstate the harm done to animals in psychological research

➢ B) Animal research provides clear benefits for the wellbeing of humans

➢ C) Animal research benefits animals as well

▪ Miller argued that situations involving harm to animals during research  

procedures are rare, used only when less painful alternatives cannot be  

used, and can be justified by the ultimate good that derives from the  

studies  

▪ First, he argued that while the long history of animal conditioning research  has taught us much about general principles of learning, it also has had  

direct application to human problems  

▪ Finally, Miller argued that animal research provides direct benefits to  

animals themselves  

➢ Medical research with animals has improved veterinary care dramatically, but behavioral research has also improved the welfare of various species

o Anthrozoology—the study of human-animal interactions  

The APA Code for Animal Research  

• The animal use committee is composed of professors from several disciplines in addition to  science and includes someone from outside the university  

• The guidelines for using animals deal with  

o a) the need to justify the study when the potential for harm to the animals exists

o b) the proper acquisition and care of animals, both during and after the study

o c) the use of animals for educational rather than research purposes

• Justifying the Study 

o The researcher should  

▪ a) increase knowledge of the processes underlying the evolution,  

development, maintenance, alteration, control, or biological significance of  

behavior  

▪ b) determine the replicability and generality of prior research  

▪ c) increase understanding of the species under study  

▪ d) provide results that benefit the health or welfare of humans or other  

animals  

Scientific Fraud  

• Plagiarism: deliberately taking the ideas of someone else and claiming them as one’s own

• Falsifying data: Is a problem that happens only in science

• Data Falsification 

o First, a scientist fails to collect any data at all and simply manufactures it

o Second, some of the collected data are altered or omitted to make the overall results look better

o Third, some data are collected but “missing” data are guessed at and created in a way that produces a data set congenial to the researcher’s expectations

o Fourth, an entire study is suppressed because its results fail to come out as expected

o In each of these cases, the deception is deliberate and the scientist presumably “secures an unfair or unlawful gain” (ex. publication, tenure, promotion)

o The traditional view is that fraud is rare and easily detected because faked results won’t be replicated

▪ That is, if a scientist produces a result with fraudulent data, the results  won’t represent some empirical truth  

o Fraud also may be detected during the normal peer review process

o Whenever a research article is submitted for journal publication or a grant is submitted to an agency, it is reviewed by several experts whose recommendations help determine whether the article will be published or the grant funded

o A third way of detecting fraud is when a researcher’s collaborators suspect a problem

Chapter 3: Developing Ideas for Research in Psychology 9/21/15 1:42 PM

Varieties of Psychological Research

• Basic Versus Applied Research  

o Some research in psychology concerns describing, predicting, and explaining the  fundamental principles of behavior and mental processes; this activity is referred  

to as basic research

o Applied research is so named because it has direct and immediate relevance to the  solution of real-world problems  

o It is sometimes believed that applied research is more valuable than basic  

research because an applied study seems to concern more relevant problems and  

to tackle them directly  

o It could be argued, however, that a major advantage of basic research is that the  principles and procedures (ex. shadowing) can potentially be used in a variety of  

applied situations, even though these uses aren’t considered when the basic  

research is being done  

o Basic research is a frequent target of politicians (and some IRBs as you recall from  the last chapter), who bluster about the misuse of tax dollars to fund research that  doesn’t seem “useful”  

o In some cases, what is learned from basic research can be useful in an applied project from a completely different topic area

The Setting: Laboratory versus Field Research  

• Laboratory Research: allows the researcher greater control; conditions of the study can be  specified more precisely, and participants can be selected and placed in the different  conditions of the study more systematically  

• Field Research: The environment more closely matches the situations we encounter in  daily living  

• Mundane Realism: refers to how closely a study mirrors real-life experiences  

• Experimental Realism: Concerns the extent to which a research study (whether in the  laboratory or in the field) “has an impact on the subjects, forces them to take the matter  seriously, and involes them in the procedures”  

• First, conditions in the field often cannot be duplicated in a laboratory  

• A second reason to do field research is to confirm the findings of laboratory studies and  perhaps to correct misconceptions or oversimplifications that might be derived from the  safe confines of a laboratory  

• A third reason is to make discoveries that could result in an immediate difference in the lives of the people being studied

• Fourth, although field research is ordinarily associated with applied research, it is also a  good setting in which to do basic research  

• Research Example 1—Combining Laboratory and Field Studies  

o Confederate—someone who appears to be part of the normal environment but is  actually part of the study  

o Manipulation check: This procedure is often used to be sure the intended manipulations in a study have the desired effect

o Pilot Study: Pilot studies are often used to test aspects of the procedure to be sure  the methodology is sound  

o One last point about the decision on where to locate a study concerns ethics

o In laboratory research, it is relatively easy to stick closely to the ethics code

o In the field, however, it is difficult, and usually impossible, to provide informed consent and debriefing; in fact, in some situations, the research procedures might be considered an invasion of privacy

• Quantitative versus Qualitative Research  

o Quantitative research: the data are collected and presented in the form of  numbers—average scores for different groups on some task, percentages of  people who do one thing or another, graphs, and tables of data, and so on  

o Qualitative Research: is not easily classified, but it often includes studies that collect interview information, either from individuals or groups; it sometimes involves detailed case studies; or it might involve carefully designed observational studies

o Operationism: Bridgman argued the terminology of science must be totally objective and precise, and that all concepts should be defined in terms of a set of “operations” or procedures to be performed

o Operational Definitions: The length of some object, for instance, could be defined operationally by a series of agreed-on procedures

o Despite this problem with the strict use of operational definitions, the concept has been of value to psychology by forcing researchers to define clearly the terms of their studies

o One important outcome of the precision resulting from operational definitions is  that it allows experiments to be repeated  

o Converging Operations: which is the idea that our understanding of some  behavioral phenomenon is increased when a series of investigations, all using

slightly different operational definitions and experimental procedures, nonetheless  converge on a common conclusion  

o Empirical questions may evolve out of  

▪ a) everyday observations of behavior  

▪ b) the need to solve a practical problem

▪ c) attempts to support or refute a theory  

▪ d) unanswered questions from a study just completed

Developing Research from Observation of Behavior and Serendipity  

• This phenomenon—memory is better for incomplete rather than completed tasks—is today  called the Zeigarnik effect  

• Serendipity, or discovering something while looking for something else entirely, has been  source of numerous important events in the history of science

Developing Research From Theory  

• The Nature of Theory 

o A theory is a set of logically consistent statements about some phenomenon that  ▪ A) best summarizes existing empirical knowledge of the phenomenon  

▪ B) Organizes this knowledge in the form of precise statements of  

relationships among variables  

▪ C) Proposes an explanation for the phenomenon

▪ D) Serves as the basis for making predictions about behavior  

o The essence of the theory is the proposal that whenever people hold two opposing cognitions at the same time, a state of discomfort, called cognitive dissonance, results

o Construct: a hypothetical factor that is not observed directly; its existence is inferred from certain behaviors and assumed to follow from certain circumstances

▪ Cognitive dissonance is assumed to exist following circumstances of

cognitive inconsistency and presumably leads to certain predictable  

behaviors  

o Dissonance reduction can come about by several means: one or both of the cognitions could be altered, behavior could be changed, or additional cognitions could be added to bring the two dissonant cognitions into consonance

o An important feature of any theory is its continual evolution in light of new data

▪ No theory is EVER complete

• The Relationship Between Theory and Research  

o Deduction: reasoning from a set of general statements toward the prediction of a  specific event

o Hypothesis: which in general can be considered a reasoned prediction about an  empirical result that should occur under certain circumstances  

o Induction: is the logical process of reasoning from specific events (the results of research) to the general (the theory)

o Scientists do not use the words PROVE and DISPROVE when discussing theories  and data  

▪ If a study comes out as expected, that outcome supports but cannot prove  a theory, for the simple reason that future studies could potentially come  

out in a way that fails to support it  

▪ Similarly, if a study fails to come out as hoped, that outcome cannot  

disprove a theory since future research might support it  

o Theories are indeed discarded, but only when scientists lose confidence in them, and this takes a while, occurring only after predictions have been repeatedly disconfirmed in a number of laboratories and some competing theory arrives and begins to look more attractive

Attributes of Good Theories  

• Productivity: good theories advance knowledge by generating a great deal of research, an attribute that clearly can be applied to dissonance theory

• Falsification 

o Theories that are continually resistant to Falsification are accepted as possibly true  (with the emphasis on possibly)  

• Parsimony  

o Parsimonious: this means, ideally, that they include the minimum number of constructs and assumptions needed to explain the phenomenon adequately and predict future research outcomes

▪ If two theories are equal in every way except that one is more parsimonious, then the simpler one is generally preferred

• A Common Misunderstanding About Theory 

o Theories are best understood as “working truths” about some phenomenon, always subject to revision based on new data but reflecting the most reasonable current understanding of the phenomenon

o “Facts” are the results of research outcomes that add inductive support for  theories or fail to support theories  

o Theories can never be absolutely shown to be true because of the limits of induction—that is, future studies might require that a theory be altered

o Theory never becomes fact; instead theory serves to explain facts

Developing Research from Other Research  

• Programs of research: a series of interrelated studies  

o Researchers become involved in a specific area of investigation and conduct a  series of investigations in that area that may last for years and may extend to  many other researchers with an interest in the topic  

• Research Teams and the What’s Next? Question  

o Research teams: senior researchers typically form teams within their laboratories that operate under what has been called an apprenticeship model

▪ Typically the team includes a senior researcher, several graduate students working under that person, and perhaps two or three highly motivated undergraduates who convinced the senior researcher of their interest and willingness to work

o The pilot study is an invaluable way to determine whether the researchers are on  the right track in developing sound procedures that will answer their empirical  questions

o Research in psychology  

▪ a) usually involves a continuous series of interrelated studies, each  

following logically from the prior one  

▪ b) is often a communal effort, combining the efforts of several people who  are immersed in the same narrowly specialized research area  

▪ c) is often unstructured in its early, creative stages

• Replication and Extension 

o Replication: refers to a study that duplicates some or all of the procedures of a  prior study  

o Extension: an extension, on the other hand, resembles a prior study and usually replicates part of it, but it goes further and adds at least one new feature

o Partial Replication: is often used to refer to that part of the study that replicates a  portion of the earlier work  

o Exact replication or direct replication is used to describe a point-for-point  

duplication of a study  

▪ Exact replications seldom occur because researchers are seldom rewarded for simply repeating what someone else has done
▪ Exact replications will occur, however, when serious questions are raised about a finding

▪ The possibility that listening to the music could increase ability was dubbed  the Mozart effect

Creative Thinking in Science  

• Creative thinking: in research design involves a process of recognizing meaningful  connections between apparently unrelated ideas and seeing those connections as the key  to developing the study  

o Such thinking does not occur in a vacuum, however, but rather in the context of some problem to be solved by a scientist with considerable knowledge of the problem

Chapter 4: Measurement and Data Analysis

What to Measure—Varieties of Behavior  

• Research Example 2—Habituation  

o This habituation procedure involves showing an infant the same stimulus  

repeatedly and then changing to a new stimulus  

o Habituation is defined as a gradual decrease in responding to repeated stimuli  

▪ If a new stimulus is presented and it is recognized as something new or unusual, the infant will increase the time spent looking at it

Evaluating Measures

• Reliability  

o Reliable: a measure is reliable if its results are repeatable when the behaviors are re-measured

o A behavioral measure’s reliability is a direct function of the amount of  

measurement error present  

▪ If there is a great deal of error, reliability is low, and vice versa  

▪ No behavioral measure is perfectly reliable, so some degree of measure  

error occurs with all measurement  

o Reliability is assessed more formally in research that evaluates the adequacy of  any type of psychological test  

o The degree of similarity is expressed in terms of correlation (high similarity = strong correlation)
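o As a rough illustration of how such a correlation might be computed, here is a minimal sketch (assuming Python with NumPy; the scores are made up for illustration) treating test-retest reliability as the Pearson correlation between two administrations of the same test:

    import numpy as np

    # Hypothetical scores from the same ten people tested twice, two weeks apart
    test1 = np.array([12, 15, 9, 20, 14, 11, 18, 16, 13, 17])
    test2 = np.array([13, 14, 10, 19, 15, 10, 18, 17, 12, 16])

    # Pearson correlation between the two administrations;
    # values near +1.0 indicate high test-retest reliability
    r = np.corrcoef(test1, test2)[0, 1]
    print(f"test-retest reliability r = {r:.2f}")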

• Validity 

o Valid: a measure is valid if it measures what it is designed to measure

o Content validity: this type of validity concerns whether or not the actual content of the items on a test makes sense in terms of the construct being measured

o Face Validity: which is not actually a “valid” form of validity at all  

▪ Face validity concerns whether the measure seems valid to those who are taking it, and it is important only in the sense that we want those taking our tests and filling out our surveys to treat the task seriously

o Criterion validity: concerns whether the measure

▪ a) can accurately forecast some future behavior  

▪ b) is meaningfully related to some other measure of behavior  

▪ The term criterion validity is used because the measure in question is related to some outcome or criterion

o Construct validity: concerns whether a test adequately measures some construct, and it connects directly with what is now a familiar concept to you—the operational definition

▪ Constructs are never observed directly, so we develop operational definitions for them as a way of investigating them empirically, and then develop measures for them

▪ Construct validity relates to whether a particular measurement truly measures the construct as a whole

o Scores on a test measuring some construct should relate to scores on other tests that are theoretically related to the construct (convergent validity) but not to scores on other tests that are theoretically unrelated to the construct (discriminant validity)

Reliability and Validity  

• Note that validity assumes reliability, but the converse is not true  

o Measures can be reliable but not valid; valid measures must be reliable, however

Scales of Measurement

• Measurement scales: ways of assigning numbers to events  

• Nominal Scales 

o Sometimes the number we assign to events serves only to classify them into one  group or another  

o Nominal Scale: Measurement scale in which the numbers have no quantitative  value but rather identify categories into which events can be placed

• Ordinal Scales 

o Ordinal scales of measurement are sets of rankings showing the relative standing  of objects or individuals  

• Interval Scales  

o Interval scales extend the idea of rank order to include the concept of equal  interval between the ordered events  

o Research using psychological tests of personality, attitude, and ability provides the most common examples of studies typically considered to involve interval scales
o Psychologists generally prefer to use interval and ratio scales because data on those scales allow a wider range of more sophisticated statistical analyses

o It is important to note that for interval scales, a score of zero is simply another point on the scale—it does not mean the absence of the quantity being measured

• Ratio Scales 

o Ratio scale: the concepts of order and equal interval are carried over from ordinal and interval scales, but, in addition, the ratio scale has a true zero point—that is, for ratio scales, a score of zero means the complete absence of the attribute being measured
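• The choice of scale matters for which statistics make sense. A brief sketch (hypothetical values, assuming Python's standard library) showing one sensible summary statistic for each scale; only ratio-scale scores support statements like "twice as much":

    from statistics import mean, median, mode

    jersey_numbers = [7, 23, 10, 7, 99]    # nominal: numbers are only labels
    race_finishes = [1, 2, 3, 4, 5]        # ordinal: rank order only
    temps_f = [40, 50, 60, 70]             # interval: equal intervals, no true zero
    reaction_ms = [250, 300, 500, 1000]    # ratio: true zero point

    print(mode(jersey_numbers))    # only a frequency count (mode) is sensible
    print(median(race_finishes))   # the median rank is sensible
    print(mean(temps_f))           # a mean is fine, but 80 is not "twice as hot" as 40
    print(mean(reaction_ms))       # a mean is fine, and 500 ms really is twice 250 ms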

Statistical Analysis  

• Descriptive and Inferential Statistics 

o Population consists of all members of a defined group  

o Sample: is a subset of that group  

o Descriptive Statistics: summarize the data collected from the sample of  

participants in your study

o Inferential Statistics: allow you to draw conclusions about your data that can be  applied to the wider population  

• Descriptive Statistics 

o Descriptive statistical procedures enable you to turn a large pile of numbers that cannot be comprehended at a glance into a small set of numbers that can be more easily understood
o Descriptive statistics include measures of central tendency, variability, and association, presented both numerically and visually (a short computational sketch follows this list)

o Mean: the arithmetic average, found by adding the scores together and dividing by the total number of scores

o Median: the score at the exact middle of a set of scores

o Median location: the position of the median in the ordered set of scores (e.g., with 20 scores, the median location falls midway between the tenth and the eleventh numbers in the sequence)

o Scores that are far removed from other scores in a data set are known as outliers
o Mode: the score occurring most frequently in a set of scores

o Range: the difference between the high and low scores of a group  

o Standard deviation: for a set of sample scores is an estimate of the average  amount by which the scores in the sample deviate from the mean  

o Variance: which is the number produced during the standard deviation calculation  just prior to taking the square root  

▪ Variance is standard deviation squared  

▪ It is, however, the central feature of perhaps the most common inferential procedure found in psychology, the analysis of variance

o Interquartile range (IQR): the range of scores between the bottom 25% of scores and the top 25% of scores
▪ The IQR would not change if outliers were present

o Histogram: is a graph showing the number of times each score occurs, or if there  is a large number of scores, how often scores within a defined range occur

o Frequency distribution: a table that records the number of times each score occurs
o One especially important distribution is the familiar bell-shaped curve known as the normal curve or normal distribution

▪ The normal curve is a frequency distribution just like the one for the memory scores, except instead of being an actual (or empirical) distribution of sample scores, it is a hypothetical (or “theoretical”) distribution of what all scores in the population would be if everyone were tested

o Descriptive statistics are reported three ways  

▪ If there are just a few numbers to report (e.g., means and standard deviations for the groups in an experiment), they are sometimes worked into the narrative description of the results

▪ Second, the means and standard deviations might be presented in a table
▪ Third, they might be reported in the visual form of a graph
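o The computational sketch promised above (hypothetical memory scores, assuming Python with NumPy; note that ddof=1 gives the sample rather than the population standard deviation and variance):

    import numpy as np
    from statistics import mode

    scores = np.array([14, 17, 12, 19, 15, 16, 13, 18, 15, 21])  # hypothetical memory scores

    print("mean     :", np.mean(scores))
    print("median   :", np.median(scores))
    print("mode     :", mode(scores.tolist()))
    print("range    :", np.max(scores) - np.min(scores))
    print("std dev  :", np.std(scores, ddof=1))   # sample standard deviation
    print("variance :", np.var(scores, ddof=1))   # variance = standard deviation squared

    # Interquartile range: spread of the middle 50% of scores, unaffected by outliers
    q1, q3 = np.percentile(scores, [25, 75])
    print("IQR      :", q3 - q1)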

Null Hypothesis Significance Testing  

• The first step in significance testing is to assume there is no difference in performance  between the conditions that you are studying, in this case between immediate and delayed  rewards  

o This assumption is called the null hypothesis (null = nothing), symbolized H0 and pronounced “H sub oh”

• The outcome you are hoping to find (e.g., fewer learning trials for rats receiving immediate rewards) is called the alternative hypothesis, sometimes called the research hypothesis

• Failing to reject H0 means you believe any differences in the means (and studies almost always find some differences between groups) were most likely chance differences; you have failed to find a genuine effect that can be generalized beyond your sample

• Rejecting H0 means you believe an effect truly happened in your study and the results can  be generalized  

• The research hypothesis (H1) is never proven true in an absolute sense, just as defendants are never absolutely proven guilty

• Alpha level: refers to the probability of obtaining your particular results if H0 (no difference) is really true

o Alpha is conventionally set at 0.05, but it can be set at other, more stringent levels as well
o If H0 is rejected when alpha equals 0.05, it means you believe the probability is very low that your research outcome is the result of chance factors
o Another way to put it is to say the obtained difference between the sample means would be so unexpected if H0 were true that we just cannot believe H0 is really true
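o In practice this logic is carried out with a statistical test; here is a minimal sketch (hypothetical data for the two reward groups, assuming Python with SciPy) of an independent-groups t test evaluated against alpha = 0.05:

    from scipy import stats

    # Hypothetical trials-to-learn scores for two groups of rats
    immediate = [10, 12, 9, 11, 13, 10, 12, 11]
    delayed = [15, 14, 17, 13, 16, 18, 14, 15]

    t, p = stats.ttest_ind(immediate, delayed)

    alpha = 0.05
    if p < alpha:
        print(f"t = {t:.2f}, p = {p:.4f}: reject H0 (difference unlikely to be chance)")
    else:
        print(f"t = {t:.2f}, p = {p:.4f}: fail to reject H0 (no reliable difference found)")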

• Type I and Type II Errors 

o Rejecting H0 when it is in fact true is called a Type I error  

▪ Type I errors are sometimes suspected when a research outcome fails  

several attempts at replication  

o Type II error: this happens when you fail to reject H0 but you are wrong—that is,  you don’t find a significant effect in your study, naturally feel depressed about it,  but are in fact in error  

▪ Type II errors sometimes occur when the measurements used aren’t reliable or aren’t sensitive enough to detect true differences between groups

o With these substitutions in mind, correct decisions mean either

▪ a) no real difference exists, which is OK because you didn’t find one anyway
▪ b) a real difference exists and you found it (experimenter heaven)

o A Type I error means there is no real difference but you think there is because of the results of your particular study
o A Type II error means there really is a difference but you failed to find it in your study
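o One way to see what alpha means is to simulate many studies in which H0 is true by construction; a hedged sketch (assuming Python with NumPy and SciPy) showing that roughly 5% of such tests still reject H0, and each of those rejections is a Type I error:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, alpha = 10_000, 0.05
    false_alarms = 0

    for _ in range(n_studies):
        # H0 is true by construction: both samples come from the same population
        group1 = rng.normal(loc=50, scale=10, size=20)
        group2 = rng.normal(loc=50, scale=10, size=20)
        _, p = stats.ttest_ind(group1, group2)
        if p < alpha:
            false_alarms += 1   # rejecting a true H0 is a Type I error

    print(f"Type I error rate = {false_alarms / n_studies:.3f}")  # expect roughly 0.05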

• Inferential Analysis  

o Systematic variance: the result of an identifiable factor, either the variable of interest (reinforcement delay) or some factor you’ve failed to control adequately
o Error variance: nonsystematic variability due to individual differences between the rats in the two groups and any number of random, unpredictable effects that might have occurred during the study

o The ideal outcome is to find that variability between conditions is large and variability within each condition is small

• Interpreting Failures to Reject H0

o Studies finding no differences are less likely to be published and wind up stored away in someone’s files—a phenomenon called the file drawer effect

Going Beyond Hypothesis Testing  

• Effect Size  

o Effect size: provides an estimate of the magnitude of the difference among sets of scores while taking into account the amount of variability in the scores
▪ Different types of effect size calculations are used for different kinds of research designs (one common measure, Cohen's d, is sketched after this list)

o Meta-analysis: uses effect-size analyses to combine the results from several  (often, many) experiments that use the same variables, even though these  variables are likely to have different operational definitions  

▪ The outcome of meta-analysis relates to the concept of converging  

operations

▪ Confidence in the generality of a conclusion increases when similar results occur, even though a variety of methods and definitions of terms have been used

o Confidence interval: a range of values expected to include a population value with a certain degree of confidence
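o A minimal sketch of both ideas (hypothetical two-group data, assuming Python with NumPy and SciPy; Cohen's d is computed here with a pooled standard deviation, one common convention):

    import numpy as np
    from scipy import stats

    immediate = np.array([10, 12, 9, 11, 13, 10, 12, 11], dtype=float)
    delayed = np.array([15, 14, 17, 13, 16, 18, 14, 15], dtype=float)

    # Cohen's d: the mean difference divided by the pooled standard deviation
    n1, n2 = len(immediate), len(delayed)
    pooled_sd = np.sqrt(((n1 - 1) * immediate.var(ddof=1) + (n2 - 1) * delayed.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (delayed.mean() - immediate.mean()) / pooled_sd
    print(f"Cohen's d = {d:.2f}")

    # 95% confidence interval for the difference between the two group means
    diff = delayed.mean() - immediate.mean()
    se_diff = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
    t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
    print(f"95% CI for the difference: [{diff - t_crit * se_diff:.2f}, "
          f"{diff + t_crit * se_diff:.2f}]")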

• Power 

o When completing a null hypothesis significance test, one hopes to be able to reject H0 when it is, in fact, false

o Power: a test is said to have high power if it results in a high probability that a real difference will be found in a particular study

o Power is affected by the alpha level, by the size of the treatment effect (effect size), and especially by the size of the sample

o This latter attribute is directly under the experimenter’s control and researchers  sometimes perform a power analysis at the outset of a study to help them choose  the best sample size for their study
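o A power analysis can be sketched by simulation (assuming Python with NumPy and SciPy; the effect size and sample sizes below are made-up values): estimate power as the proportion of simulated studies, each with a real built-in difference, that correctly reject H0, then raise the sample size until that proportion reaches a target such as .80:

    import numpy as np
    from scipy import stats

    def simulated_power(n_per_group, effect_size=0.5, alpha=0.05, n_sims=2000, seed=1):
        """Estimate power for a two-group t test when a real effect of the given size exists."""
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sims):
            control = rng.normal(0.0, 1.0, n_per_group)
            treatment = rng.normal(effect_size, 1.0, n_per_group)  # true difference built in
            _, p = stats.ttest_ind(control, treatment)
            if p < alpha:
                rejections += 1
        return rejections / n_sims

    # Power rises with sample size; pick the smallest n that reaches the target
    for n in (20, 40, 64, 100):
        print(f"n per group = {n:3d}  estimated power = {simulated_power(n):.2f}")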
