Psych 111, week 2 notes
These 5-page class notes were uploaded by Sara Auger on Thursday, October 6, 2016. They belong to Psychology 111 (Intro to Psychology II) at Emory University, taught by Dr. Delawalla in Fall 2016.
-Overview: Why it's important:
 o Errors in hindsight, overconfidence, and coincidence
 o Scientific attitude and critical thinking
 o Scientific method
 o Data: description, correlation, and experimentation/causation
 o Issues in psych
 o Describing data
-Some basic findings contradict intuition, so psych can't rely on intuition alone.
 o Errors in Thinking:
  1. Hindsight bias: looking back and saying "I knew this would happen"; making sense of events after the fact. (Ex: "I knew these two would get married, they're meant to be.")
  2. Overconfidence error: overestimating how much we really know; confidence is higher than accuracy, usually when we're familiar with the material.
  3. Coincidence error: tendency to misinterpret chance events; attributing more importance to something than it actually deserves, such as thinking a random sequence is a meaningful pattern.
-Instead: the scientific attitude
 o Helps combat biases
 o Curiosity --> always asking questions ("Do people eat chocolate mostly under stress?")
 o Skepticism --> not accepting a "fact" as true without challenging it ("Is there another explanation for the behavior I'm seeing?")
  1. Even after using the scientific method
  2. Change the hypothesis to suit the evidence, not vice versa
 o Humility --> seeking to find the truth rather than trying to be right
  1. Safeguards against confirmation bias (the tendency to look only for evidence that supports your theory and ignore evidence that doesn't) and belief perseverance (the tendency to keep believing what we want to believe, even in the face of contradictory data)
 o Critical thinking --> analyzing information, arguments, and conclusions rather than simply accepting them
  1. Look for hidden assumptions and decide if you agree.
  2. Look for hidden bias, politics, values, or personal connections.
  3. Put aside your own biases and look at the evidence.
  4. See if there was a flaw in how the information was collected.
  5. See if there are other possible explanations than the one put forward by the author.
-Apply the scientific method: a process by which we test ideas about the natural world.
 o Have a theory and turn it into testable hypotheses.
 o Gather information related to the predictions.
 o Analyze whether or not the data fit the ideas.
 o If so, confirm the theory. If not, reassess the theory and repeat.
-Theory: the big picture
 o A set of principles, built on observations and other verifiable facts, that explains some phenomenon and predicts its future behavior. (Ex: the theory that all ADHD symptoms are a reaction to eating sugar. Can you test it? Make predictions?)
-Hypotheses: the predictions
 o Come out of a theory
 o Testable predictions: worded so that you can make observations and see if they hold. (Ex: if a kid gets sugar, they'll act more distracted, impulsive, and hyper.)
 o Operational definitions --> definitions of how to detect or measure what's in the hypothesis. (ADHD symptoms: impulsivity [times per hour a kid speaks out without raising a hand], hyperactivity [number of times out of seat in an hour], inattention [number of minutes a kid can work on something without being distracted])
  1. "Operational" because they relate directly to where/how/when the experiment is conducted.
  2. They allow for replication (to make sure the result holds). Replication doesn't just mean repeating the exact same study; it can mean doing it in a slightly different way: with different participants/populations or situations (Ex: ADHD and sugar with older kids, or at home), which protects against confirmation bias; with a changed operational definition; or by testing the hypothesis a different way (disconfirming the hypothesis --> test the opposite to see if the original holds up). Replication can help expand or modify a theory. (Ex: some ADHD symptoms are caused by excess sugar.)
-Types of Studies
 o Descriptive study:
  Descriptive research: a systematic, objective observation of people.
  The goal is to provide a clear, accurate picture of people's behaviors, thoughts, and attributes.
 o Case study: observing one person.
  You get a lot of rich, deep information. (Ex: Phineas Gage -- a bar went through his frontal lobe, and his case gave us a lot of information about what the frontal lobe does. Cases of brain damage have suggested the functions of different brain parts.)
  Can be a source of ideas/observations about human nature in general.
  Risk: overgeneralization from anecdotal evidence. (Ex: "meditation cures cancer!")
 o Naturalistic observation: not just looking at one person; you go into the world and observe something that's happening. Watching, but not intervening. (Ex: de Waal, the chimpanzee researcher on staff.)
  Can be used to study more than one individual and find truths that apply to a broader population.
 o Surveys and interviews: having people report on their own thoughts, feelings, attitudes, and behavior.
  Good for opinions and things that are hard to observe directly.
  The researcher is just there to collect information, not to intervene or persuade; word questions in an unbiased way.
  Sampling error can give biased results: the people you survey aren't representative of the population of interest. (Ex: the Landon vs. Roosevelt poll that said Landon would win.)
  Random sampling --> a sample that represents the population in its types and proportions. Tends to give more accurate information: every part of the population is equally likely to be part of the sample, driven by chance, not by any characteristic.
-The information we get is the observation that two things might be related (correlation).
 o Correlation coefficient: a number representing how closely two variables are related (how much they change together).
 o The direction of a correlation can be positive or negative; its magnitude runs from 0 to 1 (so the coefficient itself runs from -1 to +1), and closer to 1 means a stronger relationship.
 o Positive correlation means the two things increase or decrease together.
 o Negative correlation means as one goes up, the other goes down.
 o Correlation is not causation!!!!
-How we determine causation:
 o Experimentation: manipulating one variable in a situation to determine its effect.
  1. Control group: how you make sure something else didn't give you the result. This group is the same as the experimental group in every way except that it doesn't get the experimental manipulation.
   How do you make sure the control group is exactly the same? Random assignment: randomly assigning some study participants to the control group and some to the experimental group. "Random" refers to chance.
   Random sampling vs. random assignment: sampling is how you get your pool of participants; assignment is how you put them into groups once they're picked.
  2. Placebo effect: seeing a change because you expect it.
   How do you make sure the groups aren't affected? Don't tell participants which group they're in, so they have no expectations (single-blind study). Make the administrator blind too (double-blind): the experimenter doesn't know which group is which, so they also have no leading expectations.
 o Experimental variables:
  Independent variable (IV): the factor that's experimentally manipulated. (Ex: the sugar)
  Dependent variable (DV): the factor that is measured; the variable that might change when the IV is manipulated. (Ex: ADHD symptoms)
  Confounding variables: any other variables that might have an effect on the dependent variable. (Ex: personal development, stress)
   Best way to control them: random assignment and random sampling; the wide array of confounding variables will probably average out to represent reality.
  Sample of convenience: getting your "random sample" from a group that is easily accessible. (Note: for controlling confounds, random assignment matters more than random sampling.)
-Laboratory vs. life
 o Lab experiments are controlled, artificial environments used to test general principles.
 o To truly apply them to real life, you have to take into account culture, gender, and ethics.
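The sampling-vs.-assignment distinction can be sketched in a few lines of Python. Everything here is hypothetical (made-up participant labels and group sizes): the point is only that assignment is driven by chance, not by any characteristic of the participants.

```python
import random

# Hypothetical pool of 20 kids, already drawn by random *sampling*
participants = [f"kid_{i}" for i in range(20)]

# Random *assignment*: shuffle by chance, then split down the middle
random.shuffle(participants)
experimental = participants[:10]  # gets the manipulation (e.g., the sugar)
control = participants[10:]       # identical treatment, minus the manipulation

print(len(experimental), len(control))  # 10 10
```

Because the split depends only on the shuffle, any confounding characteristic (age, stress, development) is equally likely to land in either group, which is why it tends to average out.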
  1. You can't overgeneralize.
-Bias in research
 o What questions are asked
  1. Studies look at child development when the mother works outside the home. Why not when men work outside the home?
  2. Studies explore whether homosexuality is genetic. Why not study whether heterosexuality is cultural?
  3. Personal/cultural values affect what is studied and how it's studied, as well as how results are interpreted. They can color "facts."
 o The way you choose to interpret
  1. Sheldon Cooper: rigid or consistent?
  2. The scientific method alone doesn't protect against bias; we have to go further. But there's always some bias.
-Ethics in psychology
 o All research studies done at universities or medical centers have to be approved by the IRB (Institutional Review Board).
  1. Inclusive samples
  2. Participants not subjected to harm
   Ex: the "Tuskegee Study of Untreated Syphilis in the Negro Male," begun in 1932 in rural Alabama. It enrolled about 600 African American males: 399 with syphilis (experimental group) and 201 control subjects. Penicillin became the standard treatment by 1947 but was withheld from the study subjects due to the "nature of the study" (withholding the cure is what makes it unethical). The study continued until the story broke in 1972.
  3. The IRB protects people who can't speak for themselves or fully understand the experiment and what it implies.
-Drawing valid conclusions
 o Research studies are subject to a peer-review process before they are published:
  1. Are the results reliable?
  2. Was the sample unbiased?
  3. Does the study avoid overgeneralization?
  4. Can the methods be replicated?
  5. Is the difference between the control and experimental groups statistically significant?
  6. Could it have occurred by chance? Researchers typically only accept results when the odds of their happening by chance are very low (conventionally under 5%).
  7. Are the findings meaningful?
 o After publication, other scientists can run the experiment themselves and scrutinize it.
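One way to ask "could this difference have occurred by chance?" is a permutation test, sketched below with the standard library. The symptom scores are hypothetical (invented for illustration, not the lecture's data): we shuffle the group labels thousands of times and count how often chance alone produces a gap as big as the one we observed.

```python
import random
from statistics import mean

# Hypothetical ADHD-symptom scores for the sugar (experimental) and
# no-sugar (control) groups -- made-up numbers for illustration only
sugar = [12, 15, 14, 16, 13]
no_sugar = [9, 8, 11, 10, 9]
observed = mean(sugar) - mean(no_sugar)  # ~4.6

# Shuffle labels repeatedly: how often does a random split of the same
# scores produce a difference at least this large?
random.seed(0)  # fixed seed so the sketch is reproducible
pooled = sugar + no_sugar
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:5]) - mean(pooled[5:]) >= observed:
        hits += 1

p = hits / trials
print(p < 0.05)  # True: the gap is very unlikely to be chance
```

A tiny p like this is what "statistically significant" means in point 5 above: the observed difference would almost never arise from a chance split of the scores.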