Comm 88 Notes

by: Rachel Sung

About this Document

These are the notes for the ENTIRE COURSE!! This includes ALL lecture notes (super detailed), with some notes from readings and section thrown in. NOTE: Although these notes are from Winter 201...
Communication Research Methods
D. Mullin
Ways of Knowing
Tuesday, October 7, 2014

REMINDER: finish assignment #1, print assignment papers (project), bring to section

Some "Truths" -- How do you know?
- It's not raining outside
  ○ We look at resources we rely on (internet, other people)
  ○ Look at other days
- Vegetables are good for you
  ○ Experts say so
  ○ Own experience (know it affects you in a good way)
- People who are similar to each other tend to like each other

Ways of Knowing
- Also known as "epistemology"
  ○ The science of knowing

Some "Everyday" Ways of Knowing (& their problems)
1. Method of tradition/tenacity
   ○ Things that are constantly repeated
   ○ But we don't really know the real reason behind them
   ○ Tenacity: things that just don't go away
     ▪ Even when you give facts, people still believe
2. Method of authority
   ○ Believe something to be true because you think the source has more authority (experts, parents, teachers)
   ○ Good since we don't have to research it ourselves
   ○ Bad since we don't know if it's actually true
   ○ Problem with both: authorities and handed-down truths can be WRONG
3. Method of experience/observation
   ○ Personal experience (surface level)
     ▪ Vegetables make me feel better
     ▪ BUT how do you know it was the vegetables? Could it be mental? Could it be just you?
   ○ "Baconian empiricism"
     ▪ Deeper observations, more systematic
     ▪ Problems:
       □ Can be making inaccurate observations
         Ex. thought it was the veggies but actually it wasn't
       □ Selective observation (only see what you want to see)
         Ex. like a certain car, suddenly see it everywhere
         Ex. Think that a dark suit and red tie means an important person, so when you see a president/CEO wear it, you think it's true; but you don't notice that other, less important people also wear dark suits and red ties
4. Method of intuition/logic
   ○ Common sense (surface-level logic)
     ▪ Problem: personal biases can affect what you think
   ○ "Platonic idealism"
     ▪ A way to get to truth: use deep/rational thought
     ▪ Don't go out and observe it; just have debates in your mind, think it through
     ▪ Problems:
       □ Incorrect premises
         Ex. A=B, B=C, so A=C
         ◊ Are you sure that A=B? If not, then further thoughts will be false
       □ Illogical reasoning
         Ex. All fish can swim, I can swim, so I am a fish
         ◊ This logic just doesn't make sense
- Problem for all methods:
  ○ Overgeneralization
    ▪ Thinking that it applies to everyone
    ▪ Ex. How do you know you're not the only one who experiences it that way?
- Everyday ways of knowing can even lead to conflicting ideas about "truth"
  ○ Ex. One person's logic can be different from another's
  ○ Ex. Long-distance relationships...
    ▪ Absence makes the heart grow fonder OR out of sight, out of mind

The Scientific Method
- Combines "Platonic idealism" (logic/intuition --> constructing theories) with "Baconian empiricism" (observation/experience --> gathering data)
- Communication science: use empirical observations to test theories about communication processes

Unique Characteristics of Science
How is "science" different from the other "everyday" ways of knowing?
- Scientific research is public
  ○ Published in peer-reviewed journals
  ○ Opportunity to replicate studies
- Science is empirical
  ○ Conscious, deliberate observations
    ▪ GOOD DATA!
  ○ Many observations
- Science is "objective"
  ○ Control/remove personal biases
  ○ Bad if personal biases influence results
  ○ Biases still happen unknowingly though
    ▪ Ex. Question wording
  ○ Explicit rules, standards, & procedures
- Science is systematic & cumulative
  ○ Builds on prior studies/theory
  ○ New knowledge modifies old
    ▪ Doesn't mean new is always correct

Goals of Scientific Research (what can science tell us?)
- Description
  ○ Look for social regularities of aggregates
    ▪ Breaking people up into groups
  ○ Science can tell us "what is"
    ▪ What % of people think __________
- Explanation
  ○ Understand WHY patterns exist (e.g., what causes what)
    ▪ Ex. Playing video games improves spatial skills
  ○ Science can tell us "why it is"
    ▪ This is because of...
- Prediction
  ○ Predict outcomes given certain factors
    ▪ Ex. Playing video games improves spatial skills, so I predict that if I give a spatial skills test, I can guess how good a person is at video games
  ○ Science can tell us "what will be"
- Science CANNOT settle questions of moral value or opinion!!!
  ○ Ex. Which Star Wars movie is better? No scientific way to prove which is better
    ▪ BUT you can find which one is more popular; still no real right or wrong
  ○ CANNOT tell us "what should be"
    ▪ Right/wrong, good/bad, moral/immoral
The Research Process: Theories, Hypotheses, Research Questions
Thursday, October 9, 2014

The Wheel of Science
- Theories --> (deduction) --> Hypotheses --> Observations --> (induction) --> Generalizations --> back to Theories
  ○ Theories: an idea
  ○ Hypotheses: a prediction put to the test
  ○ Generalizations: group the common things that people said
  ○ Deduction: the traditional way of science; quantitative methods
  ○ Induction: empirical, humanistic/interpretive; qualitative methods

Quantitative VS Qualitative Methods
- Quantitative
  ○ Employ numerical measures & data analysis
  ○ Use statistics
  ○ Put it on a scale
  ○ Adhere strongly to scientific goals & principles (objectivity, empirical data, etc.)
  ○ Ex. Surveys, experiments, content analysis
- Qualitative
  ○ AKA interpretive research or field research
  ○ A "humanistic" form of social science
    ▪ Values SOME aspects of science - especially empiricism
    ▪ But also values researcher subjectivity
  ○ Ex. Participant observation, depth interviewing, conversation analysis
  ○ Note: there's also purely humanistic research in comm called "critical studies":
    ▪ Rhetorical criticism, feminist analysis, cultural studies

Using Theories in Research
- Theory: an attempt to explain some aspect of social life
  ○ Scholars' ideas about how/why events/attitudes occur
  ○ Includes a set of concepts and their relationships
- What are "concepts"?
  ○ Terms for things/ideas/parts of the theory
  ○ Researchers must define them
    ▪ Because different people have different definitions of words; ex. what is "satisfaction" in a relationship?
  ○ Ex. Social Cognitive Theory (Bandura)
    ▪ We learn by watching modeled behavior
    ▪ Requires attention, retention, motor reproduction, motivation (ex. reward/punishment)
    ▪ What are some "concepts" involved?
      □ What counts as a model? Watching TV, face to face?
      □ What counts as reward/punishment? Is the model or the kid being rewarded?
      □ Modeled behavior, attention, motivation, etc.
- Concepts are studied as "variables"
  ○ They have variations that can be measured
  ○ Ex. Concept: what does it mean to be happy? Variable: a little happy or very happy
  ○ Ex. Motivation
    ▪ Rewarded vs. punished model; AMOUNT of reward/punishment...
  ○ Ex. Modeled behavior
    ▪ Violent vs. prosocial (a positive thing); amount of violence; intensity/graphicness; realism vs. fantasy
  ○ Ex. Gender
    ▪ Male/female; masculinity/femininity
- From prior findings and/or theory, we derive a testable hypothesis:
  ○ A specific prediction about the relationship between variables in your study
  ○ Ex. Social Cognitive Theory: to make a prediction about the effects of TV violence:
    ▪ TV violence viewing will produce more aggressive behavior than will non-violent TV viewing
    ▪ What are the variables involved here? TV violence, aggressive behavior
- What if theory or previous research does not lead to a specific prediction?
  ○ Or if previous findings conflict / are inconclusive?
- Pose a research question instead of a hypothesis! Examples:
  ▪ RQ: To what extent will children imitate the behavior of a TV character whom they do not like/relate to?
  ▪ RQ: Will there be gender differences in children's imitation of violence?

Testing a Hypothesis: An Example
- Researcher A
  ○ Social cognitive theory: children learn behavior by watching models behave
  ○ Hyp: watching TV violence will increase kids' aggressive behavior
- Researcher B
  ○ Catharsis theory: watching others behave allows "purging" of pent-up feelings
    ▪ Pent-up hostilities will go away after watching others be violent
  ○ Hyp: watching TV violence will reduce kids' aggressive behavior
- Researcher A (survey)
  ○ Chooses 600 random students across the state
  ○ Measures how much violent TV is viewed
    ▪ Asks each child "what's your favorite show?" and comes up with a score
  ○ Watches how much aggression occurs on the playground
  ○ [Graph: playground aggression plotted against TV violence viewing]
  ○ Conclusion: TV violence increases aggression
  ○ BUT don't use words like INCREASE/DECREASE, because that implies causality, and we can't actually claim that here
    ▪ Change to: TV violence IS RELATED TO aggression
- Researcher B (experiment)
  ○ Goes to a school and splits up the kids
  ○ Each child watches one of four clips (# of hits: 0, 5, 10, 20)
  ○ Measures the number of hits on toys afterward
  ○ [Graph: aggression plotted against the number of hits in the clip]
  ○ Conclusion: TV violence decreases aggression
  ○ CAN claim causality because we controlled the environment
    ▪ But have to add that you drew this conclusion for these participants, these lab conditions, etc.
    ▪ Different people and conditions can change the results

Types of Hypotheses & RQs
- Hypotheses & RQs can be...
  ○ Causal (state how one variable changes/influences another)
  ○ OR correlational (state mere association between variables)
- Example:
  ○ H1: TV violence viewing will produce more aggressive behavior than will non-violent TV viewing (causal)
    VS
  ○ H1: The more TV violence children watch, the more aggressive they will be (correlational)
    OR
  ○ Aggressive kids watch more violence than non-aggressive kids
  ○ See p. 106 on "continuous" and "difference" statements

Different Methods for Different Hypotheses!
- Survey/Correlational Research
  ○ Ex. Researcher A
  ○ Tests correlational hypotheses/RQs
    ▪ Correlation: mere relationship/association
    ▪ Measure some variables and relate them; compare existing groups, etc.
    ▪ Doesn't always mean questioning people!!!!
  ○ Great for external validity
    ▪ Ability to generalize results to other people and/or to "normal life" settings
  ○ Poor for causality!
- Experimental Research
  ○ Ex. Researcher B
  ○ Tests causal hypotheses/RQs
    ▪ Manipulate variables/groups, control everything else, and measure effects
    ▪ Questionnaires can also be a part of experiments
  ○ Great for internal validity
    ▪ Ability to establish that X causes Y
    ▪ I.e., not JUST a relationship between variables, BUT ALSO establishes time order (which variable came 1st) and rules out other explanations
  ○ Poor for generalizability!
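As a concrete illustration of the survey-vs-experiment contrast in these notes, here is a minimal Python sketch (not from the original notes; the simulated data, variable names, and effect sizes are invented for illustration). It shows how Researcher A's correlational hypothesis and Researcher B's causal hypothesis would each be tested: a Pearson correlation for the survey data, and a comparison of randomly assigned group means for the experiment.

```python
# Sketch: testing a correlational vs. a causal hypothesis (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(88)

# --- Researcher A: survey / correlational test ---
# Measure two variables on the same 600 students and relate them.
tv_violence_hours = rng.normal(10, 3, 600)                     # self-reported viewing
aggression = 0.4 * tv_violence_hours + rng.normal(0, 2, 600)   # playground aggression score
r, p = stats.pearsonr(tv_violence_hours, aggression)
print(f"Survey: TV violence is RELATED TO aggression, r = {r:.2f} (p = {p:.3f})")

# --- Researcher B: experiment / causal test ---
# Randomly assign kids to a violent or non-violent clip, then measure hits on toys.
n = 60
condition = rng.permutation(["violent"] * n + ["nonviolent"] * n)  # random assignment
hits = np.where(condition == "violent",
                rng.normal(8, 2, 2 * n),    # assumed mean hits after the violent clip
                rng.normal(5, 2, 2 * n))    # assumed mean hits after the non-violent clip
t, p = stats.ttest_ind(hits[condition == "violent"], hits[condition == "nonviolent"])
print(f"Experiment: group means differ, t = {t:.2f} (p = {p:.3f})")
```

The survey can only report an association; the experiment, because of random assignment and control, supports a causal conclusion limited to these participants and these conditions.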
The Research Process: Defining Concepts & Variables
Thursday, October 16, 2014

Variables in Experimental Research (causal hypotheses)
- Independent Variable (IV)
  ○ Variable manipulated by the researcher
  ○ The "cause" in the cause-effect relationship
  ○ Ex. Researcher B: the violent video
- Dependent Variable (DV)
  ○ Variable affected/changed by the IV
  ○ The "effect" or outcome
  ○ Ex. Researcher B: aggression
- Example hypothesis:
  ○ Greater physical attractiveness will create impressions of greater friendliness
    ▪ IV: physical attractiveness
      □ Manipulate high vs. low attractiveness (bed head, no makeup VS. showered, hair done, makeup)
    ▪ DV: impression of friendliness
      □ Ratings on a friendliness scale

Variables in Survey/Correlational Research (correlational/relational hypotheses)
- Can't be cause-effect, so...
- IV is considered a "predictor" variable
- DV is what is being predicted by the IV
  ○ AKA "criterion" variable
- Example hypothesis:
  ○ Stronger "fan" identity predicts (is related to/associated with) greater participation in online fan forums
  ○ IV: fan identity
    ▪ Rate how strongly connected to the fandom
  ○ DV: participation in online fan forums
    ▪ Measure how often people post, or report reading posts, etc.
  ○ Could the IVs/DVs be the other way around in this survey? YAAASSS

Defining Concepts/Variables
- Conceptualizing your variables
  ○ Defining what the concepts mean for purposes of the investigation
    ▪ Usually based on theory/prior research
- Operationalizing your variables
  ○ Deciding exactly how the concepts will be measured (or manipulated) in a study

Ch. 5: Measurement
Saturday, October 18, 2014
- Conceptualize variables
  ○ Clearly define what each variable means
- Operationalize
  ○ Describe the research operations that specify the values or categories of a variable
  ○ Gets into the specifics of how things are done
- Indicator: a single observable measure
  ○ Ex. a single questionnaire item in a survey
  ○ But indicators often contain errors and rarely capture all the meaning of a concept
  ○ Often rely on more than one indicator when measuring a concept
- Two kinds of operational definitions
  ○ Manipulation: designed to change the value of a variable
    ▪ Ex. blocking cars at a green light
  ○ Measurement: estimate existing values of variables
    ▪ Ex. counting how many times people honk
- Systematic measurement error: factors influence the process of measurement or the concept being measured
  ○ The questions being asked tend to be biased in one direction
- Reactive measurement effect: a participant's reaction/answer is affected by being observed
  ○ You might not react the same way knowing that someone is watching
- Random measurement error: chance factors can affect the way a participant responds
  ○ Ex. a tired participant might answer differently than if they were wide awake
  ○ Unpredictable, changes with every participant
- Test-retest for reliability: ask the same questions to the same people at different times
  ○ Problem: people might remember or repeat what they answered before
    ▪ People's attitudes can change in between tests
    ▪ Random factors can cause varying results (positive/negative experience)

Measurement -- Operationalizing Variables (both IVs and DVs)
Thursday, October 16, 2014

Types of Measures
- Physiological measures
  ○ Ex. BP, brain imaging, cortisol (stress hormone), etc.
- Behavioral measures
  ○ Ex. Nonverbal gestures, time/money spent, actual posts on social media
- Self-report measures
  ○ Ex. Items on a questionnaire
Levels of Measurement
- Nominal (categorical/discrete):
  ○ Variable is measured merely with different categories
  ○ Mostly demographic stuff
  ○ Ex. Political party, sex, ethnicity, TV violence (reward/punishment), TV use (high/low)
  ○ Nominal measures are for comparing differences
  ○ IN THE BOOK, CALLED QUALITATIVE VARIABLES
- Ordinal:
  ○ Variable is measured with rank-ordered categories
    ▪ Don't know how far apart each is
    ▪ Ex. Rank your top 5 favorite TV shows; rank items from most to least important
- Interval:
  ○ Variable is measured with successive points on a scale with equal intervals
    ▪ Ex. Measure of immigration policy opinion
      □ "The US should increase border security"
        Strongly oppose 1 2 3 4 5 Strongly favor
        Strongly oppose -2 -1 0 +1 +2 Strongly favor
    ▪ The numbers don't really mean anything in themselves, just where the response is on the scale
    ▪ 0 does not mean an absence of something
- Ratio:
  ○ Interval measurement with a true, absolute zero point
    ▪ Ex. Time in hours, weight in lbs, age in years, test score, etc.
  ○ Interval & ratio measures are "continuous" variables
    ▪ Allow continuous types of hypotheses
      □ Ex. The more time you spend on Facebook, the more depressed you are
    ▪ THE BOOK CALLS THESE QUANTITATIVE VARIABLES

Measures should...
- Capture variation!
  ○ Use continuous variables for DVs where possible
  ○ You get more information from them
  ○ Can always collapse into categories later
- Minimize potential "social desirability" effects
  ○ You answer in the way that would be socially okay
  ○ Ex. Overestimate how much exercise you do
- Provide the data that will TEST your hypotheses (or answer your RQs)
  ○ Hypotheses must be "falsifiable"
    ▪ Able to be tested empirically
    ▪ There is some data that (if you got it) would show that the hypothesis (and the theory it's based on) is wrong
    ▪ Note: you can never "prove theories/hypotheses true"; you can only gain support/evidence

Using Questionnaire Items as Measures
Tuesday, October 21, 2014
- Common for IVs and DVs in surveys
- Common for DVs in experiments (the IV is a manipulation into groups)

Types of Questionnaire Items
- Open-ended
  ○ Respondents give their own answers to questions
  ○ Ex. interview
- Closed-ended
  ○ Respondents select from a list of choices
  ○ Ex. Questionnaire
  ○ Categories must be mutually exclusive!
  ○ Categories must be exhaustive
    ▪ Have all options! Not necessarily list everything, but list the major ones and then put "other"

Some closed-ended formats
- Likert-type items:
  ○ Respondents indicate their agreement with a particular statement
  ○ Ex. Parents should talk openly about sex with their children
    ▪ Strongly disagree 1 2 3 4 5 Strongly agree
  ○ Other response options also possible
    ▪ Oppose/favor, not at all/very much, almost never/almost always
- Semantic differential
  ○ Respondents make ratings between two opposite (bipolar) adjectives
  ○ Ex. My best friend is...
    ▪ Warm __:__:__:__:__:__:__ Cold
    ▪ Unintelligent __:__:__:__:__:__:__ Intelligent
  ○ Respondents check off where they think is appropriate; the researcher assigns numbers later on
    ▪ Ex. High numbers are negative, low are positive:
      □ Warm 1 2 3 4 5 6 7 Cold
      □ Unintelligent 7 6 5 4 3 2 1 Intelligent
  ○ WORD CHOICE IS IMPORTANT!!! (Cold vs. Hot? Intelligent vs. Stupid?)
  ○ With multiple items, some may be "reverse coded"
- Composite measures
  ○ Use multiple items for one variable; combine those items into an "index" (aka "scale")
  ○ Ex. Variable: perceived credibility of a speaker
    ▪ As a single-item measure:
      □ The speaker I just heard is: credible 7 6 5 4 3 2 1 not credible
    ▪ As a composite measure:
      □ The speaker I just heard is: credible/not, knowledgeable/not, experienced/not, trustworthy/not, etc.
  ○ Uni-dimensional index: all items added (or averaged) into one overall score
    ▪ Back to the example... credibility can be uni-dimensional: add scores on all items into one total "credibility" score
  ○ Multi-dimensional index: group different items into different "subscales"
    ▪ To separate the different "dimensions"
    ▪ Back to the example... credibility can be multi-dimensional:
      □ knowledgeable + experienced + competent = "expertise" dimension
      □ trustworthy + honest + unbiased = "trustworthiness" dimension
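As a small illustration of how semantic-differential items get combined into a composite index, here is a hedged Python sketch (the item names, 7-point scoring, and sample responses are invented, not taken from the notes): negatively worded items are reverse-coded so that high always means "more credible," then items are averaged into an overall score and into subscale scores.

```python
# Sketch: reverse coding and building composite indexes from 7-point items (hypothetical data).
import pandas as pd

# One respondent's raw ratings on six credibility items (1-7 scale).
raw = pd.DataFrame([{
    "knowledgeable": 6, "experienced": 5, "competent": 6,   # higher = more credible
    "untrustworthy": 2, "dishonest": 1, "biased": 3,        # higher = LESS credible (need reverse coding)
}])

# Reverse-code the negatively worded items: on a 1-7 scale, new = 8 - old.
for item in ["untrustworthy", "dishonest", "biased"]:
    raw[item] = 8 - raw[item]

# Uni-dimensional index: average all six items into one overall credibility score.
raw["credibility"] = raw[["knowledgeable", "experienced", "competent",
                          "untrustworthy", "dishonest", "biased"]].mean(axis=1)

# Multi-dimensional index: separate "expertise" and "trustworthiness" subscales.
raw["expertise"] = raw[["knowledgeable", "experienced", "competent"]].mean(axis=1)
raw["trustworthiness"] = raw[["untrustworthy", "dishonest", "biased"]].mean(axis=1)

print(raw[["credibility", "expertise", "trustworthiness"]])
```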
How Good is Your Measurement? Reliability and Validity
Thursday, October 23, 2014

Reliability of Measurement
- Are you measuring the concept consistently?
  ○ Ex. If you step on a scale & the weight is always different, it's not reliable

Assessing Reliability
- For measures using questionnaire items:
  ○ Inter-item reliability (used to assess consistency)
  ○ Administer the same items more than once
    ▪ Ex. Test-retest (test the same people on the same thing, at different times); split-half
    ▪ Ex. IQ test: give the same test to the same general population and hope that you get the same general range of scores every time
  ○ Look at the internal consistency of similar items in a scale/index
    ▪ Compute "Cronbach's alpha" (see the sketch after this section)
      □ Scale from 0 to 1; the higher the number, the more reliable your scale is
      □ Ideally, 0.7 and higher
    ▪ Ex. Questions about being happy, finding things enjoyable
- Example: credibility of a speaker, measured with items such as:
  ○ Credible 7 6 5 4 3 2 1 Not credible
  ○ Knowledgeable/not, Experienced/not, Trustworthy/not, Honest/not, Unbiased/not, Competent/not
  ○ Uni-dimensional "credibility": add scores on all items into one total "credibility" score
    ▪ We can't put all of these into one Cronbach's alpha because they are not all measuring the same thing! Ex. trustworthy & competent
    ▪ Turns out to have a LOW Cronbach's alpha
  ○ Multi-dimensional "credibility": likely higher reliability, since alpha is computed separately for each subscale
    ▪ knowledgeable + experienced + competent = "expertise" dimension
    ▪ trustworthy + honest + unbiased = "trustworthiness" dimension
- For measures using coders (e.g., behavioral observations):
  ○ Inter-coder reliability
    ▪ Compare multiple coders
    ▪ Both coders measure the same thing in the exact same way
  ○ Intra-coder reliability
    ▪ Compare multiple observations by the same coder
    ▪ The same person measures the same thing consistently

Validity of Measurement
- Does your measure really capture the concept you intend to be measuring?
  ○ You want a good "fit" between your conceptual and operational definitions

Assessing Validity
- Subjective types of validation:
  ○ Face validity
    ▪ The measure looks/sounds good "on the face of it"
    ▪ Ex. Yeah, that sounds about right!
  ○ Content validity
    ▪ The measure captures the full range of meanings/dimensions of the concept
- Criterion-related validation:
  ○ AKA predictive validity
  ○ The measure is shown to predict scores on an appropriate criterion/future measure
    □ Ex. SAT scores (supposed to measure your "potential" to achieve in college) predict college GPA (your achievement)
    □ Ex. A verbal aggressiveness scale predicts actual aggression in a later interaction
- Construct validation
  ○ The measure is shown to be related to measures of other concepts that should be related (and not to ones that shouldn't)
    □ Ex. Verbal aggressiveness scale <--> hostility scale: there is a correlation
    □ Ex. Awkwardness not correlated with self-esteem
  ○ DON'T WORRY ABOUT CONVERGENT/DISCRIMINANT
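To make the Cronbach's alpha idea concrete, here is a minimal Python sketch (the item data are invented; this is not from the notes). Alpha is computed from the number of items, the item variances, and the variance of the total score; values around 0.7 or above suggest the items hang together as one scale.

```python
# Sketch: Cronbach's alpha for a small set of scale items (hypothetical responses).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]                            # number of items
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents rating three "expertise" items on a 1-7 scale.
expertise = np.array([
    [6, 7, 6],
    [5, 5, 6],
    [2, 3, 2],
    [7, 6, 7],
    [4, 4, 5],
])
print(f"Cronbach's alpha (expertise subscale): {cronbach_alpha(expertise):.2f}")
```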
Relationship between validity and reliability
- Can a measure be reliable but not valid?
  ○ YES. You can have a scale that consistently says you weigh 50 lbs, but it's not true.
- Can a measure be valid but not reliable?
  ○ NO. If you're measuring weight and you can't even get a consistent reading, the measure can't be capturing the true value.

Triangulation of Measurement
- Use several different ways to measure one variable; compare results
  ○ Ex. To measure kids' "fear"
  ○ Use different types of measures (facial fear, heart rate, & self-reported fear)
  ○ OR use differently phrased scales (yes/no scared; how scared/frightened/terrified)
- Can triangulate measures within one study or across different studies

Sampling: How We Select Participants (or other units) for a Study
Thursday, October 23, 2014

- Sample: a subset of the target population (who/what you want to report about)
  ○ Ex. target populations: voters, Facebook users, married couples, juries, football fans, etc.
  ○ Or TV shows, magazine ads, blog posts, etc.
- Sampling units:
  ○ Individual persons (e.g., voters, fans, etc.)
  ○ Groups (e.g., couples, juries, orgs, countries)
  ○ Social artifacts (e.g., ads, TV scenes, tweets, threads)

Representative Sampling (Probability Sampling)
- Intended to be a "miniature version" of the target population
- KEY is random selection:
  ○ Everyone in the population has an equal chance of being included in the sample
- HOW representative is it??
- There will always be "sampling error":
  ○ Sample data will be slightly different from the population because of chance alone
    ▪ AKA "random" error
  ○ Statistically, this is known as the "margin of error"
    ▪ Ex. National poll, N = ~1000 --> ±3%
  ○ Bigger sample size, smaller margin of error
    ▪ More chance of everyone being factored into the sample, so less error

Representative Sampling Techniques
- Simple random sampling
  ○ Select elements randomly from the population
  ○ Listed populations: random numbers table
    ▪ Random numbers are listed; you randomly pick a start and use the numbers to match to names
  ○ Phones: random-digit dialing
    ▪ Random numbers are randomly dialed
- Systematic sampling
  ○ From a list of the population, select every "nth" element, AND you must have a random start; cycle through the entire list (when you reach the end, wrap around so the whole list is covered)
  ○ Similar results as a simple random sample
- Stratified sampling
  ○ For getting population proportions even more accurate
  ○ Divide the population into subsets ("strata") of a particular variable
    ▪ Usually stratify on demographic variables
      □ Ex. Sex, race, political party, class
  ○ Select randomly from each stratum to get the right proportions of the population
    ▪ Ex. The school says 30% of students are freshmen, so you pull 30% of your sample from the freshman stratum
  ○ Need prior knowledge of population proportions
    ▪ Ex. Stratify by "sex": the school population is 60% girls, 40% boys; randomly choose 6 girls from the girls stratum and 4 boys from the boys stratum; the 10 people picked together form your sample
  ○ Increases representativeness because it reduces sampling error (for the stratified variable)
    ▪ Ex. No sampling error for sex; there is error elsewhere, but not for sex
  ○ But more costly & time consuming
- Multistage cluster sampling
  ○ Useful for populations not listed as individuals (or too big a population)
  ○ First randomly sample at the group level (clusters), then randomly sample individual elements within each cluster
  ○ Ex. Sampling "high school athletes"
    ▪ Otherwise you'd have to go to every high school and ask for every team roster, then put them all into a computer
    ▪ 1st stage: randomly sample high schools
    ▪ 2nd stage: randomly sample athletes from those high schools in the sample
  ○ Reduces costs
  ○ But sampling error at each stage will add up
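As a hedged illustration of the listed-population techniques above (simple random, systematic, and stratified), here is a short Python sketch; the population, strata proportions, and sample sizes are all made up for illustration.

```python
# Sketch: simple random, systematic, and stratified sampling from a listed population (hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2016)

# A listed population of 1,000 students: 60% girls, 40% boys.
population = pd.DataFrame({
    "student_id": range(1000),
    "sex": ["girl"] * 600 + ["boy"] * 400,
})

# 1) Simple random sample: every element has an equal chance of selection.
srs = population.sample(n=100, random_state=1)

# 2) Systematic sample: random start, then every nth element through the list.
n_interval = len(population) // 100
start = rng.integers(0, n_interval)
systematic = population.iloc[start::n_interval]

# 3) Stratified sample: select randomly WITHIN each stratum, in population proportions.
stratified = population.groupby("sex").sample(frac=0.10, random_state=1)

for name, sample in [("simple random", srs), ("systematic", systematic), ("stratified", stratified)]:
    share_girls = (sample["sex"] == "girl").mean()
    print(f"{name:>14}: n = {len(sample):3d}, proportion girls = {share_girls:.2f}")
```

Note how only the stratified sample is guaranteed to hit the 60/40 split exactly; the other two will bounce around it because of sampling error.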
So, for all Representative Sampling Techniques:
- There will always be sampling error
- But you can generalize to the larger target population (assuming sampling is done properly)
- Caution: avoid systematic error (sampling bias)
  ○ Systematically over- or under-representing certain segments of the population
  ○ Caused by:
    ▪ Improper weighting (using the wrong proportions of people)
    ▪ Very low response rates
    ▪ Wrong sampling frame (didn't really focus on the group that you actually wanted)
      □ Ex. You want high school athletes, but you choose people randomly from the high school; you will get some athletes & some non-athletes
    ▪ Using non-representative sampling methods

Non-representative Sampling (cannot generalize)
- Convenience sample
  ○ Select individuals who are available/handy
- Purposive sample
  ○ Select specific individuals for a special reason (their characteristics, etc.)
    ▪ Ex. Disney cast members are interesting, so I'll use them
- Volunteer sample
  ○ People select themselves to be included
    ▪ You choose which polls you want to participate in
- Quota sample
  ○ Select individuals to match demographic proportions in the population
    ▪ Kind of like strata, but not random; stop when you get the numbers right
- Network/snowball sample
  ○ Select individuals, who contact other similar individuals, and so on...
    ▪ You know people who know people who know people

Ch. 6
Sunday, October 26, 2014
- Simple random sample: everyone in the population has an equal chance of being chosen
- Stratified random sampling: the population is divided into groups (strata) by a commonality, then random samples are chosen from each and put together into one big sample
  ○ Ex. Can be divided by gender, grade
- Cluster sampling: the population is broken down into natural groupings
  ○ Ex. College campus, city, state
  ○ Used when an entire population list just isn't available
  ○ Single-stage cluster sample: just use the entire cluster as the sample
  ○ Multistage cluster sampling: randomly select people within the cluster
  ○ Also costs less
- Systematic sampling: selecting a participant at every _th interval; also starts at a random place
- Purposive sampling: the researcher uses best judgment to select people who are representative of the population
- Quota sampling: the population is placed in strata, then the researcher creates a quota for each group & chooses anyone who can meet this quota
- Network sampling: initial participants are asked to identify members of the target population who are socially linked to them (kin, friends, neighbors)
- Snowball sampling: initial participants refer another person within the target population, and so on

Survey Research
Tuesday, November 4, 2014

Primary Goals
- Identify/describe attitudes or behaviors (in a given population)
  ○ Can examine one point in time or track over time

Cross-sectional Surveys
- One sample measured at one point in time
  ○ Ex. Ask parents: how do you feel about Deltopia?

Longitudinal Surveys
- Variables measured at more than one point in time
  ○ Same variables; do attitudes change over time?
  ○ Ex. Do people's attitudes about Deltopia change over time (after the riots have passed)?
- Panel - same people each time
- Trend - different random samples from the same population
  ○ Ex. Poll "Americans" every 5 years about their church-going; track "likely voters" over the course of an election campaign
- Cohort - different samples, but of the same "cohort"
  ○ Something that connects people based on time
    ▪ Ex. Birthday, graduation year
  ○ Ex. Survey the class of 2012 every 5 years regarding their employment since graduation
Primary Goals (cont.)
- Examine relationships between the attitude/behavior variables measured
  ○ Does X predict/relate to Y?
  ○ Ex. Does exposure to alcohol ads (X) predict teen drinking (Y)?

Administering Surveys
Tuesday, November 4, 2014

Self-Administered Questionnaires
- Mail surveys; online or emailed questionnaires; handouts; diaries
  ○ Relatively easy and inexpensive
  ○ No interviewer influence
  ○ Increased privacy/anonymity
  BUT...
  ○ Must be self-explanatory
    ▪ Something people are willing to do
    ▪ Easy to understand
  ○ Very low response rate!!
- Ways to increase response rate
  ○ Have inducements
    ▪ Ex. Money, raffle
  ○ Make it easy to complete and return
    ▪ Don't make it too long
  ○ Include a persuasive cover letter and/or do an advance mailing
  ○ Send follow-up mailings
    ▪ Keep bothering them about it

Interview Surveys
- Personal/face-to-face
  ○ More flexible (can probe for depth)
  ○ Higher response rate
  BUT...
  ○ More potential for interviewer influence
    ▪ Don't want to do anything that could influence responses (the way you stand, dress, etc.)
  ○ Higher costs
- Telephone
  ○ Quickest results
    ▪ Don't have to wait for responses, for staff to come back, etc.
  ○ Compared to face-to-face: reduced costs, more privacy, more efficiency
  ○ Compared to self-administered: more detail possible, better response rate
  ○ But what about call screening & cell phones?
    ▪ Some people don't pick up unknown numbers
    ▪ What if they are outside of their home? Driving? Shopping?

Question Wording (and order) is IMPORTANT!
Thursday, November 6, 2014

Importance of answer choices:
- Gallup Poll questions on health care:
  ○ "Do you think it's the responsibility of the federal govt. to make sure all Americans have healthcare coverage? Or is that not the responsibility of the fed. govt.?"
    VS.
  ○ "Which comes closer to your view about health insurance -- the govt. should be primarily responsible for making sure all Americans have health insurance, or Americans themselves should be primarily responsible for making sure they and their families have health insurance?"
- THE ANSWERS WILL COME OUT DIFFERENTLY! More people will say that Americans are responsible for health insurance when asked the second question.

How Do We Relate Variables in Survey Research?
Recall the goals for surveys...
- Identify/describe attitudes or behaviors (in a given population)
- Examine relationships between the attitude/behavior variables...

Relating variables
- Depends on your hyp/RQ, and how your variables are measured!
  ○ Categorical IV & categorical DV --> break down the percentages
  ○ Categorical IV, continuous DV --> compare means
  ○ Continuous IV & continuous DV --> compute a correlation
- When both IV & DV are nominal/categorical (discrete variables):
  ○ Ex. Yes/no; M/F; support/oppose/no opinion
  ○ All you can do is break down the percentages by category
  ○ Ex. Gallup survey on support for legalization of marijuana (2009)
    ▪ Q: Do you think the use of marijuana should be made legal, or not? 44% yes, 54% no
    ▪ Could this be related to another variable? How about gender?
    ▪ Ex. RQ: does gender (IV) predict support for legalization (DV)?
      □ In 2005: 41% of males vs. 32% of females supported legalization
      □ In 2009: 45% of males vs. 44% of females
- If the IV is categorical, but the DV is interval/ratio data (continuous):
  ○ The DV uses Likert items, semantic differential items, etc.
  ○ Compare mean (average) DV scores for the different IV categories
    ▪ Ex. Compare the mean score for men vs. the mean score for women
  ○ Comparing means
    ▪ Ex. RQ: Does political ideology (IV) predict support for legalization (DV)?
    ▪ IV: political ideology
      □ Measured as a categorical/nominal variable:
        I consider myself (check one): __liberal __moderate __conservative
      □ OR, if the IV was originally measured as a continuous variable but then collapsed to categorical:
        I consider my political views to be: Very liberal 1 2 3 4 5 6 7 Very conservative
        THEN we split it up ourselves... 1-3 = liberal, 4 = moderate, 5-7 = conservative
        Or divide participants into categories based on their scores (median split)
        ◊ Ex. Above the median = conservative, below the median = liberal
    ▪ DV: support for legalization
      □ As a continuous variable (interval/ratio):
        "The recreational use of weed should be made legal." Strongly disagree 1 2 3 4 5 6 7 Strongly agree
    ▪ So, you just compute a mean score on the DV for subjects in each IV category
      ◊ Ex. Conservatives: M = 3.3; Liberals: M = 6.1
      ◊ Conclusion: liberals are (significantly) more in favor of the legalization of weed
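Here is a hedged Python sketch of the two analyses described above for a categorical IV: a percentage breakdown when the DV is also categorical, and a comparison of means (after a median split on a continuous IV) when the DV is continuous. The data, column names, and values are invented for illustration; the random data just demonstrate the mechanics.

```python
# Sketch: relating survey variables (hypothetical survey data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 400
survey = pd.DataFrame({
    "gender": rng.choice(["male", "female"], n),
    "supports_legalization": rng.choice(["yes", "no"], n),   # categorical DV
    "ideology_1to7": rng.integers(1, 8, n),                  # 1 = very liberal, 7 = very conservative
    "support_scale_1to7": rng.integers(1, 8, n),             # continuous (Likert-type) DV
})

# Categorical IV & categorical DV: break down the percentages by category.
print(pd.crosstab(survey["gender"], survey["supports_legalization"], normalize="index").round(2))

# Categorical IV (from a median split on ideology) & continuous DV: compare means.
median = survey["ideology_1to7"].median()
survey["ideology_group"] = np.where(survey["ideology_1to7"] > median, "conservative", "liberal")
print(survey.groupby("ideology_group")["support_scale_1to7"].mean().round(2))
```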
- If BOTH the IV and DV are interval/ratio data:
  ○ Compute a correlation
    ▪ A statistical value that relates two (or more) continuous variables

Correlation
- Compute an "r" value (Pearson r)
  ○ r tells you the type (+ vs. -) and magnitude (strength) of the relationship

Type of relationship
- Positive r: as X increases, Y increases
  ○ AKA "direct" relationship
  ○ Ex. More alcohol, higher BAC
- Negative r: as X increases, Y decreases
  ○ AKA "inverse" relationship

Magnitude of relationship (strength)
- r ranges from -1.00 to +1.00
- The further from zero, the stronger the relationship
- r = +1.0 (perfect correlation)
  ○ All dots on the line
- When r is smaller (weaker)
  ○ The further the dots are from the line, the less correlation

What can you conclude from survey/correlational data?
- CAN conclude that variables are related/associated
- CANNOT conclude that one variable causes the other!!
  ○ Why not? You didn't control all factors, so you cannot claim causality
    ▪ Outside factors
    ▪ Don't know which came first: A --> B or B --> A

To establish causality...
- Variables must be related (X correlated with Y)
  ○ Okay so far - surveys can show that
    ▪ Ex. More time studying, higher GPA
- Must establish time order (the IV happened BEFORE the DV)
- Must rule out other explanations/causes

So, Survey/Correlational Research Has Two Causality Problems
1. Causal direction problem (time order)
   ○ Does X --> Y or Y --> X?
2. Third variable problem (other explanations/outside factors)
   ○ Does some 3rd variable explain the X/Y relationship?

Getting Closer to Causality
- To help solve the 3rd variable problem:
  ○ "Partial correlation"
    ▪ Measure potential 3rd variables
    ▪ Statistically "partial out" (control for) the effects of those 3rd variables
    ▪ Then see if the X/Y relationship still holds
    ▪ Ex. Time studying and GPA, with a 3rd variable of interest in school: only look at the overlap between time studying and GPA after removing the 3rd variable
      □ If the X/Y relationship still holds, you can rule out the 3rd variable as the cause
      □ If the X/Y relationship disappears (or is reduced substantially), then the 3rd-variable explanation matters
- To help solve the causal direction problem: you need a longitudinal study
  ○ "Cross-lagged panel design"
    ▪ Time 1: measure the X & Y variables
    ▪ Time 2: measure the X & Y variables again later for the same people
    ▪ Compute r's for X & Y, but across the times measured
    ▪ Ex. IM and adolescents' friendships (IV: IM use; DV: friendship quality)
      □ Measure IM use and quality of friendships at Time 1 and again at Time 2 (6 months later)
      □ You know Time 1 came before Time 2, so you can look at the correlation between IM use (Time 1) and quality of friendship (Time 2), which is positive, and then see whether there is a correlation between quality of friendship (Time 1) and IM use (Time 2), which there isn't
    ▪ So, measure both variables for the same people at different times; then see which "cross" relationship holds
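As a hedged sketch of the correlation and partial-correlation ideas above (the data and the "interest in school" third variable are simulated for illustration, not taken from the notes), the snippet below computes Pearson r for time studying and GPA, then partials out the third variable using the standard first-order partial-correlation formula.

```python
# Sketch: Pearson r and a first-order partial correlation (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 300
interest = rng.normal(0, 1, n)                    # potential 3rd variable: interest in school
studying = 0.8 * interest + rng.normal(0, 1, n)   # X: time studying (driven partly by interest)
gpa = 0.8 * interest + rng.normal(0, 1, n)        # Y: GPA (also driven partly by interest)

r_xy, _ = stats.pearsonr(studying, gpa)
r_xz, _ = stats.pearsonr(studying, interest)
r_yz, _ = stats.pearsonr(gpa, interest)

# Partial correlation of X and Y, controlling for Z.
r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print(f"r(studying, GPA)            = {r_xy:.2f}")
print(f"r(studying, GPA | interest) = {r_xy_z:.2f}  # shrinks toward 0 -> the 3rd variable matters")
```

In this simulated case the X/Y relationship largely disappears once the third variable is controlled, which is the pattern the notes describe as "the 3rd-variable explanation matters."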
Types of Experiments
Tuesday, November 18, 2014

Design notation:
- X: manipulation/treatment
- O: observation (measure of the DV)
- R: random assignment

True Experiments
1. Posttest-only (control group) design
   ○ R X O1 (group 1)
   ○ R    O1 (group 2) (missing the treatment)
   ○ Ex. R X (anti-smoking ad) O1 (beliefs about smoking)
         R (no anti-smoking ad) O1 (beliefs about smoking)
     ▪ Compare both groups; X is the IV manipulation, O1 is the DV measure
     ▪ If you get a difference between the group means (on O1), the IV caused it
   ○ Variations: more groups, several different treatments
     ▪ Ex. IV: anti-smoking appeals
       R X1 (ad w/ personal testimony) O1 (group 1)
       R X2 (ad w/ health statistics) O1 (group 2)
       R X3 (ad re: tobacco industry) O1 (group 3)
2. Pretest-posttest (control group) design
   ○ R O1 X O2 (group 1)
   ○ R O1   O2 (group 2)
   ○ Ex. R O1 (beliefs about smoking) X (anti-smoking ad) O2 (beliefs about smoking)
         R O1 (beliefs about smoking) (no anti-smoking ad) O2 (beliefs about smoking)
     ▪ Again, if you get a difference between groups (on O2), the IV caused it!
   ○ The pretest can help to show that the groups are equal, and can be compared with O2 (did your manipulation change their minds?)
   ○ Possible problem: differences on O2 MIGHT be the result of an interaction of the manipulation with the pretest
     ▪ Did O2 change because of X alone, or because of X combined with the pretest?
3. Solomon four-group design
   ○ R    X O1 (group 1)
   ○ R      O1 (group 2)
   ○ R O1 X O2 (group 3)
   ○ R O1   O2 (group 4)
   ○ Ex. The effect of sex ed on adolescents' use of condoms: applying the Solomon four-group design, sex ed ONLY increased students' condom use when there was a pretest first!
   ○ Most research does NOT use this design though

Pretesting: should you or shouldn't you?
- Useful:
  ○ To "check" on random assignment
  ○ To get info on change
- But:
  ○ Not necessary to establish causality
  ○ Bad idea if a treatment/pretest interaction is likely

Example study: music & learning
- RQ: does listening to music (while studying) hinder or enhance learning?
  ○ IV: music while studying; DV: learning
- Possible experiment (posttest-only):
  ○ R X (music) O1 (test score) (group 1): M = 65
  ○ R (no music) O1 (test score) (group 2): M = 78
  ○ Conclude that music hinders learning
- WHAT IF we want to test for the effects of ANOTHER IV? Use factorial designs

Experimental Research
Thursday, November 13, 2014

Purpose
- To test hypotheses of cause and effect
  ○ Goal is to establish internal validity
  ○ Willing to sacrifice external validity
- Remember... to establish causality: the variables must be related, time order must be established, and other explanations must be ruled out

Key Elements of a True Experiment
- Manipulation of the causal variable(s)...
  ○ Divide the IV into "conditions"
    ▪ Ex. IV: new painkiller drug
      □ Half of the subjects get the drug, half don't
  ○ ...while controlling all other variables
    ▪ Subjects in each condition are treated the same way, for the same amount of time, etc.
  ○ Examine effects on the DV
    ▪ Compare measures (mean scores) for subjects in each condition and see if differences exist
      □ Ex. DV: amount of perceived pain (e.g., Likert scale)
- Random assignment of participants to conditions
  ○ Everyone must have an equal chance of ending up in either condition
  ○ Why important?
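Below is a hedged Python sketch of the posttest-only logic just outlined: random assignment to two conditions, identical treatment otherwise, and a comparison of group means on the DV. The painkiller scenario's sample size and simulated pain ratings are invented for illustration.

```python
# Sketch: posttest-only control group design, R X O1 / R O1 (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Random assignment: each participant has an equal chance of either condition.
n = 80
condition = rng.permutation(["drug"] * (n // 2) + ["placebo"] * (n // 2))

# O1: perceived pain on a 1-7 Likert-type item, measured after the manipulation.
pain = np.where(condition == "drug",
                rng.normal(3.0, 1.0, n),    # assumed lower pain with the drug
                rng.normal(4.5, 1.0, n))

t, p = stats.ttest_ind(pain[condition == "drug"], pain[condition == "placebo"])
print(f"M(drug) = {pain[condition == 'drug'].mean():.2f}, "
      f"M(placebo) = {pain[condition == 'placebo'].mean():.2f}, t = {t:.2f}, p = {p:.3f}")
# A difference between the group means on O1 can be attributed to the IV,
# because random assignment plus control rules out other explanations.
```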
Factorial Designs
Tuesday, November 18, 2014

- Purpose: to examine the effects of two or more IVs simultaneously
- "Factors" are IVs
- Each factor/IV has at least two levels (conditions)
  ○ Ex. Music factor: music/no music AND caffeine factor: caffeine/no caffeine; DV: learning (test score)
  ○ This is a 2x2 design (2 levels of music x 2 levels of caffeine)
- What if a factor has more than two levels?
  ○ Ex. Music factor: pop music, classical music, no music; caffeine factor: caffeine, none
  ○ A 3x2 design (3 levels of music x 2 levels of caffeine)
- What if there are more than two factors?
  ○ Music factor: pop, classical, none
  ○ Caffeine factor: caffeine, none
  ○ Gender: male, female
  ○ A 3x2x2 design

Factorial designs test for:
- Main effects
- Interaction effects

Main Effect
- The effect of one IV individually on the DV
  ○ Ex. (for the 2 music x 2 caffeine study) Main effect for caffeine: lower scores with caffeine than without (caffeine worsens learning)
- To test for main effects, compare the "marginal" means of the DV for each factor/IV
  ○ Ex. DV: learning (test scores); cell and marginal means:
    ▪ Caffeine + music: M = 50; caffeine + no music: M = 60; caffeine marginal: M = 55
    ▪ No caffeine + music: M = 70; no caffeine + no music: M = 60; no-caffeine marginal: M = 65
    ▪ Music marginal: M = 60; no-music marginal: M = 60
- Yes, there is a main effect for caffeine: greater learning without caffeine than with it (65 vs. 55), i.e., caffeine worsened test scores
- No main effect for music: studying with or without music made no difference (60 vs. 60)
- BUT... main effects don't tell the whole story!!
  ○ There could be other factors
  ○ Music is bad with caffeine but okay without it

Interaction Effect
- The unique effect of the combination of IVs
  ○ The effect of one IV depends on the levels of the other IV(s)
- Example of a music x caffeine interaction:
  ○ Caffeine reduces learning only when combined with listening to music; without music it has no effect
- To test for an interaction effect, graph the cell means (a code sketch follows at the end of this section)
  ○ There is an interaction effect if the lines are not parallel
  ○ Put one IV/factor on the X axis
- [Cell-means table and line graph from the slides omitted: in that example, caffeine lowered scores overall, the drop was worse when combined with music, and music actually improved scores when there was no caffeine]
- If you only get the graph, put the means back into the table to find the main effects

A word about factors (IVs)
- In one design, you can have as IVs/factors:
  ○ Manipulated variable(s) - Ex. music exposure; caffeine
  ○ Subject variable(s) - Ex. gender; personality traits; TV use (high/low)
- You can only make causal conclusions about manipulated IVs (NOT subject variables)
- If there are NO manipulated variables at all, then it's not an experiment!!! (it's a survey with a factorial-type set-up)
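To make the marginal-means and interaction logic concrete, here is a hedged Python sketch using the cell means from the lecture's main-effects example (50/60/70/60); the per-cell scores are simulated around those means purely for illustration.

```python
# Sketch: a 2x2 factorial design - cell means, marginal means, and an interaction check (simulated).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cell_means = {("caffeine", "music"): 50, ("caffeine", "no music"): 60,
              ("no caffeine", "music"): 70, ("no caffeine", "no music"): 60}

rows = []
for (caffeine, music), m in cell_means.items():
    for score in rng.normal(m, 5, 20):              # 20 simulated subjects per cell
        rows.append({"caffeine": caffeine, "music": music, "score": score})
data = pd.DataFrame(rows)

cells = data.pivot_table(values="score", index="caffeine", columns="music", aggfunc="mean")
print(cells.round(1))                               # cell means
print(cells.mean(axis=1).round(1))                  # marginal means for caffeine (main effect?)
print(cells.mean(axis=0).round(1))                  # marginal means for music (main effect?)

# Interaction check: is the effect of music the same at each level of caffeine?
effect_with_caffeine = cells.loc["caffeine", "music"] - cells.loc["caffeine", "no music"]
effect_without = cells.loc["no caffeine", "music"] - cells.loc["no caffeine", "no music"]
print(f"music effect with caffeine: {effect_with_caffeine:.1f}, "
      f"without caffeine: {effect_without:.1f}  # unequal -> lines not parallel -> interaction")
```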
Experimental Research (cont.)
Thursday, November 20, 2014

Key Elements of a True Experiment
- Manipulation/control + random assignment = internal validity
  ○ Posttest-only (control group) design
  ○ Pretest-posttest (control group) design

Threats to Internal Validity
- If it is NOT a TRUE experiment, or if you do the experiment improperly, then...
  ○ Alternative explanations for the results become possible (i.e., they threaten internal validity)

"Pre-Experiments"
- Some manipulation of the IV, but no random assignment, thus many threats to internal validity
  ○ One-shot case study
    ▪ X O1 (group 1)
    ▪ Ex. Test "when you smile, people will smile back": X (smile at people) O1 (record smiles received)
    ▪ BUT, LOTS OF 3RD VARIABLES: maybe people were already smiling; maybe you only smiled at happy people
  ○ One-group pretest-posttest design
    ▪ O1 X O2 (group 1)
    ▪ Ex. O1 (record smiles) X (smile at people) O2 (record smiles again): count smiles, smile at people, then go around and count smiles from the same people again
    ▪ BUT STILL, 3RD VARIABLES: they thought of something funny, or recognized you from the first time
  ○ Static group comparison (posttest only, non-equivalent groups)
    ▪ X O1 (group 1)
    ▪    O1 (group 2)
    ▪ Ex. X (smile at some people) O1 (record smiles) / (no smiling at other people) O1 (record smiles): go to the lagoon and smile at people, go to the library and don't
    ▪ BUT the difference could have been because of location (library: sad because of studying)
- Selection bias
  ○ Subjects are not randomly assigned
  ○ You could choose subjects that would support your hypothesis
  ○ You don't know if your results were because of the people, since there was no random assignment
- History effect
  ○ Something outside the study happens that is a possible cause of the result
  ○ Ex. Something good happened so they were smiling, not because you smiled at them
- Reactivity effects
  ○ Participants' reactions to being studied, rather than to the IV/treatment, influence the DV
  ○ Social desirability effect
  ○ Hawthorne effect
    ▪ You get a reaction from people when they know that they are being watched
    ▪ Ex. Trying to see if lighting affects productivity; the workers knew they were being watched: the lights were raised, productivity went up; the lights were raised more, productivity rose again; the lights were dimmed, productivity still went up
  ○ Placebo effect
    ▪ Getting something that is supposed to result in a particular outcome
    ▪ The participant believes the pill works when it was really just a sugar pill
  ○ Demand characteristics
    ▪ Subjects think they've guessed your hypothesis, so they try to give you the answers you want
- So, how to remove/control these threats?
  ○ Conduct a TRUE experiment!
    ▪ Random assignment to proper conditions
  ○ Be sure to treat the groups equally
    ▪ All group(s) get equal time, attention, etc.
- Threats related to pre-testing (or measures over time):
  ○ Testing effect (AKA sensitization)
    ▪ When asking the same questions again, answers might change just because you already asked about it & participants are now thinking about it
  ○ Maturation
  ○ Statistical regression (to the mean) (see the simulation sketch after this section)
    ▪ When you fail a test, just by chance you should get a better score next time; if you ace a test, by chance you will score lower
    ▪ A threat to internal validity when you select people with extreme scores
  ○ Instrumentation
    ▪ The measurement instrument or criteria change partway through the research
    ▪ Ex. Overnight, 500,000 people suddenly became overweight because the BMI equation was changed
  ○ Mortality (attrition)
    ▪ From time 1 to time 2, people can drop out of your study
    ▪ Ex. Smiles went up at time 2 because the sad people from time 1 had dropped out
  ○ A TRUE EXPERIMENT is better because there are 2 groups that you can compare; if the results differ, it would be because of the manipulation
- SO HOW DO YOU REMOVE THESE THREATS?
  ○ A TRUE EXPERIMENT!
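The regression-to-the-mean threat above lends itself to a quick simulation. This hedged Python sketch (all numbers invented) models observed scores as a stable true ability plus random error, and shows that the lowest scorers at time 1 improve at time 2 with no treatment at all.

```python
# Sketch: statistical regression to the mean with no treatment (simulated scores).
import numpy as np

rng = np.random.default_rng(11)
n = 1000
ability = rng.normal(70, 8, n)              # stable true ability
test1 = ability + rng.normal(0, 8, n)       # observed score = ability + random error
test2 = ability + rng.normal(0, 8, n)       # same ability, new random error, NO intervention

worst = test1 < np.percentile(test1, 10)    # select the extreme low scorers at time 1
print(f"Bottom 10% at time 1: mean = {test1[worst].mean():.1f}")
print(f"Same people at time 2: mean = {test2[worst].mean():.1f}  # improves by chance alone")
```

This is why selecting participants for their extreme pretest scores, without a comparison group, makes any "improvement" hard to interpret.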
Experimenter Effect/Bias
- The experimenter's behavior or attributes, rather than the treatment (IV), influence the DV
- Treat the groups the same!
  ○ Ex. Giving the medicine with "this will make you feel better" but the placebo with "good luck with that..."
- How to control experimenter effects?
  ○ Same thing again (a true experiment, etc.), but also:
    □ Double-blind study
      ▪ Neither the researcher nor the participants know which is the control/experimental group
    □ Automate (or script) the experiment
      ▪ Makes sure that the researchers say the same thing

Quasi-Experiments
- Not true experiments (no random assignment), but they have decent "comparison" groups
- Nonequivalent control group design
  ○ O1 X O2 (group 1)
  ○ O1   O2 (group 2)
  ○ Use pretest scores (and any other info you have) to "match" the groups before the manipulation
- Time series designs
  ○ Track many observations over time, before and after the manipulation
  ○ Single-group interrupted time series design
    ▪ O1 O2 O3 O4 X O5 O6 O7 O8 (group 1)
    ▪ Improves upon the "one-group pretest-posttest" design
    ▪ Ex. CRIME PREVENTION PROGRAM: the city puts up a rec center, and crimes reported seem to go down from January (X) to February; BUT maybe it wasn't the rec center
      □ With only a Jan-to-Feb comparison, critics can still say crime was already on the decline
      □ With crimes reported tracked from Nov through Apr, you should see a MAJOR change right at the intervention if the program worked
      □ [Graphs from the slides omitted: a two-point before/after comparison vs. a Nov-Apr series with the intervention in January]
    ▪ Solves some threats to internal validity (testing, maturation)
    ▪ Variation: take the treatment away & measure again
  ○ Multiple time series design
    ▪ O1 O2 O3 O4 X O5 O6 O7 O8 (group 1)
    ▪ O1 O2 O3 O4   O5 O6 O7 O8 (group 2)
    ▪ Improves upon the "nonequivalent control group" design
    ▪ Ex. MEDIA LITERACY PROGRAM: track media literacy scores over weeks 1-8 in two schools; School B gets the program after week 3, School A does not get the program
      □ [Graph from the slides omitted: School B's scores rise after week 3 while School A's stay flat]
    ▪ Variation: give the comparison group the treatment later and keep measuring
      □ Give School A the treatment later on & see if its scores then match School B's results
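Here is a hedged Python sketch of the multiple time series logic just described: weekly scores for two groups, an intervention after week 3 for one of them, and a comparison of each group's pre/post change. The schools, weeks, and score values are all invented.

```python
# Sketch: a multiple time series (quasi-experiment) comparison (hypothetical weekly scores).
import numpy as np

weeks = np.arange(1, 9)
intervention_week = 3                                   # School B gets the program after week 3

school_a = np.array([50, 51, 50, 52, 51, 52, 51, 52])   # comparison group: no program
school_b = np.array([50, 51, 52, 60, 63, 65, 66, 67])   # treatment group: program after week 3

def pre_post_change(scores):
    pre = scores[weeks <= intervention_week].mean()
    post = scores[weeks > intervention_week].mean()
    return post - pre

print(f"School A (no program) change: {pre_post_change(school_a):+.1f}")
print(f"School B (program)    change: {pre_post_change(school_b):+.1f}")
# A jump right at the intervention for School B, with no comparable jump in School A,
# is the pattern that supports the program (vs. "scores were already rising anyway").
```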
"She Said No" study □ Control: asked to watch TV at home like normal □ Experimental group: asked to watch this movie at home ○ More natural setting/behavior --> higher external validity ○ Less reactivity ○ But harder to maintain experimental control Within-Subject Designs (repeated measures) - So far, a of our experiments have been "Between Subjects" designs: ○ Randomly assign subjects to different conditions - Within-Subjects Design ○ Every subject is in every condition ○ Ex. Pilots' reaction time to warning lights  Possible Between Subjects design, with 18 pilots Tell pilots to flip switch as fast as they can when they see warning light R X (red) O1 (reaction time) (group1) R X (green) O2 (reaction time) (group2) R X (yellow) O3 (reaction time) (group3) PROBLEMS: small N's; much random error  As a Within Subject design: each pilot reacts to every color light □ Pilot #1: 1st red light --> reaction 2nd green light --> reaction 3rd yellow light --> reaction NOW: N=18 in each condition (increase power; decrease random error) P

