Comm. 88 Midterm Review Guide - Mullin Spring 2016
This 13 page Study Guide was uploaded by meg.camp on Monday April 25, 2016. The Study Guide belongs to Comm. 88 at University of California Santa Barbara taught by Dolly Mullin in Spring 2016. Since its upload, it has received 184 views. For similar materials see Communication Research Methods in Communication Studies at University of California Santa Barbara.
Ways of Knowing & Approaches to Research
-Lecture: Everyday ways of knowing (authority, experience, etc.) & their problems; characteristics of science; goals of science; research approaches (science/positivism vs. interpretive vs. critical); induction & deduction; quantitative & qualitative methods; basic/theoretical/scholarly research vs. applied
-Ch. 1 & Ch. 2: lecture overlap, plus proprietary research (Ch. 2); additional info/terms in some overlap areas.
-Text Ch. 3: there is some "testable" info (e.g., primary vs. secondary sources; peer review; etc.)
Lecture: Review Lectures 2 & 3

Chapter 1
What is research?
Research: the activity of conducting intellectual investigations into the observable world
o Social research: focuses less on the observable world in which human beings interact and more on the interactions themselves, how they come into existence, how they function, and how they affect the human experience
How do we know what we know?
Epistemology: the study of knowledge
o Experience: a common way of understanding the social world through previous experiences ("learning the hard way")
o Tenacity: the assumption that something is true because it has always been said to be true (norms and assumptions)
o Authority: the reliance upon someone in a position of power to determine what is factual; may be derived from a variety of places (political power, religious authority, interpersonal trust)
o A priori: a form of reasoning based on hunches and observations that have been accumulated over time; it intuitively makes sense that something is true
What's wrong with everyday ways of knowing?
Accuracy: most people are just casual observers
Over-generalization: a problematic way of knowing in which individuals base their knowledge on just a few experiences or observations
Cognitive conservatism: the idea that how someone views the world is often based upon his or her prior beliefs
Contradictory knowledge: different sources (parents, friends, authorities) give advice or perspectives that are each valid to an extent, yet don't match up
Scientific Research
Scientific reasoning: a way of knowing where one has a hunch about how things ought to be, then tests that hunch by making observations
o Hypotheses: educated guesses about a social phenomenon based on prior observations
o Theory: a set of interrelated constructs, definitions, and propositions that present a systematic view of phenomena by specifying relations among variables, with the purpose of explaining and predicting the phenomena
What do communication researchers do?
Scholarly research: primary research conducted by academic researchers and distributed through academic publications with the desire to build theory
Applied research: takes the theoretical lessons learned in academic studies and applies them to varying real life contexts
What are some examples of communication research?
Business and organizations
o Organizational communication: dedicated to studying the ways in which people exchange information in order to accomplish group or individual goals
Media
o Media research: uses surveys, lab experiments, and content analysis to answer questions about the uses of and responses to mediated information, and can be useful in understanding how humans convey meaning
Health Care
o Health communication: looks at the interpersonal exchanges between doctors and patients, public health campaigns, the flow of information in public health organizations, and other health issues
Interpersonal interactions
o Interpersonal communication: study of compliance gaining and social influence; persuading people to adopt certain attitudes or behaviors; establishing identity, information gathering, building understanding with each other; and creating and maintaining relationships in a variety of contexts

Chapter 2
Meta-theoretical Considerations
Meta-theory: "theory about theory" that allows people to understand the philosophy driving their decisions about research methods, design, and analysis
o Ontology: the study of the nature of reality; "what is the nature of the world around us?"
Realist: the belief that the world exists and is tangible
Nominalist: the belief that without individual cognition, the social world is arbitrary
Social constructionist: the belief that our ideas about reality are constructed through societal and interpersonal communication
o Epistemology: the study of the nature of knowledge; "what can we know?"
Objectivist: the belief that it is possible to know and explain the world
Subjectivist: the belief that knowledge is relative and can only be understood from the point of view of the individual involved
o Axiology: the study of values
Research Perspectives and Paradigms
While our meta-theoretical considerations tend to be intuitive and learned through everyday ways of knowing early in our lives, research paradigms are more often taught through formal education
Positivism: the most traditional research perspective, with the belief that scientific evidence is superior to other types of knowing
o Know by discovery
o Falsification: seek to prove that something is not true rather than that it is true
Interpretivism: a research perspective in which understanding and interpretation of the social world is derived from one's personal intuition and perspective
o Phenomenology: concerned with the essence of what we experience from an individual's first-person perspective; "lived experience"
o Subjective
Critical perspective: uses research methods as a tool to challenge unjust discourse and communication practices, with the goal of using knowledge as a tool to create social change
o The structure and control of language perpetuates power imbalances in our society
o Individuals become less sensitized to repression through the role of the mass media
o We rely too heavily on the scientific method and accept all empirical findings without adequate critique
Hermeneutics: a way to study textual data through interpretation to create meaning; knowledge gained via interpretation
Types of Research
Proprietary: conducted for a specific audience and not shared beyond that intended audience
Scholarly: conducted to contribute to generalizable knowledge for public consumption
o Methodical, creative, self-critical, public, cumulative and self-correcting, cyclical
Two Logical Systems
Inductive model: often referred to as grounded theory; communication scholars begin the research process by gathering data,
observing patterns and idiosyncrasies within the data, and developing theory based upon that data
o Most who prefer this method of inquiry use qualitative research methods
Deductive model: typically referred to as theory driven; scholars preferring a deductive approach begin with a theory and gather evidence to evaluate that theory
o Most who prefer this method of inquiry use quantitative research methods
Qualitative and Quantitative Research
Qualitative: research, usually studying words or texts, that uses methods embracing a naturalistic, interpretive paradigm, typically from an inductive, discovery-based point of view
Quantitative: research, usually studying numerical data (or reducing words to numerical data), that uses methods embracing a post-positivist paradigm, typically from a deductive, explanatory point of view

Chapter 3
What are the purposes of Library Research?
Determine what's already known
Define the problem and formulate possible solutions
Plan the collection of primary data
Define the population and select the sample in your primary information collection
Supply background information
o Body of knowledge: research that has already been conducted
Types of Research
Primary research: conducted to answer a specific problem or question and produces original data
Secondary research: research that has been previously collected or conducted
Phases of Research
Conceptualization: forming an idea
Operationalization: defining your terms as you're using them in your research project
Reconceptualization: reconnecting your findings back to the larger body of knowledge
Coming Up with your Research Question
Study objectives: what researchers hope to answer through their research
o Good researchers refer back to the objectives every step of the way and ensure that everything they are doing answers these objectives
o Usually, there is one main objective and several related secondary objectives
Applied research: tests theory in real life contexts to see if
it can be used to solve problems
Databases: sources that search through many journals at a time
Peer review: the way in which a scholarly field determines which research is acceptable, sound, and valid, and which is not

Theories, Hypotheses, Research Questions
-Lecture: Theory, concepts, variables; hypotheses & RQs; falsifiability; survey/observational vs. experimental research; external vs. internal validity; causal vs. associational relationships; difference vs. continuous statements (discussed in section)
-Text Ch. 6: mostly lecture overlap; ignore "null" hypothesis stuff
Lecture: Review Lectures 4 & 5

Chapter 6
How do you ask Research Questions?
Research question: a question scholars ask about the way things work
o Good starting points for new areas of inquiry
o Used when researchers want to find out certain information about a topic but don't know enough about it to make a prediction
o Or they want to understand or describe communication behavior
Questions of definition: How exactly do you define communication? "One cannot not communicate"
Not empirical questions with a definitive answer, but important to the process
Questions of fact: asking questions regarding what has or will happen
Much communication scholarship focuses on questions of fact
Concern what is going on in the world
May ask about relationships of association or relationships of causation
What are Research Hypotheses?
Hypothesis: a statement a researcher makes about the relationship between a dependent and an independent variable
o A numeric subscript following the symbol H refers to the number of the hypothesis in a particular study
o Null hypothesis: a statement that the research hypothesis is wrong; there is no (null) relationship between the variables that the researcher predicted
Forms of relationships in hypotheses
o Associational relationship: implies that where one variable is found, the other will also be found; NOT implying that one causes the other
o Causal relationship: implies that one variable causes a change in the direction of the other variable
Positive linear relationship: A increases, B increases
Negative linear relationship: A increases, B decreases
Curvilinear relationship: A increases, B decreases for a while, then B increases
o Directional hypotheses: one-tailed; you know where you expect to find your result
o Non-directional hypotheses: two-tailed; you must consider a result that could occur on either end of the distribution
o Fact pattern: a factual relationship occurring repeatedly
How do you set up good Research Questions?
Conceptual definition: how the concept or variable that is being studied is defined
Operational definition: how the concept or variable of interest is measured and/or observed
What are the boundaries of Research Questions and Hypotheses?
Under which conditions do you suspect your research question and/or hypotheses may be true?
o Time, place, people, and situations
o The key is to consider boundaries that might influence the process of inquiry
o You need to be aware of the specificity you offer

Defining Variables & Measurement
-Lecture: Independent & dependent variables; conceptualizing versus operationalizing variables; types of measures (self report, etc.); levels (nominal, interval, etc.); questionnaire items (open/closed; Likert, semantic differential); composite measures; reliability (inter-item, inter-/intra-coder); validity (face, content, etc.); triangulation
-Text Ch. 7: lecture overlap; plus extraneous variables, social desirability; ignore the section on the different types of relationships (stochastic, sequential, sufficient, etc.)
-Text Ch. 9: mostly lecture overlap; ignore "concurrent validity"; skip pp. 188-193 for now (save for final exam)
Lecture: Review Lectures 5, 6, & 7

Chapter 7
What is a Variable?
Variables: any concepts that have the ability to take on more than one value
o The key is that a variable varies
Revisiting Conceptual and Operational Definitions
Measured operational definition: describes how a researcher can measure the existence or quantity of a variable (ex: IQ test, letter grade)
Experimental operational definition: specifies how the researcher can manipulate a variable in an experiment to produce at least two values or levels of a concept (ex: reaction to an attractive vs. unattractive speaker)
Operationalizing: Matching Your Variables to Your Study
Conceptual fit: how closely your operational definition matches your conceptual definition
o What is measured versus what you set out to measure
o The next step is to ask whether you agree with how the researcher measured the variable
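A measured operational definition of an attitude variable often takes the form of a composite measure: several Likert items averaged into one scale score. The sketch below illustrates the idea; the item names, responses, and scale are invented for illustration, not taken from the course.

```python
# Hypothetical composite measure: averaging Likert items into one scale score.
# Five-point Likert responses (1 = strongly disagree ... 5 = strongly agree).
respondent = {
    "talks_easily_with_strangers": 4,
    "enjoys_group_discussions": 5,
    "avoids_public_speaking": 2,   # reverse-worded item
}

REVERSED = {"avoids_public_speaking"}  # items worded opposite to the concept
SCALE_MAX = 5

def composite_score(items: dict) -> float:
    """Average the items, flipping reverse-worded ones first."""
    values = []
    for name, value in items.items():
        if name in REVERSED:
            value = (SCALE_MAX + 1) - value   # 2 becomes 4 on a 1-5 scale
        values.append(value)
    return sum(values) / len(values)

print(composite_score(respondent))  # (4 + 5 + 4) / 3, about 4.33
```

Reverse-worded items are flipped before averaging so that a high composite score consistently means more of the underlying concept.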
Measuring variables
o Self-report: this procedure is good at measuring individuals' beliefs, attitudes, and values, or at finding out about behaviors we might not be able to observe directly
Social desirability: the idea that if participants are asked to answer questions that are sensitive in nature, they will feel swayed to present themselves in a particular light, regardless of whether it is true
o Other report: you ask others to report on the behavior of the individual in question; they may be more objective than an individual's self-report, easing your concern about social desirability
o Observing behavior: tends to be more accurate than self-report and other report because direct observation is much more objective
Hawthorne Effect: an effect where people alter their behavior because they know they're being observed
Triangulation: combining a variety of different methods for measuring the variable in question
o The comparison of two or more forms of evidence with respect to an object of research interest
Triangulate sources: multiple interviewees, multiple field sites, multiple cases, multiple observations
Triangulate methods: qualitative methods plus quantitative methods, observation plus self-report plus other-report
Triangulate researchers: multiple interviewers or observers
Measurement: a process of determining the characteristics and/or quantity of a variable through systematic recording and organization of observations
o Nominal level measurement: makes use of unordered categories, classifying the variable into qualitatively different and unique categories
These categories do not indicate any type of order or intensity of the degree to which a characteristic exists
They represent the potential categories of some variable of interest
o Ordinal level measurement: has ordered categories or ranks; it can be determined whether an observation is greater than, less than, or equal to other observations
This type of measure does not indicate
how much the difference is; the amount separating levels is not known
o Interval level measurement: specifies relative position and also establishes standard, equal distances between points on the scale
Likert scale: measures participants' feelings or attitudes toward a person, issue, or event (ex: agree, disagree, strongly disagree)
Semantic differential scale: measures the meanings participants assign to some stimulus based on the connotative meaning of concepts (ex: happy-sad, calm-anxious)
o Ratio level measurement: the most specific type; has all of the characteristics of an interval scale, but also a true, meaningful zero point
Types of Variables
Independent variable: causes or determines the value of another variable
o Manipulated
Dependent variable: assumed to depend on or be caused by another variable
o Measured
Confounding variable: an extraneous variable that muddies the relationship between the independent and dependent variables
o When you design a research project and identify your variables, you need to attempt to anticipate potential confounds and eliminate them in the design of the study
Mediating variable: A causes M, which in turn causes B
Moderating variable: M affects the relationship between A and B
o Mediating and moderating variables are controlled by the researcher, but confounding variables are not
Different Types of Relationships between Variables
Reversible relationships: can go in either direction
Irreversible relationships: only go in one direction
Deterministic relationships: occur when the dependent variable must result from the independent variable
Stochastic: those that are probable
o Individualistic fallacy: when people fail to recognize the differences between stochastic and deterministic relationships, they may make inaccurate assumptions about the relationships between variables
Sequential relationships: the ordering of the variables is important and must occur sequentially, meaning chronologically or in order
Coextensive
relationships: the variables co-occur or happen simultaneously
Sufficient relationships: one variable is enough to bring about a second variable
Contingent relationships: one variable is enough to bring about a third variable, if needed
Necessary relationships: one variable must be present for a second variable to be present
Substitutable relationships: other forces might bring about the same effect as a necessary relationship
Dimensions of Variables
Uni-dimensional concepts: variables containing only one dimension
Multidimensional concepts: complex variables embodying more than one component or dimension

Chapter 9
Thinking about the Quality of Your Observations
What is reliable? What is valid? What is credible?
o Reliability: the ability of a measure to produce the same results if replicated
o Validity: accuracy of a measure, in terms of measuring intended constructs or observations
o Credibility: the believability of the research to the reader; how much the reader believes the research to represent participants' reality
Reliability
Random error: a fluctuation in measurement
Types of reliability
o Test-retest reliability: a reliability method in which the same measure is given to the same people at two different times
o Alternate form reliability: a method to determine if the order in which the items in a measure are presented affects the ways in which people respond
o Split-half reliability: a means of evaluating internal consistency of a scale that compares one randomly selected half of a scale with the other randomly selected half of the same scale
o Item-total reliability: a means of evaluating internal consistency of a scale that compares the total score for a scale with individual item scores for the same scale
o Inter-coder reliability: an indicator of how similarly coders are coding content, both in terms of identifying units of analysis and in the contextual labels they ascribe to those units
Reliability statistics: statistical tests that can show
consistency between items, reflecting the reliability of a scale
Validity
o Face validity: a type of validity consideration in which measures, or procedures, are examined to ask whether they make sense at face value
o Criterion validity: a type of validity consideration that deals with how a particular measure holds up when compared to some outside criterion
o Predictive validity: how well a measure predicts something that will happen in the future
o Construct validity: the extent to which your variables are logically related to other variables
o Convergent validity: when two measures you expect to be related are shown to be positively statistically related
o Discriminant validity: when two measures you expect to be negatively related (opposite to each other) are shown to have a negative statistical relationship

Sampling
-Lecture: Representative sampling techniques (simple random, systematic, stratified, cluster); non-representative sampling (convenience, purposive, volunteer, quota, network/snowball); sampling error vs. systematic error
-Text Ch. 8: lecture overlap; plus sampling units and frame; response rate and power (but don't worry about the math part); note also to consider "stratified" & "proportional stratified" as the same thing
-On GS: "Rasmussen Reports" & the "Pew: Cell Phone...": know main points only
Lecture: Review Lecture 8

Chapter 8
How Important is Sampling?
Proper sampling ensures that you are appropriately representing whomever you say you're representing
External validity: are your findings valid among the population you're studying?
Sampling Theory
Generalizability: ensuring that a researcher's findings will apply to other people and situations that a study's sample supposedly represents
o The sample is expected to be representative of the entire population under study; you can find out information about the population without having to interview the entire population
Sampling frame: a realistic version of your population; the ones you can identify and access
Unit of analysis: sampling units; the individuals in the study
Sampling in Quantitative Research
Random sampling: a sample in which everyone in your sampling frame has an equal chance of being included in the study
o Typically used in the positivist paradigm because it helps ensure the objective reality being measured is measured accurately
Simple random sampling: a basic method where a group of subjects (sample) is selected for study from a larger group (population), and each member of the population has an equal chance of being chosen at any point during the sampling process
Stratified sample / Proportional stratified sample: a technique in which different subcategories of the sample are identified and then randomly selected / the same technique, but with subcategories selected proportionate to their occurrence in the population
Cluster sampling: a method in which clusters, or groups (subsets of a population), that are representative of the entire population are identified and then sampled randomly within each cluster, letting each cluster represent the population
Nonrandom sampling: sampling that is not generalizable to the population; a sample that is not random
o Convenience sample: a group of people that is convenient to access
o Volunteer sample: consists of people who are willing to volunteer for a study
o Snowball sampling: asks study participants to make referrals to
other potential participants, who in turn make referrals to other participants, and so on
o Network sampling: using social networks to locate or recruit study participants
Response rate: the proportion of people actually included in a sample, relative to the number of people the researchers attempted to include
Refusal rate: the number of people who refuse participation in a study
Sample size: the number of data sources that are selected from a total population
Statistical power: the probability that research will identify a statistical effect when it occurs
Sampling in Qualitative Research
Purposive samples: samples chosen for a particular purpose
Quota sampling: a non-probability (nonrandom) sampling technique that sets quotas for key categories to identify how many members of the sample should be put into those categories
Maximum variation sampling: selects study participants who represent a wide range of characteristics that are present in the population and are of interest to the research
Theoretical construct sampling: selection of study participants who have characteristics representing the theories on which a study is based
Typical instance sampling: consists of sampling units that have characteristics typical of a population
Extreme instance sampling: consists of sampling units that have characteristics quite different from the rest of a population
Data saturation: sampling until no new information emerges

GS: "Rasmussen Reports" & the "Pew: Cell Phone..."
-"Rasmussen": Automated polling system that uses a single, digitally-recorded voice to conduct the interview. The automated technology ensures that every respondent hears exactly the same question, from the exact same voice, asked with the exact same inflection every single time. Calls are placed to randomly-selected phone numbers through a process that ensures appropriate geographic representation.
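Rasmussen's randomly-selected phone numbers are an instance of simple random sampling from a sampling frame, the first of the representative techniques listed above. A minimal sketch (the frame of phone numbers is hypothetical):

```python
# Simple random sampling sketch: every unit in the sampling frame has an
# equal chance of selection. The phone numbers below are invented.
import random

# Hypothetical sampling frame of 1,000 phone numbers.
frame = [f"805-555-{n:04d}" for n in range(1000)]

random.seed(42)                    # fixed seed so the draw is reproducible
sample = random.sample(frame, 50)  # draw 50 numbers without replacement

print(len(sample))       # 50
print(len(set(sample)))  # 50 -- no number is drawn twice
```

`random.sample` draws without replacement, so each frame member can appear at most once; every 50-number subset of the frame is equally likely.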
After the surveys are completed, the raw data are processed through a weighting program to ensure that the sample reflects the overall population in terms of age, race, gender, political party, and other factors. This processing step is required because different segments of the population answer the phone in different ways. For example, women answer the phone more than men, older people are home more and answer more than younger people, and rural residents typically answer the phone more frequently than urban residents.
-"Pew": Cell Phone Surveys: Research has shown that as the number of adults who use cell phones only has grown, the potential for bias in landline surveys that do not include cell phone interviews is growing. Cell phone surveys are conducted in conjunction with a landline survey to improve coverage, and the data are then combined for analysis. One of the most important considerations when conducting cell phone surveys is that the costs are substantially higher than for a traditional landline survey. The cost of a completed cell phone interview is one-and-a-half to two times that of a completed landline interview. Cell phone interviews are more expensive because of the additional effort needed to screen for eligible respondents: a significant number of people reached on a cell phone are under the age of 18 and thus are not eligible for most surveys of adults. Cell phone surveys also cost more because federal regulations require cell phone numbers to be dialed manually (whereas auto-dialers can be used to dial landline numbers before calls are transferred to interviewers). Response rates are typically lower for cell phone surveys than for landline surveys. In terms of data quality, some researchers have suggested that respondents may be more distracted during a cell phone interview, but Pew's research has not found substantive differences in the quality of responses between landline and cell phone interviews.
Interviewer ratings of respondent cooperation and levels of distraction have been similar in the cell and landline samples, with cell phone respondents sometimes demonstrating even slightly greater cooperation and less distraction than landline respondents.
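The weighting step Rasmussen describes can be sketched as post-stratification on a single variable: each respondent is weighted by the ratio of their group's population share to its sample share, so over-represented groups count for less. The population shares and sample counts below are invented for illustration.

```python
# Post-stratification weighting sketch (one variable, hypothetical figures).
population_share = {"men": 0.49, "women": 0.51}   # assumed population shares
sample_counts    = {"men": 350,  "women": 650}    # women answered more often

n = sum(sample_counts.values())
weights = {
    # weight = (population share of group) / (sample share of group)
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}

print(weights["men"])    # 0.49 / 0.35 = 1.4 -- men are weighted up
print(weights["women"])  # 0.51 / 0.65, about 0.78 -- women are weighted down
```

Real polling weights adjust on several variables at once (age, race, gender, party), but the principle per cell is the same ratio shown here.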