Final Psych 324
This 48 page Study Guide was uploaded by Allie S on Tuesday December 8, 2015. The Study Guide belongs to Psych 324 at Clemson University taught by Dr. Claudio Cantalupo in Fall 2015. For similar materials see Brain and Behavior Psychology in Psychology at Clemson University.
Study Guide: Industrial Psychology Exam 1

People
Hofstede – developed the 5-dimension THEORY of CULTURE:
IC – Individualistic vs. collectivistic
PD – Power Distance
UA – Uncertainty Avoidance
MF – Masculinity vs. femininity
LS – Long-term vs. short-term
James McKeen Cattell – measured individual differences; created the first "mental test"; important to the beginnings of I/O psychology; the American counterpart
Lillian Gilbreth – TIME AND MOTION STUDIES; all about efficiency and energy conservation

Concepts
The largest share of I/O psychologists (41%) end up in academia.
Personnel psychology – part of Human Resources Management
- Field of psychology that deals with selection, recruitment, promotion, appraisal, transfers, terminations, training, and performance
- Goal is to find the best-fit person for the job at hand
Human Factors psychology – aka Human Engineering
- Study of human limitations with respect to the environment
- Goal is to CREATE an environment that fits the workers
SIOP – Society for Industrial and Organizational Psychology
Army Alpha / Army Beta – early I/O history; mental/ability tests distributed two ways:
• Army Alpha Test – written military tests given to LITERATE recruits assessing strengths, skills, and weaknesses in order to PLACE them in the correct job.
• Army Beta Test – nonverbal (pictorial) tests given to ILLITERATE recruits to assess their strengths, etc.
Hawthorne Studies (1920s–1930s) – discovered that observation increases productivity; studies of job satisfaction
- Conducted at Western Electric's Hawthorne plant – increased the amount of light in a certain section of the plant
- Carefully measured; saw that workers' performance INCREASED in direct proportion to the amount of light added.
- Noted that performance ALSO INCREASED when the light was dimmed.
It turns out the physical act of observing the workers was what was actually increasing productivity.
***Really spurred an interest in job satisfaction/quality-of-work studies; the HUMAN RELATIONS movement

Civil Rights Act of 1964
Previous experiences shape people's view of the world – cause and effect
o The "requirements" for success appear different for people of different socioeconomic situations
o Earlier characteristic testing was based on a white sense of the world and self-serving biases; it didn't account for background differences
o Recognition that there are unfair differences/inequities between races – evoking emotions and sometimes overcorrecting judgment
§ Reverse discrimination, highlighting existing differences, creating more close-knit (and exclusionary) groups, segregation
--- Focuses on creating groups that emphasize DIFFERENCES instead of similarities, and forced interactions

Title VII of the Civil Rights Act
o Groups named in 1964:
§ Race
§ Color
§ Sex
§ National origin
§ Religion
o Later added:
§ ADEA (age), 1967
§ ADA (disability), 1990

Time and Motion Studies – efficiency studies conducted by LILLIAN GILBRETH and Frederick Taylor
• Broke down tasks to the second and determined the most efficient way to perform them – least amount of time, reduced fatigue, increased productivity

Culture – a system in which individuals share meaning and common ways of viewing events and objects; shared sense-making
§ Interacting with different cultures means having to compromise and understand in order to merge views on subjects to achieve "success"

Hofstede's 5 factors – 5 dimensions of the Theory of Culture
1. Individualism or collectivism
o Where does your identity reside?
• Family-oriented; use of "I" or "we"; sense of self-importance
• USA – very individualistic; somewhat looking out for self
• China – communal; the common good
2.
Power Distance
o Distance between authority/power figures and the people
• USA power distance is smaller than in China/the East
• Authority can direct or collaborate
• Shapes the place a person has in the hierarchy and how they can interact in the "caste" system
3. Uncertainty Avoidance
o The willingness to take on risk / step out of one's comfort zone
• USA is fairly low on UA – takes a lot of risks
• France/Japan very high – goes along with collectivism
4. Masculinity or femininity
o Gender-role definition and assignment of duties
• Shapes the idea of gender and place in society
o Authority
5. Long-term or short-term
o Goals and decisions based on longevity and desired payout
• Immediate vs. future
• USA – short-term; quarterly/daily reports
§ Tends to engage in behavior with immediate outcomes
• China – more long-term; benefits come later
These characteristics shape the way people make sense of things.
--- Must look at each dimension SEPARATELY; they are individual, not indicative of each other – can't draw conclusions from one about another
Regions may differ – ex: West Coast = individualistic; Southeast = less so

Disinterestedness – in order to comply with the unbiased requirement in experimentation, a scientist must be detached from his or her work

Research design types in I-O psychology
3 research designs:
1. Experimental
• Random assignment of participants to conditions
o Random population strata
• Conducted in a laboratory or the workplace
o Manipulate variables – independent and dependent
§ Have a control group and an experimental group of randomly assigned people
• HIGH degree of control
• Pros: cause and effect can be established
• Cons: mistaken relationships
2. Quasi-experimental
• Non-random assignment of participants to conditions
3.
Non-experimental
• Does not include manipulation or assignment to different conditions
o 2 common designs:
§ Observational design: observes and records behavior
§ Survey/questionnaire design (most common)

Independent variable – the "treatment"
Observational design – observes and records behavior
Introspection – early scientific method in which the participant was also the experimenter, recording his or her experiences in completing an experimental task; considered very subjective by modern standards

Triangulation – quantitative and qualitative are NOT mutually exclusive
• Triangulation: a technique capturing both quantitative and qualitative data to home in on a full view of the study
o Examining converging information from different sources (qualitative and quantitative research)
o Combining all aspects to create a holistic view – we get puzzle pieces (quantitative), but need an abstract idea (qualitative) in order to assemble the puzzle

Job analysis – process used by I/O psychologists to gain understanding of a job
1. Investigating the job and its duties
2. The human attributes necessary to perform the job
3.
The context in which the job is performed
Typically involves triangulating data sources to get a complete understanding

Generalizability
Generalizability in research:
• Application of results from one study or sample to other participants or situations
o The more areas a study includes, the greater its generalizability – the broader context is derived from more diverse sampling and a larger number of subjects
o We want to be able to make findings more general and more predictive – applying results from one study on a larger scale
o Every time a compromise is made, the generalizability of the results is reduced – IF more variables/fewer controls are used, it is much harder to pinpoint whether results come from the experiment versus additional influences
o Generalizability decreases over TIME; it is a continual process
Example – personality: conscientiousness as a predictor of job performance
• Want to sample the ENTIRE organization
• Measure both conscientiousness and job performance
o Now any future applicant will receive a conscientiousness test – their scores will determine whether I should hire them (based on this experiment)
• However, if you had tested only engineers (one job title), you cannot expect the same results – generalizing isn't possible

Experimental control
o Eliminates influences that could make results less reliable or harder to interpret
o Standardize the experiment as much as possible – control the experience, allowing us to measure a SINGLE point
§ Assess other variables that might be making an impact
ú Ex: self-esteem – measure self-esteem and performance, but maybe mood affects esteem. So, theoretically, positive mood may account for the correlation.
If you add a mood data point, you are able to account for it in the experiment.
-- However, this can make it much harder to generalize – results become very specific to the situation.

Histogram – random data assembled into a graph
+ Positive skew = most scores bunched at the low end of the score range, with a tail stretching toward high scores
- Negative skew = most scores bunched at the high end of the score range, with a tail stretching toward low scores
(The mean is pulled in the direction of the tail.)

Measures of central tendency – clustering
o Mean – the average of the data
o Mode – the most frequently repeated data value
o Median – once the data are arranged low to high, the middle value

Standard deviation – variability
• Standard deviation – an average amount of spread around the mean (e.g., an SD of 1.3 with a mean of 5 means scores typically fall within 3.7 to 6.3)

Lopsidedness or skew
• The mean is affected by extreme high or low scores; the median is not
• The mean is pulled in the direction of the skew

Calculate the mean of a set of data – add up all the data points and divide by the number of points gathered.

Example of inferential statistics – drawing inferences from a study
Ex: a mental test is given to two groups, high school students and college students. IF the lower-MEAN-scoring group was the high school group, it could be inferred that education is associated with higher test scores.
- Why can you do this? Because the mean is an average across the whole group.

Correlation
Ex: self-efficacy experiment – there is a normally distributed curve.
AFTER the test, we will see the distribution of scores on performance.
o Intelligence has a .6 correlation coefficient with performance, which is a STRONG correlation… but
o Intelligence does NOT necessarily cause performance, and performance does not cause intelligence
§ It may indicate that education, environment, or another third variable is influencing BOTH x and y
x-axis = confidence curve
y-axis = performance curve
z-score = how far a single point is from the mean (in SD units)

Scatterplot
• Displays the correlational relationship between 2 variables

Regression
• The straight line that best "fits" the scatterplot and describes the relationship between the variables in the graph

Correlation coefficient – expresses the relationship in z-score units
• A statistic or measure of association
• Reflects the magnitude (numerical value) and direction (+ or –) of the relationship between 2 variables
• Magnitude ranges from 0.00 to 1.00; the signed coefficient ranges from -1 to +1
• THE COEFFICIENT IS THE SLOPE OF THE REGRESSION LINE IN Z-SCORE UNITS
The correlation coefficient has a valence and a magnitude:
Positive correlation → as one variable increases, the other variable also increases, and vice versa; positive slope
Negative correlation → as one variable increases, the other variable decreases, and vice versa; negative slope
Positive linear correlation

Reliability – the consistency or stability of a measure
• Needs to be repeatable with consistent results
• i.e., can we predict the outcome of this experiment once repeated? Is the conclusion we drew consistent?

Validity – the accuracy of inferences made based on a test
• Does this data accurately represent the intended measure?
• Are these valid conclusions?

Test-retest reliability – a type of reliability calculated by correlating measurements taken at time 1 with measurements taken at time 2.
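As a small sketch (the scores are invented for illustration), the correlation coefficient described above — and with it test-retest reliability — can be computed directly from z-scores:

```python
# Sketch: Pearson correlation as the mean product of z-scores.
# Test-retest reliability = correlation between time-1 and time-2 scores.
from statistics import mean, pstdev

def pearson_r(x, y):
    """Correlation coefficient: average of the products of paired z-scores."""
    mx, my = mean(x), mean(y)
    sx, sy = pstdev(x), pstdev(y)          # population standard deviations
    products = [((a - mx) / sx) * ((b - my) / sy) for a, b in zip(x, y)]
    return sum(products) / len(products)

time1 = [10, 12, 9, 15, 11, 14]   # hypothetical scores at time 1
time2 = [11, 12, 10, 14, 12, 15]  # the same people retested later
r = pearson_r(time1, time2)       # a high r (near +1) = a consistent, reliable test
```

Note that `r` is both signed (valence) and bounded between -1 and +1 (magnitude), matching the definition above.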
• A method to measure reliability: tested once, then retested; should get the same results
• Testing to see whether, over time, the results are the same – consistent answers
o There is often contamination in experiments
o Tested with the EXACT same test given to the same individual
§ Pitfall: the amount of time between the two tests – less time essentially tests consistency of memory; but over too much time, feelings and integrity may change…
ú Is a change due to personal change or to testing inconsistency?

Equivalent forms reliability – a type of reliability calculated by correlating measurements from a sample of individuals who completed 2 different forms of the same test.
o Can have 5 very similar questions in one test to get a bit of consistency – slightly reworded, but with the same end meaning/measurement
§ Pitfall: if there are too many questions, people lose interest; respondents may put a bit more emphasis/importance on the first answer in each set

Predictor/criterion
Predictor – the test chosen to assess attributes, once the desired attributes are identified
Criterion – an outcome variable that describes important aspects of the job
- The variable that we predict when evaluating the validity of a predictor
- Identified when the demands of the job are identified

Predictive validity – criterion-related validity design in which there is a time lag between gathering test scores and performance data
- Able to predict what WOULD have happened had you actually used the test scores to make the hiring decisions

Concurrent validity – criterion-related validity design in which there is NO time lag between gathering test scores and performance data
- Test scores are "concurring" with performance data
- Give the test to CURRENT employees – IF their performance is good, you can use the test later for new hires

Construct validity – investigators gather evidence to support decisions or inferences about psychological constructs

Study Guide: Exam 2, Industrial Psych

People
Campbell

Concepts
Intelligence as "g"
• Involves the ability to reason, plan, solve problems,
comprehend complex ideas, & learn from experience
• A high g = a high intelligence level
• Intelligence (or "g"): a broad general capability – describes a person's ability to learn from experience
o ↑ job complexity = ↑ predictive value of general intelligence tests
o High-g people tend to LEARN from mistakes; quick learners are crucial
o "g" is one of the best overall performance predictors
§ Complex jobs need someone who can grow/adapt quickly

3 higher-order categories:
1. Cognitive abilities
Carroll's Hierarchical Model
• Identified 7 even more specific characteristics that better define intelligence:
1. Fluid intelligence
• Dynamic intelligence, creative problem-solving
o Typically, younger people are stronger in this
2. Crystallized intelligence
• A more solid, knowledge-based method of problem-solving
o Older, wiser people
3. General memory
• Ability to convert short-term to long-term memory
4. Visual perception
• Interpret visual stimuli; make recognitions quickly
5. Auditory perception
• Distinguishes tone/pitch; localizes sound well
6. Retrieval ability
• Able to store, but also RETRIEVE, info quickly
7. Cognitive speediness
• Quick learners; assimilate info fast
2. Physical abilities
Hogan's 3 major classifications:
1. Muscular strength
2. Cardio endurance
3. Movement quality
Muscular strength:
1. Muscular tension
• Static strength – able to stay in place and experience tension
o Hanging, push-up (at the top)
2. Muscular power
• Explosive strength – power exerted
3. Muscular endurance
• Core strength, legs
Cardio endurance:
1. Stamina – endurance over a period of time
Movement quality:
1. Flexibility
2. Balance
3. Neuromuscular coordination
3. Perceptual-motor abilities
o Vision
§ Visually determine differences and make recognitions based on visual stimuli
o Touch
§ Tactile; control over the senses; a strong sense of touch can lead to high anxiety, but could also be beneficial for some jobs – i.e.,
massage therapists, doctors
ú This is why it is important to identify these differences in abilities, in order to place people in the correct job
o Taste
§ A strong palate that can discern different tastes that most can't
ú Food critics, foodies, quality control/taste checkers
o Smell
§ Usually tied in with taste
o Hearing
§ Discerning tones, pitches, music
ú Audio engineers, singers
o Kinesthetic feedback
§ Similar to the sense of touch; able to understand the feedback/pressure from another object
ú A surgeon can take the feedback from the scalpel to adjust their pressure

Cognitive ability testing
COGNITIVE ABILITY:
o IQ, problem-solving, intelligence, mental ability, cognition, reasoning, memory, processing speed
o Allows individuals to demonstrate what they know, perceive, remember, understand, or can work with mentally
2 types of tests:
1. g = general cognitive ability
§ Mental working processes
§ WONDERLIC PERSONNEL TEST
2. Specific cognitive ability
§ BENNETT TEST OF MECHANICAL COMPREHENSION

Perceptual-motor abilities
Problems with "g" testing
Psychomotor abilities – aka sensorimotor/motor abilities
Calculations of movement to perform a task – combining visual processing with movement in order to perform
- Physical functions of movement
- Associated with coordination, dexterity, and reaction time
• Fleishman's psychomotor abilities:
o Arm-hand steadiness
o Manual dexterity
o Finger dexterity
o Response orientation
o Rate control
o Reaction time
o Wrist-finger speed
o Control precision

Five Factor Model of personality – this model drives the personality tests used today.
***Must know these 5 for the test:
Openness
Conscientiousness
Extraversion
Agreeableness
N(euroticism) / Emotional stability
Conscientiousness is highly related to performance.
The Five Factor Model of Personality, with characteristic traits for each factor:
- Conscientiousness: responsible, prudent, persistent, planful, achievement oriented
- Extraversion: sociable, assertive, talkative, ambitious, energetic
- Agreeableness: good natured, cooperative, trusting, likable, friendly
- Emotional stability: secure, calm, poised, relaxed
- Openness to experience: curious, imaginative, independent, creative

Integrity – overt and personality-based integrity tests:
1. Overt integrity test – DIRECT
• Asks questions directly about past honesty behavior (stealing, etc.) as well as attitudes toward various behaviors (employee theft, etc.)
2. Personality-based integrity test
• Infers honesty and integrity from questions dealing with broad personality constructs (conscientiousness, reliability, and social responsibility)
o Personality becomes a PREDICTOR of integrity

When does personality predict performance best?
Declarative knowledge – knowing the "that" of a situation
Procedural knowledge – knowing the "how" of a situation
Tacit knowledge – street smarts

Campbell's Model of Job Performance
o 3 direct determinants of (overall) job performance:
§ Declarative knowledge (DK) – WHAT you know
§ Procedural knowledge & skill (PKS) – HOW (skills/practice)
§ Motivation (M) – WHY (effort sunk into a behavior)

KSAOs – bring about behaviors
• Knowledge
o A collection of discrete, related facts and information about a particular domain
• Skill (e.g., computer or interpersonal skills)
o A practiced act
• Ability
o A stable capacity to engage in a specific behavior
• Other characteristics: interests, personality, etc.
Potential obstacles/DISTORTIONS when observing:
• Desire to make one's job look more difficult
• Attempts to provide the answers the SME thinks the job analyst wants – prompts and hints that sway answers
• Carelessness

Test norming – norming and norm groups are used to interpret and give meaning to a score
Wonderlic – cognitive ability test that produces a single score (WPT – Wonderlic Personnel Test)
Bennett Mechanical – specific abilities testing (Bennett Test of Mechanical Comprehension)

Screen-in vs. screen-out testing
Screen-IN test
- Identifies normal personality
- May be administered as a pre-employment test
o We know the highly desirable skills, so we test for and find them
Screen-OUT test
- Identifies PSYCHOPATHOLOGY
- Generally used for positions of public trust
o Ensures that people with undesirable qualities are screened out – people may have undesirable characteristics embedded in them
- May only be administered AFTER an offer of employment
o Testing earlier would be discriminatory; legally you must screen out afterwards – can only withdraw the offer after a certain score

Structured interviews
1. Structured interview
- Specific questions, the same questions asked of all candidates – gives better comparison
o A more valid structure for interviews; scores candidates fairly
ú Past behavior – the best performance indicator
ú "Tell me about a time when…"

Situational interview – a structured format that asks hypothetical, future-oriented questions ("What would you do if…?")
2. Unstructured interview
- Go with the flow – just trying to gain insight
o More of a small-business technique – not technically fair
§ Creates bias; bad for the interviewee; as a validity coefficient for the interview, there is NO correlation
o Tends to cover job knowledge, abilities, skills, personality, & person-org.
fit

Behavioral descriptive interview (BDI) – a form of structured interview questioning PAST behavior, the best performance indicator
- "Tell me about a time when…"

Reliability and validity of interviews
Incremental validity – the value, in terms of increased validity, of adding a particular predictor to an existing selection system
o The value after adding an additional method/predictor
§ Getting a secondary/new angle to see the validity

Performance – actions or behaviors
Effectiveness – evaluation of the RESULTS of performance/actions
- How effective was this action? How was I rewarded/punished?
Productivity – ratio of effectiveness (output) to the cost of achieving that level of effectiveness (input)
§ Was this WORTH the effort put in?
ú Output/input

Criterion contamination – when the actual criterion includes information unrelated to the behavior one is trying to measure
Criterion deficiency – when the actual criterion is missing info that IS part of the behavior one is trying to measure
Ultimate criterion – the theoretical ideal measure of all relevant aspects of job performance
- Actual criterion – the relevant behaviors we capture, though we still miss components
- Misconstruing because we don't see the whole picture – we end up with CRITERION DEFICIENCY
- Missing elements = criterion deficiency; erroneous extra elements = criterion contamination

Examples/Types of OCB – Organizational Citizenship Behaviors
- Aka contextual performance; going above and beyond normal expectations
Examples/Types of CWB – Counterproductive Work Behaviors
The classic OCB dimensions:
1. Altruism
a. Helpful behaviors directed toward individuals or groups within the organization
2. Generalized compliance
a.
Behavior that is helpful to the broader organization

Goals of job analysis
Task-oriented and worker-oriented job analysis
Task-oriented job analysis – begins with a statement of the actual tasks and what is accomplished by those tasks
o Defining the tasks and behaviors
Worker-oriented job analysis – focuses on the attributes of the worker necessary to accomplish the tasks

Critical incidents
Methods/procedures for conducting a job analysis – HOW job analysis is done:
1. Observation
2. Interviews: incumbent, supervisor
3. Critical incidents & work diaries
4. Questionnaires/surveys
5. Performing the job
Job analysis process:
1. The more information gathered from the greatest number of sources, the better the job analyst can understand the job
2. Most job analyses should include consideration of personality demands & work context

Cognitive task analysis
Reasons for performance measurement
Relationship between types of performance measurement
Objective measures

Performance management
o Emphasizes the link between individual behavior & organizational strategies & goals
o 3 components of performance management:
§ Definition of performance
§ The actual measurement process
§ Communication between supervisor & subordinate about individual behavior & organizational expectations

Distributive, procedural, interpersonal justice
• Distributive justice
o Fairness of the outcomes related to decisions
§ How fair we perceive the outcomes – what we put in vs. what we got out, IN COMPARISON to others
§ If we put in the same amount as another person but had a different outcome = unfair
• Procedural justice
o Fairness of the process by which ratings are assigned & a decision is made
§ Is the process fair?
§ Ex: tests – if the questions are overly hard / if the process of knowledge evaluation is unfair…
ú This comes before distributive justice
• Interpersonal justice
o Respectfulness & personal tone of the communications surrounding evaluation

Types of rating formats
1.
Graphic rating scales (most common)
o Graphically display performance scores running from high to low
2. Checklist
o A list of behaviors presented to the rater, who places a check next to the items that best (or least) describe the ratee
§ Does the employee engage in X behavior or not?
3. Weighted checklist
o Included items have assigned values or weights
§ Some behaviors are MORE important than others, and we weight them accordingly
§ Need to ID the behaviors the job requires, then ask whether any behaviors are even more important
4. Forced-choice format
o Requires the rater to choose two statements out of four that could describe the ratee
§ The rater gets statements of a person's behavior and chooses whichever describe the ratee best
5. Behaviorally anchored rating scales (BARS)
o Rating format that includes behavioral anchors describing what a worker has done, or might be expected to do, in a particular duty area
o Numerical scale (1-10, low-high) with a specific example behavior at each anchor so you can gauge how you compare
6. Behavioral observation scale
o Observe behaviors and record how often they occur
o Assessing the frequency with which the behaviors occur in a workplace

Components of overall performance rating
• Performance rating components
• Trait ratings – a warning
o E.g.
"I am aloof, abrasive"
o Traits can't be changed
• Task-based ratings
o Effectiveness of the employee in accomplishing duties
o Most easily defended in court
o Controllable
• Critical incidents method
o Examples of critical behaviors that influence performance

Graphic rating scale (most common)
o Graphically displays performance scores running from high to low

Behaviorally anchored rating scale (BARS)
o Rating format that includes behavioral anchors describing what a worker has done, or might be expected to do, in a particular duty area
o Numerical scale (1-10, low-high) with a specific example behavior at each anchor so you can gauge how you compare
§ Behavioral observation scale
o Observe behaviors and record how often they occur
o Assess the frequency with which the behaviors occur in a workplace

Types of errors raters make
- We open ourselves up to many biases/distortions through each rating system
• Central tendency error
o Raters choose the midpoint on the scale to describe performance when a more extreme point is more appropriate
o A bias in how we recognize behaviors – we identify the midpoint and rate ourselves/others there (it feels safe)
§ Avoid the extremes
• Leniency-severity error
o Raters are unusually easy or harsh in their ratings
o The rater uses only one end of the scale with all of their employees
§ Either highly severe or super lax; no middle range
• Halo error
o The same rating is assigned to an individual on a series of dimensions, causing the ratings all to be similar; a lack of identification of strengths and weaknesses
§ A "halo" surrounds the ratings

Psychometric training
Psychometrics
o Psychology combined with metrics
Psychometric training – makes raters aware of common rating errors in hopes of reducing such errors; a form of rater training to correct biases

Frame-of-reference training – based on the assumption that the rater needs context for providing ratings
o How SHOULD the ratings occur?
o Basic steps:
1.
Provide information about the multidimensional nature of performance
2. Ensure raters understand the meaning of the scale anchors
3. Engage in practice rating exercises on standard performances
4. Provide feedback on the practice exercises
Want INTERPERSONAL justice

Exam 3 Study Guide

People
Skinner – operant conditioning
Kirkpatrick
o Several purposes of training evaluations *** ON TEST
o Levels 1 and 2 take place within the training environment
o Level 1 – attitude about training; how you felt; your reactions
§ A positive reaction typically means you did learn/will grow
o Level 2 – has learning occurred?
§ Do we see a difference? Evaluate.
o Level 3 – did the behavior change?
§ Is there an external change? Has on-the-job behavior changed?
o Level 4 – the outcomes; results
§ Did training increase profits, decrease customer complaints, decrease costs?

Theories
Reinforcement theory
o Learning results from the association between behaviors & rewards
§ A learning and motivational theory applied to training
§ Positive reinforcement (increases behavior)
ú Desired behavior followed by a reward
ú Getting a treat for doing a trick
ú +(pleasant stimulus) +(added) = ↑ increase
§ Negative reinforcement (increases behavior)
ú Taking away an UNDESIRED thing/experience
ú Using Advil to take away the pain of a headache
ú -(unpleasant stimulus) -(taken away) = ↑ increase
§ Positive punishment (decreases behavior)
ú Adding an undesirable stimulus
ú Spanking in order to deter a behavior
ú -(unpleasant stimulus) +(added) = decrease
§ Negative punishment (decreases behavior)
ú Taking away a desired stimulus
ú Ex: taking away a desired toy to stop children from fighting
ú +(pleasant stimulus) -(taken away) = decrease
Negative = taken away (subtraction)
Positive = added (addition)
Reinforcement = increases behavior
Punishment = decreases behavior
o Behavior modification
§ Simple recognition & feedback can be effective in increasing performance
ú Using these techniques to change behavior

Social learning theory – proposes that there are many ways
to learn, including:
o Behavioral modeling
1. Observe actual job incumbents demonstrating positive modeling behaviors
§ Learning does NOT occur only when you experience it yourself… not just through direct reinforcement
§ Watching an expert is an influential way to learn
2. Rehearse before using, via role-playing
§ Practicing and gaining experience for oneself, in a simulation of the skill
3. Receive feedback on the rehearsal
§ Learn what you did wrong
4. Try the behavior on the job
§ Test out the behavior fully

Concepts
Types of validity
Validity: the accuracy of inferences made based on test or performance data
Validity designs:
• Criterion-related *** – does the test we developed ACTUALLY PREDICT performance?
• Content-related *** – does the test cover the ENTIRE domain?
• Construct-related – does the test measure the CONSTRUCT, the abstract idea?

Selection ratio (what it is and how to calculate it)
Selection Ratio (SR) – an index ranging from 0 to 1 that reflects the ratio of available jobs to applicants
SR = n/N
n = number of available jobs
N = number of applicants assessed
- Want the ratio to be low – if you only have to accept a small share of applicants, you can hire above-average people at the very least and save money in resources
o It is a waste of money to hire mediocre people – you want the higher end of the distribution (this is why there are 3-year experience requirements)
• We get messy data, and validity is somewhat hard to depict
o We can correlate coefficients

Errors in selection decisions
False positive – applicant accepted BUT performed poorly (bad choice)
False negative – applicant rejected BUT would have performed well (bad choice)
True positive – applicant accepted AND performed well (good choice)
True negative – applicant rejected AND would have performed poorly (good choice)
Positive = ACCEPTED
Negative = REJECTED candidate
True = AND
False = BUT

Cut scores (types and how they are determined)
• A specified point in the distribution of scores below which candidates are rejected
• Raising the cut score will result in
fewer false positives but more false negatives
• The strategy for determining a cut score depends on the situation
1. Criterion-referenced cut score
o Consider the desired level of performance & find the test score corresponding to that level
2. Norm-referenced cut score
o Based on some index of test-takers' scores rather than any notion of job performance

Utility
• Assesses the economic return on investment of HR interventions like staffing or training
o Utility analysis can address the cost/benefit ratio of one staffing strategy versus another
o Discovering the worth of an employee based on the mean and SD
• Includes consideration of the Base Rate, the percentage of the current workforce performing successfully
o If performance is already high, then a new staffing system will likely add little to productivity
o How successful are they right now? If already successful, the ROI of measuring more is dramatically reduced – additional info will be at best marginally beneficial to the organization, if at all
• Utility analysis calculations can be very complex, with many other factors and uncertainty

Statistical vs. clinical decision-making
When combining info, there are several ways to do it:
1. Clinical decision making – human
o Uses judgment to combine information & make decisions about the relative value of different candidates
o A single hiring manager uses his or her own judgment to hire
§ Combines everything alone
2. Statistical decision making – machine
o Combines information according to a mathematical formula
§ An algorithm for combining info on the person/job to find the best candidate
*** The algorithm is best – less bias

Sub-group norming
• Develop separate lists for individuals in different demographic groups, who are then ranked within their respective group
• In general, subgroup norming is not allowed as a staffing strategy

Hurdle vs. compensatory decision making
Another way to combine info:
3. Hurdle system of combining scores
o How we space the info out over time – all at once, or gradually?
§ Spacing out the predictor variables over time creates hurdles; minimum qualifications need to be met at EACH step of the way
§ Ex: resume data gathered first, and ONLY x candidates make it to the next round; the next hurdle is a g-test; then the interview hurdle
o Non-compensatory strategy: the individual has no opportunity to compensate at a later stage for a low score at an earlier stage
o Establishes a series of cut scores – can narrow down candidates
4. Compensatory approach
o Multiple regression analysis
§ Results in an equation for combining test scores into a composite, based on the correlations of each test score with the performance score
ú Opposite of non-compensatory – you may fail one hurdle, but have another skill that compensates for that deficiency
Score banding
Score banding
• Individuals with similar test scores can be grouped together in a category or score band
• CREATE “bands” around predictions
• Selection within a band can be made based on other considerations
• Score banding is controversial
• Score banding uses the Standard Error of Measurement (SEM) for the test
o SEM provides a measure of the amount of error in a test score distribution
o A function of the reliability of the test & the variability of the test scores
2 types of score banding:
1. Fixed band system
o Candidates in lower bands are not considered until higher bands have been exhausted
2.
Sliding band system
o Permits the band to be moved down a score point when the highest score in a band is exhausted
The primary reason for score banding is to help with diversity within an organization
Disparate treatment
Legal staffing issues: Discrimination or adverse treatment
• Could be overt or covert; intentional
Overt:
o Adverse treatment / Disparate TREATMENT – very intentional mistreatment/discrimination of a person
§ Disparate treatment means being treated differently from the others; directed at one person in an organization
o Plaintiff attempts to show that the employer treated the plaintiff differently than majority applicants or employees
Disparate (adverse) impact
Covert:
o Disparate / Adverse IMPACT; unintentional, but occurs due to the hiring practices/tests developed – the innate biases
o Acknowledges the employer may not have intended to discriminate against the plaintiff, but an employer practice had adverse impact (AI) on the group to which the plaintiff belongs
§ Burden of proof on the plaintiff to show:
ú a) he/she belongs to a protected group, &
ú b) members of the protected group were statistically disadvantaged compared to majority employees
80% (or 4/5ths) rule
o Guideline for assessing whether there is evidence of Adverse Impact (AI)
§ Compares the selection ratio of the majority group with that of the minority group to see if hiring is fair; selection ratio = job offers / number of people who apply
o Plaintiffs must show that the protected group received less than 80% of the desirable outcomes received by the majority group in order to meet the burden of demonstrating AI
§ Basically: select 5 white applicants out of 10
§ 50% selection ratio for WHITE applicants
§ The selection ratio for Black applicants must be at least 80% of the majority’s
ú Therefore, the minimum selection ratio for Black applicants = .5 × .8 = .40
ú So out of 20 Black applicants, at least 8 must be selected (8/20 = 40%)
^^^Test Q
• If the KSAOs are missing from a minority applicant / the applicant is under-qualified…the employer must demonstrate that all predictors point toward the applicant not doing well
o Results in an AI ratio
o Can be substantially affected by sample sizes
o Burden of
proof shifts to the employer once AI is demonstrated
SR = number of job spots AVAILABLE relative to how many applicants applied
= # spots / # applicants
Ex: there are 100 applicants and we hire 50
SR = .5 (for the majority)
Now, we had 10 minority applicants; therefore we must calculate the required # of minority hires (.8 × .5 = .40, so at least 4 of the 10 must be accepted)
So if Majority: 50 / 100
Then Minority: n / 10 → n = 4
Learning outcomes (types and examples)
3 broad categories of learning outcomes
Cognitive outcomes – DECLARATIVE KNOWLEDGE is gained and displayed
The WHAT of something
Ex: a class
Skill-based outcomes – PROCEDURAL KNOWLEDGE; the steps, practiced over time
The HOW of something
Affective outcomes – MOTIVATIONAL; self-efficacy
Training needs analysis (levels and procedures)
Training Needs Analysis (performance)
• 3-step process
1. Organizational analysis – assess: what are the needs? Which departments need help?
2. Task analysis – what needs to be trained?
3. Person analysis – which person needs training? Who is underperforming?
Required to develop a systematic understanding of where training is needed (organizational), what needs to be trained (task), & who will be trained (person)
Trainee readiness
Characteristics that lead to change
Goal-orientation approaches (mastery vs.
performance)
Confidence in ability to learn
Previous experience with training
Motivation ***
Predicting if/how much a trainee will be changed by training
Goal orientation
Trainee characteristics of interest
o Goal orientations:
§ Performance goal-orientation – a goal of X OUTCOME based on behavior
ú Concerned with doing well
ú Not necessarily with the process
§ Mastery goal-orientation – wants to master the material in order to become more effective
ú Concerned with increasing competence
ú Cares about the process of acquiring knowledge/growth
Operant conditioning (reinforcement theory)
§ Positive reinforcement (Increase)
ú Desired behavior followed by a reward
ú +(stimulus) +(added) = ^ Increase
§ Negative reinforcement (Increase)
ú Taking away an UNDESIRED thing/experience
ú -(stimulus) -(taken away) = ^ Increase
§ Positive punishment (Decrease)
ú Adding an undesirable stimulus
ú -(stimulus) +(added) = decrease
§ Negative punishment (Decrease)
ú Taking away a desired stimulus
ú +(stimulus) -(taken away) = decrease
Self-efficacy
Self-efficacy
o Belief in one’s capability to perform
o The more confidence you have in your ability to perform a behavior, the better chance you have of doing it well
§ E.g., the more you believe you can answer test questions well, the better chance you have of answering correctly
Active practice
Active practice
o Actively participating in training/work tasks
o “Autopilot” vs. automaticity
o Automaticity occurs when tasks can be performed with limited attention; likely to develop when learners are given extra learning opportunities (overlearning) after they have demonstrated mastery of a task
Whole vs. part learning
Whole learning
o When the entire task is practiced at once
o More effective when the complex task has relatively high organization
Part learning
o When subtasks are practiced separately & later combined
o More effective when the complex task has low organization
o E.g., surgeons & pilots
Massed vs.
distributed practice
Massed practice
o Individuals practice a task continuously & without rest (e.g., cramming for a test)
Distributed practice
o Rest intervals between practice sessions
o Generally results in more efficient learning & retention than massed practice
Apprenticeship training
Apprenticeship
Form of ON-SITE training
• Formal program used to teach a skilled trade
• Watching a master – learning from watching
Horizontal vs. vertical transfer
Transfer of Training
• Degree to which trainees apply knowledge, skills, & attitudes gained in training to their job
• Transfer of training climate
o Workplace characteristics that either inhibit or facilitate transfer of training
o Learning that you are able to transfer to the job (vs. learned and forgotten, never making it to the job)
§ Outcomes, lessons, KSAOs
The culture matters greatly as to whether or not info will be transferred – a negative climate means it won’t
§ Horizontal transfer
ú Training that transfers within the same level – skills you learn and can share within the same level of the business…
§ Vertical transfer
ú Training spans multiple levels of an organization… broader skills, additional insight
• BUT you never actually participated in the training firsthand; the skill is passed down from boss to employee
Quid pro quo
Specialized Training
• Sexual harassment awareness training covers two forms:
1. Quid pro quo – an exchange scenario; “do this and get that”… outcomes conditioned on compliance
2. Hostile working environment – more one-sided; discomfort and inappropriateness
Coaching *** TEST
Coaching
o Practical, goal-focused form of personal, one-on-one learning for busy professionals
o Practical, flexible, targeted form of individualized learning for managers/executives
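The selection-ratio and 4/5ths-rule arithmetic from the adverse-impact material above can be sketched in code. This is a minimal illustration (not from the course materials; the function names are made up for this sketch), using the same numbers as the study guide's example:

```python
# Sketch of the 4/5ths (80%) rule check described above.
# Numbers follow the study guide's example; names are illustrative only.

def selection_ratio(hired: int, applicants: int) -> float:
    """SR = number hired / number who applied (ranges 0 to 1)."""
    return hired / applicants

def adverse_impact_ratio(minority_sr: float, majority_sr: float) -> float:
    """AI ratio = minority SR / majority SR.
    Evidence of adverse impact if this falls below 0.80."""
    return minority_sr / majority_sr

majority_sr = selection_ratio(50, 100)   # 50 of 100 majority applicants hired -> 0.50
minority_sr = selection_ratio(4, 10)     # 4 of 10 minority applicants hired  -> 0.40

ai = adverse_impact_ratio(minority_sr, majority_sr)  # 0.40 / 0.50 = 0.80
print(f"AI ratio = {ai:.2f} -> {'no AI evidence' if ai >= 0.80 else 'adverse impact'}")
```

Hiring exactly 4 of the 10 minority applicants yields an AI ratio of exactly .80, the minimum that avoids evidence of adverse impact under the guideline; hiring only 3 of 10 would drop the ratio to .60 and demonstrate AI.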