Industrial Psychology Notes (Week of 3/7)
Psyc 3640
These class notes (5 pages) were uploaded by Courtney Luber on Thursday, March 10, 2016. They cover Psyc 3640 (Industrial Psychology) at Clemson University, taught by Eric S. McKibben in Fall 2016.
3/7/2016

Performance Rating: Process
- Rating sources
  - Supervisors
    - Most common information source
    - Many actively avoid evaluation & feedback (e.g., professors who take a long time to return feedback)
  - Peers
    - More likely to know about a worker's typical performance
    - Conflict of interest is likely when peers compete for fixed resources
      - Potential sabotage
      - Alliances built: promising to rate each other highly
  - Self-ratings
    - Discussing ratings with the supervisor increases perceptions of procedural fairness
    - Potential for distortion & inaccuracy; minimized with supervisor discussion
    - Conflict of interest if used for administrative purposes
    - But employees can also be too harsh on themselves
    - Self-ratings make employer ratings easier

Rating Sources (continued)
- Subordinate ratings
  - Critical that subordinate feedback be kept anonymous (e.g., professor ratings done by students each semester)
- Customer & supplier ratings
  - Important from a business-strategy vantage point
  - Customers, vendors, consultants
- 360-degree systems
  - Collect & provide an employee with feedback that comes from many sources
  - Often used for feedback & employee development
  - However, not very consistent (low reliability)
  - Not good for determining whom to hire, fire, give a raise to, etc.

Rating Distortions
- Central tendency error
  - Raters choose the mid-point of the scale to describe performance when a more extreme point is more appropriate
  - Rating everyone as average (e.g., professors who give nearly everyone a C)
- Leniency-severity error
  - Raters are unusually easy or harsh in their ratings (e.g., professors who fail everyone, or professors who give everyone an A)
- Halo error
  - The same rating is assigned to an individual across a series of dimensions, causing all the ratings to be similar; strengths and weaknesses are not identified
  - Assumes that someone performing one area of a job well is performing all other performance dimensions well
  - A "halo" surrounds the ratings

Rater Training
- Some distortions (errors) may be corrected through training
- Administrative training
  - Important for uncommon rating systems (e.g., BARS) or if one or more structural characteristics are deficient
  - Covers what performance is being evaluated and how
- Psychometric training
  - Makes raters aware of common rating errors, in hopes of reducing those errors
  - Includes learning to understand and describe a normal distribution

Frame of Reference Training
- Based on the assumption that the rater needs context for providing ratings
- Basic steps:
  1. Provide information about the multidimensional nature of performance
  2. Ensure raters understand the meaning of the scale anchors
  3. Engage in practice rating exercises using standard performance samples
  4. Provide feedback on the practice exercises

Reliability & Validity of Performance Rating
- Reliability
  - Currently the subject of lively debate
  - Inter-rater reliability is considered poor, but this isn't necessarily bad, since each rater relies on a different perspective
- Validity
  - Depends on the manner in which the rating scales were conceived & developed

3/9/2016

Social & Legal Context of Performance Evaluation
- Motivation to rate
  - Suggestion that raters use the process as a means to an end, either personal or organizational
  - Performance appraisal is a goal-directed activity with three stakeholders
- Rater goals
  - Task performance
  - Interpersonal: relationships between people
  - Strategic: e.g., a rater who wants to achieve a promotion approaches rating strategically
  - Internalized: e.g., be as truthful or as honest as possible
- Ratee goals
  - Information gathering: developing, growing, training
  - Information dissemination: trying to explain performance (dramatic increases or decreases over time)
- Organizational goals
  - Between-person uses: understanding differences between individuals
  - Within-person uses: making each individual more effective and efficient; deciding who gets training
  - Systems-maintenance uses: system-wide concerns; developing the entire company; the overall strategy of the organization

Goal Conflict
- When a single system is used to satisfy multiple goals from different stakeholders, the rater must choose which goal to satisfy before assigning a rating
- Possible solutions
  - Use multiple performance evaluation systems
  - Involve stakeholders in developing the system
  - Reward supervisors for accurate ratings

Performance Feedback
- Problematic when the same information is used for multiple purposes
- Feedback (especially negative feedback) should be stretched over several sessions
- "Praise-criticism-praise sandwich"
- An employee is more likely to accept negative feedback if he/she believes:
  - The supervisor has a sufficient "sample" of the subordinate's actual behavior
  - Supervisor & subordinate agree on the subordinate's job duties
  - Supervisor & subordinate agree on the definition of good & poor performance
  - The supervisor focuses on ways to improve performance

Destructive Criticism
- Feedback that is cruel, sarcastic, & offensive
- Usually general rather than specific
- Often directed toward personal characteristics of the employee
- Leads to anger, tension, & resentment on the part of the employee
- An apology is the best way to repair the damage of such criticism

Implementing 360-Degree Feedback
- Ensure anonymity of sources
- Rater & ratee should jointly identify the evaluators
- Use for developmental & growth purposes
- Train information sources & those giving feedback
- Follow up the feedback session with regular opportunities for progress assessment

Performance Evaluation & Culture
- Hofstede's five dimensions of culture may affect performance evaluations
- Modesty bias
  - When raters give themselves lower ratings than warranted
  - Prevalent in cultures with high power distance

Performance Evaluation & the Law
- Ford Motor Company & its forced distribution rating system
  - Evaluators were required to place managers into performance categories based on predetermined percentages
  - Ford was sued by managers & eventually paid over $10 million to litigants
- Review of court cases from 1980-1995
  - Judges were primarily concerned with issues of fairness rather than the technical characteristics of the system
- Lawsuits are most often brought against trait-based systems
  - Arguments: ratings are unduly subjective & decisions based on those ratings are unreliable or invalid; ratings have no basis in actual behavior because of their subjectivity
  - Little evidence of such unfairness has actually been found
  - Research suggests performance evaluations do not systematically discriminate against protected subgroups