Solution for problem 17DQ Chapter 21

University Physics | 13th Edition

Problem 17DQ

In Example 21.1 (Section 21.3) we saw that the electric

Step-by-Step Solution:
Step 1 of 3

Psychology of Learning – Choice + More on Reinforcement

Self-control
- Sometimes we have to make choices between immediate rewards and rewards that are delayed but also better. This is the issue of "self-control vs. impulsivity."
- What are some ways we battle between self-control and impulsivity?
- Example #1: Dieting! (Decision to be made: eat the chocolate cake for dessert, or don't.)
- Self-control decision: You decide not to eat the chocolate cake, because you know making healthy choices will benefit your diet results.
- Impulsive decision: You decide to eat the chocolate cake, because even though your ultimate end goal is weight loss, the immediate reward of that tasty goodness is much more appealing.
- Example #2: Fun with friends! (Decision to be made: leave work early so you can go to a party with your friends, or stay at work until the end of your shift.)
- Self-control decision: You stay at work, because even though the immediate reward of fun with friends sounds appealing, you know you'll be happier with your paycheck in a week if you get in as many hours as possible.
- Impulsive decision: You go out with your friends, because even though it will dock your paycheck a little, fun with friends is a more immediate reward.
- Keep in mind that every now and then a decision might seem impulsive, but really it's due to factors outside of choice. Take paying a bill, for example. Say you have a bill due tomorrow and you don't have the money for it. Someone offers you either $300 now or $1,000 in two weeks. You take the $300 now so you can pay your bill. The decision isn't impulsive in this instance, since a factor outside of choice (needing the money for the bill) influenced it.

Delay discounting
- The above scenarios can be referred to as delay discounting; a precise definition is available at http://www.psychogenics.com/delayeddiscounting.html.
- When people prefer immediate delivery of small rewards, or fail to take long-term consequences into account, it can lead to risky decision making.
- Delay discounting formula: V = A / (1 + kD), where V = present value of the delayed reward, A = amount of the delayed reward, k = rate of delay discounting, and D = length of the delay. (Remember what each letter stands for in case you have to plug in this formula on a quiz!)
- Indifference points: options that are equally valuable to someone; they help us decipher how long someone will wait for a reward that is bigger and better than something immediate.

Primary vs. secondary reinforcement
- Primary reinforcement: unconditioned; no learning required; phylogenetically determined. Ex: food, sex.
- Secondary reinforcement: conditioned; learning required; ontogenetically determined.
- For example: say you wear your underwear inside out one night when you go out to eat, and on that night the waiter happens to give you a free meal. From then on, you decide to wear your underwear inside out whenever you go to a restaurant. The food you got was the primary reinforcer, and your inside-out underwear was the secondary reinforcer.
- Tokens: rewards that are associated with (or lead to) primary reinforcers. Ex: coupons! The little piece of paper is not the reward; the reward is whatever the coupon ends up providing you with (e.g., a burger).

Other terms you'll want to know for an upcoming quiz, and just to make sure you comprehend P&C chapters 8–10. Quiz yourself on these!
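The delay discounting formula above can be sketched in a few lines of Python. This is a minimal illustration, not from the notes themselves: the function name and the discount rate k = 0.1 per day are hypothetical values chosen for the example.

```python
# Hyperbolic delay discounting: V = A / (1 + k*D)

def present_value(amount: float, delay: float, k: float) -> float:
    """Present (subjective) value V of a delayed reward.

    amount -- A, the size of the delayed reward
    delay  -- D, how long until the reward is delivered
    k      -- rate of delay discounting (same time units as delay)
    """
    return amount / (1 + k * delay)

# The "$300 now vs. $1,000 in two weeks" example from the notes,
# measuring the delay in days and assuming k = 0.1 per day:
delayed = present_value(1000, 14, 0.1)   # 1000 / (1 + 1.4)
print(round(delayed, 2))                 # about 416.67
```

With this (assumed) k, the delayed $1,000 is still subjectively worth more than $300 now, which is consistent with the notes' point that taking the $300 here reflects an outside constraint (the bill), not steep discounting.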
- Controlling stimulus: a stimulus that, when presented, changes the probability of an operant.
- Multiple schedule: multiple schedules are presented one after the other, each with its own unique stimulus.
- Superstitious behavior: behavior that is accidentally reinforced.
- Generalization: when stimulus control expands from one stimulus to another.
- Retroactive interference: circumstances that get in the way of or interrupt rehearsal.
- Preference: the reinforcement alternative most frequently chosen.
- Concurrent schedules of reinforcement: two or more simultaneous schedules of reinforcement.
- Changeover delay: stops rapid switching between alternatives by providing a small delay before a reinforcer is available.
- Impulsive behavior: choosing a small and immediate reward over a larger and delayed reward.
- Dinsmoor's findings: humans AND animals prefer the situation in which they are given the most information.
- Backward chaining: a way to train complex behaviors in which you start at the response that is closest to the primary reinforcer.
- Second-order schedule: two or more schedules of reinforcement in which the requirements of one schedule are reinforced in accordance with the requirements of the other schedule.
- Contingencies of aggression: reactions to aggression that reinforce aggressive behavior.
- Token reinforcement: reinforcement that can be exchanged for some other reward later on.


Textbook: University Physics
Edition: 13
Author: Hugh D. Young, Roger A. Freedman
ISBN: 9780321675460
