Schedules Of Reinforcement (20 Examples) (2024)

Reinforcement schedules govern the timing and frequency of rewards in operant conditioning, influencing behavior repetition. These schedules, categorized as continuous or partial, dictate when a behavior is reinforced, with consequences following every instance (continuous) or intermittently (partial). The latter includes fixed interval, fixed ratio, variable interval, and variable ratio schedules, each affecting behavior differently.


What are schedules of reinforcement?

Schedules of reinforcement are the rules that control when and how often a behavior is rewarded to encourage it to repeat. Using a consequence to reinforce a behavior’s strength and likelihood of repeating is a learning process called operant conditioning. Reinforcement schedules can affect the effectiveness of operant conditioning.

Reinforcement schedules are contingency schedules: the reinforcer is applied only when the target behavior has occurred, so the reinforcement is contingent on the desired behavior.[1]

What are the types of reinforcement schedules?

There are two main types of reinforcement schedules, continuous and partial, each with its own variations. The main difference is that a continuous schedule delivers reinforcement after every appearance of the target behavior, whereas a partial schedule delivers it only some of the time after the target behavior occurs.

What is a continuous reinforcement schedule?

A continuous reinforcement schedule applies the reinforcer (consequence) after every performance of a target behavior. This non-intermittent type of schedule is the fastest for teaching a new behavior, but the resulting behavior is also the quickest to extinguish once reinforcement stops.
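To make the rule concrete, here is a minimal Python sketch of a continuous schedule. The function name and the treat-dispensing callback are invented for illustration and are not part of any training framework.

```python
# Continuous reinforcement (CRF): every occurrence of the target behavior is reinforced.
def continuous_schedule(responses, deliver_reinforcer):
    """Call deliver_reinforcer after every occurrence of the target behavior."""
    for trial, did_behavior in enumerate(responses, start=1):
        if did_behavior:                  # the target behavior occurred on this trial
            deliver_reinforcer(trial)     # e.g., give the puppy a treat

# Illustrative usage: the "puppy sits on command" scenario below.
sits = [True, True, False, True]          # did the puppy sit on each trial?
continuous_schedule(sits, lambda trial: print(f"Trial {trial}: treat!"))
```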

What are continuous reinforcement examples?

  1. Train a pet: When training a new puppy to sit, you might give it a treat every time it successfully sits on command. This continuous reinforcement helps the puppy learn the desired behavior quickly.
  2. Get paid hourly at a job: You are paid for every hour you work.
  3. Post on social media: If you have friends who like or share every time you post, that becomes a continuous reinforcement motivating more posting.
  4. Play video games: Every time you jump over an obstacle, you get a point. Playing games becomes a continuous reinforcement to keep playing.
  5. Study for tests: Every time you buckle down and study thoroughly for a test, you’re rewarded with a good grade.

What is a partial reinforcement schedule?

A partial reinforcement schedule applies the reinforcer occasionally after the target behavior appears, but not every single time. This intermittent type of schedule can strengthen and maintain a new behavior after it has been learned through continuous reinforcement. Behavior reinforced through a partial schedule is usually more robust and more resistant to extinction.

There are four types of intermittent reinforcement based on two dimensions – time elapsed (interval) or the number of responses made (ratio). Each dimension can be fixed or variable, resulting in four main types of intermittent schedules.[2,3]

  • Fixed interval schedule (FI)
  • Fixed ratio schedule (FR)
  • Variable interval schedule (VI)
  • Variable ratio schedule (VR)

What is a fixed interval schedule (FI)?

A fixed interval schedule delivers a reward when a set amount of time has elapsed since the previous reward. This schedule trains a person or animal to time the interval when performing the behavior.

The subject typically slows down or pauses right after a reward and then responds rapidly toward the end of the interval as the next reward approaches. This break-and-run pattern produces the scalloped response curve characteristic of fixed-interval reinforcement.[4]
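The timing rule itself can be sketched in a few lines of code. The class below is an illustrative Python model, not an established implementation; the class name and the 60-second example interval are assumptions made for the demonstration.

```python
import time

# Fixed-interval (FI) sketch: only the first response after a fixed amount of
# time has elapsed since the last reward is reinforced. Responses made too
# early earn nothing, which is why responding slows right after each reward.
class FixedIntervalSchedule:
    def __init__(self, interval_seconds):
        self.interval = interval_seconds
        self.last_reward_time = time.monotonic()

    def respond(self):
        """Return True if this response earns a reinforcer."""
        now = time.monotonic()
        if now - self.last_reward_time >= self.interval:
            self.last_reward_time = now   # the interval restarts after each reward
            return True
        return False

# Example: an FI 60-second schedule.
mailbox = FixedIntervalSchedule(interval_seconds=60)
print(mailbox.respond())   # False: not enough time has passed yet
```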

What are fixed interval schedule examples?

Here are 5 fixed interval schedule examples in real life.

  1. Study for the final exam: College students often delay studying for final exams until the last minute, motivated by the fixed schedule of exams at the end of the semester.
  2. Look at the clock: Check the time more frequently as it approaches the end of the workday.
  3. Return library book: When the due date for borrowed books approaches, the reader quickly finishes reading the book and returns it.
  4. Wait for mail delivery: Increasingly looking out the window for the mail carrier as it gets closer to the usual mail delivery time.
  5. Prepare for meal: Getting hungry and preparing food as meal time approaches based on a habitual schedule.

What is a fixed ratio schedule (FR)?


A fixed ratio schedule delivers a reward after a certain number of responses have been performed. Fixed ratio schedules produce a high response rate until a reward is received, followed by a brief pause in the behavior.

After receiving the reward, the subject is less motivated to act, creating a brief pause. The subject also learns the exact number of responses needed for a reward. Knowing they’ve ‘earned their break’, they pause briefly before continuing.
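A fixed ratio rule is essentially a counter that resets after each reward. The following Python sketch is illustrative only; the class name and the loyalty-card numbers are assumptions chosen to match the examples below.

```python
# Fixed-ratio (FR) sketch: a reinforcer is delivered after every N responses.
class FixedRatioSchedule:
    def __init__(self, ratio):
        self.ratio = ratio                # e.g., FR-10: every 10th response is rewarded
        self.count = 0

    def respond(self):
        """Return True if this response earns a reinforcer."""
        self.count += 1
        if self.count >= self.ratio:
            self.count = 0                # counting starts over after the reward
            return True
        return False

# Example: a "free coffee after every 10 purchases" loyalty card.
card = FixedRatioSchedule(ratio=10)
rewards = [card.respond() for _ in range(25)]
print(rewards.count(True))                # 2 free coffees after 25 purchases
```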

What are fixed ratio schedule examples?

Here are 5 fixed ratio schedule examples in everyday life.

  1. Work for piece rate: Workers receive a payment for every 5 units of product they produce or task they complete.
  2. Earn loyalty points: A free meal or coffee after every 10 purchases.
  3. Earn rewards in the classroom: Teachers use a token economy where students earn tokens for good behavior. After earning a specific number of tokens, they can exchange them for a prize or privilege.
  4. Read books: Libraries have summer reading programs where children receive prizes after reading a certain number of books.
  5. Unlock video game levels: In video games, players obtain a set number of points to unlock new levels, abilities, or items.

What is a variable interval schedule (VI)?

A variable interval schedule delivers a reward after a variable period of time has passed since the previous reward. This schedule generates a steady, persistent performance rate due to the uncertainty about the time of the next reward. It can make the behavior habit-forming and hard to extinguish.

The subject learns that rewards become available only after some time has passed, which encourages them to keep performing the desired action to check for rewards. The response rate is moderate because the subject doesn't know precisely when the reward will come but knows it won't arrive too frequently.
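One way to model this uncertainty in code is to draw the next reward time from a random distribution around the average interval. The Python sketch below is purely illustrative; the class name, the exponential-wait assumption, and the social-media numbers are choices made for the example, not part of the article.

```python
import random

# Variable-interval (VI) sketch: the first response after an unpredictable
# amount of time (varying around an average) is reinforced.
class VariableIntervalSchedule:
    def __init__(self, mean_interval):
        self.mean_interval = mean_interval
        self._arm(now=0.0)

    def _arm(self, now):
        # One common modelling choice is an exponential wait around the mean.
        self.next_available = now + random.expovariate(1.0 / self.mean_interval)

    def respond(self, t):
        """Return True if a response at time t earns a reinforcer."""
        if t >= self.next_available:
            self._arm(now=t)
            return True
        return False

# Example: checking social media once a minute for an hour; notifications
# (the reinforcers) become available at unpredictable times.
feed = VariableIntervalSchedule(mean_interval=300)        # ~5 minutes on average
checks = [feed.respond(t) for t in range(0, 3600, 60)]
print(checks.count(True))                                  # number of rewarded checks
```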

What are variable interval schedule examples?

Here are 5 variable interval schedule examples in real life.

  1. Prepare for pop quizzes: Students study consistently throughout the semester to prepare for unpredictable pop quizzes.
  2. Check social media: Notifications can come anytime, leading people to check their accounts frequently.
  3. Wait for surprise sales or discounts: Stores may offer sales randomly, prompting customers to visit often.
  4. Take random drug tests: Employers may administer random drug tests to their employees to promote a consistent drug-free lifestyle.
  5. Conduct quality control: Factories or production lines perform quality checks at unpredictable intervals to ensure consistent product quality.

What is a variable ratio schedule (VR)?

A variable ratio schedule delivers a reward after a variable number of responses occurs. This schedule produces high and steady response rates. This type of schedule can be habit-forming and hard to stop.

The number of actions required to earn a reward changes each time, keeping the subject uncertain about when the next reward might occur. Since the next reward could be just one more action away, the subject remains highly motivated to keep performing the behavior.
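In code, a variable ratio rule can be modeled by drawing a new, random response requirement after each reward. The sketch below is illustrative; the class name, the normal-distribution assumption, and the slot-machine numbers are not from the article.

```python
import random

# Variable-ratio (VR) sketch: the reinforcer arrives after an unpredictable
# number of responses that varies around an average (e.g., VR-10).
class VariableRatioSchedule:
    def __init__(self, mean_ratio):
        self.mean_ratio = mean_ratio
        self._arm()

    def _arm(self):
        # Draw how many responses the next reward will require (at least 1).
        self.required = max(1, round(random.gauss(self.mean_ratio, self.mean_ratio / 3)))
        self.count = 0

    def respond(self):
        """Return True if this response earns a reinforcer."""
        self.count += 1
        if self.count >= self.required:
            self._arm()
            return True
        return False

# Example: a toy slot machine paying out on roughly a VR-10 schedule.
machine = VariableRatioSchedule(mean_ratio=10)
pulls = 1
while not machine.respond():
    pulls += 1
print(f"Won on pull number {pulls}")
```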

What are variable ratio schedule examples?

Here are 5 variable ratio schedule examples in everyday life.

  1. Play the slot machine: You win money unpredictably after pulling the lever a random number of times. The payoff reinforces continued play.
  2. Go fishing: You catch a fish after casting your line an unpredictable number of times. Catching fish reinforces continued casting.
  3. Send spam emails: Companies keep sending promotional emails, encouraging customers to open the emails and shop.
  4. Check social media: Users check their social media accounts frequently, hoping to find something interesting, funny, or engaging.
  5. Sell products by cold-calling: The sales representative reaches out to prospective buyers via cold calls to identify those interested in purchasing their products.

How do different reinforcement schedules affect behavior?

Here is how the different reinforcement schedules affect behavior.

  • Fixed interval (FI): Reinforcement is given for the first response after a fixed amount of time has passed since the last reinforcement. Effect on behavior: a "scalloped" pattern of responding, with responses increasing as the time for reinforcement approaches, followed by a pause after each reinforcement.
  • Fixed ratio (FR): Reinforcement is given after a set number of responses. Effect on behavior: a high response rate with a short post-reinforcement pause.
  • Variable interval (VI): Reinforcement is given for the first response after varying time intervals since the last reinforcement. Effect on behavior: a moderate, steady rate of responding that is more resistant to extinction than fixed schedules.
  • Variable ratio (VR): Reinforcement is given after a varying number of responses around an average. Effect on behavior: a high, steady rate of responding that is the most resistant to extinction.

What is extinction in operant conditioning?

Extinction in operant conditioning is the gradual decrease and eventual disappearance of a previously reinforced behavior after the reinforcement is stopped. How fast complete extinction happens depends partly on the reinforcement schedules used in the initial learning process.

Which reinforcement schedule is the most resistant to extinction?

The variable-ratio (VR) schedule is the most effective reinforcement schedule, according to a 2001 peer-reviewed study by University College London, published in the Journal of Experimental Psychology. In contrast, another study from the same year in the Journal of the Experimental Analysis of Behavior suggested that behaviors conditioned under variable-interval (VI) schedules exhibited greater resistance to change.[5,6]

B.F. Skinner’s 1936 research provides a potential explanation for these differences. Skinner noted that the impact of reinforcement schedules on behavior is not uniform but varies widely depending on the psychological factors at play, such as the individual’s drive. Therefore, it would be hard to determine precisely the most effective reinforcement schedule without additional details of a situation.[7]

What is schedule thinning?

Schedule thinning is gradually moving from a dense or continuous schedule of reinforcement to a thinner or partial schedule of reinforcement.

Why is schedule thinning important?

Schedule thinning is important because it matches reinforcement to what the natural environment provides, where continuous reinforcement is often impractical.[8]
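As a rough illustration, schedule thinning can be pictured as a gradually increasing response requirement. The Python sketch below uses arbitrary fixed-ratio steps purely for demonstration; real thinning plans are individualized.

```python
# Schedule thinning sketch: the response requirement is raised gradually, moving
# the learner from continuous reinforcement (FR-1) toward a leaner partial
# schedule (here FR-10). The step sizes are arbitrary illustration values.
def thinning_steps(start_ratio=1, target_ratio=10, step=1):
    """Return the sequence of fixed-ratio requirements used during thinning."""
    steps = []
    ratio = start_ratio
    while ratio < target_ratio:
        steps.append(ratio)
        ratio = min(target_ratio, ratio + step)
    steps.append(target_ratio)
    return steps

print(thinning_steps())            # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(thinning_steps(step=3))      # [1, 4, 7, 10]
```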

Schedules of Reinforcement in Parenting

Many parents use various types of reinforcement to teach new behavior, strengthen desired behavior, or reduce undesired behavior.

A continuous schedule of reinforcement is often the best in teaching a new behavior. Once the response has been learned, intermittent reinforcement can strengthen the learning.

Reinforcement Schedules Example

Consider potty training as an example.

When parents first introduce the concept of potty training, they may give the toddler candy whenever they use the potty successfully. That is a continuous schedule.

After the child has been using the potty consistently for a few days, the parents can transition to rewarding the behavior only intermittently, using variable reinforcement schedules.

Sometimes, parents may unknowingly reinforce undesired behavior.

Because such reinforcement is unintended, it is often delivered inconsistently. The inconsistency serves as a variable reinforcement schedule, leading to a learned behavior that is hard to stop even after the parents have stopped applying the reinforcement.

Variable Ratio Example in Parenting

Parents usually refuse to give in when a toddler throws a tantrum in the store. But occasionally, if they’re tired or in a hurry, they may decide to buy the candy, believing they will do it just that once.

But from the child’s perspective, such a concession is a reinforcer that encourages tantrum-throwing. Because the reinforcement (candy buying) is delivered on a variable schedule, the toddler keeps throwing tantrums regularly in hopes of the next give-in.

This is one reason why consistency is important in disciplining children.

References

  1. Case DA, Fantino E. The delay-reduction hypothesis of conditioned reinforcement and punishment: observing behavior. Journal of the Experimental Analysis of Behavior. Published online January 1981:93-108. doi:10.1901/jeab.1981.35-93
  2. Thrailkill EA, Bouton ME. Contextual control of instrumental actions and habits. Journal of Experimental Psychology: Animal Learning and Cognition. Published online 2015:69-80. doi:10.1037/xan0000045
  3. Schoenfeld W, Cumming W, Hearst E. On the classification of reinforcement schedules. Proc Natl Acad Sci U S A. 1956;42(8):563-570. doi:10.1073/pnas.42.8.563
  4. Dews PB. Studies on responding under fixed-interval schedules of reinforcement: II. The scalloped pattern of the cumulative record. Journal of the Experimental Analysis of Behavior. Published online January 1978:67-75. doi:10.1901/jeab.1978.29-67
  5. Reed P. Schedules of reinforcement as determinants of human causality judgments and response rates. Journal of Experimental Psychology: Animal Behavior Processes. Published online 2001:187-195. doi:10.1037/0097-7403.27.3.187
  6. Nevin JA, Grace RC, Holland S, McLean AP. Variable-ratio versus variable-interval schedules: response rate, resistance to change, and preference. Journal of the Experimental Analysis of Behavior. Published online January 2001:43-74. doi:10.1901/jeab.2001.76-43
  7. Skinner BF. Conditioning and extinction and their relation to drive. The Journal of General Psychology. Published online April 1936:296-317. doi:10.1080/00221309.1936.9713156
  8. Hagopian LP, Boelter EW, Jarmolowicz DP. Reinforcement schedule thinning following functional communication training: review and recommendations. Behavior Analysis in Practice. Published online June 2011:4-16. doi:10.1007/bf03391770


FAQs

What are some schedules of reinforcement everyday examples?

Example in everyday context: You study for approximately 180 minutes per night. You give yourself reinforcement after 10 minutes, 12 minutes, 12 minutes, and 13 minutes of studying. On average, you reinforce yourself roughly every 12 minutes, a variable-interval schedule (about VI 12).

What is an example of a schedule of reinforcement in ABA?

For example, a teacher following a “VR2” schedule of reinforcement might give reinforcement after 1 correct response, then after 3 more correct responses, then 2 more, then 1 more and finally after 3 more correct responses.

What is an example of a schedule of reinforcement in the classroom?

Examples of fixed ratio reinforcement are reinforcing a child after every fifth math sheet is completed or after every third time a child exhibits sharing behavior. Fixed ratio reinforcement is useful in establishing a contingency between behavior and reinforcement, when used consistently, because it is systematic.

What is an example of multiple schedules of reinforcement?

Multiple and Mixed Schedules of Reinforcement

Multiple Schedule Example: While in a classroom, a teacher puts a blue sticky note on the board (SD signaling the schedule) when an FR schedule is available, and then puts a red sticky note (SD) when a VR schedule is available.

What is a real life example of a partial reinforcement schedule?

Organisms are tempted to persist in their behavior in hopes that they will eventually be rewarded. For instance, slot machines at casinos operate on partial schedules. They provide money (positive reinforcement) after an unpredictable number of plays (behavior).

What is a real life example of a concurrent schedule of reinforcement?

Restaurant Loyalty Programs: Many restaurants offer loyalty programs where customers earn points or rewards for their purchases. These programs often use concurrent schedules of reinforcement by providing different rewards for different behaviors.

What is an example of reinforcement in the classroom?

An example of positive reinforcement is providing a sticker to a student once they've completed an assignment. An example of negative reinforcement is allowing the student to leave circle time for a five-minute break after they use a break card.

What is an example of continuous reinforcement?

Continuous reinforcement requires the subject to receive positive rewards for behavior every time the behavior is exhibited. Example: Every time a child remembers to raise their hand in class, the teacher gives them a sticker. Partial reinforcement is when the subject receives rewards for behavior some of the time.

What is an example of a progressive schedule of reinforcement?

A schedule of delivering reinforcement in which the number of correct responses required before reinforcer delivery gradually increases. For example, a student might get a candy for completing 3 math problems correctly, then be required to solve 5 math problems correctly for a candy, then 7, and so on.
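As a rough sketch of that progressive rule, the starting point and step size below simply mirror the 3-then-5-then-7 example above.

```python
# Progressive-ratio sketch: the response requirement grows after each reward.
def progressive_requirements(start=3, step=2, rewards=5):
    """Return how many correct responses each successive reward requires."""
    return [start + step * i for i in range(rewards)]

print(progressive_requirements())   # [3, 5, 7, 9, 11]
```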

What is an example of a tandem schedule?

For example, in a tandem, fixed-interval 1-minute, fixed-ratio 10 schedule, the first response after 1 minute would initiate the fixed-ratio schedule, and the 10th response would result in reinforcement. Also called tandem schedule of reinforcement.

What is an example of a chained schedule of reinforcement?

The chain has to be completed in the correct order: Pull up to the microphone, place an order, pull up to the window, pay, receive order, drive away and then eat the order. Making hot chocolate requires doing the steps in the correct order. This is an example of a chained schedule of reinforcement.

What are schedules of reinforcement in children?

Intermittent reinforcement schedules can be either ratio or interval. Ratio schedules deliver reinforcement after the toddler performs the skill or behavior a certain number of times. Interval schedules provide reinforcement after a certain amount of time has passed. Both ratio and interval schedules can be fixed or variable.

What is the simplest schedule of reinforcement?

The continuous schedule of reinforcement involves the occurrence of a reinforcer every single time that a desired behavior is emitted. Behaviors are learned quickly with a continuous schedule of reinforcement and the schedule is simple to use.

What schedule of reinforcement do you feel is the most useful and why?

Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish.

What are the 4 schedules of reinforcement?

There are four types of partial reinforcement schedules: fixed ratio, variable ratio, fixed interval and variable interval schedules.

What is an example of intermittent reinforcement in everyday life?

THE LOTTERY EXAMPLE

This same principle of Intermittent Reinforcement is how scratch-off lotteries work. The odds of winning the jackpot on a scratch-off lottery ticket are pretty small. But if you never won anything, no one would buy a ticket.
