Introduction to Operant Conditioning
Operant conditioning is a theory of behaviorism, a learning perspective that focuses on changes in an individual's observable behaviors. In operant conditioning theory, new or continued behaviors are the result of new or continued consequences. This principle of learning was first studied by Edward L. Thorndike in the late 1800s, then brought to popularity by B.F. Skinner in the mid-1900s. Much of this research informs current practices in human behavior and interaction.
Thorndike's Law of Effect
As with most early psychological research, the first testing of behaviorist learning theory was done on animals. Thorndike's most famous work involved cats trying to escape from various homemade puzzle boxes. Cats were placed in a box that could be opened if the cat pressed a lever or pulled a loop. Thorndike discovered that with successive trials, cats would learn from previous behavior, limit ineffective actions, and escape from the box more quickly. He realized that not only were stimuli and responses associated, but also that behavior could be modified by its consequences. He used these findings to publish his now famous law of effect: the notion that pleasing after-effects strengthen the action that produced them, whereas displeasing after-effects weaken the likelihood that it will be performed again, given the same situation.
Thorndike's initial research was highly influential on another psychologist, B.F. Skinner. Almost half a century after Thorndike first published the law of effect, Skinner attempted to prove an extension to this theory: that all behaviors were in some way a result of operant conditioning. Skinner theorized that if a behavior is followed by reinforcement, the behavior will be repeated, but if it is followed by punishment, the behavior will not be repeated. He also believed that this learned association could end, or become extinct, if the reinforcement or punishment was removed.
To prove this, he placed rats in a box with a lever that, when pressed, released a pellet of food. Over successive trials, the amount of time it took for the rat to find the lever and press it became shorter and shorter, until finally the rat would spend most of its time near the lever, eating. This behavior became less apparent when the lever no longer reliably produced food. Skinner's radical behaviorism has fallen out of prominence, but his basic understanding of operant conditioning is still used by psychologists, scientists, and educators today.
Shaping, Reinforcement Principles, and Schedules of Reinforcement
Operant conditioning can be viewed as a process of action and consequence. Skinner used this basic principle to study the possible scope and scale of the influence of operant conditioning on animal behavior. His experiments used shaping, reinforcement, and reinforcement schedules in order to prove the importance of the relationship that animals form between behaviors and results.
All of these practices concern the set-up of an experiment. Shaping is the conditioning paradigm of an experiment: across successive trials, the requirements of the experiment are gradually changed to elicit a desired target behavior. This is accomplished through reinforcement, or reward, of successive approximations of the target behavior, and can be tested using a wide variety of actions and rewards.
The experiments were taken a step further to include different schedules of reinforcement, such as fixed-ratio, variable-ratio, fixed-interval, and variable-interval schedules, which became more complicated as the trials continued. By testing different reinforcement schedules, Skinner learned valuable information about the best ways to encourage a specific behavior and the most effective ways to create a long-lasting behavior. Much of this research has been replicated with humans, and now informs practices in various environments of human behavior.