The two psychologists most responsible for our current understanding of operant conditioning are Thorndike and Skinner. Thorndike's law of effect and his early work on instrumental learning informed Skinner's later, more extensive work on the subject.
Experiments in the study of operant conditioning use shaping, reinforcement, and reinforcement schedules to demonstrate the connection between action and consequence, and to identify effective ways to harness this powerful connection to influence human behavior.
Operant behavior: a class of behavior that produces consequences by operating (i.e., acting) upon the environment.
Operant conditioning is a theory of behaviorism, a learning perspective that focuses on changes in an individual's observable behaviors. In operant conditioning theory, new or continued behaviors are shaped by new or continued consequences. This principle of learning was first studied by Edward L. Thorndike in the late 1800s and later brought to prominence by B.F. Skinner in the mid-1900s. Much of this research informs current practices in human behavior and interaction.
Thorndike's Law of Effect
As with most early psychological research, the first tests of behaviorist learning theory were conducted on animals. Thorndike's most famous work involved cats trying to escape from various home-made puzzle boxes. Each cat was placed in a box that could be opened if the cat pressed a lever or pulled a loop. Thorndike discovered that over successive trials, cats would learn from previous behavior, abandon ineffective actions, and escape from the box more quickly. He realized not only that stimuli and responses were associated, but also that behavior could be modified by its consequences. He used these findings to formulate his now-famous "law of effect": the notion that pleasing after-effects strengthen the action that produced them, increasing the probability the action will be performed again in the same situation, whereas displeasing after-effects weaken that probability.
Thorndike's initial research was highly influential on another psychologist, B.F. Skinner. Almost half a century after Thorndike published the law of effect, Skinner attempted to demonstrate an extension of it—that virtually all behavior is in some way a result of operant conditioning. Skinner theorized that if a behavior is followed by reinforcement, that behavior is more likely to be repeated, but if it is followed by punishment, it is less likely to be repeated. He also believed that this learned association could end, or become extinct, if the reinforcement or punishment was removed.
To test this, he placed rats in a box with a lever that, when pressed, released a pellet of food. Over successive trials, the time it took a rat to find the lever and press it grew shorter and shorter, until finally the rat would spend most of its time near the lever, eating. This behavior became less consistent when the relationship between the lever and the food was disrupted. This basic theory of operant conditioning is still used by psychologists, scientists, and educators today.
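The dynamic described above can be sketched as a toy simulation. The learning rule, action names, and weight values below are illustrative assumptions for demonstration, not Skinner's actual formulation: an action that is reinforced becomes proportionally more likely to be chosen again, so "press_lever" comes to dominate the rat's behavior.

```python
import random

def simulate_trials(n_trials=200, seed=0):
    """Toy model of a lever-pressing experiment: reinforcing one action
    out of several makes it progressively more likely to be chosen.
    All parameters here are illustrative, not from Skinner's experiments."""
    rng = random.Random(seed)
    actions = ["press_lever", "groom", "sniff", "wander"]
    weights = {a: 1.0 for a in actions}  # no initial preference
    history = []
    for _ in range(n_trials):
        # choose an action in proportion to its current weight
        r = rng.uniform(0, sum(weights.values()))
        for a in actions:
            r -= weights[a]
            if r <= 0:
                choice = a
                break
        if choice == "press_lever":
            weights[choice] += 0.5  # "food pellet": reinforce the action
        history.append(choice)
    return history

history = simulate_trials()
early_rate = history[:50].count("press_lever") / 50
late_rate = history[-50:].count("press_lever") / 50
```

Comparing `early_rate` and `late_rate` shows the lever-pressing rate climbing as reinforcement accumulates; removing the reward line would, over time, let the other actions recover—a crude analogue of extinction.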
Shaping, Reinforcement Principles, and Schedules of Reinforcement
Operant conditioning can be viewed as a process of action and consequence. Skinner used this basic principle to study the scope and scale of operant conditioning's influence on animal behavior. His experiments used shaping, reinforcement, and reinforcement schedules to demonstrate the strength of the association animals form between behaviors and outcomes.
All of these practices concern the setup of an experiment. Shaping is the conditioning paradigm of an experiment: across successive trials, the criterion for reward is gradually changed to elicit a desired target behavior. This is accomplished by reinforcing, or rewarding, successive approximations of the target behavior, and it can be tested using a wide variety of actions and rewards.
The experiments were taken a step further to include different schedules of reinforcement, which became more complicated as the trials continued. By testing different reinforcement schedules, Skinner learned valuable information about the best ways to encourage a specific behavior, and the most effective ways to create a long-lasting one. Much of this research has been replicated with humans, and it now informs practices in various settings of human behavior.
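Two of the schedules Skinner studied, fixed-ratio and variable-ratio reinforcement, can be sketched as simple reward-delivery rules. The function names and parameters below are my own illustrative choices, assuming a fixed-ratio schedule rewards every n-th response and a variable-ratio schedule rewards each response with probability 1/n (so the required count varies around n):

```python
import random

def rewards_fixed_ratio(presses, n):
    """Fixed-ratio schedule: deliver a reward after every n-th response."""
    return sum(1 for i in range(1, presses + 1) if i % n == 0)

def rewards_variable_ratio(presses, mean_n, seed=0):
    """Variable-ratio schedule: reward each response with probability
    1/mean_n, so rewards arrive unpredictably but average one per mean_n."""
    rng = random.Random(seed)
    return sum(1 for _ in range(presses) if rng.random() < 1 / mean_n)
```

Both schedules deliver roughly the same number of rewards per hundred responses, but the unpredictability of the variable-ratio schedule is what makes it especially effective at sustaining behavior.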