Abstract
Operant conditioning refers to the process of behavior learning in which an individual receives external stimuli that affect learning outcomes. The term was coined by Skinner, although Watson and Thorndike already understood the procedures of operant conditioning, with the exception of continuous reinforcement, which Skinner was the first to apply. Five procedures describe the types of stimuli that can affect human behavior: positive reinforcement, negative reinforcement, positive punishment, negative punishment, and extinction. In practice, operant conditioning is used to teach behavior to children, modify behavior in psychotherapy patients, and eliminate old habits or introduce new ones. Although operant conditioning principles have been used successfully to modify human behavior and have contributed to further scientific advances, such as the development of verbal behavior theory, the approach has certain limitations. External stimuli are not the only factor accountable for learning behavior, and radical behaviorism dismisses cognitive elements. The growing popularity of neurological concepts in psychology and of cognitive-behavioral therapy has also contributed to a paradigm shift from observable behavior to internal factors, so therapeutic alliances and physiological interventions may prove more effective than operant conditioning in the future.
Keywords: operant conditioning, law of effect, reinforcement, punishment
Even though Skinner coined the term operant conditioning and introduced the concept of continuous reinforcement, the principles and implications of conditioning were known before his research. Thorndike was the first psychologist to study operant conditioning extensively in controlled environments, and in 1905 he formulated the law of effect, which explains how different consequences determine future behavioral patterns.
The key differences between Thorndike’s law of effect and Skinner’s learning theories, both built on the fundamental principles of operant conditioning, lie in the procedures they emphasize and in their approaches to reporting and interpreting results. For example, Thorndike studied the mental consequences associated with the different operant conditioning procedures used and described them as satisfaction or unpleasant feelings, while Skinner focused only on observable behavior.
However, the conclusions reached by both scientists are similar. Thorndike concluded that behavioral patterns followed by positive reinforcement produce a sense of satisfaction, which leads the subject to repeat that behavior, whereas behavioral patterns followed by positive punishment cause an unpleasant feeling and are less likely to be repeated. Skinner expanded on those theories while omitting the analysis of mental states from his studies, but his most significant contribution to behaviorism was applying operant conditioning principles to create verbal behavior theory, which explains how verbal responses and stimuli determine behavior in children. Today, operant conditioning principles can be used in verbal behavior therapy and applied behavior analysis for treating various behavioral disorders.
Background
As a distinctive school of psychology, behaviorism was defined by Watson in “The Behaviorist Manifesto,” written in 1913. While the first studies in conditioning were performed on animals, Watson was interested in child development and in how behavioral development can be affected when children are exposed to conditioning. In the Little Albert experiment, Watson demonstrated that parents can affect their children’s behavioral development when he taught a child to fear a rodent by associating it with an unpleasant sound (Goodwin, 2008).
Watson’s work laid the foundations for studies in behaviorism, but his methodology and theories were further developed by Skinner, who introduced operant conditioning and the theory of verbal development and behavior. Skinner (1950) was the first behaviorist to use continuous reinforcement, meaning that positive or negative reinforcement was applied consistently to shape the experimental subjects’ behavioral patterns. However, before Skinner introduced the term operant conditioning, Thorndike had proposed a similar theory referred to as the law of effect.
According to the law of effect, behavioral patterns that are followed by satisfaction are more likely to be repeated than those followed by dissatisfaction. This approach is dropped in some schools of behaviorism, especially radical behaviorism, which focuses exclusively on observable variables, but the view of operant learning as a system of valuation is an empirically supported theory (Shteingart, Neiman, & Loewenstein, 2013).
In a study by Shteingart et al. (2013), the researchers showed that operant learning affects behavior sequentially: the feelings associated with the first experience determine subsequent decision-making. This finding confirms the law of effect’s prediction that people use a valuation system to categorize the feelings associated with behaviors and choose behaviors depending on those associations. However, the law of effect cannot account for all forms of behavior, for two reasons.
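To make the idea of a valuation system concrete, the following Python sketch implements a minimal value-update learner. It is purely illustrative and is not the model used by Shteingart et al. (2013); the class name, learning rate, and action labels are assumptions introduced here. The learner anchors each action's value on the first outcome it experiences, so that first impression shapes all later choices, and then nudges the value toward subsequent outcomes before choosing the higher-valued action.

```python
import random

class LawOfEffectLearner:
    """Illustrative valuation-based learner (not Shteingart et al.'s model).

    Each action's value starts at the first outcome experienced with it,
    so the first impression anchors later choices; subsequent outcomes
    move the value only by a small learning rate.
    """

    def __init__(self, actions, learning_rate=0.1):
        self.values = {action: None for action in actions}  # None = not yet tried
        self.learning_rate = learning_rate

    def choose(self):
        # Try any untried action first; otherwise pick the highest-valued one.
        untried = [a for a, v in self.values.items() if v is None]
        if untried:
            return random.choice(untried)
        return max(self.values, key=self.values.get)

    def update(self, action, outcome):
        # Satisfying outcomes raise the value; unpleasant ones lower it.
        if self.values[action] is None:
            self.values[action] = outcome  # the first experience anchors the value
        else:
            self.values[action] += self.learning_rate * (outcome - self.values[action])


# Hypothetical usage: two actions with noisy outcomes.
learner = LawOfEffectLearner(["press_lever", "ignore_lever"])
for _ in range(50):
    action = learner.choose()
    outcome = random.gauss(1.0, 0.5) if action == "press_lever" else random.gauss(0.0, 0.5)
    learner.update(action, outcome)

# The rewarded action usually ends up with the higher value, but an unlucky
# first impression can lock the learner into the other choice.
print(learner.values)
```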
First, behavior is not predictable when people underestimate small chances of failure, which leads to choices that make them more prone to risky behavior despite the potential consequences. Second, it is not possible to predict human behavior without initial data. Humphreys, Lee, and Tottenham (2012) found that sensation seeking and temperamental influences can be used to predict higher chances of undertaking risky or unwanted behavior. However, without a prior personality assessment or data on previous behaviors and experiences, it is not possible to make accurate predictions or determine individual learning curves.
Those weaknesses suggest that private mental states and processes can significantly interfere with behavior, so it is possible to question Skinner’s emphasis on behavioral outcomes alone. Nevertheless, Skinner’s contributions to behaviorism were significant: he developed the term operant conditioning, used continuous reinforcement to modify behavior, and developed verbal behavior theory, which is based on operant conditioning procedures.
Operant Conditioning Procedures
According to the principles of operant conditioning, learning occurs when subjects are exposed to reinforcement, punishment, or extinction. Reinforcement and punishment can be either positive or negative, which makes a total of five types of procedures that can be used to engage people in operant learning.
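As a quick reference before each procedure is discussed in turn, the Python sketch below (an illustrative summary introduced here, not drawn from the cited sources) encodes the five procedures along the two dimensions that define them: whether a stimulus is added or removed, and whether the target behavior is meant to increase or decrease. Extinction stands apart because the usual consequence is simply withheld.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Procedure:
    name: str
    stimulus_change: str     # "add", "remove", or "withhold"
    effect_on_behavior: str  # "increase" or "decrease"

# The 2 x 2 grid of reinforcement/punishment, plus extinction.
PROCEDURES = [
    Procedure("positive reinforcement", "add",      "increase"),  # reward is delivered
    Procedure("negative reinforcement", "remove",   "increase"),  # aversive stimulus is taken away
    Procedure("positive punishment",    "add",      "decrease"),  # aversive stimulus is delivered
    Procedure("negative punishment",    "remove",   "decrease"),  # pleasant stimulus is taken away
    Procedure("extinction",             "withhold", "decrease"),  # usual consequence no longer follows
]

def classify(stimulus_change: str, effect_on_behavior: str) -> str:
    """Return the procedure name matching the two defining dimensions."""
    for p in PROCEDURES:
        if p.stimulus_change == stimulus_change and p.effect_on_behavior == effect_on_behavior:
            return p.name
    raise ValueError("no matching procedure")

print(classify("remove", "increase"))  # -> negative reinforcement
```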
Positive reinforcement. Positive reinforcement occurs when a certain type of behavior is followed by a reward, and the reward makes the behavioral patterns associated with it more likely to recur. In modifying human behavior, positive reinforcement can be used to reinforce habits that help people achieve their goals, but it is also a subconscious mechanism that can result in various negative behaviors.
For example, people who want to quit smoking will benefit from social support and rewards for avoiding smoking. However, the smoking habit can also be created by external cues that remind people of pleasant feelings associated with smoking, and nicotine cravings will arise in those situations (Brewer, Elwafi, & Davis, 2012).
Negative reinforcement. While positive reinforcement is applied after a behavior is completed successfully, negative reinforcement is applied continuously until a desired behavior is performed. The operant conditioning chamber developed by Skinner would expose subjects to unpleasant stimuli that ceased once the desired response was achieved.
In a controlled environment, negative reinforcement can be applied continuously, but it is much more difficult to apply in practice. An example of negative reinforcement is removing cigarettes from the house to reduce the incentive to smoke in people who want to quit; the only way to remove the craving is through persistence and dedication to achieving the goal.
Negative reinforcement can also shape negative habits. For example, if a smoker experiences relief upon smoking while exposed to stressful situations, smoking will become associated with removing the negative stimulus and will continue as a habit in similar situations (Brewer et al., 2012).
Positive punishment. Like positive reinforcement, positive punishment follows the completion of a certain type of behavior. However, punishment is used to reduce the frequency of negative behaviors. For example, penalties can be introduced for people who do not adhere to desired habits or who fail to resist engaging in unwanted habits. Still, positive punishment is rarely used to change behavioral patterns. In smoking cessation, social support mechanisms are often used to help individuals maintain substitute activities or divert their attention from cravings, but there are no penalties in formal treatment groups (Brewer et al., 2012).
Negative punishment. Removing a pleasant stimulus is an example of negative punishment because the goal is to create an unpleasant feeling associated with a certain behavior. Negative punishment has an important implication for learning new habits. For example, if people associate smoking with a pleasant mental or emotional state, they will resort to smoking whenever they are in stressful situations as a method of avoidance (Brewer et al., 2012). If they engage instead in a substitute behavior that provides the same relief, they can replace a self-destructive behavioral pattern with a constructive one.
Extinction. Extinction occurs when the consequence that previously followed a behavior is no longer applied. Eventually, the behavioral pattern decreases in frequency and disappears. However, this procedure is questionable because Shteingart et al. (2013) found that the first experience in reinforcement learning has a long-term effect on subsequent behavioral decisions. Perhaps this difference can be attributed to differences in memory mechanisms between humans and animals, so it is possible to suggest that substitution is a better behavior learning strategy for humans than extinction.
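In the same illustrative value-update terms used in the earlier sketch (an assumption introduced here, not a model from the cited studies), extinction can be pictured as repeatedly updating a learned value with a neutral outcome: once the reinforcer stops, the value drifts back toward zero, and the drift is gradual rather than immediate.

```python
def extinguish(value, learning_rate=0.1, trials=30):
    """Decay a learned value toward zero when the reinforcer is withheld.

    Purely illustrative: each unreinforced trial moves the value a small
    step toward a neutral outcome of 0.
    """
    history = [value]
    for _ in range(trials):
        value += learning_rate * (0.0 - value)  # update toward a neutral outcome
        history.append(value)
    return history

trace = extinguish(value=1.0)
print(round(trace[-1], 3))  # close to 0 after 30 unreinforced trials
```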
Practical Implications
Operant conditioning has many practical implications in psychology because its principles can be used in different settings to achieve various aims. In the academic community, operant conditioning principles can be used to explain why human beings adapt their behavior and how they maintain it. In clinical settings, operant conditioning is often used in various types of behavioral therapy.
In behavioral therapy, therapists can utilize Pavlov’s classical conditioning principles or Skinner’s operant conditioning principles, depending on which type of behavior they want to target. While classical conditioning focuses on transforming instinctive responses, operant conditioning transforms voluntary behavior by creating an association between a type of behavior and the stimulus that follows it.
Although cognitive-behavioral therapy is often used because combining cognitive and behavioral interventions can enhance the pace and effectiveness of treatments, behavioral analysis and its varieties are still useful. For example, clinical behavior analysis (CBA) places heavy emphasis on Skinner’s radical behaviorism principles. Because CBA focuses only on behavioral strategies, it works well in treating disorders characterized by frequently changing mental states, such as depression, substance abuse, or eating disorders (Kohlenberg, Bolling, Kanter, & Parker, 2002).
Although operant learning can be applied in a variety of cases, Skinner explained verbal behavior learning exclusively through operant conditioning. Verbal behavior develops through interactions with others, while non-verbal behavior develops in response to the environment, and the theory explains how expectations of reward or punishment in communication can affect language development. In practice, verbal reinforcement or punishment can be used to shape behavior in children, but the most significant implication of this theory is the development of treatment programs for enhancing learning in children with autism (Sundberg & Michael, 2001).
Limitations
Skinner investigated only external stimuli in his experiments because he had accepted Watson’s teaching that behavior is conditioned and predetermined. On the other hand, Tolman (1934) developed the latent learning theory, which suggested that personal choices and cognitive processes affect learning, leading him to conclude that behavioral learning does not depend exclusively on external reinforcement. Along with findings suggesting that personal evaluation of experiences is relevant to decision-making about behavior (Shteingart et al., 2013), this makes it possible to suggest that subjective processes must not be discarded in studying and predicting human behavior.
Furthermore, clinical applications of operant conditioning are currently less frequent than cognitive-behavioral therapy (CBT), which is more common and favored over other approaches because of the wide variety of interventions that can be used to address different disorders (Beck & Fernandez, 1998). Unlike operant learning, CBT relies on building a therapeutic alliance to increase the chances of positive treatment outcomes, and it is often considered superior to other treatments (as cited in Beck & Fernandez, 1998). Although operant learning will probably continue to be used in the future, CBT and the therapeutic alliance are currently the dominant paradigms in psychotherapy.
Finally, the most important limitation of operant conditioning is the increased attention to mental processes in contemporary studies. While Skinner rejected the idea of analyzing mental processes in his research, the research apparatus now available to psychologists allows them to study psychological phenomena and correlate them with physiological phenomena. In that way it is possible to observe reactions to reinforcement and punishment in hormonal activity and neural pathways (Bromberg-Martin, Matsumoto, & Hikosaka, 2010). It is not clear whether these factors are causal or reactive in their relationship with behavior modification. However, if there is a corresponding physiological reaction to both rewards and punishment in operant learning, it is also possible that pharmacological interventions could eventually replace operant learning.
Conclusion
Operant conditioning was developed within the behaviorist school of psychology, which usually discarded notions of mental processes and other intangible variables in its research and theories. However, that approach is not entirely correct. While Tolman considered external stimuli effective in facilitating learning, he did not consider them the primary motivators for learning new behaviors. Therefore, in order to better understand human behavior and the methods for modifying it, subjective internal processes must be taken into account when conducting operant learning. Although operant conditioning is no longer the dominant paradigm in psychology research and clinical practice, it remains an important element of psychology that clarifies human adaptation to external influences.
References
Beck, R., & Fernandez, E. (1998). Cognitive-behavioral therapy in the treatment of anger: A meta-analysis. Cognitive Therapy and Research, 22(1), 63-74.
Brewer, J. A., Elwafi, H. M., & Davis, J. H. (2012). Craving to quit: Psychological models and neurobiological mechanisms of mindfulness training as treatment for addictions. Psychology of Addictive Behaviors, 27(2), 366-379.
Bromberg-Martin, E. S., Matsumoto, M., & Hikosaka, O. (2010). Dopamine in motivational control: Rewarding, aversive, and alerting. Neuron, 68(5), 815-834.
Goodwin, J. C. (2008). A history of modern psychology (3rd ed.). Hoboken, NJ: John Wiley & Sons.
Humphreys, K. L., Lee, S. S., & Tottenham, N. (2012). Not all risk taking behavior is bad: Associative sensitivity predicts learning during risk taking among high sensation seekers. Personality and Individual Differences, 54(6), 709-715.
Kohlenberg, R. J., Bolling, M. Y., Kanter, J. W., & Parker, C. R. (2002). Clinical behavior analysis: Where it went wrong, how it was made good again, and why its future is so bright. The Behavior Analyst Today, 3(3), 248-254.
Shteingart, H., Neiman, T., & Loewenstein, Y. (2013). The role of first impression in operant learning. Journal of Experimental Psychology: General, 142(2), 476-488.
Skinner, B. F. (1950). Are theories of learning necessary? Psychological Review, 57, 193-216.
Sundberg, M. L., & Michael, J. (2001). The benefits of Skinner's analysis of verbal behavior for children with autism. Behavior Modification, 25(5), 698-724.
Tolman, E. C. (1934). Theories of learning. In F. A. Moss (Ed.), Comparative psychology (pp. 367-408). New York, NY: Prentice-Hall.