Updated: Feb 13, 2019
"It is impossible to design a system so perfect that no one needs to be good." – T.S. Eliot
Technology is advancing faster and further than our ability to keep up with its ethical implications. This applies equally to the systems that govern, manage, and operate the businesses we work for, and that includes compliance.
The speed of technological change poses significant challenges for compliance, whose function is to regulate the activities of an organization so that it stays within (or meets) all of its regulatory requirements and voluntary obligations. Whether you consider compliance in terms of safety, quality, or professional conduct, all of these are closely intertwined with ethics, which is rooted in values, moral attitudes, uncertainty, and ultimately decisions between what is right and wrong.
In this blog, I will explore what makes a compliance system good (or effective) and, secondly and more importantly, whether it can be made ethical, assuming that is something we desire.
To answer these questions, we will dive into the topic of cybernetics and specifically the works of Roger C. Conant and W. Ross Ashby along with the more recent works by Mick Ashby.
To start, we need to define what cybernetics is and why it is important to this discussion.
What is Cybernetics?
Cybernetics is derived from the Greek word for "governance" or "to steer." Although the word may not be familiar to many, cybernetics is an active field of science involving a "transdisciplinary approach to exploring regulatory systems – their structures, constraints, and possibilities." It is where we derive much of our understanding of system dynamics, feedback, and control theory, which we use to control mechanical and electrical systems. However, cybernetics extends far beyond engineering into biology, computer science, management, psychology, sociology, and other areas.
At a basic level, governance has three components: (1) the system that we wish to steer, (2) the governor (or regulator), which is the part that does the steering, and (3) the controller, the part that decides where to go.
The following diagram illustrates the basic functions of a system under regulation. In this example, we have an HVAC system used to maintain a constant temperature in a house:
A thermostat regulates the heating and air-conditioning subsystems, which are controlled by the owner.
It is important to understand the difference between the controller and regulator roles. The thermostat cannot tell if it is too hot or too cold; it only knows the number for the temperature. It is the owner (acting as the controller) who must decide whether the temperature is comfortable.
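The split between the two roles can be sketched in code. This is an illustrative toy, not part of the original post: the thermostat (regulator) only compares numbers against a setpoint it was given, while the owner (controller) is the one who decides what "comfortable" means.

```python
class Thermostat:
    """Regulator: steers the HVAC toward whatever setpoint it is given."""

    def __init__(self, setpoint_f: float):
        self.setpoint_f = setpoint_f

    def action(self, measured_f: float) -> str:
        # The thermostat knows only numbers, not comfort.
        if measured_f < self.setpoint_f - 1:
            return "heat"
        if measured_f > self.setpoint_f + 1:
            return "cool"
        return "idle"


class Owner:
    """Controller: decides where to steer (what counts as comfortable)."""

    def __init__(self, comfort_f: float):
        self.comfort_f = comfort_f

    def configure(self, thermostat: Thermostat) -> None:
        thermostat.setpoint_f = self.comfort_f


owner = Owner(comfort_f=70.0)
thermostat = Thermostat(setpoint_f=60.0)
owner.configure(thermostat)      # the controller sets the goal
print(thermostat.action(65.0))   # the regulator acts: "heat"
print(thermostat.action(70.5))   # "idle"
```

Note that nothing in `Thermostat` can judge the setpoint itself; that judgment lives entirely in the controller.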
This distinction is useful for understanding how companies need to be regulated. Regulatory bodies create regulations; however, it is each organization's responsibility, not the regulatory body's, to control and perform the function of regulation. In a sense, each company must decide on the degree to which each compliance commitment is met (i.e., is it too high, too low, or just right).
What is a Good Regulator?
To govern, you need a way of steering, and that is the role of the regulator. A regulator adjusts the system under regulation so that its output states are within the allowable (or desirable) outcomes.
The Good Regulator Theorem, posited by Conant and Ashby, states that "Every Good Regulator of a System Must be a Model of that System." Examples of models we are more familiar with include: a city map, which is a model of the actual city streets; a restaurant menu, which is a model of the food that the restaurant prepares; a job description, which is a model of an employee's roles and responsibilities; and so on. In more technical terms, the regulator must be isomorphic to the system it regulates.
The theorem does not state how accurate the model needs to be or what its technical characteristics are. Sometimes a simple list of directions can be more helpful than a detailed map that contains too much information.
The theorem is sufficiently general that it applies to all regulating, self-regulating, and homeostatic systems. What is necessary is a sufficient understanding of how the system works to know how to regulate it properly. A critical characteristic to know is how much variety (or variation) exists in the output of the system under regulation.
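The idea that the regulator must be a model of the system can be made concrete with a small sketch (the scenario and names are my own, purely illustrative): the regulator holds an internal mapping that mirrors how the real system responds, and uses it to pick the compensating action for each disturbance.

```python
# How the real system responds to (disturbance, action) pairs.
SYSTEM_RESPONSE = {
    ("cold_snap", "boost_heating"): "on_target",
    ("heat_wave", "boost_cooling"): "on_target",
}


class ModelBasedRegulator:
    """A good regulator: its internal model mirrors the system above."""

    def __init__(self):
        # The model maps each disturbance to the compensating action.
        self.model = {
            "cold_snap": "boost_heating",
            "heat_wave": "boost_cooling",
        }

    def regulate(self, disturbance: str) -> str:
        # No model entry means no reliable regulation.
        return self.model.get(disturbance, "no_action")


reg = ModelBasedRegulator()
action = reg.regulate("cold_snap")
print(SYSTEM_RESPONSE.get(("cold_snap", action), "off_target"))  # on_target
```

A regulator lacking an entry for some disturbance (an incomplete model) necessarily lets that disturbance through, which is the theorem's point in miniature.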
The Law of Requisite Variety
The Law of Requisite Variety (posited by W. Ross Ashby) states that for a system to be stable, the number of states of its regulator mechanism must be greater than or equal to the number of states in the system being controlled. In other words, "only variety can destroy variety," which is precisely what regulation does.
This law has significant implications for systems in general, but also for management systems. For example, according to the Law of Requisite Variety, a manager needs as many options as there are distinct disturbances (or variations) in the systems they are managing.
In addition, when systems are not able to meet compliance (for example), it may be due to a lack of sufficient variety in the control system. This may help explain why existing controls may not be as effective as we would like. There needs to be enough variety in the control actions to adjust the management system and stay within compliance, be it performance, safety, quality, or otherwise.
What is an Ethical Regulator?
Now, that we have a sense of what regulation does and what is needed for it to work, we will consider what it means for the regulation function to be ethical.
First and foremost, we need to explain what it means to be ethical. By definition, something that is ethical is (1) related to ethics (ethical theories), (2) involved or expresses moral approval or disapproval (ethical judgments), or (3) conforms to accepted standards of conduct (ethical behavior).
According to Mick Ashby, a regulator can be considered ethical if it meets nine requisite characteristics (six of which are needed only for the regulator to be effective). An ethical regulator must have:
Truth about the past and present.
Variety of possible actions (greater than or equal to the number of states of the system under regulation).
Predictability of the future effects of actions.
Purpose expressed as unambiguously prioritized goals.
Ethics expressed as unambiguously prioritized rules.
Intelligence to choose the best actions.
Influence on the system being regulated.
Integrity of all subsystems.
Transparency of ethical behavior (including retrospectively).
The challenges in building such a system are many. However, three of the characteristics (ethics, integrity, and transparency) are requisites specifically for a regulator to be ethical. Interestingly, these are the areas where we have the greatest hurdles to overcome:
It is not yet possible to build ethical subroutines in which goals and rules are unambiguously prioritized.
Transparency of ethical behavior is not possible when the rules are not visible or cannot be discovered. This is very much the case with current machine learning and artificial intelligence systems, where we often do not know what the rules are or how they work.
Systems do not yet have sufficient integrity to protect against tampering and the other ways they can be manipulated to produce undesired outcomes.
We can conclude that current limitations prevent us from building systems that incorporate all the characteristics necessary for the regulation function to be ethical, as measured against the Ethical Regulator Theorem.
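The split between the six effectiveness requisites and the three additional ethical ones can be expressed as a simple checklist. This is a sketch of my own, assuming the grouping described above (truth, variety, predictability, purpose, intelligence, and influence for effectiveness; ethics, integrity, and transparency added for ethicalness):

```python
from dataclasses import dataclass


@dataclass
class RegulatorChecklist:
    # Six requisites for an *effective* regulator.
    truth: bool = False
    variety: bool = False
    predictability: bool = False
    purpose: bool = False
    intelligence: bool = False
    influence: bool = False
    # Three further requisites for an *ethical* regulator.
    ethics: bool = False
    integrity: bool = False
    transparency: bool = False

    def is_effective(self) -> bool:
        return all([self.truth, self.variety, self.predictability,
                    self.purpose, self.intelligence, self.influence])

    def is_ethical(self) -> bool:
        # Ethical implies effective, plus the three extra requisites.
        return self.is_effective() and all(
            [self.ethics, self.integrity, self.transparency])


r = RegulatorChecklist(truth=True, variety=True, predictability=True,
                       purpose=True, intelligence=True, influence=True)
print(r.is_effective())  # True
print(r.is_ethical())    # False: ethics, integrity, transparency missing
```

The example regulator satisfies all six effectiveness requisites yet still fails the ethical test, which mirrors the limitations listed above.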
Before we look at how these limitations can be addressed, there is another law that companies must understand if they want their systems to be ethical.
The Law of Inevitable Ethical Inadequacy
This law is simply stated as: "If you don't specify that you require a secure ethical system, what you get is an insecure unethical system." This means that unless the system explicitly specifies ethical goals, it will regulate away from being ethical and toward being merely effective.
You can replace the word "ethical" with "safety," "quality," or "environmental," which are more concrete examples of ethics-based programs that govern an organization. If they are not part of the value creation system, then according to this law, the system will always optimize away from the "quality," "safety," or "environmental" goals. This may help explain the tensions that always exist between production and safety, or production and quality, and so on. The problem is that production is often defined only in terms of being effective, not ethical.
Perhaps this provides a form of proof that compliance cannot be a separate objective overlaid on top of production systems and processes. We know that quality must be designed in, and we can conclude that the same applies to all compliance goals.
Definition of Ethical Compliance
As previously mentioned, cybernetics describes governance as comprising, at a basic level, the system under regulation, the regulator, and the controller. We also stated that compliance performs the role of regulation, steering a system toward meeting its compliance obligations. When these obligations incorporate such things as quality, safety, and professional conduct, we add an ethical dimension to the compliance function.
Based on the laws of cybernetics along with the limitations previously discussed, we can now define "Ethical Compliance" as:
Ethical Compliance = Ethical System + Ethical Controller + Effective Regulator
The system under regulation must be ethical (i.e., it must incorporate quality, safety, and other compliance goals) – Law of Inevitable Ethical Inadequacy
The regulator must be a good regulator (i.e., it must be a model of the system under regulation) – Good Regulator Theorem
The regulator must be effective (i.e., it must at least meet the six characteristics of the ethical regulator that make it effective) – Ethical Regulator Theorem
The controller must be human and ethical (as the regulator cannot be) – Ethical Regulator Theorem
The controller must be human and accountable (i.e., transparent, answerable, and with integrity) – Ethical Regulator Theorem, and regulatory statutes and law
The last one is ultimately what makes compliance ethical and more than just codified values and principles. Taking responsibility and answering for our decisions is imperative for any ethical system. Machines are not accountable, nor do they take responsibility for what they do. Humans do, however, and must continue to do so.
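The definition above can be restated as a single predicate. This is a minimal sketch of my own (all parameter names are illustrative): ethical compliance holds only when the system, the regulator, and the human controller each satisfy their respective cybernetic requirements.

```python
def ethical_compliance(system_is_ethical: bool,
                       regulator_models_system: bool,
                       regulator_is_effective: bool,
                       controller_is_human_and_ethical: bool,
                       controller_is_accountable: bool) -> bool:
    # Law of Inevitable Ethical Inadequacy: the system itself must
    # specify the ethical (quality, safety, ...) goals.
    # Good Regulator Theorem: the regulator must model the system.
    # Ethical Regulator Theorem: the regulator must be effective, and
    # the human controller must be ethical and accountable.
    return all([system_is_ethical,
                regulator_models_system,
                regulator_is_effective,
                controller_is_human_and_ethical,
                controller_is_accountable])


# An effective-but-unethical setup fails the test:
print(ethical_compliance(False, True, True, True, True))  # False
print(ethical_compliance(True, True, True, True, True))   # True
```

The conjunction makes the point of the whole post explicit: dropping any one requirement, including human accountability, breaks ethical compliance entirely.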
References:
1. Mick Ashby, "Ethical Regulators" – http://ashby.de/Ethical%20Regulators.pdf
2. Roger C. Conant and W. Ross Ashby, "Every Good Regulator of a System Must Be a Model of That System" – http://pespmc1.vub.ac.be/books/Conant_Ashby.pdf
3. "The Law of Requisite Variety" – http://pespmc1.vub.ac.be/REQVAR.html
4. Christophe Lambert, "Requisite Organization and Requisite Variety" – https://vimeo.com/76660223