Book about analyzing “systems”: basically, any collection of things that are interconnected and produce emergent properties. In many situations you will be led astray if you only look at individual events or pieces of the system; you have to analyze the system as a whole to understand its behavior. Examples of systems: stocks of inventory in a retail business, a country’s population change, extraction of a non-renewable resource by capitalists, etc. A system can be modeled as a set of stocks (or reservoirs), inflows and outflows that modify the stocks, and feedback mechanisms that modify flows based on the system state. These feedback mechanisms may be designed to maintain a desirable steady state, or they may occur naturally, in which case they can become a (usually undesirable) self-reinforcing feedback loop. When a stock is practically infinite, it is represented as a cloud where flows originate or disappear.
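The stock-and-flow picture is easy to sketch in a few lines of Python. This is my own illustration with made-up numbers, not a model from the book: inventory is the stock, sales a fixed outflow, and the delivery inflow is set by a balancing feedback loop that compares the stock to a desired level.

```python
# Stock-and-flow sketch with a balancing feedback loop (illustrative numbers,
# not from the book): inventory is the stock, sales the outflow, and the
# delivery inflow is adjusted based on the gap to a desired level.
def simulate(steps=20, stock=50.0, desired=100.0, sales=10.0, adjust=0.5):
    history = []
    for _ in range(steps):
        # Balancing feedback: order more when the stock is below target.
        inflow = sales + adjust * (desired - stock)
        stock += inflow - sales
        history.append(stock)
    return history

levels = simulate()
# The stock closes half of the remaining gap each step and settles near 100.
```

With no delay in the loop, the stock approaches the desired level monotonically — the steady-state case from the paragraph above.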
In a system with only one stock and a feedback loop, there should in theory be only 3 outcomes: the stock either grows exponentially to infinity, reaches a steady state, or decays to zero. In practice, different feedback mechanisms start to dominate at different points, so nothing can truly grow to infinity; there will always be some limiting factor. Delays in response time lead to oscillations: by the time the feedback mechanism kicks in it is too late, and it often overcorrects. Extraction of a renewable or non-renewable resource can be modeled as a two-stock system, with one stock for the resource and one for capital. Systems gain resilience by self-organizing into hierarchical systems that can self-correct in response to changes.
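The delay-causes-oscillation point can be demonstrated by a small tweak to the inventory sketch above (again, my own illustrative numbers, not the book's): the feedback now reacts to a measurement of the stock from a few steps ago instead of the current value.

```python
from collections import deque

# Delay-induced oscillation sketch (illustrative numbers, not from the book):
# the same balancing loop as a thermostat-style inventory model, except the
# correction is computed from a stock measurement taken `delay` steps ago.
def simulate(steps=60, stock=50.0, desired=100.0, sales=10.0,
             adjust=0.5, delay=2):
    measured = deque([stock] * delay, maxlen=delay)  # perception delay
    history = []
    for _ in range(steps):
        # The correction uses stale information, so it keeps pushing
        # after the target has already been reached.
        inflow = sales + adjust * (desired - measured[0])
        measured.append(stock)
        stock += inflow - sales
        history.append(stock)
    return history

levels = simulate()
# The stock overshoots 100, swings back below it, and slowly damps out.
```

The only difference from the no-delay version is the stale measurement, yet the behavior changes qualitatively from smooth convergence to overshoot and oscillation; lengthening the delay or strengthening the correction makes the swings grow instead of damping.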
Reasons we fail to understand the behavior of systems: (1) looking at individual events in isolation instead of the system as a whole; (2) assuming that effects are linear, which is rarely the case; (3) neglecting boundaries and assuming things are infinite (nothing truly is, and everything becomes a limiting factor at sufficient scale); (4) delays and unavailability of information, which lead us to make suboptimal choices, compounded by our inability to reason perfectly even with the information we have.
Ways that systems can go wrong: (1) implementing a policy that some groups disagree with and try to compensate for, thereby reversing its effects; (2) the tragedy of the commons, where a resource is shared between many people; (3) a feedback loop of escalation between competitors; (4) a rich-get-richer system, like Monopoly, leading to increasing inequality; (5) an intervention that corrects a problem in the short term but creates a dependency on the intervention; (6) rules set in a way that encourages reward hacking to circumvent their intent.
Systemic thinking can help you identify points of high leverage, where changing how the system functions can be far more effective than changing the state of the system itself. Such interventions include removing delays, modifying how information flows, changing the rules and incentives, etc.
Overall, a fairly short read about systems thinking. The models presented are simple and can be applied to a lot of situations; however, they lack real explanatory power. All of the examples are anecdotal, and there is no attempt to model real-world data. The author proposes a model based on stocks and feedback loops, but I found its instances either so simple as to be obvious, or complex without adding much insight, and the model is quietly dropped and never mentioned again in the second half of the book. What follows is a hodgepodge of assorted emergent phenomena that occur in various systems.