Systems are indeed sets of interrelated, interconnected components and parts that interact with each other to produce some effect greater than the sum of their parts (the holism principle, or Ackoff's theory from his seminal 1979 journal article), yes. But…
Several answers I had seen by the time I wrote this post described systems as being designed around some purpose or end. However, this is generally true only for engineered and some biological systems. Many systems exist that were never engineered, and we still study them and must consider them when designing engineered systems. One example is the rate of carbon capture on Earth. We can observe that system and find ways to steer it via feedback and policy, but we did not design or create it; we simply observe it. It is hard to describe that system as having a strict teleology, since there is no derivable end purpose or goal to it. It is just the way the universe works, physically, chemically, and otherwise. You could argue that a branch and leaf (a biological system) have a teleology, an objective, and a use case. But could you argue that something such as the direction of ocean currents has a purpose? Not without some large jumps in logic, although other systems may use that current as an enabling system toward some end, such as traversal or navigation.

Within INCOSE meetings and online, I often see large disagreements break out over whether even engineered systems have a purpose at all, or whether they are 'organized emergence', where a given purpose or end is a human social construct projected onto the system. Some take a strictly ontological approach, some a strictly teleological one, and some recognize that there is a large gray area in the arguments you can make, owing to the levels of abstraction and the varying semantics and heuristics used across the disciplines needed to describe a system. To be fair, most of my training and much SE training is strongly object-oriented in nature (think MBSE), so many in SE tend to lean toward ontological frameworks and arguments.
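As a side note on the feedback point above, here is a minimal sketch of that idea in Python. Every number and function name here is hypothetical: we cannot design the carbon cycle, but a policy can act like a simple proportional controller on a quantity we merely observe.

```python
# Minimal sketch (hypothetical numbers): we did not design the carbon cycle,
# but a policy can act as a feedback controller on an observed quantity.
def observe_capture_rate(year: int) -> float:
    """Stand-in for measurement of a natural system we did not design."""
    return 10.0 - 0.05 * year  # illustrative declining trend, GtC/yr

def policy_adjustment(observed: float, target: float, gain: float = 0.5) -> float:
    """Proportional feedback: nudge incentives based on the observed gap."""
    return gain * (target - observed)

target = 10.0   # hypothetical policy target, GtC/yr
incentive = 0.0
for year in range(5):
    observed = observe_capture_rate(year) + incentive
    incentive += policy_adjustment(observed, target)
    print(f"year {year}: observed={observed:.2f}, incentive={incentive:.2f}")
```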
Not all engineered or designed systems work toward a 'singular' goal; many fulfill multiple goals and objectives, each comprising several use cases derived from the operational scenarios and needs identified. For example, the Mars rover can perform chemical analysis of materials, but it can also use sensors to capture physical data and take pictures. Whether you call these use cases, objectives, or goals in whatever context you prefer, the important point for the definition of a system is that the emergent results of its underlying components can achieve many things for one system of interest (SOI).
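To make the "many goals, one SOI" point concrete, here is a hedged sketch; the component names and functions are illustrative stand-ins, not a real rover interface. One set of components jointly satisfies several distinct use cases:

```python
# Hedged sketch: one SOI whose components combine to satisfy several use
# cases. Names are illustrative, not an actual rover API.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    functions: set[str] = field(default_factory=set)

@dataclass
class SystemOfInterest:
    components: list[Component]

    def can_fulfill(self, use_case: set[str]) -> bool:
        """A use case is achievable if the components jointly provide every
        required function -- an emergent result of the whole set."""
        provided = set().union(*(c.functions for c in self.components))
        return use_case <= provided

rover = SystemOfInterest([
    Component("spectrometer", {"chemical_analysis"}),
    Component("camera", {"imaging"}),
    Component("sensor_suite", {"physical_measurement"}),
    Component("arm", {"sample_handling"}),
])

print(rover.can_fulfill({"chemical_analysis", "sample_handling"}))  # True
print(rover.can_fulfill({"imaging"}))                               # True
print(rover.can_fulfill({"drilling"}))                              # False
```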
Another often-missed aspect of this question is that systems are themselves always parts of other systems, and systems at a higher level of abstraction generally abstract away more detail. Enterprises are themselves systems of systems, and there are even systems of enterprises. A sort of infinite semiosis (see Charles Sanders Peirce) or 'infinite nesting' of information exists in our representations of systems. Whether a system has a greater effect than the sum of its components can also depend on its interactions with enabling systems. Think, for example, of a hydroelectric power plant: its components cannot work together without the water cycle pulling water from the surface and generating runoff into the streams that feed the plant. Without water, the plant's components may not create a greater effect than the sum of their parts, at least not as intended. Hence the rule of three: systems come as an SOI, its contextual systems, and its enabling systems.
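Here is a small sketch of that nesting and of the enabling-system dependency, with illustrative names (the check below is a stand-in for real mission analysis). The SOI's emergent effect only appears when its enabling systems are present in the context:

```python
# Hedged sketch of the 'rule of three': an SOI only produces its emergent
# effect when its enabling systems are present. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class System:
    name: str
    parts: list["System"] = field(default_factory=list)      # systems nest
    enabled_by: list["System"] = field(default_factory=list)

    def emergent_output(self, context: set[str]) -> str:
        # Without its enabling systems available in the context, the SOI's
        # parts cannot produce more than the sum of themselves.
        missing = [e.name for e in self.enabled_by if e.name not in context]
        if missing:
            return f"{self.name}: degraded (missing enablers: {missing})"
        return f"{self.name}: emergent effect achieved"

water_cycle = System("water_cycle")
plant = System(
    "hydroelectric_plant",
    parts=[System("turbine"), System("generator"), System("penstock")],
    enabled_by=[water_cycle],
)

print(plant.emergent_output(context={"water_cycle"}))  # effect achieved
print(plant.emergent_output(context=set()))            # degraded
```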
As we move through the class, be sure to think about the system principles presented by Professor Eftekhari Shahroudi and how they play into our interpretation of systems in the real world.
INCOSE (2015) A Complexity Primer for Systems Engineers
I tend to like the definition of complex systems provided by this INCOSE white paper, influenced by HG Sillitto; it largely reflects the viewpoints shared in class. The question is admittedly a bit subjective, since we're not applying any frameworks or measures of objective complexity, other than perhaps introducing them as a way to label what may be categorized as complex.
“In ordinary language, we often call something complex when we can’t fully understand its structure or behavior: it is uncertain, unpredictable, complicated, or just plain difficult”
Think about this situation: suppose that, for some engineered system, you need to meet very strict requirements (scope, schedule, budget, restrictions on parts, a challenging operating environment, a very low threshold for expected fuel purity, little expertise available for maintenance, and so on). The system might not have a relatively large number of elements; what makes it complex is that you have a very well-defined 'box' you need to think within, which requires a lot of creativity, strategy, and understanding of the environment. That may mean detailed analysis of contextual and enabling systems to ensure proper operation, even if the system itself is relatively simple. In that sense, it might be complex to consider and design, yet not a complicated system, and it may not have many components.
In a sense, 'complicated' is just one measure of objective complexity (others include, but are not limited to, unpredictability and uncertainty). Complexity as a word without context, by contrast, is often used subjectively: how complex something is perceived to be, as opposed to how complex it 'really is' relative to some statistically derived or otherwise quantitative standard.
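To illustrate what an 'objective' measure might look like, here is a toy example in Python. The metrics (element count, interface density, degree entropy) are illustrative conveniences I chose for the sketch, not standard INCOSE measures:

```python
import math

# Toy 'objective' complexity measures over a system's interface graph.
# The graph and metrics are illustrative, not a standard.
interfaces = {
    "pump":       {"controller", "pipe"},
    "pipe":       {"pump", "turbine"},
    "turbine":    {"pipe", "generator"},
    "generator":  {"turbine", "controller"},
    "controller": {"pump", "generator"},
}

n = len(interfaces)                                # element count
m = sum(len(v) for v in interfaces.values()) // 2  # undirected interfaces
density = m / (n * (n - 1) / 2)                    # share of possible links

# Shannon entropy of the degree distribution: one rough proxy for how
# uneven, and thus hard to summarize, the structure is.
degrees = [len(v) for v in interfaces.values()]
total = sum(degrees)
entropy = -sum((d / total) * math.log2(d / total) for d in degrees)

print(f"elements={n}, interfaces={m}, density={density:.2f}, "
      f"degree entropy={entropy:.2f} bits")
```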
Systems engineers who study complex systems often need to take both a subjective and an objective lens on complexity. We need to view complexity objectively to analyze how policies, processes, and methodologies (such as creating abstractions or models) can help reduce it, making a system less difficult and easier to understand (where that is possible). We must also consider subjective complexity to understand where complexity is merely perceived, as perception can bias where we focus our efforts while we ignore other emergent behavior (remember the lecture videos, and how focusing on details can lead us to miss certain emergent behaviors or structures in a system). In some cases, expert opinions on subjective complexity can help direct and bias our efforts in a positive way, but without a great deal of interdisciplinary tacit knowledge and expert-level abductive reasoning, our efforts often end up producing counterintuitive results.
I mostly wanted to see that you understand that there exist many viewpoints, both subjective and objective, on what complexity means, and that both matter. Some general attributes of complexity are how complicated, uncertain, unpredictable, or difficult to understand something may be.
One researcher to investigate for learning about objective complexity measures and management of complexity in large-scale architectures is HG Sillitto; you'll find his works heavily cited wherever you find papers on the management of complex systems. This is probably one of his most cited papers:
HG Sillitto: On Systems Architects and Systems Architecting: some thoughts on explaining and improving the art and science of systems architecting
If you are into software, Grady Booch (Chief Scientist of Software Engineering at IBM) has shared many insights on managing systems architecture complexity. Here is a great podcast from the IEEE Computer Society:
Grady Booch, IEEE Computer Society: Measuring Architectural Complexity
You might also look into John Gall's (1975) Systemantics: How Systems Really Work and How They Fail, which describes an approach many system architects now employ to control emergent behavior and inherent complexity in systems: start with a simple working system and integrate new components incrementally, observing and managing their behavior so that the 'level of understanding' of the system is maintained.
John Gall: Systemantics - How Systems Really Work and How They Fail
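Here is a hedged sketch of that incremental approach; the component names and the acceptance check below are stand-ins for whatever verification you would actually run after each integration step:

```python
import random

# Hedged sketch of the incremental approach Gall's observations suggest:
# start with a simple working system, add one component at a time, and
# keep the change only if the whole still behaves acceptably.
random.seed(1)

def behaves_acceptably(system: list[str]) -> bool:
    """Stand-in for whatever verification you run after each integration."""
    return random.random() > 0.02 * len(system) ** 2  # risk grows with size

system = ["core"]  # a simple system that works
backlog = ["sensor", "logger", "actuator", "telemetry", "autonomy"]

for component in backlog:
    candidate = system + [component]
    if behaves_acceptably(candidate):
        system = candidate  # understanding is maintained; keep it
    else:
        print(f"rolled back {component}: emergent behavior not understood")
print("integrated:", system)
```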
There is also the Cynefin framework (introduced to me by my peer Andrew Stearns), created in 1999 by Dave Snowden of IBM. Its fundamentals are easy to understand, and it provides a nice framework for managing the development of complex adaptive systems as they scale:
LAS Conference 2013 - Keynote Dave Snowden - Making Sense of Complexity
Complex Adaptive Systems - Dave Snowden - DDD Europe 2018
Of course, I would be remiss if I didn't also link this paper by Russell Ackoff, which focuses on different approaches to problem solving developed at Wharton. I think these kinds of approaches are very important for developing a tacit mechanism to question the design of systems and their potential complexities, rather than reaching for some quick, mitigating solution.
Russell Ackoff: The Art and Science of Mess Management
This next conference paper focuses on the terms 'growth' and 'development', and what each means or implies. Consider that growth often runs counter to development, whether at the same level or at some greater level of abstraction (e.g., the department level vs. the enterprise level).
Russell Ackoff: Transforming the Systems Movement
And here is the aforementioned seminal 1979 paper in which his 'Ackoff theory', and much of the holism principle, is derived:
Russell Ackoff: The Future of Operational Research is Past