Key Takeaways
- Development is not and will never be a linear process
- Scaffolding and emergence can be combined to support a more organic approach to delivery that stays focused on value
- Scaffolding can be used not only to support building the system but also to bootstrap knowledge
- Emergence is poorly understood, but embracing it allows value to be explored and contextualised with the business
- Both scaffolding and emergence can be accommodated by tweaking existing practices
This article outlines a different approach to the development of a system, based on scaffolding and emergence. The underlying change we made was to take an emergent approach and to leverage scaffolding, guided by three key tenets:
- We keep the focus on business needs and have no product owner, as a product owner is often only a proxy for the business. We acknowledge that customer needs are often emergent and poorly understood, so we would rather agree on a direction and explore what makes sense as the next step.
- We move forward incrementally from stable state to stable state. In doing so we make a series of promises, moving in the desired direction while retaining the ability to pivot or stop if the outcome is not desirable. The stable states allow us to ‘ratchet up’ and deliver the next stable state, which has business value.
- We look to exploit scaffolding to accelerate learning and delivery. In addition to scaffolding delivery, we also look to scaffold knowledge; the focus is not just technology but also knowledge.
Embracing emergence allows unarticulated needs to be explored in a collaborative manner. As already mentioned, we don’t privilege a ‘product owner’ here but deal directly with the business and explore what is needed from their perspective. The business value can be direct or indirect. By direct we mean it is part of the sale or service and charged directly to the customer. By indirect we mean the value may lie in accelerating delivery or in supporting the maintenance of the service offered by the company. Engaging directly with the business allows the type of value and the business benefit to be explored.
The use of scaffolding not only allows us to deliver faster, it also allows for deferring the development of explicit knowledge of the problem domain. This scaffolding of ‘knowledge’ proved to be extremely useful and allowed time for us to develop a deep understanding of the underlying technology and communication protocols related to the problem domain of the example use case.
We think that these practices would have value for other development initiatives. We view aspects of this way of working as complementary to other approaches such as continuous architecture.
Emergence
Emergence is poorly understood, but at a basic level we need to acknowledge that we don’t know everything at the start. It is only by engaging and exploring that we can learn what has utility, and this is what we mean by emergence. While the needs may be emergent, we typically have some idea of what we would like and/or the direction we would like to move in, and this can be used to set guiding constraints.
Contrast this with the traditional approach of defining a set of requirements: these are often based on a series of unsubstantiated assumptions, so they may be incorrect and may also be incomplete, with gaps. It is true that we need to know what we are building before we start coding, so we need to be able to bridge these gaps while still allowing for the emergence of unarticulated needs, which only become apparent by using the system.
We need to ensure there is an idea of ‘done’ before starting development, and the articulation of this will depend upon context. It is the old saying that a problem well defined is half solved. Lean has the idea of a ‘complete kit’, where a task is not started before all the relevant information and material is available.
“While tacit knowledge can be possessed by itself, explicit knowledge must rely on being tacitly understood and applied. Hence all knowledge is either tacit or rooted in tacit knowledge. A wholly explicit knowledge is unthinkable.” (Polanyi)
We can also look at this from a needs perspective, where we believe there are three types of need that map to the clear, complicated, and complex domains of Cynefin. These are:
- Those that are known, or known knowns
- Those that are unknown, or known unknowns
- And those that are unknown unknowns
Some would argue that known knowns should be simple to deal with and easy to elaborate, but even here caution is needed. I can recall working on the development of a new card settlement system where we needed to be able to deal with blacklisted cards. The assumption was that a card would either be blacklisted or not, but we were advised that the current system could return ‘yes’, ‘no’, or ‘maybe’, and no one could explain the latter. We had made the mistake of assuming this was clear and obvious, but it was really a complicated issue, and resolution was both time-consuming and costly.
We have a great deal of experience addressing the second type of need, known unknowns, and you could argue that agile practices accommodate the articulation of these needs, with related practices such as innovation games helping here. This is broadly the case, and iterative development is helpful as it allows us to articulate these elements and to incorporate them.
The challenge comes when dealing with unarticulated needs, or unknown unknowns. Since we do not know what we do not know, we need a way of dealing with their emergence, and this is something traditional development approaches are poor at managing. We also need to acknowledge that realisation requires articulation of the need, as we need an outline of the approach or solution in order to be able to write code. Here we see the value in the liminal state of Cynefin, where we hold the options open, exploring what has utility and committing to development only when we have consensus. In practice, we found that the customer is often clear on the ask from their perspective, but we need to spike things to look at realisation and to ensure we have clarity before committing. This not only provides confirmation of the ask but also makes the needs clear enough to support a definition of ‘done’.
Related to this, we are curious about interfaces but not preoccupied with them, as the focus is mainly on functional areas driven by the business. The scaffolding may help here and offer predetermined natural boundaries that need to be observed. The functional areas may become domains, but that is not a given.
From a delivery perspective, we engage in small pieces of work that allow us to explore both the system architecture and the functionality that needs to be supported. These are value steps, and we use epics as these are understood by the agile community and offer an appropriate level of granularity. They are in turn composed of stories and tasks as appropriate. A story could look at a particular interface that is needed to support desired functionality, while acknowledging that both may evolve over time and that we can predict little. An example of an epic may be monitoring of the system, as epics do not need to address only functional aspects but may also encompass non-functional elements.
At the story level, we may explore the options for providing some functionality or capability. The idea is to explore it in a low-cost and quick manner that will in turn allow value to be delivered. We don’t load this into the feature itself, as we want to have an idea of how it will be supported and delivered before we commit to adding it to an epic. So the system architecture is being explored and elaborated in each step, and each of these is a ‘stable state’ that allows value to be ratcheted up.
This avoids the need for big upfront planning while allowing us to focus on what has value to the business and keeping the requirements open. We do not maintain a large backlog, which also helped with prioritisation and allowed the business to be truly agile and responsive to customer needs that had not been anticipated.
Scaffolding
To support the use of an emergent approach, we also make use of scaffolding, which helps bootstrap the system and provides the first point of stability. There are different types of scaffolding, but at a high level they can be internal or external, and temporary or permanent in nature. Internal scaffolding typically provides a structure that you can build on; by definition it may become part of the structure and therefore permanent. External scaffolding tends to constrain the system while supporting the move to the next stable state, but as it is external it is typically temporary: longer term it usually becomes redundant as the internal structure can support itself.
People often think of scaffolding as simply something that allows you to defer effort, but what is often overlooked is its ability to let us defer knowledge exploration, as the scaffolding already contains an ‘encoding’ of domain knowledge. This is not just knowledge of how we build something but direct business knowledge that would otherwise take us time to gain. Where this is the case, we can exploit it to deliver business value early without needing to become knowledgeable in the business domain; we are deferring, and allowing time for, the development of tacit knowledge.
We can also use scaffolding to bind or bridge elements of the system. Here we can use existing tools that allow for the exchange of information in a suitable canonical structure, which is more efficient than translating between a mix of element-specific formats. These become integral parts of the system and are long-lived.
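As an illustration of what we mean by a canonical structure, the sketch below shows one possible shape for an event that every element of the system could exchange. It is a minimal sketch only; the field names are hypothetical and not taken from the actual code base.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class CanonicalEvent:
    """Illustrative canonical structure used to bridge system elements.

    The field names are hypothetical; the point is that every element
    (collector, store, processor, presenter) exchanges the same shape.
    """
    source: str      # e.g. a location or installation identifier
    address: str     # e.g. a KNX group address such as "1/2/3"
    value: float     # decoded payload value
    unit: str = ""   # optional unit of measure
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Any element can produce or consume this shape without knowing the
# internals of the element on the other side of the bridge.
event = CanonicalEvent(source="site-1", address="1/2/3", value=21.5, unit="degC")
```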
An extension of this is that the core elements are integrated early in the delivery cycle, which supports exploration of any assumptions that have been made. Combined with trunk-based development, this means that every change is implicitly evaluated in a holistic manner, so we do not need explicit integration testing.
The scaffolding in this case was a collection of open-source components, selected based on a simple functional decomposition of the high-level business requirements and a suitable level of abstraction of the underlying technology domain. As already mentioned, this means that we do not need to undertake elaborate or extensive analysis of the business requirements and can often gain sufficient information from a short conversation with the business.
Once we have a basic understanding of the business needs, we can look at the options to support bootstrapping the system. This may require more than one component; we may find some code that addresses a specific, essential business need but still need a means of storing and managing data.
The key criterion we have used for selection (putting aside language preferences for now) is a suitable level of abstraction of the problem domain, which comes back to the point of scaffolding knowledge. We need the interface to be simple and intuitive in nature to avoid the need to develop tacit domain knowledge unless we already have it. Where we are familiar with a particular capability, such as data presentation, we can use that to guide the initial selection of the technology or code base.
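To make "a suitable level of abstraction" concrete, the sketch below shows the kind of interface we look for in a library: decoding a raw payload into an engineering value should be a single, obvious call, with the protocol detail hidden behind it. The names and the stub implementation are hypothetical and not taken from any specific library.

```python
from typing import Protocol

class DptDecoder(Protocol):
    """The shape of interface we look for in scaffolding: one obvious call,
    with the KNX datapoint detail hidden behind it (names are illustrative)."""

    def decode(self, dpt_type: str, raw: bytes) -> float:
        """Turn a raw KNX payload into an engineering value."""
        ...

def to_reading(decoder: DptDecoder, dpt_type: str, raw: bytes) -> float:
    # Calling code needs no KNX knowledge beyond the DPT type string.
    return decoder.decode(dpt_type, raw)

class FakeTemperatureDecoder:
    """Stand-in implementation used only to show the calling pattern."""
    def decode(self, dpt_type: str, raw: bytes) -> float:
        # A real library maps dpt_type to the correct conversion; this stub
        # just pretends every payload is a value scaled by 100.
        return int.from_bytes(raw, "big") / 100.0

print(to_reading(FakeTemperatureDecoder(), "9.001", b"\x08\x66"))  # prints 21.5
```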
Case Example - KNX Monitoring
The case example was a request to develop a KNX monitoring system that could be used to present and analyse all the devices deployed in an installation. KNX is a binary protocol, defined by an ISO standard, that supports the automation and monitoring of all the elements of homes and offices. In this case, we had a basic functional breakdown that allowed us to understand the direction we wanted to evolve in, but we lacked any domain knowledge. The general direction was that we needed to be able to collect and present metrics from deployments, then to be able to raise alerts, and finally to close the loop so that metrics would give rise to actions. We also needed to be able to support this at scale, as there could be a large number of deployments.
These functions are unique to this situation, so in a different case you would need to go through a similar process. It is worth noting that these functional areas came out of the discussion and were identified over time, and therefore emerged. We did not look to undertake a long or detailed analysis but started to move forward as soon as we had established the initial features and the first value step. These initial features were collection and presentation, as these provided the visibility desired and covered most of the technical issues such as connectivity, collection of the device events, and presentation. A summary of the key capabilities that were established over time is:
- Collection - based on threading for multiple locations and integration of KNX ETS files so device types are known and can be mapped (a minimal sketch of the threaded collection follows this list)
- Presentation - visualisation of the events, metrics, and state for the devices and groups that make up an installation
- Processing - analysis of the events and raising alerts based on conditions
- Acting - sending alerts to support action and also automating actions in the system
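The sketch below illustrates the collection capability under the assumption of one worker thread per location pushing events onto a shared queue. The connection helper and event shapes are hypothetical placeholders, not the actual implementation.

```python
import queue
import threading

def collect(location: str, events: queue.Queue) -> None:
    """Worker: read events from one installation and push them onto a
    shared queue in the canonical structure (connection code elided)."""
    # connection = open_knx_tunnel(location)            # hypothetical helper
    # for raw in connection.events():
    #     events.put(decode_to_canonical(location, raw))
    events.put(f"example event from {location}")  # placeholder for the sketch

def run_collectors(locations: list[str]) -> queue.Queue:
    events: queue.Queue = queue.Queue()
    for location in locations:
        worker = threading.Thread(
            target=collect, args=(location, events), daemon=True, name=location
        )
        worker.start()
    return events

events = run_collectors(["site-1", "site-2"])
print(events.get())  # downstream storage and processing consume from here
```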
The basic scaffolding was provided by leveraging an open-source library for the KNX DPT (Datapoint Types) element and another for the main data storage and processing, with the view that these elements could be swapped out later if issues of a commercial or performance nature arose. We also took an ‘event sourcing’ approach and ensured that all events were captured. Not only does this support observability, it also ensures that we can recreate any particular view that is needed, allowing specific design decisions to be deferred.
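A minimal sketch of the event-sourcing idea: every event is appended to a log exactly as it arrived, and any view (here, the latest value per address) can be rebuilt from the log on demand. The in-memory list and dictionary stand in for the real data store and are illustrative only.

```python
event_log: list[dict] = []  # append-only; in practice this lives in the data store

def record(event: dict) -> None:
    """Capture every event exactly as it arrived."""
    event_log.append(event)

def latest_value_per_address(log: list[dict]) -> dict:
    """One possible view, rebuilt on demand from the full log.
    New views can be added later without changing how events are captured."""
    view: dict = {}
    for event in log:
        view[event["address"]] = event["value"]
    return view

record({"address": "1/2/3", "value": 21.0})
record({"address": "1/2/3", "value": 21.5})
record({"address": "4/0/1", "value": 0})
print(latest_value_per_address(event_log))  # {'1/2/3': 21.5, '4/0/1': 0}
```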
In terms of open source, we are primarily interested in leveraging MIT licensing due to the flexibility it provides, but are happy to use Apache-licensed components for isolated parts that are unlikely to require change. The idea was to get the system architecture built out in a short time while reducing the learning cost with regard to the complicated nature of KNX and its devices and group address structure. We complemented this with the Influx TICK stack for data collection and processing. In time we can switch out elements of the stack or recode them if they become constraints, but the core stack is good for a few thousand events a second and we are seeing on the order of 800-1000 events an hour from a typical KNX deployment, which is less than an event a second on average per location.
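A minimal sketch of pushing a collected event into the stack, assuming the InfluxDB 1.x Python client (the `influxdb` package) that pairs with the TICK stack; the host, database, measurement, and field names are illustrative. At 800-1000 events an hour per location this write path sits well inside the headroom of a stack rated for thousands of events a second.

```python
from influxdb import InfluxDBClient  # InfluxDB 1.x client used with the TICK stack

client = InfluxDBClient(host="localhost", port=8086, database="knx")

def write_event(location: str, address: str, value: float) -> None:
    # Each KNX event becomes one point; Kapacitor and Chronograf handle
    # processing and presentation further down the stack.
    client.write_points([
        {
            "measurement": "knx_event",
            "tags": {"location": location, "group_address": address},
            "fields": {"value": float(value)},
        }
    ])

write_event("site-1", "1/2/3", 21.5)
```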
As noted above, we did not start with a predefined series of requirements but explored them in a collaborative manner, which allows for learning. We have structured the WoE (Ways of Engagement) around epics, which are short pieces of work (days to a week) that deliver some new features. The features reflect the client’s needs, which means we can take into account new needs and also allow the client to experience the system before committing to further development. This is iterative in nature and means that development is always focused on what is needed. It also allows for learning with regard to the stack and libraries we are using, but to date they have proved to be suitable.
These don’t have to be purely functional; one example was the monitoring and recovery of the installation connections. The basic epic covered recovery of a connection when it errored, but we added a story for hung connections to address a state that we observed. Addressing this means the system is largely self-healing and largely runs itself.
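A minimal sketch of how the hung-connection story could be handled, assuming we track when each location last produced an event: errored connections announce themselves, but hung ones simply go quiet, so silence beyond a threshold triggers a reconnect. The timings and the reconnect helper are hypothetical, not the actual implementation.

```python
import time
import threading

last_event_at: dict[str, float] = {}   # updated by each collector thread
HUNG_AFTER_SECONDS = 15 * 60           # no traffic for this long => assume hung

def watchdog(reconnect) -> None:
    """Periodically check each connection; a hung connection raises no error,
    so we detect it by the absence of events and trigger a reconnect."""
    while True:
        now = time.monotonic()
        for location, seen in list(last_event_at.items()):
            if now - seen > HUNG_AFTER_SECONDS:
                reconnect(location)            # hypothetical recovery helper
                last_event_at[location] = now  # avoid an immediate re-trigger
        time.sleep(60)

# threading.Thread(target=watchdog, args=(reconnect_location,), daemon=True).start()
```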
We have also simplified the code and testing approach, using a master/trunk (now commonly referred to as ‘main’) branch development model, so all code is committed and tested on a daily basis when we are doing feature development. If we need to, we can use feature toggles to limit the use of a feature, but as changes are generally additive this is rarely necessary. Feature toggling was used to support the migration from InfluxDB V1 to V2.
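A minimal sketch of the toggle idea as it could apply to the V1 to V2 migration, assuming an environment-variable flag and the official Python clients for each version; the flag name, connection details, and point shape are illustrative. The calling code never knows which backend is live, so the toggle can be flipped on trunk and removed once the migration is complete.

```python
import os

USE_INFLUX_V2 = os.environ.get("USE_INFLUX_V2", "false").lower() == "true"

def make_writer():
    """Return a write function for whichever backend the toggle selects."""
    if USE_INFLUX_V2:
        from influxdb_client import InfluxDBClient, Point
        from influxdb_client.client.write_api import SYNCHRONOUS
        client = InfluxDBClient(url="http://localhost:8086",
                                token="my-token", org="my-org")
        write_api = client.write_api(write_options=SYNCHRONOUS)

        def write(location: str, address: str, value: float) -> None:
            point = (Point("knx_event")
                     .tag("location", location)
                     .tag("group_address", address)
                     .field("value", float(value)))
            write_api.write(bucket="knx", record=point)
    else:
        from influxdb import InfluxDBClient
        client = InfluxDBClient(host="localhost", port=8086, database="knx")

        def write(location: str, address: str, value: float) -> None:
            # Same point shape as the V1 write sketch earlier in the article.
            client.write_points([{
                "measurement": "knx_event",
                "tags": {"location": location, "group_address": address},
                "fields": {"value": float(value)},
            }])
    return write

write_event = make_writer()
write_event("site-1", "1/2/3", 21.5)
```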
Closing
We believe that the practices we have explored to support emergence and scaffolding have generic applicability to system delivery. The practices around emergence avoided the need for formal requirements and offered a simple way of working and engaging with the client. The use of scaffolding to address the technology via open source meant that we did not need to develop a deep understanding of the KNX technology initially, and this translated directly into early business benefit. This meant we could be both time-efficient and responsive to business needs as they became clear.