Faster, Better, Higher – But How?
One of the main challenges when designing software architecture is the consideration of quality attributes. Not only their design but also their specification turns out to be difficult. Consequently, many problems in software systems are directly related to the specification and design of quality attributes such as modifiability or performance, to name just a few. Quality attributes have a major influence on customer and end-user acceptance. Addressing them in a systematic and appropriate way is an important but challenging endeavor.
The Mess with Quality Attributes
Quality attributes are difficult to handle because each needs to be treated differently and typically affects not only parts of a system but the system as a whole. For example, engineers can’t confine security or performance to one single place in the system. Such attempts turn out to be impossible in most contexts, because these concerns are cross-cutting and often even invasive, that is, they require software engineers to inject design and code into existing components. In other words, most quality attributes are systemic and need global, strategic treatment.
Grady Booch once said that software architecture is about everything costly to change, which applies particularly well to quality attributes. This is also in the spirit of Martin Fowler, who noted that software architecture is about the important things - whatever that means.
Understanding Is Key for Successful Design
But how can software engineers cope with this complexity? The first step is to make sure that all requirements are well understood and prioritized. It is not really helpful to deal with requirement specifications such as “should have high performance” or “needs to offer high security”. These statements might appear logical from a high-level view, but what exactly do they mean in practice?
The first challenge and precondition for any good design is to create a good requirements specification. But what the hell is a good requirement? According to ISO, a good requirement exhibits the following properties:
- cohesive: the requirement should address only one thing.
- complete: it should be fully stated in one place with no information missing.
- consistent: the requirement must not contradict other requirements and should be fully consistent with all documentation.
- correct: all business needs of the stakeholders are met.
- current: the requirement has not been made obsolete during the project.
- externally observable: the requirement specifies a characteristic of the product that is externally observable or experienced by the user. “Requirements” that specify internal architecture, design, implementation, or testing decisions are actually constraints and should be clearly articulated in the constraints section of the requirements document.
- feasible: the requirement can be implemented within the constraints of the project.
- unambiguous: the requirement is concisely stated without recourse to technical jargon, acronyms (unless defined elsewhere in the requirements document), or other esoteric verbiage. It expresses objective facts, not subjective opinions.
- mandatory: the requirement represents a stakeholder-defined characteristic whose absence would result in a deficiency that cannot be ameliorated.
- verifiable: the implementation of the requirement can be determined through one of four possible methods: inspection, analysis, demonstration, or test.
Whenever they receive a requirements specification, software engineers should ensure its quality. A bad requirements specification will inevitably lead to a bad software system, no matter how good the designers and implementers happen to be. Thus, testing and assessing the requirements specification is of utmost importance.
Beware of Hidden Secrets
It is important to note in this context that architects should also be aware of hidden or implicit requirements. It might not be documented anywhere, but a smartphone should be responsive at all times, and its keyboard and display should be located on the same side. These requirements might seem obvious, but that does not hold for all implicit requirements. It is beyond the scope of this article, but a Kano analysis might be helpful to address this issue.
Another important point is that a stakeholder should get what she needs, not what she wants. There is the old story of the car owner who complains to his dealer that he needs better bumpers: whenever he drives along the highway, there is this damned rabbit crossing the lane, and whenever he tries not to hit the rabbit, the car slides on the wet road and hits a tree instead. Of course, the dealer could sell him a car with better bumpers, but it makes a lot more sense to sell the customer a car with ABS.
Last but not least, it is always essential to consider the context when addressing quality attributes. Suppose your system is required to offer high security, whatever that means. Even if your application is the most secure system in the universe, it won’t succeed if you need to integrate insecure legacy code that opens a backdoor into your overall system. Remember, the quality of any system is always determined by its weakest part.
The World isn’t Flat
When dealing with quality attributes, an important thing to consider is the multidimensional universe they create. For example, a quality such as performance reveals multiple facets - the following categorization was proposed in an IBM paper:
- It might address throughput, for example the number of messages a communication middleware is able to transmit per second.
- It could also specify response time, the time between a request and its response.
- I/O speed refers to the bulk data processed by a system.
- Perceived performance is the speed a user experiences when interacting with a system.
- It might also cover start-up time, the time a system requires from start-up to full operability or availability.
- Last but not least, it could also define scalability – although some might disagree that this really belongs to performance.
Thus, when dealing with performance software engineers should elaborate on what facets of performance they are actually referring to. Dealing with I/O speed might imply other consequences than dealing with throughput.
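To make that distinction concrete, here is a minimal Python sketch (the `measure` helper and its field names are invented for illustration) showing that throughput and response time are separate measurements even for the very same workload:

```python
import time

def measure(operation, requests):
    """Measure two distinct performance facets for one workload."""
    latencies = []
    start = time.perf_counter()
    for request in requests:
        t0 = time.perf_counter()
        operation(request)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "throughput_per_s": len(requests) / elapsed,        # facet: throughput
        "avg_response_s": sum(latencies) / len(latencies),  # facet: response time
    }
```

A batch-oriented system may score well on throughput while individual requests wait a long time, so a requirement must name the facet it constrains.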
A good way to document, model, and express quality attributes more specifically is a scenario-based approach, as introduced by architecture evaluation methods such as ATAM. All important qualities should be specified using scenarios. In a scenario, an external user or external system triggers an event on the system to be developed or on one of its parts.
For instance, a botnet might fire a high rate of requests at a Web server in order to make it unavailable. The system, the web server in our case, might also be in one particular state, for example connected to or disconnected from the network. Whenever the event arrives, the system should react in a specific way, such as blocking network access when a distributed denial-of-service (DDoS) attack is detected by analyzing traffic patterns. And we should also quantify our system’s reaction by a response measure. For example, a DDoS attack should be detected and addressed within 3 minutes. The system is considered a black or grey box in this approach.
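The parts of such a scenario can be captured in a simple record. The following Python sketch uses the six-part scenario form popularized by ATAM; the field names and the example values merely restate the DDoS example above and are not prescribed by the method:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QualityScenario:
    source: str            # who or what triggers the stimulus
    stimulus: str          # the event arriving at the system
    environment: str       # the state the system is in
    artifact: str          # the part of the system being stimulated
    response: str          # how the system should react
    response_measure: str  # how the reaction is quantified

ddos = QualityScenario(
    source="botnet",
    stimulus="high rate of requests",
    environment="connected to the network",
    artifact="web server",
    response="block network access after traffic-pattern analysis",
    response_measure="attack detected and addressed within 3 minutes",
)
```

Writing scenarios down in such a uniform shape makes the response measure explicit, which is exactly what vague statements like “needs to offer high security” lack.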
Scenarios provide many benefits such as:
- They are concrete, specific and concise in that they address one specific aspect of a quality attribute applied to one specific part of the system.
- Scenarios help to approach quality attributes in a qualitative but also in a quantitative way.
- Maybe the most important advantage of scenarios is their understandability by stakeholders. Both business stakeholders and engineers can understand, define, and discuss scenarios.
- Last but not least, scenarios help test managers to specify the tests for ensuring the quality attributes are really achieved in a product or solution.
Utility Trees are our Friends
The next step is to create a utility tree that specifies the quality attributes in a tree-like order with the leaves containing the scenarios. This utility tree can either be jointly defined by business and development stakeholders or prepared as a base for discussion by architects. In a workshop, business and development stakeholders assign metrics to each scenario. For example, business stakeholders might rate the importance of the scenario as high, medium, or low, while engineers might rate the complexity of its implementation. Both values together help prioritize the scenarios. Obviously, a highly important scenario that is complex to implement should have higher priority than a scenario with low business relevance.
The nodes directly located under the root of the utility tree are typically the high-level quality attributes such as availability or modifiability. The intermediate nodes in the utility tree represent the facets, as previously shown for performance, while the leaves are the scenarios.
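As a sketch, such a utility tree and the two-value prioritization could be modeled as follows; the attributes, facets, scenarios, and ratings are invented for illustration, and summing the two ratings is just one simple way to combine them:

```python
from dataclasses import dataclass

RATING = {"high": 3, "medium": 2, "low": 1}

@dataclass
class Scenario:
    description: str
    importance: str  # rated by business stakeholders
    complexity: str  # rated by engineers

    def priority(self) -> int:
        # high importance combined with high complexity ranks first
        return RATING[self.importance] + RATING[self.complexity]

# utility tree: quality attribute -> facet -> leaf scenarios
utility_tree = {
    "performance": {
        "response time": [Scenario("search responds within 1 s", "high", "medium")],
        "throughput": [Scenario("handle 10,000 messages/s", "medium", "high")],
    },
    "availability": {
        "fault recovery": [Scenario("failover within 30 s", "high", "high")],
    },
}

leaves = [s for facets in utility_tree.values()
            for scenarios in facets.values() for s in scenarios]
ranked = sorted(leaves, key=lambda s: s.priority(), reverse=True)
```

The resulting order tells architects which scenarios deserve strategic design attention first.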
Strategic and Tactical Design
For each scenario, starting with the most important, that is, the strategic ones, so-called design tactics might be applied.
Let us consider a design tactics diagram for performance. In order to address performance in an application, software engineers might use different strategies, such as an appropriate resource management strategy. For each strategy we can apply design tactics; in resource management we might leverage tactics such as lazy evaluation or caching. A design tactics diagram shows us many strategies and tactics for addressing a particular quality attribute.
After we have chosen the design tactics to implement a scenario, we normally dive into more details. Some design tactics represent design patterns themselves, while for others multiple design patterns might exist that support the implementation of the respective design tactic. Thus, selecting the right patterns or idioms is the main responsibility of software architects and developers in this phase.
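The two resource-management tactics mentioned above map directly to well-known idioms. As a minimal Python sketch (the functions and values are made up for illustration):

```python
from functools import lru_cache

# Tactic: caching - repeated identical requests are answered from memory
# instead of recomputing or re-fetching an expensive result.
@lru_cache(maxsize=256)
def exchange_rate(currency: str) -> float:
    # stands in for an expensive lookup, e.g. a remote service call
    return {"EUR": 1.0, "USD": 1.08}.get(currency, 0.0)

# Tactic: lazy evaluation - report rows are produced only when the
# consumer actually iterates, not eagerly up front.
def report_rows(orders):
    for order in orders:
        yield f"order {order}"
```

Whether `lru_cache` or a generator is the right realization is exactly the kind of pattern-level decision the text describes as the next refinement step.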
Essence of Traceability
One observation is that this overall approach “automagically” provides requirements traceability. Remember that we started by specifying quality attributes precisely via scenarios, which we then prioritized. Then we concretized scenario by scenario, in descending priority, using design tactics for which we eventually introduced design patterns. Now we have a bidirectional relation between architecture drivers and architecture decisions. Whenever a scenario is modified or reprioritized, we can track down all affected architecture components. And vice versa: when modifying an architecture component, we can check which requirements are affected by the modification.
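This bidirectional relation can even be kept machine-checkable. A sketch, with invented scenario and component names, keeps the forward map explicit and derives the reverse map from it:

```python
from collections import defaultdict

# forward direction: scenario (architecture driver) -> realizing decisions
decisions_for = {
    "DDoS addressed within 3 minutes": ["traffic analyzer", "firewall rules"],
    "search responds within 1 second": ["result cache", "search index"],
}

# derived reverse direction: component -> scenarios it realizes
scenarios_for = defaultdict(list)
for scenario, components in decisions_for.items():
    for component in components:
        scenarios_for[component].append(scenario)

# before touching "result cache", check which requirements are affected
affected = scenarios_for["result cache"]
```

Deriving one direction from the other avoids the two maps drifting apart as scenarios are reprioritized.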
In ATAM, two important terms have been coined. A sensitivity point in an architecture denotes an architecture decision or component that is directly related to a quality attribute. A tradeoff point is an architecture decision or component that is a sensitivity point for at least two quality attributes. It is important to know the sensitivity and tradeoff points. They are the levers architects can use, but they must be handled with care.
What we have just learned is what the CMU SEI calls ADD (Attribute-Driven Design). If you are interested in more details, just visit their website.
Bubbles don’t crash
In practice, it is not sufficient to consider only architectural aspects when dealing with quality attributes. Bertrand Meyer once said “Bubbles don’t crash” and “All you need is code”: diagrams alone won’t reveal whether the qualities are actually achieved.
Technology decisions have a high impact on achieving qualities. The kind of application server, persistence store, or messaging middleware used might be of high relevance.
Thus, architecture design should be supplemented by feasibility prototypes or simulations. With feasibility prototypes we can evaluate whether specific quality attributes can actually be achieved. Simulations serve as surrogates for missing parts, as is typical in systems engineering, where hardware development cycles are slower than software development cycles.
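A feasibility prototype can be as small as a timing harness that checks a candidate implementation against the response measure from its scenario. A hedged Python sketch, with an invented helper and an arbitrary percentile choice:

```python
import time

def meets_response_measure(operation, runs, budget_s):
    """Feasibility check: does the prototype meet its response measure?"""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - t0)
    samples.sort()
    p95 = samples[int(0.95 * (len(samples) - 1))]  # 95th-percentile latency
    return p95 <= budget_s
```

If the prototype already misses its budget under lab conditions, the architecture decision behind it should be revisited before any large-scale implementation starts.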
While functional requirements might be challenging, quality attributes are even more difficult to implement, as they mostly represent crosscutting concerns. Thus, they should be handled with care. Any ad-hoc treatment inevitably leads to bad design, as premature performance optimization illustrates in many projects.
For handling quality attributes in a systematic way, they need to be specified appropriately, understood, prioritized, mapped to architecture decisions, and verified by Quality Assurance. One effective way for such systematic design is leveraging a scenario-based method which starts with a utility tree, helps modeling quality attributes using scenarios, and introduces design tactics and patterns for the actual design of the scenarios. Applied in the correct way, projects will profit from bidirectional traceability between architecture drivers and architecture decisions, in particular its sensitivity and tradeoff points. Scenarios are also helpful for defining risk-based testing methods to verify the achievement of quality attributes.
For readers interested in more details, reading literature such as Software Architecture in Practice is the right way to proceed.
About the Author
Michael Stal is a Principal Engineer at SIEMENS as well as a professor at the University of Groningen. He coaches and mentors customers on software architecture and distributed system technologies for large systems. Michael also has a background in programming paradigms and platforms. At SIEMENS he is a trainer in the education programs for (senior) software architects. He co-authored the first two volumes of the book series Pattern-Oriented Software Architecture (POSA). Currently, he is experiencing the joy of functional programming and serves as editor-in-chief of the German JavaSPEKTRUM magazine. In his spare time, Michael enjoys running, biking, literature, and digital photography.