
A Reference Architecture for the Internet of Things (Part 2)


In our article from mid-January we introduced our reference architecture RILA (Reference IoT Layered Architecture), which tries to break down an abstract IoT reference architecture to a more understandable level. We introduced each layer and promised a follow-up article showing how to map the architecture's layers to actual real-world use cases.

Voilà, this is the promised follow-up article, where we go from reference architecture towards concrete architecture and implementation ideas. Keep in mind that we should not just take a reference architecture and try to “implement it” right away; rather, we can take it as a “pattern” that defines which components we need in an IoT “system”. At first glance a concrete implementation might differ quite a bit from the reference architecture’s structure, but if we carefully map the architecture to the implementation components we will find the same components in the end.

In order to map RILA to a concrete use case we picked two use cases from the industrial sector that might soon become reality. The following figure shows the first use case - let’s call it the “fridge” use case:

The basic idea of the use case is to reduce the danger of energy peaks (power overload) in electrical power plants by triggering a number of fridges (this can be a complete city or region) to cool whenever a peak is likely to occur. The goal is thus for the power plant to trigger the action “cool” on a number of smart fridges. The dashed line represents an (optional) feedback loop from the fridge back to the power plant. The power plant can then evaluate how many fridges will actually trigger cooling and decide whether this is enough to reduce the peak or whether additional measures are necessary. The following table shows the use case’s actors, goal, pre-condition, high-level success scenario and post-condition:



Actors:

- fridge vendor
- energy provider
- fridge on-board system
- power-plant control system


Goal: The power-plant control system sends a cooling action to the fridge on-board system that triggers the fridge to cool at a certain, controlled point in time, so that an energy peak is avoided.

Pre-condition:

The fridge vendor and the energy provider have agreed on a certain communication protocol that allows the fridge to receive actions and send the fridge’s context situation.

High-Level Success Scenario:

1. The end-user buys a fridge produced by the fridge vendor.

2. The end-user configures the fridge to be connected to the internet.

3. The fridge on-board system connects to the power-plant control system.

4. The energy provider’s power-plant control system notices an energy peak and sends an action to the fridge on-board system.

5. The fridge on-board system receives the action and decides if it will actually cool or not.

6. The fridge on-board system provides its Context Situation to the power-plant control system that indicates that it will cool.

7. The power-plant control system receives the Context Situation from the fridge on-board system, processes and stores it.

8. The fridge on-board system triggers the cooling aggregate of the fridge and starts cooling.



Post-condition:

- The fridge on-board system started cooling.
- The power-plant control system knows that the fridge started cooling.

Both the fridge on-board system and the power-plant control system have to build and monitor their context situations. We have two context situations in the use case:

  1. The context situation of the power-plant control system
  2. The context situation of the fridge on-board system

The fridge on-board system has rather simple context management; the context situation it creates has to enable it to decide whether it wants to cool or not. The power-plant control system has a more complex context situation; it also has to be able to consume the fridge’s context situation and decide according to it. However, the power plant’s context situation does not have to be published in our scenario - it is enough to send an action event to the fridge.
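To make the two context situations more concrete, here is a minimal sketch of how they could be modeled. All field names (temperature, cooling confirmation, predicted peak load) are illustrative assumptions, not part of RILA itself:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FridgeContextSituation:
    """What a single fridge reports back to the power plant (assumed fields)."""
    fridge_id: str
    current_temperature_c: float
    last_cooled_at: datetime
    will_cool: bool  # the fridge's answer to a cooling action

@dataclass
class PowerPlantContextSituation:
    """The power plant's view: the predicted peak plus all fridge reports."""
    predicted_peak_kw: float
    fridge_reports: list = field(default_factory=list)

    def committed_fridges(self):
        """Fridges that confirmed they will cool."""
        return [f for f in self.fridge_reports if f.will_cool]
```

The asymmetry of the two context situations is visible here: the fridge's situation is a handful of local values, while the power plant's situation aggregates many fridge reports and has to be evaluated against the predicted peak.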

Now that we have defined the first use case, let’s map our reference architecture RILA onto it. Remember that we defined six layers:

  1. Application Integration
  2. Thing Integration
  3. Context Management
  4. Data Management
  5. Device Management
  6. Device Integration

The following figure shows how the layers are mapped to the current use case.

The layers in the figure are depicted in different colors: some are black and some are gray. Layers depicted in gray will most likely be designed in a very simple way or may even be unnecessary in this scenario.

Generally we can see that all layers exist on both sides - both the fridge and the power plant will implement all layers in one way or another. However, the implementation complexity will depend on the thing at hand and on the functional use case. We have to understand each thing (or the domain of each thing, if you want to think in terms of domain-driven design) in order to decide to what extent we want to design and implement each layer on each thing. Note that the scenario in the figure above could be depicted differently - context management on the fridge could, for example, be considered more complex and thus not be gray. This would require a “smarter” fridge, and we defined our fridge as rather “dumb” in our use case. Mapping the reference architecture onto the scenario is one thing; designing each layer is another. In the following paragraphs we interpret the design for our scenario.

In our scenario we let the user communicate with the fridge using a smartphone. The application integration layer is definitely needed on the fridge, as we have to implement the communication with the smartphone app. In a way we might also have to implement a certain level of application integration on the energy provider’s side. It is, however, questionable whether this concerns the use case at hand (where we only focus on the power-plant control system that sends impulses to the fridge).

Thing integration is needed on both sides - but not in a very complex form. For a first prototype the thing discovery module can be rather simple, as one can assume that the fridge always initiates the communication with the power plant. Establishing the actual connection is already very implementation-specific.

When it comes to context management, we first have to agree on a context situation for the fridge and an action that is sent to the fridge. Context management is complex on the power plant side but rather simple on the fridge. The power plant side can be seen as a black box here, because we will most likely have to integrate already existing systems that detect and predict peaks. Once a peak is detected, the action for the fridges is triggered and passed to the thing integration layer, which distributes it to registered fridges. The fridge then just receives the action, decides whether it wants to cool (this can be implemented with simple time constraints in a first prototype) and replies to the power plant whether it will cool or not.
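The fridge-side decision just described can be sketched in a few lines. This is a minimal illustration of the "simple time constraints" idea, extended with a temperature check; the thresholds and names are assumptions for a first prototype, not a prescribed design:

```python
from datetime import datetime, timedelta

# Illustrative thresholds for a first prototype (assumed values)
MIN_PAUSE = timedelta(minutes=30)  # do not start cooling again too soon
COOL_ABOVE_C = 4.0                 # only worth cooling above this temperature

def decide_to_cool(current_temp_c, last_cooled_at, now=None):
    """Return True if the fridge accepts the power plant's 'cool' action."""
    now = now or datetime.now()
    if now - last_cooled_at < MIN_PAUSE:
        return False  # cooled too recently
    return current_temp_c > COOL_ABOVE_C
```

The boolean result is exactly what the fridge sends back to the power plant as part of its context situation in steps 5 and 6 of the success scenario.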

Similarly, data management is very simple on the fridge but more complex on the power plant side. The fridge basically just has to remember when it cooled and when it wants to cool again (a temperature sensor will be part of the decision). The power plant has to decide whether the cooling power of the fridges that will actually cool is enough to reduce the peak. If not, additional actions will have to be started.
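The power-plant-side check amounts to summing the committed cooling power and comparing it to the reduction needed. A sketch, with all numbers and field names being illustrative assumptions:

```python
def peak_covered(confirmed_fridges, required_reduction_kw):
    """True if the committed fridges can absorb enough of the peak."""
    committed_kw = sum(f["cooling_power_kw"] for f in confirmed_fridges)
    return committed_kw >= required_reduction_kw

# fridges that replied that they will cool (assumed example data)
confirmed = [
    {"id": "fridge-1", "cooling_power_kw": 0.15},
    {"id": "fridge-2", "cooling_power_kw": 0.12},
]

if not peak_covered(confirmed, required_reduction_kw=0.5):
    # not enough cooling power committed - the power plant has to
    # start additional measures (e.g. trigger more fridge regions)
    need_additional_measures = True
```

In a real system the required reduction would come from the existing peak-prediction system, and the committed cooling power would be derived from the stored fridge context situations.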

Device management and device integration will definitely be needed on the fridge side. On the power plant side we can assume that there is already a system that handles the actual peak prediction and decisions - this will have to be integrated (either on the application integration or thing integration level).

Note that once the design of a scenario becomes reality, we would have to talk to domain experts from both sides (the fridge vendor and the energy provider) to understand both domains. Only then can a good design be developed.

Even though our design is far from complete, let’s still have a look at implementation ideas (maybe for a first rapid prototype). As mentioned above, we want to keep the setup simple in the beginning. There are fridges that offer displays and a whole set of functionality, such as the new Samsung Family Hub. Such a model is already too smart for our use case (even though it could be used). In our scenario the vendor does not offer a complete platform on the fridge, but offers a smartphone app that communicates with the fridge. The fridge has to have:

  1. Its own Internet connection and communication interface, so it can communicate with the power plant at any time (Thing Integration)
  2. An interface to the user’s smartphone that allows the user to switch the power plant communication feature on and off (Application Integration)
  3. Some sensors + logic that allows the fridge to decide whether it is OK for it to cool whenever it gets an impulse from the power plant.

A possible platform for implementing a first prototype would be, for example, Google App Engine together with Google Brillo. Although Brillo is not officially available yet, we can imagine a fridge based on the Brillo operating system. Google Cloud Messaging could be used for communication between smartphone, cloud, fridge and power plant. The following figure shows a simplified setup with Google Brillo and Cloud Messaging. Note that Google just serves as an example here; one could most likely achieve similar implementations with Apple HomeKit, Microsoft Azure, IBM Bluemix or any other (IoT) platform-as-a-service provider.

On the fridge side we pack our complete stack into Brillo. For the communication on the thing integration level we can use the Cloud Messaging API. The power plant is depicted as a black box because it does not actually matter what we put there (most likely there is an already existing system in operation anyway), as long as we ensure that the power-plant control system (or an integration component on top of it) implements the communication standards defined by Brillo and the Cloud Messaging API.
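Independently of the concrete messaging platform, the two parties have to agree on the shape of the messages exchanged on the thing integration level. Here is an illustrative sketch of the “cool” action event and the fridge’s context-situation reply; all field names are assumptions, and a real setup would use whatever payload schema the chosen messaging service allows:

```python
import json

# Hypothetical action event pushed from the power plant to a fridge
cool_action = {
    "type": "action",
    "name": "cool",
    "issued_at": "2016-03-01T10:15:00Z",
    "execute_by": "2016-03-01T10:30:00Z",
}

# Hypothetical context-situation reply from the fridge (feedback loop)
fridge_reply = {
    "type": "context_situation",
    "fridge_id": "fridge-42",
    "will_cool": True,
    "current_temperature_c": 7.2,
}

# Both directions only ever exchange serialized JSON payloads
action_payload = json.dumps(cool_action)
reply_payload = json.dumps(fridge_reply)
```

Agreeing on such a schema up front is exactly the “certain communication protocol” named in the use case’s pre-condition.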

Surely one could also implement the complete system independently. Platforms like Google Brillo offer the advantage that a certain standardization is provided out of the box and you can easily scale the system up and down.

At this point we have gone as far as we can in this article for our first use case. To show the flexibility of our reference architecture, we will now take a look at a second use case. Again we will see how the “inevitable IoT components” defined by RILA appear in the scenario.

In our second use case we have an insurance company that sells car insurance and wants to predict more accurately which customers are “good” and which are “bad” (from the insurance point of view). The insurance company wants to use driving behavior data to achieve this (the buzzword is data science). The following figure outlines the use case.

In a first scenario the insurance company just needs a lot of data to be able to use data science to define classes of drivers that will be “good” or “bad” for the insurance. The data does not have to be tied to the actual driver; anonymous data is good enough. The more data the better. Thus the insurance company tries to work together with the car vendors to retrieve the data.

In a second scenario (extending the first scenario) the insurance company could go as far as personalizing the insurance policy for each of its car insurance holders according to the individual driving behavior of the insurance holder. This scenario is depicted by the dashed arrow in the figure above.

The following table shows the use case actors, goal, pre-condition and high level success scenario and post-condition:


Actors:

- insurance company
- insurance system
- car vendor system
- car owner
- car on-board computer


Goal: The insurance company receives as much anonymous driving behavior data from certain car models as possible, so it can adapt the insurance policies by driving behavior for a certain car model.

Pre-condition:

The insurance company and the car vendor have agreed on a data exchange policy and the car models in focus. The insurance company pays a certain amount for the data provided by the car vendor.

The car owner gets something out of sharing the anonymous data (e.g. cheaper car service from the car vendor).

High-Level Success Scenario:

1. The car owner buys a car.

2. The car’s on-board computer asks the car owner if he wants to share anonymous driving behavior with the car vendor (and possible third parties).

3. The car owner agrees to sharing certain data.

4. The car’s on-board computer sends anonymous driving behavior data to the car vendor system in defined, regular time intervals.

5. The car vendor system stores the driving behavior information and notifies the insurance system that new data is available for a certain car model.

6. The insurance system collects the driving behavior from the car vendor system and feeds it into the data pool for decision making.

7. The insurance system integrates the new data as a feature into the prediction model for the insurance policy.



Post-condition: The insurance company can utilize the driving behavior data to calculate the insurance policy in more detail. (This can then serve, e.g., as a guideline for the insurance company’s salespeople.)

We want to focus on the first scenario here. Similarly to the fridge use case we can map the RILA layers onto the system actors as shown below.

For the insurance use case only the car will implement the complete RILA layer stack, as it contains devices that have to be integrated. The other things act on a data-transfer level. At this point one could question our definition of “thing”. Does a thing have to have devices (where a device is a sensor, actuator or tag) to be classified as a thing? Our definition says no, it does not. The logical conclusion is then that we do not need the device integration and device management layers for every thing (no devices means no device integration and management).

The car requires some interface on the application integration layer, as the car owner has to communicate with the system in some way (this will most certainly already come with the on-board system of the car). Once the data has been transmitted to the car vendor system, only a certain amount of context management, data management and thing integration will be necessary to fulfill the use case. The insurance system (as well as the car vendor system) might also require application integration, as the data will be used to perform certain tasks; e.g. the software used for predictive modeling will have to access the data in some way.
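To illustrate what the predictive-modeling side might do with the data, here is a sketch of turning one anonymous driving-behavior record into model features. The record fields and the derived features are illustrative assumptions, not a specification of the actual data exchange:

```python
def extract_features(record):
    """Map a raw, anonymous trip record to prediction-model features."""
    distance = record["distance_km"]
    return {
        "avg_speed_kmh": distance / record["duration_h"],
        "hard_brakes_per_100km": 100.0 * record["hard_brakes"] / distance,
        "night_share": record["night_km"] / distance,
    }

# Hypothetical anonymous trip record as the car vendor might deliver it
trip = {"distance_km": 40.0, "duration_h": 0.5, "hard_brakes": 2, "night_km": 10.0}
features = extract_features(trip)
```

Features like these could then feed into whatever classification model the insurance system uses to separate “good” from “bad” driver classes.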

An implementation idea for the insurance use case would be that the car’s on-board computer serves as an app platform. The user can then download an app provided by the insurance company (in cooperation with the car vendor) that allows the user to control what he wants to share or not. Depending on the user’s willingness to share car data, goodies can be offered to the car owner by the insurance company or car vendor (which would give us an idea of a business model). Once we go towards personalized, context-aware insurance policy calculations we reach the business model of “pay as you drive” and can extend it to “pay how you drive”.

We are aware that the two use cases presented in this article are far from fully specified. Besides the outlined scenarios there are a lot of other scenarios that can be constructed out of the basic ideas of the presented use cases. Domain-specific expert knowledge would be necessary to really create a valuable design for the presented use cases.

The inevitable IoT components that are defined by reference architectures (such as RILA) can be identified once the use case scenarios are clearly defined. The more we can standardize the communication and security measures, the easier we can integrate an IoT system with other IoT systems later - be it another insurance company or car vendor in the insurance use case, or another power plant or fridge (vendor) in the fridge use case. If we already have the right schema validation at hand (e.g. a certification mechanism like in Google Brillo), the integration will be easy. Reference architectures help to provide general patterns to avoid “missing” an important component or design fact.

In conclusion, we want to stress that it is most important to first define the use case to be implemented on a functional level, before going into detail with the technical specification.

Actors and their goals have to be clearly defined in order to understand what you really want to achieve with your system. Even though this is a well-known paradigm, it is especially important when developing applications and systems for the IoT world, because the use cases are often more complex and contain more diverse scenarios. Domain-driven design guidelines can help to achieve more valuable and flexible designs.

Through reference architectures like RILA we know that certain inevitable components are always present when implementing an IoT application. With a clear specification of the functional use case we can define how the reference architecture’s components actually have to be designed. With the combination of use cases and design on a functional level we can work towards the technical specification and implementation. In combination with technical expertise we can decide where already existing platforms can be used to provide the features of one or several reference architecture components, or even fill in the complete stack for a certain use case.

About the Authors

Hannelore Marginean is a developer at Senacor Technologies in Germany. She likes to discover new innovative technologies and has spent time researching about the possibilities, the security risks and the benefits of IoT. In her spare time she likes to paint with acrylics and play the guitar.


Daniel Karzel is a technical consultant at Senacor Technologies in Germany. Already during his master’s studies in Mobile Computing he dealt with context-aware computing and the Internet of Things. Besides thinking about software architectures for the future, he enjoys playing the accordion and traveling in China in his free time.

Tuan-Si Tran is a software developer at Senacor Technologies in Germany. He is a frontend enthusiast and is interested in "bleeding edge" technology. In his spare time he likes to play tennis.
