InfoQ Mobile and IoT Trends Report 2022

Key Takeaways

  • Building declarative UIs has clearly established itself as a trend in both the iOS and Android worlds thanks to the growing maturity and adoption of SwiftUI and Jetpack Compose.
  • The cross-platform story for mobile apps is also slowly but steadily attracting growing interest and effort, particularly around native cross-platform toolkits such as Dart+Flutter, Multiplatform Kotlin and Compose Multiplatform, and Swift for Android. This adds to the new opportunities opened up by the possibility of running mobile apps on the desktop.
  • Another trend we see in the mobile app and wearable arena is the move towards advanced UIs that rely on AR/VR as well as on machine learning and computer vision. Additionally, we see a nascent paradigm of gestural and pose-based UIs and a growing interest in, and value proposition associated with, so-called smart glasses. Overall, this promises to enable entirely new user experiences.
  • The rising complexity of mobile apps and IoT appliances is fueling a strong interest in methodologies aimed at ensuring the timely and secure deployment of new features using Mobile DevSecOps and reliability engineering practices. Similarly, the idea of organizing developer teams around a "platform team" is gaining acceptance, especially for larger projects.
  • IoT devices are becoming ever more "intelligent" by shifting from an ML-in-the-Cloud model to Edge-ML and even on-device ML, which may provide great benefits in terms of reduced latency and data privacy.
  • Still in the IoT world, the Web of Things proposal attempts to pave the way for a new generation of devices that are able to seamlessly communicate with one another. IOTA offers the perspective of leveraging blockchain technology to foster large-scale adoption of IoT tech.

One of the most compelling InfoQ features is our topic graphs, which synthesize our understanding of how different topics stack up along the technology adoption curve. They are immensely useful as a guide for prioritizing different and competing interests when it is time to decide what we want to cover from an editorial perspective, but we also believe that sharing them can help our readers better understand the current and future tech landscape and help inform their decision process.

Topic graphs build upon the well-known framework Geoffrey Moore developed in his book "Crossing the Chasm." Moore's framework describes how technology adoption evolves over time through five stages: "innovators", "early adopters", "early majority", "late majority", and "laggards".

InfoQ leans towards identifying ideas and technologies that belong to the innovators, early adopters, and early majority stages. We also strive to acknowledge topics that we consider to have already crossed into the late majority. You will generally find plenty of content on InfoQ about the late majority and laggard phases as well, as artifacts of our previous coverage.

For our readers, having five distinct phases means they can more easily focus their attention and decide for themselves what deserves to be explored right now and what can wait while they see how it unfolds.

This report summarizes the views of the InfoQ editorial team and of several practitioners from the software industry about emerging trends in a number of areas that we collectively label the mobile and IoT space. This is a rather heterogeneous space comprising devices and gadgets from smartphones to smart watches, from IoT appliances to smart glasses, voice-driven assistants, and so on.

What all of these devices have in common is that they are "connected computing machines in disguise". In some cases, their computing power has grown to levels comparable to that of personal computers, as with smartphones and tablets. In other cases, both their computing power and the functionality they offer may appear much more constrained. In all cases, we have connected devices with peculiar form factors. As an additional element that brings them together, we may also consider human-computer interaction (HCI) concerns. In fact, while distinct categories of devices in this space follow distinct HCI paradigms, what they all have in common is a move away from the keyboard-and-mouse, or textual and point-and-click, paradigms that are prevalent in other areas of the software industry.

All of the devices that belong to the mobile and IoT space have a significant hardware component that makes them possible or useful. Yet, our report will not focus overly on the hardware aspect but will consider the implications from a software development perspective, in keeping with InfoQ's mission. For example, while foldable devices surely bring a lot of technical innovation with them, we are more interested in how you can program their UI, which brings us to the rise of declarative user interfaces, and so on.

Late Majority and Laggards

In the late majority stage it is easy to identify a number of well-established approaches to building applications and solutions in the mobile space. They represent broadly accepted, almost standardized ways to accomplish things, where we perfectly understand what the pros and cons are, why and where they are beneficial, and so on.

For example, native mobile apps fall in this category. That means building mobile apps using the native SDKs provided by either Android or iOS with their corresponding programming languages of choice, i.e., Kotlin/Java or Swift/Objective-C. According to figures from AppBrain, over 80% of the top 500 Android apps are written in Kotlin and over 75% of all Android apps use the native Android framework.

We believe that the use of hybrid app development frameworks as a way to go cross-platform should be seen as belonging to the laggard stage. Hybrid apps are mobile apps embedded inside a WebView or similar component and written using web technologies. This approach has two main motivations: using a single stack to develop, say, both your mobile and web apps, and creating a mobile app that uses a single code base to run on all mobile platforms. That does not mean hybrid apps are not relevant today. Rather, it means there are alternative approaches addressing those two concerns that are gathering more traction thanks to their advantages, such as React Native and Flutter, which will be discussed later on.

Continuing on the topic of mobile app development, two practices that are also well established and belong to the late majority stage are the use of continuous integration/continuous deployment tools and the use of device farms for testing. A tool like fastlane, for example, goes a long way towards relieving developers of tedious chores such as taking screenshots and handling beta and pre-review deployment through the relevant app stores. Similarly, a number of companies provide access to device farms to run your app's automated tests, which, given the plethora of distinct smartphones on the market, seems a reasonable way to ensure your app's reliability.

As a final note about this stage, we also consider topics like Siri/Alexa/Google Assistant devices, fitness-oriented wearables, and smart homes to belong to the late majority. This choice has less to do with how widely those technologies are in use today than with our general understanding of them and with the fact that they have reached a certain maturity in terms of the kind of features they provide.

Early Majority

In the early majority stage we see technologies and approaches that have already come a long way towards supporting the needs of developers, but are not yet dominant or are still in flux in some way.

Declarative User Interfaces (SwiftUI)

A good example of that is the use of SwiftUI to create UIs for iOS native apps. SwiftUI, which has reached its third iteration, is a modern declarative framework that relies on some advanced syntactic features enabled by Swift to provide a completely novel experience to iOS developers.

SwiftUI is, indeed, fully declarative and reactive. With SwiftUI, you do not build your UI piece by piece; rather, you describe what it looks like using a textual abstraction and define how each of its components interacts with your model. Thanks to its design, SwiftUI enables an interactive development style in Xcode, where you can preview your UI and tweak its parameters in real time without having to compile the full app.

Compared to Storyboards or UIKit programming, SwiftUI undoubtedly has a cogent value proposition and, if you start a new iOS project, it would be hard not to evaluate it as a candidate UI framework. This does not mean, though, that Storyboards and UIKit have no place in new apps, only that SwiftUI is maturing technically, growing in adoption, and seems headed towards becoming the de facto choice for iOS UI development.

Native Cross-platform Apps

With regard to cross-platform mobile apps, there are a number of approaches, including React Native, Flutter, and Xamarin, that should be considered early majority. Of course, it is hard to imagine that React Native, Flutter, or any other existing cross-platform solution will easily replace native development. So, their inclusion in the early majority means they are rapidly gaining ground within the realm of cross-platform mobile app development, mostly at the expense of the hybrid app approach.

In fact, if your reason to prefer this kind of approach is to leverage your investment in the web stack, i.e., HTML, CSS, JavaScript, and related tools, it is hard to justify using a hybrid approach when you can have React Native, which gives you the advantage of a native, more performant user experience. For Xamarin, the same reasoning applies, only under the umbrella of the Microsoft tech stack rather than the Web's.

On the other hand, if your motivation is to save on development effort by writing your app only once, then you also have the option of using Flutter, which will not give you a native user experience, but which you might prefer for other reasons, including the use of a compiled, strictly-typed language.

Cloud-based Machine Learning

We also include in this stage the use of Cloud-based machine learning services, such as those used in apps like Snapchat, Tinder, and many others to, e.g., classify pictures or detect objects, with the computation performed in the Cloud and only the result transmitted back to the app.

IoT Cybersecurity

On the IoT and IIoT front, we consider cybersecurity as early majority. Ideally, we would like to have this under the late majority stage, but it is sadly true that the landscape of home appliance security, including the ubiquitous ADSL routers most people use to connect to the Internet, is less than reassuring. Besides that, the importance of protecting home appliances and IoT devices through automatic firmware updates, secure boot and communication, and user authentication is well understood and efforts are growing to put all of that into practice.

Controlled rollout

Speaking of mobile app deployment, a few techniques that have come into use are feature flags, incremental releases and A/B testing, both supported by the Google Play Store, and forced app updates.

These all fall into the category of controlled rollout, which aims to reduce the risk associated with a new deployment. In fact, unlike with server or web applications, mistakes in mobile apps are hard to revert once the apps are released.

Feature flags allow specific features of an app to be turned on or off, typically without shipping a new version. Forced updates allow developers to retire older versions of an app, while incremental releases limit the impact of a potentially risky change to a subset of the user base.
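
To make the idea more concrete, here is a minimal Kotlin sketch of how feature flags and forced updates might be wired into an app. The FeatureFlags and CheckoutRouter names are hypothetical, and in a real app the flag values would be fetched from a remote configuration service rather than hard-coded.

```kotlin
// A minimal, hypothetical sketch: in production the flag values would come
// from a remote configuration backend, not from a constant.
data class FeatureFlags(
    val newCheckoutFlow: Boolean = false,  // gates a risky new feature
    val minSupportedVersion: Int = 1       // drives forced updates
)

class CheckoutRouter(private val flags: FeatureFlags) {
    // Route to the new implementation only when the flag is enabled, so the
    // change can be rolled back on the server without shipping a new binary.
    fun checkoutRoute(): String =
        if (flags.newCheckoutFlow) "checkout/v2" else "checkout/v1"

    // Forced update: refuse to proceed if the installed version is too old.
    fun requiresForcedUpdate(installedVersion: Int): Boolean =
        installedVersion < flags.minSupportedVersion
}

fun main() {
    val flags = FeatureFlags(newCheckoutFlow = true, minSupportedVersion = 42)
    val router = CheckoutRouter(flags)
    println(router.checkoutRoute())            // checkout/v2
    println(router.requiresForcedUpdate(40))   // true -> block and ask to update
}
```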

MiniApps

So-called MiniApps, also known as SuperApps or mobile micro-frontends, have also become popular. These are platform-agnostic apps developed as plugins or extensions to native apps. Popularized by WeChat, Alipay, and other apps, they are usually implemented as PWAs or React Native modules and rely on their native container to provide access to OS-level features through a micro-platform or microapp bridge.
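
As an illustration of the bridge idea on Android, the following Kotlin sketch exposes a couple of host capabilities to web content running in a WebView through addJavascriptInterface. The MiniAppBridge and MiniAppHost names are hypothetical; real MiniApp platforms define much richer, permission-checked APIs.

```kotlin
import android.webkit.JavascriptInterface
import android.webkit.WebView

// Hypothetical host-side bridge that the MiniApp's JavaScript can call.
class MiniAppBridge {
    // Exposed to the MiniApp as window.MiniAppHost.getDeviceLocale()
    @JavascriptInterface
    fun getDeviceLocale(): String = java.util.Locale.getDefault().toLanguageTag()

    // Exposed to the MiniApp as window.MiniAppHost.track(event)
    @JavascriptInterface
    fun track(event: String) {
        android.util.Log.d("MiniAppBridge", "analytics event: $event")
    }
}

fun attachMiniApp(webView: WebView, miniAppUrl: String) {
    webView.settings.javaScriptEnabled = true
    // Only expose the bridge to MiniApps you trust; this is a security boundary.
    webView.addJavascriptInterface(MiniAppBridge(), "MiniAppHost")
    webView.loadUrl(miniAppUrl)
}
```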

The main benefit of MiniApps is their independence from the App Store and Play Store review/release processes, which reduces development cost and time.

Mobile platform teams

Platformization of core components is essential in any software development effort, and mobile apps are no exception. For example, logging, analytics, architectural frameworks, and so on all fall into the category of components that naturally lend themselves to the creation of a platform on top of which to build the rest of the features required by distinct apps.

In this scenario, it becomes important to think in terms of specific responsibilities when building such a platform. Anticipating client needs, defining standard best practices, choosing the right tech stack, evaluating tools, and so on would thus become the responsibility of a dedicated platform team.

This approach promises to provide clear-cut abstractions while guiding the whole organization to maintain a consistent style of development with essential guardrails. It does require a sufficiently large mobile team to be feasible, as is the case in several large organizations that have adopted it, such as Uber, Twitter, and Amazon.

Early Adopters

When it comes to the early adopters stage, we are referring to technologies and approaches to software development that are gathering more attention and opening up new possibilities for developers.

On-device Machine Learning, Edge-ML

To start with, we would like to mention on-device or edge machine learning, where you run a pre-trained ML model directly on a mobile device or on the Edge, as opposed to running it in the Cloud.

This approach is gaining traction thanks to solutions like TensorFlow Lite, PyTorch Mobile, and others. These solutions significantly reduce the overhead and latency associated with a Cloud request and enable entirely new categories of apps where real-time predictions are key.
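
As a rough illustration, the following Kotlin sketch runs a pre-trained image classification model with the TensorFlow Lite Interpreter API directly on the device. The model file, the 224x224 input size, and the 1,000-class output are assumptions made for the example.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Minimal on-device inference sketch: load a .tflite model from local storage
// and classify one pre-processed image. No network round trip is involved,
// and the input data never leaves the device.
fun classifyOnDevice(modelFile: File, pixels: Array<Array<FloatArray>>): Int {
    val interpreter = Interpreter(modelFile)
    try {
        val input = arrayOf(pixels)             // assumed shape [1, 224, 224, 3]
        val output = arrayOf(FloatArray(1000))  // assumed shape [1, 1000]: one score per class
        interpreter.run(input, output)
        // Return the index of the highest-scoring class.
        return output[0].indices.maxByOrNull { output[0][it] } ?: -1
    } finally {
        interpreter.close()
    }
}
```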

An additional advantage of no minor importance is that user data never has to leave the device, which can be a key concern in a number of use cases, such as health applications.

Augmented Reality and Virtual Reality

The application of augmented and virtual reality is also growing. In particular, both iOS and Android provide great support for a number of AR features like surface and plane detection, occlusion, face tracking, and so on.

The use of AR is not yet ubiquitous but is surely eliciting growing interest because it does not require specialized hardware and is relatively simple to integrate into an app. Virtual reality, on the other hand, mostly targets specialized headsets like the Oculus, Sony PlayStation VR, HP Reverb, and others, which are mostly focused on gaming. New momentum in this arena could also come from the development of smart glasses.

Voice-driven mobile apps and home appliances

Both AR and VR foster the exploration of new HCI paradigms, which are more aptly classified in the innovators stage. But new HCI approaches are seeing some traction in the early adopters stage, too, thanks to the evolution of voice-based interfaces.

In this case we are not talking about specialized devices like Alexa, or about Siri/Google Assistant acting as an interface to the OS. Rather, we are referring to the integration of voice capabilities into mobile apps and IoT devices themselves.

Running mobile apps on the desktop

Another opportunity that is becoming available to mobile developers is running their mobile apps on the desktop, thanks to technologies like Apple Catalyst. In particular, a number of system macOS apps are implemented by Apple using Catalyst, and both Xcode and the App Store support it. Microsoft provides a somewhat similar solution for Android apps on Windows 10, whereby apps run on the phone and are mirrored inside a window on your desktop machine.

Centralized logging

Centralized logging also deserves a mention here as a practice that aims to collect all logs generated by a system in a single store. Centralized logging is already a significant trend for Cloud-based systems, but the approach is increasingly being used for mobile apps as well.

One of the major advantages of centralized logging applied to mobile apps is that it helps developers see in real time what is happening in a customer's app, thus helping troubleshoot problems and improving customer satisfaction.

The adoption of this practice is enabled by a multitude of services including AWS Central Logging, SolarWinds Centralized Log Management, and more.
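
The following Kotlin sketch illustrates the basic idea of forwarding app logs to a central collector. The endpoint URL and payload format are hypothetical, and a production implementation would add batching policies, retries, offline buffering, and would never perform network calls on the main thread.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Hypothetical remote logger: accumulate log events and ship them in small
// batches to a central ingestion endpoint.
object RemoteLogger {
    private const val ENDPOINT = "https://logs.example.com/ingest"  // hypothetical URL
    private val buffer = mutableListOf<String>()

    @Synchronized
    fun log(level: String, message: String) {
        buffer.add("""{"level":"$level","message":"$message","ts":${System.currentTimeMillis()}}""")
        if (buffer.size >= 20) flush()   // ship in small batches
    }

    // On Android this must run on a background thread.
    @Synchronized
    fun flush() {
        if (buffer.isEmpty()) return
        val payload = buffer.joinToString(separator = ",", prefix = "[", postfix = "]")
        buffer.clear()
        val conn = URL(ENDPOINT).openConnection() as HttpURLConnection
        conn.requestMethod = "POST"
        conn.doOutput = true
        conn.setRequestProperty("Content-Type", "application/json")
        conn.outputStream.use { it.write(payload.toByteArray()) }
        conn.inputStream.close()   // forces the request; response body is ignored
        conn.disconnect()
    }
}
```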

Persistent connections

As a final note regarding the early adopters stage, we mention the use of persistent connections between client and server. Initially popularized by messaging apps, this approach is now increasingly used in e-commerce apps, for example by Halodoc and GoJek, as well as in mobility and other areas.

Persistent connections tend to replace push notifications as well as network polling, with the aim of reducing access latency and power consumption.

A similar trend is also developing for IoT devices with regard to lightweight protocols such as MQTT and gRPC.
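
As an example of what a lightweight persistent connection can look like on a device, here is a Kotlin sketch using the Eclipse Paho MQTT client. The choice of library, the broker address, and the topic names are assumptions made for illustration.

```kotlin
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.MqttConnectOptions
import org.eclipse.paho.client.mqttv3.MqttMessage
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

// Hypothetical smart-thermostat client keeping a single long-lived MQTT
// connection open for both telemetry and incoming commands.
fun connectSensor() {
    val client = MqttClient("tcp://broker.example.com:1883", "livingroom-sensor-42", MemoryPersistence())
    val options = MqttConnectOptions().apply {
        isCleanSession = true
        isAutomaticReconnect = true   // keep the persistent connection alive
        keepAliveInterval = 60        // seconds between keep-alive pings
    }
    client.connect(options)

    // Commands are pushed by the backend over the same connection (no polling).
    client.subscribe("home/livingroom/thermostat/set") { _, message ->
        println("New target temperature: ${String(message.payload)}")
    }

    // Publish a reading; QoS and retained flags are left at their defaults.
    client.publish("home/livingroom/temperature", MqttMessage("21.5".toByteArray()))
}
```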

An interesting sub-trend to observe closely is the eventual creation of standardized protocols and/or specialized third-party solutions to make persistent connections as easy as plug-and-play.

Declarative User Interfaces (Jetpack Compose)

Jetpack Compose, which recently reached 1.0, is Google's Kotlin-based declarative UI framework for Android.

Regarding the benefit that declarative UIs bring to development, much the same can be said of Jetpack Compose as of SwiftUI, covered above. Yet, SwiftUI has already reached its third major iteration and the iOS developer community has already largely adopted it, while Jetpack Compose is still in an initial stage of adoption.
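
For readers unfamiliar with the declarative style, here is a minimal Jetpack Compose sketch: the UI is described as a function of state, and updating the state automatically recomposes only the affected parts.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun Counter() {
    // `remember` keeps the state across recompositions; mutating it
    // automatically re-invokes the composables that read it.
    var count by remember { mutableStateOf(0) }

    Column {
        Text(text = "Tapped $count times")
        Button(onClick = { count++ }) {
            Text("Tap me")
        }
    }
}
```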

Innovators

Cross-platform mobile apps

While still a minority, cross-platform apps surely represent the answer to a specific set of development requirements and constraints. Historically, hybrid web apps and, more recently, approaches like React Native, NativeScript, and Flutter have tried to provide a solution for them.

An even more recent attempt to tackle the problem of building cross-platform mobile apps is represented by projects like Swift for Android and Multiplatform Kotlin. With this approach, you select one reference platform, i.e., iOS or Android, and use its tech stack to build your app both for the reference platform and, as much as possible, for the other.

On the UI front, Swift for Android offers Crystal, a cross-platform, high-performance graphics engine to build native UIs. With Multiplatform Kotlin, you have the option to use Multiplatform-Compose, which is still highly experimental, though. JetBrains has recently released the beta for the similarly named Compose Multiplatform, which aims to bring declarative UI programming to Multiplatform Kotlin, but there is no support yet for iOS.

While both solutions provide good language interoperability, so you can surely share a part of your code base across the two platforms, your mileage may vary when it comes to OS-dependent code. For example, Swift for Android provides Fusion, a collection of auto-generated Swift APIs that provide somewhat idiomatic access to Android APIs.
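
A minimal sketch of how Multiplatform Kotlin handles this split is the expect/actual mechanism: shared code declares what it needs, and each platform source set provides its own implementation. The three snippets below would live in separate source sets of the same module, and the function names are illustrative; the iOS implementation follows the pattern used in JetBrains' project templates.

```kotlin
// commonMain — shared business logic, compiled for both Android and iOS
expect fun platformName(): String

fun greeting(): String = "Hello from ${platformName()}"

// androidMain — Android-specific implementation
actual fun platformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"

// iosMain — iOS-specific implementation via Kotlin/Native's Objective-C interop
// (requires: import platform.UIKit.UIDevice)
actual fun platformName(): String =
    UIDevice.currentDevice.systemName() + " " + UIDevice.currentDevice.systemVersion
```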

Mobile Reliability Engineering (MRE)

Continuously delivering features in mobile apps at scale is a real challenge. Multiple teams have to coordinate to deliver features and to adopt streamlined best practices, processes, and principles.

Site reliability engineering (SRE) was born with the aim of achieving reliability for large-scale distributed systems and has recently been gaining visibility as a useful approach for mobile apps, too.

Still at a nascent stage of adoption, MRE aims to foster the adoption of best practices across an organization. As of now, several established organizations and startups follow this approach, albeit not always explicitly, with the help of various tools, processes, and organizational dynamics, with the aim of making feature delivery a more agile process.

Gestural- and Pose-based UIs

Both AR and VR enable new possibilities of interaction with an app and the environment, which leads to new approaches to human-computer interaction, specifically the possibility of using gesture recognition or 2D pose detection. While we classified AR and VR under the early adopters stage, there is also a trend towards bringing these HCI approaches to mobile apps not related to AR or VR.

At the foundation of these approaches are ML and computer vision algorithms for hand and human body pose detection. For example, Apple provides support for this through Core ML, while Google has its ML Kit for both Android and iOS.
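
As an example of what this looks like on Android, the following Kotlin sketch runs ML Kit pose detection on a single camera frame. The chosen landmarks and the callback shape are assumptions, and a real fitness app would add the angle and repetition-counting logic on top.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// Detect a 2D body pose in one frame and report two landmark coordinates.
// In a real app the detector would be created once and reused across frames.
fun detectPose(frame: Bitmap, onLandmarks: (leftKneeY: Float?, leftHipY: Float?) -> Unit) {
    val options = PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)  // continuous camera frames
        .build()
    val detector = PoseDetection.getClient(options)

    detector.process(InputImage.fromBitmap(frame, /* rotationDegrees = */ 0))
        .addOnSuccessListener { pose ->
            // Landmarks come back in image coordinates; a squat counter would
            // track how the hip moves relative to the knee across frames.
            val knee = pose.getPoseLandmark(PoseLandmark.LEFT_KNEE)?.position?.y
            val hip = pose.getPoseLandmark(PoseLandmark.LEFT_HIP)?.position?.y
            onLandmarks(knee, hip)
        }
        .addOnFailureListener { onLandmarks(null, null) }
}
```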

A number of apps already exist that use these technologies, mostly focused on fitness, for example counting squats, or on movement skills, as in dancing or yoga. It is easy to predict that having hand gesture and body pose detection available at the SDK level will only foster the development of additional apps extending these UI approaches to more fields.

Voice-driven UIs

While devices like Alexa and intelligent assistants like Siri, Cortana, and Google Assistant have popularized the idea of controlling devices using your voice, native voice-driven UIs have only recently started to gain traction. This is a trend that is powered by recent machine learning advances in several fields, including speech recognition, NLP, question answering systems, etc.

Among the benefits of voice-driven interfaces is the convenience of interacting with a machine/program using your voice in a number of different contexts, such as driving, cooking, walking, etc. Additionally, voice can be a huge help to people with certain disabilities.

A number of different technologies enable the integration of voice-driven UIs into mobile apps and IoT devices, either relying on a Cloud-based model or using an embedded one. For example, Google has its Text-to-Speech API as well as Dialogflow, while AWS provides its Alexa Voice Service integrated with AWS IoT.
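
As a small device-side example, the following Kotlin sketch adds a voice output channel to an Android app using the platform TextToSpeech engine. The VoicePrompter name is hypothetical, and speech recognition would be handled separately, for example with SpeechRecognizer or a Cloud service.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Minimal voice-output helper built on the platform text-to-speech engine.
class VoicePrompter(context: Context) : TextToSpeech.OnInitListener {
    private var ready = false
    private val tts = TextToSpeech(context, this)

    // Called asynchronously once the engine is bound and initialized.
    override fun onInit(status: Int) {
        ready = status == TextToSpeech.SUCCESS
        if (ready) tts.setLanguage(Locale.US)
    }

    fun say(text: String) {
        // QUEUE_FLUSH interrupts whatever is currently being spoken.
        if (ready) tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "prompt-${text.hashCode()}")
    }

    fun shutdown() = tts.shutdown()
}
```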

Web of Things

Web of Things is a Web standard for the Internet of Things that aims to enable communication between smart things and Web-based applications. It attempts to provide an answer to the highly heterogeneous world of IoT devices by defining a way for them to interoperate with other devices and the Web.

While the definition of the Web of Things standard has been ongoing for several years now, the majority of IoT appliances still have their own management interfaces and apps. Each of those UIs and apps understands the low-level network protocols and standards that its manufacturer adopts. This leads users to a less than optimal situation where they cannot control all of their devices from a single access point. Additionally, devices cannot talk with one another.

Solutions like the Mozilla WebThing gateway, AWS IoT, and others promise to accelerate the adoption of the Web of Things protocol.
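
To give an idea of what interoperability through the Web of Things looks like, here is a Kotlin sketch that consumes a device's Thing Description to discover how to read one of its properties. The device URL and the "temperature" property name are assumptions made for illustration.

```kotlin
import java.net.URL
import org.json.JSONObject

// A consumer only needs the Thing Description's generic metadata to find out
// how to read a property, regardless of the device's vendor.
fun readTemperature(thingUrl: String = "http://plug.local:8080/thing"): String {
    // 1. Fetch the Thing Description (a JSON-LD document) exposed by the device.
    val td = JSONObject(URL(thingUrl).readText())
    println("Talking to thing: ${td.getString("title")}")

    // 2. Look up the protocol binding (form) for the hypothetical 'temperature' property.
    val form = td.getJSONObject("properties")
        .getJSONObject("temperature")
        .getJSONArray("forms")
        .getJSONObject(0)

    // 3. Read the property value from the advertised href.
    return URL(form.getString("href")).readText()
}
```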

IOTA

IOTA attempts to leverage Blockchain technology to address a number of challenges that hamper the large-scale adoption of IoT, including heterogeneity, network complexity, poor interoperability, resource constraints, privacy concerns, security, etc.

While traditional blockchain systems like Bitcoin and Ethereum use a chain of sequential blocks with multiple transactions within a block, IOTA uses a multipath Directed Acyclic Graph (DAG) named the Tangle. Some other protocols, like Byteball and Avalanche, also use DAG-based structures with certain modifications. One of the goals of these protocols is to accommodate IoT data in a distributed manner with better performance, scalability, and traceability than a linear blockchain.

IOTA is pitched as a fee-less, miner- and staker-less, highly scalable distributed ledger solution. It promises to achieve the same benefits as other blockchain-based distributed ledgers, including decentralization, distribution, immutability, and trust, but without the downsides of wasted resources and transaction costs.

Smart glasses

Smart glasses appear to be the next big thing when it comes to wearable computing. In fact, predictions about the rise of smart glasses have been around for a number of years now, starting at least with Google Glass, a project that failed to achieve any significant success but helped raise awareness of the potential privacy concerns associated with the use of smart glasses.

From an HCI point of view, smart glasses are a huge playing field for the advancement of new methods and techniques, including speech and gesture recognition, eye tracking, and brain-computer interfaces.

While it is true that a number of manufacturers have been relatively successful with their smart glasses, including Microsoft HoloLens, Oculus Rift, Vuzix, and others, the technology seems to be waiting for a more compelling value proposition that could make it as pervasive as predictions would have it. Still, interest in this technology is growing, with several large companies having recently entered the arena, for example Facebook with its Ray-Ban Stories, and others rumored to be developing new products, including Apple and Xiaomi.

Conclusions

As is often the case in the tech world, the pace of innovation is staggeringly high in the mobile and IoT space as well. We have tried to convey a broad picture of where the technology landscape is at the moment and where it could head in the next year. Only time will tell which of the newest trends are here to stay and which will rapidly fade away. Our team at InfoQ will continue its mission to provide a practitioner-first view and coverage of the mobile and IoT fields.
