Virtual Panel: The Current State of Integration Platform as a Service (iPaaS)

Key takeaways

  • iPaaS is no longer just about integration occurring in the cloud. iPaaS takes advantage of inherent cloud platform capabilities such as dynamic scale, containers, resource management and self-service.
  • SaaS connectors are still relevant and a preferred approach over direct API calls for productivity, simplicity, abstraction and security reasons.
  • SOAP, once largely entrenched in ESBs and Integration Brokers, has given way to REST and JSON in iPaaS.
  • The future of iPaaS is connecting to and enabling other high-value business outcomes related to analytics, machine learning and big data.
  • Digital transformation is a driver for organizations to move from traditional on-premises middleware to iPaaS in order to drive down costs and increase velocity.

Integration Platform as a Service (iPaaS) is becoming a mainstream way of connecting mobile, SaaS, IoT, Big Data and on-premises Line of Business (LOB) systems. As customers move workloads to the cloud, iPaaS platforms are taking center stage in how integration is performed.

This Virtual Panel focuses on some of the current trends in iPaaS and where this model of delivering integration services is headed. InfoQ reached out to thought leaders from MuleSoft, SnapLogic and Microsoft to participate in the dialogue.

Participants in this panel include:

  • Dan Diephouse - Director of Product Management at MuleSoft where he is part of the team that launched MuleSoft’s iPaaS offering: CloudHub.
  • Darren Cunningham - Vice President of Marketing at SnapLogic where he focuses on product management and outbound product marketing.
  • Jim Harrer - Principal Group Program Manager in the Cloud & Enterprise division, at Microsoft, where his team is responsible for Program Management for BizTalk Server and Microsoft’s iPaaS offering: Azure Logic Apps.

InfoQ provided our panel with a series of eight questions. Here is what we found out:

InfoQ: Most SaaS vendors provide RESTful APIs as a way to integrate with their applications. How relevant are application specific connectors? Why should developers use connectors over vendor-provided APIs? What are some of the more popular connectors that your customers are using?

Dan Diephouse (MuleSoft): RESTful APIs are changing the integration landscape. Having these APIs, not only for SaaS apps but for all apps, makes it easy for everyone to connect and build innovative stuff on top of them. Because of this, MuleSoft has been investing a lot in this area to make it easy for developers to design, find and consume RESTful APIs. We joined the RAML workgroup to help make it easier to document RESTful APIs, and have seen tremendous uptake of the RAML specification. The recently released 1.0 version makes it easy to do things that were previously extremely difficult. Ever try to write JSON or XML schema by hand? No thanks. Check out RAML data types and you’ll see what I mean. We’re trying to make it very easy for customers to get started defining and cataloging APIs so they can be reused.

Once defined, APIs can be discovered in Anypoint Exchange, our home for connectivity artifacts. This experience is built right into Anypoint Studio, our IDE, so developers can explore the complete set of organizational APIs from one place. We then use RAML specs to provide metadata at design time, so you can see data types, get auto-completion on resources, and so on.

But, REST APIs aren’t enough - application specific connectors are also extremely important. You need them to connect to legacy systems or systems that don’t have a REST API. Our database, FTP, and SAP connectors are popular with customers.

Realistically, vendor-created application connectors and RESTful APIs are not enough. MuleSoft recommends that customers place an API layer over core business systems and processes. Connecting directly to systems can have a number of side effects as your application scales:

  • One change can break any number of downstream integrations
  • Unexpected load that systems are not equipped to handle
  • Decreased visibility into who needs access to what data
  • Poor data security

Creating your own APIs enables you to do versioning, security, throttling, analytics, and more, solving many of the problems of traditional data integration.
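To make the facade idea concrete, here is a minimal sketch of an API layer that fronts a core system with versioning and throttling, written in Python with Flask. It is illustrative only, not MuleSoft's implementation; the route, limits, and client handling are assumptions.

```python
# Minimal API-facade sketch (illustrative, not MuleSoft's implementation):
# a versioned endpoint with a fixed-window throttle protecting the core system.
import time
from flask import Flask, jsonify, abort

app = Flask(__name__)

MAX_CALLS, WINDOW = 100, 60       # at most 100 calls per client per minute
_calls = {}                       # client id -> (window start, call count)

def throttle(client_id):
    now = time.time()
    start, count = _calls.get(client_id, (now, 0))
    if now - start > WINDOW:      # window expired: start counting afresh
        start, count = now, 0
    if count >= MAX_CALLS:
        abort(429)                # shield the backend from unexpected load
    _calls[client_id] = (start, count + 1)

@app.route("/api/v1/orders/<order_id>")   # /v1/ can later coexist with /v2/
def get_order(order_id):
    throttle("anonymous")         # a real facade would key on an API token
    # Stand-in for the call to the core LOB system:
    return jsonify({"id": order_id, "status": "shipped"})
```

Centralizing these policies in one layer means a backend change or a rate-limit adjustment touches the facade, not every downstream integration.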

Darren Cunningham (SnapLogic): It’s true that you can hire developers knowledgeable in specific apps and APIs and wire up systems, but a platform approach to data and application integration delivers long-term benefits in terms of maintenance costs, productivity, business agility, scalability, and reusability. As for the advantages of intelligent connectors, which we call Snaps: they should be easy to use, reliable and performant. Taking full advantage of SaaS vendors’ RESTful APIs, Snaps encapsulate the interfaces and optimizations. This provides an abstraction layer that insulates users from changes and from having to be a developer to figure things out. While it’s important to have native REST connectivity options, there is no intelligence built into a REST connector. Furthermore, REST APIs can be quite complicated, with many options and methods of invocation. SnapLogic determines the best way to utilize REST APIs and we encode best practices into our Snaps. Finally, by using Snaps we can also insulate the user from changes in the target APIs.
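A hedged sketch of what such an abstraction layer buys you: the connector below hides the vendor's URLs, authentication, and pagination behind one stable method, so a renamed field or endpoint only requires a connector change. The class, endpoint, and field names are hypothetical and are not SnapLogic's actual Snap interfaces.

```python
# Hypothetical connector sketch (not SnapLogic's actual Snap interfaces):
# callers use list_contacts() and never see URLs, auth headers, or paging.
import requests

class CrmConnector:
    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

    def list_contacts(self):
        """Yield every contact, following the vendor's pagination scheme."""
        url = f"{self.base_url}/contacts"
        while url:
            resp = self.session.get(url, timeout=30)
            resp.raise_for_status()
            page = resp.json()
            yield from page["items"]
            # If the vendor renames "next" in a new API version, only this
            # line changes; every caller is insulated from the change.
            url = page.get("next")
```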

There are many categories of Snaps that customers use, including enterprise applications (Workday, Salesforce, ServiceNow, SAP), relational, NoSQL and cloud-based databases (Microsoft, Oracle, Cassandra, MongoDB, AWS Redshift), big data technologies (HDFS, Kafka, Parquet), social media, IoT and other sources, transformations and protocols.

Jim Harrer (Microsoft): If a SaaS vendor provides a RESTful API, then Logic Apps consumes that API directly (no “connector” code is required). That being said, the advantage of using a Logic App, as opposed to writing C# or Java code that consumes a vendor-specific RESTful API, is about achieving more, faster.

Integration scenarios that used to take six months or more have been dramatically improved, often completed in days or weeks. Logic Apps centralizes authentication and connection management and makes orchestrating across these APIs as easy as point and click. In addition, access to these APIs is managed to make them easy to reuse across various workflows.
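The centralized-connection idea can be sketched generically (this is not the Logic Apps internals): one component acquires and caches OAuth tokens so every workflow reuses the same managed connections instead of re-implementing authentication.

```python
# Generic sketch of centralized connection management (not Logic Apps
# internals): workflows ask for a named connection; OAuth tokens are
# acquired once via the client-credentials grant and reused until expiry.
import time
import requests

class ConnectionManager:
    def __init__(self):
        self._tokens = {}  # connection name -> (token, expiry timestamp)

    def token_for(self, name, token_url, client_id, client_secret):
        token, expiry = self._tokens.get(name, (None, 0.0))
        if time.time() >= expiry:  # refresh only when missing or expired
            resp = requests.post(token_url, data={
                "grant_type": "client_credentials",
                "client_id": client_id,
                "client_secret": client_secret,
            }, timeout=30)
            resp.raise_for_status()
            body = resp.json()
            token = body["access_token"]
            # Renew a minute early so in-flight calls never hit a stale token.
            expiry = time.time() + body.get("expires_in", 3600) - 60
            self._tokens[name] = (token, expiry)
        return token
```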

That being said, many of the more interesting services and protocols used in the integration space today do not have clean RESTful APIs to integrate with. For example, when implementing a typical B2B workload or a VETER (validate, enrich, transform, enrich, route) pipeline, there are no simple vendor-provided APIs. In these cases, Logic App connectors contain a large amount of non-trivial logic to make integration point-and-click. These connectors are hosted in Azure and scale easily.

The popularity of connectors varies with the audience. IT pros and integration developers are using Azure Service Bus, SQL Database, Dynamics CRM and Azure Storage, while our citizen integrators rely heavily on Office 365, SharePoint, Dropbox and Twitter. Today we have over 35 popular connectors, with more added each week.

InfoQ: As more interfaces move to iPaaS, how are customers dealing with low latency use cases that involve on-premises systems?

Jim Harrer (Microsoft):  The most common method customers leverage to integrate from iPaaS to on-premises systems is an Azure Hybrid Connection directly to the system.  This is especially effective when integration is required during a step in a workflow to integrate with an application that resides on-premises.  If users are looking to integrate with on-premises orchestrations and processes, very often customers make use of a message broker (primarily Azure Service Bus) to integrate.  The on-premises system can be listening to the queue or topic and begin work as soon as work is available.  Asynchronous patterns help compensate for any latency the cloud may introduce, and if synchronous orchestration is required, responses can be sent via a Webhook or a service bus response queue.  
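A minimal sketch of that asynchronous pattern, using Python's standard library as a stand-in for a broker such as Azure Service Bus: the on-premises listener blocks on the queue, does the work, and posts the result to a webhook. The names and webhook URL are placeholders.

```python
# Sketch of the asynchronous pattern described above; Python's stdlib queue
# stands in for a broker such as Azure Service Bus.
import json
import queue
import threading
import urllib.request

work_queue = queue.Queue()

def on_premises_listener():
    """Begin work as soon as a message is available; reply via webhook."""
    while True:
        msg = work_queue.get()          # blocks until work arrives
        result = {"order": msg["order"], "status": "processed"}
        req = urllib.request.Request(
            msg["reply_webhook"],       # synchronous callers get answers here
            data=json.dumps(result).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=10)
        except OSError:
            pass                        # real systems retry or dead-letter
        work_queue.task_done()

threading.Thread(target=on_premises_listener, daemon=True).start()

# The cloud side enqueues and returns immediately, masking any latency:
work_queue.put({"order": 42, "reply_webhook": "https://example.com/hook"})
work_queue.join()                       # wait for the listener to finish
```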

Dan Diephouse (MuleSoft): It’s first worth noting that we don’t necessarily see cloud and on-premises as different. As companies move toward cloud everywhere, we see a blurring between public and private cloud. Companies connect their applications together across private cloud, public cloud and traditional data centers. Gartner recommends implementing a Hybrid Integration Platform, defined as “a platform that is comprised of one or many solutions which combines on-premises and in-the-cloud integration and API management capabilities” to address this blurring.

In this world, dealing with low latency use cases is important everywhere. To do this, you first need to have a low latency, real time engine. Mule is the runtime engine for Anypoint Platform and has been tuned over the years to provide very high performance. We can horizontally scale out with very low latency.

Second, you need to have fast access to the systems that you’re connecting to. In our public cloud, we provide VPN connections to your other data centers. You can also use Amazon’s Direct Connect feature to directly wire your data center to our cloud, making latency a non-issue for all but the most demanding use cases.

Finally, your business applications themselves need to be able to deliver low latency. Most applications, even the cloud ones, are not designed for this. And this is where people get into a lot of trouble. Anypoint Platform makes this easier. We provide the ability to cache data so you don’t need to hammer your applications nonstop. We also provide Anypoint MQ, which allows you to queue up requests (e.g. new orders) and have your backend handle them in a more controlled manner when it has the capacity to do so.
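As a rough illustration of the caching point (generic, not Anypoint's ObjectStore), a small TTL cache lets repeated lookups be served locally so the backend is only touched when entries expire:

```python
# Small TTL cache sketch (generic, not Anypoint's ObjectStore): repeated
# lookups within the TTL are served locally instead of hitting the backend.
import time

class TtlCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, time stored)

    def get_or_fetch(self, key, fetch):
        value, stored = self._store.get(key, (None, 0.0))
        if time.time() - stored > self.ttl:
            value = fetch(key)            # only now do we touch the backend
            self._store[key] = (value, time.time())
        return value

cache = TtlCache(ttl_seconds=60)
# The fetch function is a stand-in for a call to a business application:
price = cache.get_or_fetch("sku-123", lambda k: {"sku": k, "price": 9.99})
```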

Darren Cunningham (SnapLogic): The right iPaaS solution must deliver hybrid execution, respecting data gravity by running integrations in the cloud or in a customer's data center as needed. The solution should also be able to handle traditionally batch-oriented data integration as well as low-latency, real-time integration requirements. SnapLogic achieves low latency through persistence: always-on pipelines that can respond to external invocations. These “Ultra” pipelines are distributed, allowing multiple instances of the same data flow to run on separate nodes so they can load-balance. If you have a high-throughput or low-latency requirement, this type of pipeline can scale out on-premises to handle the load.

InfoQ: Many public cloud providers offer elastic tiers that dynamically scale out based upon system resource thresholds. Is dynamic scale-out relevant in iPaaS solutions, and if so, what message durability challenges emerge?

Darren Cunningham (SnapLogic): Yes, with a hybrid execution engine, an iPaaS solution should have the flexibility to elastically scale in the cloud so customers don’t have to pre-allocate resources based on unforeseen demand. The right platform will also bring elastic scale to on-premises deployments and be able to run natively in a Hadoop cluster as a YARN application. Handling message durability is an important part of the overall elastic framework. Typically this is handled by using a buffering mechanism, such as a queue, to store messages and absorb latencies, and by spinning up compute engines to handle spikes in messaging. SnapLogic’s Ultra pipelines have a built-in queuing mechanism for handling such circumstances.

Jim Harrer (Microsoft): iPaaS solutions automatically scale out to meet workload demands and scale back when not needed. Dynamic scale-out frees customers from having to build out infrastructure to handle peak loads and from monitoring the system to ensure the infrastructure keeps up when peaks hit. Message durability scales to meet the throughput demands of the applications. As the system scales down, durability is not affected, since the message content for a single application does not need to span multiple scale units.

Dan Diephouse (MuleSoft): I think in the early days of iPaaS, dynamic scale-out wasn’t something people thought about. iPaaS started as just “cloud integration”, which primarily focused on back-office integration of SaaS apps. But now, iPaaS is much more than that. Organizations must think about API design and implementation, messaging, order processing, etc. In this context, scalability, message durability, reliability and uptime are incredibly important. Your APIs and integrations cannot miss a beat or you’re going to lose customers or stop being able to operate. We’ve learned this through direct customer interaction: our cloud platform is powering the day-to-day business of very large, global corporations such as Unilever, Coca-Cola, etc.

We achieve this through a number of different means:

  • Statelessness: We try to push all of the state out of our runtime engine. We’ve built services for messaging (Anypoint MQ) and caching (ObjectStore), which we scale and manage independently of the runtime.
  • Redundancy: all our services are available in multiple data centers, and nearly every service is available in multiple regions now.
  • Self-healing: our platform is able to detect failures and rehydrate runtimes in different data centers if need be.
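To convey the self-healing idea in the last bullet, here is a deliberately simplified supervisor loop, an illustration only and not MuleSoft's control plane: it probes each runtime's health endpoint and triggers a restart elsewhere on failure. The URLs and restart action are placeholders.

```python
# Toy self-healing supervisor (illustration only, not MuleSoft's control
# plane): probe each runtime's health endpoint; rehydrate it on failure.
import time
import requests

RUNTIMES = {"worker-1": "https://dc-east.example/health"}  # placeholder URLs

def rehydrate(name):
    # Placeholder: a real platform would recreate the runtime in another
    # data center and rewire traffic to it.
    print(f"restarting {name} in another data center")

while True:
    for name, health_url in RUNTIMES.items():
        try:
            requests.get(health_url, timeout=5).raise_for_status()
        except requests.RequestException:
            rehydrate(name)
    time.sleep(30)  # polling interval
```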

InfoQ: What are some of the main drivers for customers to move to iPaaS solutions over traditional, on-premises Integration Brokers/ESBs?

Dan Diephouse (MuleSoft): I think there are two pieces here. First, why cloud? Cloud has a huge ROI for our customers. They no longer need to worry about provisioning servers, maintaining operating-system updates, testing scalability, rolling out updates to our runtime, or putting in place security best practices for infrastructure. This not only results in significant cost savings for our customers, but allows them to focus on innovation, not infrastructure. One team we worked with recently said that getting something provisioned used to take four months. Now it’s instant. This accelerates delivery and value to the business.

It’s worth noting that there’s a huge difference between this model and the model that some legacy vendors pursue which is “we put it on Amazon, therefore it is cloud.” The former is multi-tenant and has all the benefits of multi-tenancy. The second is managed services, where you still need to worry about many of the issues listed above.

There’s also a second question in here: why a more modern integration solution? The world has changed a lot in the last 5 years, and integration solutions need to keep pace:

  • With SaaS, enterprises need a hybrid solution that can span both cloud and on-prem applications, so they can run workloads wherever the “center of gravity” is.
  • The world is moving toward RESTful services, and you need a solution that has been designed to help you design, implement and consume APIs in an easy manner. Without this you’re just connecting systems directly, not driving reuse, and not getting the efficiency out of your business that you could.
  • Out of the box application connectivity is increasingly important as SaaS applications proliferate.
  • Different modes of connectivity are converging. Most customers have a mix of needs across application and data integration, APIs, B2B, and other integration styles.
  • The world is no longer SOAP/XML centric. Customers need a format-agnostic solution, not a normalized model that converts everything to XML.

Many vendors just haven’t kept up with these changes, and the relevance of their solutions has suffered as a result.

Darren Cunningham (SnapLogic): Traditional integration brokers/ESBs have come to the end of their lifecycle. They are heavyweight, XML-centric technologies built for legacy business applications that are upgraded every few years. These integration technologies are heavily customized, difficult to implement, use and upgrade, and they struggle to keep up with the pace of modern cloud applications and the volume, variety and velocity of big data. Digital transformation is the primary driver of the move to an iPaaS solution. This typically entails the adoption of cloud and big data technologies and a rethinking of your analytics infrastructure to become more predictive and real-time. Other drivers include speed and agility and line-of-business self-service. An enterprise iPaaS solution must deliver unified application and data integration capabilities along with the ease of use that allows more people to develop and manage integrations, and the platform must be built on modern standards (for example, REST and JSON).

Jim Harrer (Microsoft): We clearly see CIOs and CTOs pushing enterprise architects to simplify their data centers and application footprint. It started with servers (OS and patching), and the focus has now shifted to the application layer, which is why so many companies have adopted an “API first” mindset driven by SOA and microservices. As on-premises applications are retired for new SaaS contenders, it’s logical that these integrations should also be done in the cloud, where they can be easily managed and scaled while enjoying the economics of paying only for consumption. Our customers use BizTalk for on-premises integrations and Azure Logic Apps for SaaS-to-SaaS integrations when appropriate, but have the flexibility to connect both scenarios with our native adapters between the two products.

InfoQ: Containers, such as Docker, are gaining traction in the enterprise. Do containers complement or compete with iPaaS platforms? How so?

Darren Cunningham (SnapLogic): Containers are complementary to an iPaaS platform. In fact, SnapLogic has added a new capability to “containerize” hybrid cloud and big data integration. This capability allows customers to deploy a just-in-time SnapLogic Snaplex (the elastically scalable data-processing component of the SnapLogic platform) via a Docker container. These Snaplex containers can be deployed in any cloud environment that can host Docker containers, and can run in data centers running Docker Swarm, Kubernetes or Mesos. Using these containers, it is quick and easy to deploy and take down entire Snaplex clusters that make efficient use of servers.
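For readers who want to see the mechanics, spinning such a container up and down on demand can be scripted with the Docker SDK for Python; the image name and environment variable below are hypothetical, for illustration only.

```python
# Just-in-time container deployment via the Docker SDK for Python
# (pip install docker). The image name and variable are hypothetical.
import docker

client = docker.from_env()

# Spin up a data-processing node only when a burst of work arrives...
node = client.containers.run(
    "example/snaplex:latest",                       # hypothetical image
    detach=True,
    environment={"CONTROL_PLANE_URL": "https://example.com"},
)

# ...and tear it down once the work is done, freeing the server.
node.stop()
node.remove()
```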

Jim Harrer (Microsoft): We see containers as complementary to Microsoft’s iPaaS offerings.

Containers do not natively provide the core functionality of iPaaS: orchestrating different business processes, exposing new APIs, integrating with business partners, or connecting to key line-of-business systems. Instead, containers are much more about simplifying the deployment and management of developer code in production, which ends up being complementary.

A lot of our customers’ solutions involve writing and deploying custom code in addition to leveraging iPaaS components like Azure Logic Apps or API Management. When it comes to deploying this code and logic to their respective environments (production, dev/test, pre-production), containers are another tool for managing the application lifecycle.

This relationship is something that Microsoft is able to provide in a unique way, since Azure includes so many development tools and hosting options that seamlessly connect to Logic Apps. We view the relationship as very similar to how serverless compute (like Azure Functions or AWS Lambda), PaaS (like Azure App Service), and IaaS (like Azure VMs or AWS EC2) complement iPaaS offerings.

Dan Diephouse (MuleSoft): Containers are an enabling technology for us. They provide many benefits for customers in terms of security, manageability and resource optimization. Because of this, many of our current customers are already putting the Mule runtime engine into Docker containers that they manage. We’re also in the process of adding Docker support as a first-class citizen on our platform.

InfoQ: The ‘Citizen Integrator’ has started to gain traction in analyst and vendor communities. Where do you see this phenomenon headed? What strategies should customers employ in order to provide governance over these types of interfaces developed by citizen integrators?

Jim Harrer (Microsoft): We see integration as ultimately becoming as ubiquitous as using an Excel spreadsheet to organize data. Today information workers use a myriad of SaaS applications to get their jobs done, and it’s up to integration solutions to be able to connect these services together. Although the integration professional will be well served by iPaaS solutions, our goal is to cater to citizen integrators through iSaaS solutions like Microsoft Flow. Microsoft Flow is built on top of Logic Apps, but as a full-fledged SaaS instead of a platform service. This enables us to cater to the DIY scenarios that the citizen integrator will start with.

With Microsoft Flow and Logic Apps we offer a smooth grow-up path from iSaaS to iPaaS, which enables central control and governance of integration solutions that may have been started by a citizen integrator but have become business critical. In addition, we are building organization-wide management into Microsoft Flow, which enables administrators to monitor and govern whatever the citizen integrators create (even if those processes are not business critical).

Darren Cunningham (SnapLogic): The key here is self-service. A modern iPaaS must provide a web-based designer, not just Eclipse-based tools for developers. With a decentralized model there will be a spectrum of users, from advanced to citizen integrators, so while the user experience has to be simplified, it cannot sacrifice the ability to also handle complex, multipoint requirements. Along with security, scalability and high performance, iPaaS solution providers who want to go beyond simple point-to-point use cases must also focus on providing strong administrative capabilities in order to govern which users can access which level of functionality at any given time. SnapLogic customers typically set up a federated, shared-services model, with a central team enabling the rest of the organization to do their own integrations via a self-service portal managed by IT.

Dan Diephouse (MuleSoft): The problem with the term citizen integrator is that it means different things to different people. I really like the way that Gartner defines this space, splitting it across three personas:

  • Integration specialists: those who do integration all day long in specialized tools.
  • Ad-hoc integrators: those who do integration sometimes, are still technical, and typically reside in the line of business (LOB).
  • Citizen integrators: users who are not technical.

It’s key to look at all these personas holistically if you’re really going to tackle the problem of integration. Otherwise you can quickly end up with a mess.

For example, a key part of enabling ad-hoc integrators and citizen integrators is making sure they can access the data. Integration specialists play a key role here in opening up APIs to core systems, so they can be consumed by the line of business. This is key from a governance angle, since APIs give you a way to control access to data, ensure that downstream users don’t crash your systems due to inadvertent load (e.g. through a rate limiting policy), enforce tight security controls, and ensure that downstream integrations don’t break when the underlying data structure changes.

For ad-hoc integrators, speed of delivery is important. We’ve got a lot of pre-built templates and examples for them to get started quickly with some guard rails and best practices. We’re making it easy for them to discover APIs to consume through portals. And we’re continually building easier-to-use tools.

Then, looking to citizen integrators, there’s a lot of potential. But most of the activity so far has only been tangentially related to iPaaS, in my opinion. The simplifications a lot of companies have made to make citizen-integrator tools attractive to business users lend themselves well to solving small-business or personal automation problems (e.g. send me a text when my boss sends me an email). Awesome stuff. But are these tools going to help you integrate Workday and your benefits system? Or create an API for your mobile app? No. The data transformations will be too complex.

So the question is: how do we make this relevant to the enterprise? One example of how we’ve done this is dataloader.io. We took one specific integration task, data loading, and made it widely available to a broad set of users. This relieves IT of a complex task by making it something a business user can do. Another thing we’ve done is Data Gateway, which makes it possible for a business user to connect Salesforce to SAP or a database. We believe there’s a lot more opportunity here in terms of simplification and collaboration across these personas, and it’s something we’re investing heavily in.

InfoQ: With more interfaces being built for iPaaS, and potentially authored in a Web Browser, how has testing changed? Are automated test cases still important?

Dan Diephouse (MuleSoft): Testing is timeless!

OK, so you may argue that for certain “citizen integrator” use cases testing is not important. Sure. Just like you don’t need to test your auto-reply (except for that one time when I told everyone I was out of the office a year into the future…). I consider these more configuration than “code” that needs to move through the full software development lifecycle.

We also need to think about this more broadly than just testing, and more around continuous delivery. You need to be able to automate everything across the development lifecycle, including testing, promotion, and governance. We’ve invested heavily in this area, providing tools to make it easy to automate deployment using Jenkins/Maven, testing, coverage reports, etc.  

Darren Cunningham (SnapLogic): With the advent of more APIs, testing has shifted toward programmatic, harness-based testing rather than the old “record and playback” mode of browser-interface testing. To maintain agility, automated validation of data-flow pipelines is critical to ensure that changes can be quickly tested for regressions. SnapLogic pipelines expose expression and conditional logic to detect data correctness. Since pipelines can also be called through REST APIs, they can easily be tied into existing test harnesses.
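Because a pipeline invoked through a REST API is just an HTTP endpoint, a regression test reduces to calling it and asserting on the output. Below is a pytest-style sketch; the URL and field names are hypothetical.

```python
# Pytest-style regression test for a pipeline exposed as a REST endpoint.
# The URL and field names are hypothetical, for illustration only.
import requests

PIPELINE_URL = "https://example.com/api/pipelines/order-feed"

def test_order_pipeline_maps_fields():
    resp = requests.post(PIPELINE_URL,
                         json={"order_id": "42", "qty": 3},
                         timeout=60)
    assert resp.status_code == 200
    body = resp.json()
    # Automated checks like these catch regressions whenever the
    # pipeline's transformation logic changes.
    assert body["orderId"] == "42"
    assert body["quantity"] == 3
```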

Jim Harrer (Microsoft): If the iPaaS component is being used for core business activity, then it does need automated testing as well as robust application lifecycle management. We actually view the need for testing and source control as one of the big differentiators between citizen integrators and iPaaS users.

To be more specific, the UX / web based tools are great for getting started and learning the platform, but key components will still end up in source control and with automated tests. This is something we see for a lot of our major customers that are running mission critical workloads on Azure Logic Apps today.

That is why Azure Logic Apps supports a “code view” of the definition plus Visual Studio support (in addition to the visual designer in the browser), and Azure API Management supports exporting its configuration and policy in a format that can be stored in source control (and has git support built in).

InfoQ: What is the next “big thing” in iPaaS?

Jim Harrer (Microsoft): iPaaS is expanding in scope and capabilities due to business demands to add more value and speed when integrating applications. Workflow automation, API management, enterprise-service-bus capabilities and managed connectivity to common commercial SaaS applications are a great foundation.

However, our customers want us to help them do more. We’re answering this challenge by making it possible to connect easily to more than 50 Azure services, with Azure Machine Learning, Azure Cognitive Services and Azure IoT providing high-value opportunities for integration developers to do more in their workflows.

A recent example of this was a partner working with a global leader in large fitness centers, who was asked to integrate the member-management system with Marketo. For the POC, the partner also integrated the test data with Azure Machine Learning and built a Power BI dashboard to give the client detailed insights on member churn, including at-risk members that Azure Machine Learning identified as most likely to cancel their memberships in the next 60 days. The partner exceeded the customer’s expectations and won the business.

We will continue to expand our iPaaS solution, which includes Logic Apps, API Management, Service Bus and on-premises connectivity via BizTalk Server, and find unique opportunities to create first-class experiences with all of our more than 50 Azure services. We believe this gives our customers limitless possibilities at a very aggressive price point.

Dan Diephouse (MuleSoft): At MuleSoft, we’ve been working with our customers on this concept of application networks. Too often, as you connect your company, it results in a giant mess. Brittle connections. Very little reuse. The data is not accessible to others for innovation. And the net result is more complexity, not less. With the right organization paradigms - reuse of services, reuse of access, the right API abstractions - over time, you get a reusable application network that provides you leverage. It enables innovation by making it faster and faster to connect your apps, data and devices, while still providing governance. This is a major area of focus for us. In the near and long term, we’re enriching our hybrid iPaaS capabilities to better support the building of application networks: from expanding our design tooling to reach a broader set of an organization’s developers to helping organizations get smarter and more secure via network analytics and intelligence.

Darren Cunningham (SnapLogic):  The next “big thing” for iPaaS is big data. Until now, most of the energy around iPaaS has focused on real-time hybrid cloud application integration use cases. But apps are finite. Data is infinite. We’ll continue to see convergence in the enterprise around a unified platform approach to multiple, historically distinct, integration use cases. In the multi-cloud, big data and IoT era, we’ll see more and more enterprise IT organizations adopt iPaaS technologies as the key enabler of digital transformation. Technologies built in the last 3-5 years will prove to be better suited for the new analytical and operational integration requirements. 

About the Panelists

Dan Diephouse is Director of Product Management at MuleSoft. Dan is part of the team that created and launched CloudHub, MuleSoft’s integration platform as a service. He also created and launched dataloader.io, *the* data loader for Salesforce, which has been #1 on the AppExchange since it launched.

 

Jim Harrer is Principal Group Program Manager at Microsoft. Jim leads the Integration Program Management efforts within the Cloud & Enterprise division at Microsoft. Jim’s team is currently redefining Microsoft's Enterprise Integration strategy which includes BizTalk Server and Azure Logic Apps.

 

Darren Cunningham is VP of Marketing at SnapLogic, a cloud integration pioneer and award-winning innovator. At SnapLogic, Darren focuses on product management and outbound product marketing.