
InfoQ Virtual Panel: A Practical Approach to Serverless Computing


Key Takeaways

  • The sweet spot for serverless functions is where operational costs are unnecessarily high. Developers typically target greenfield apps rather than rebuilding existing ones in a serverless fashion.
  • High-volume compute is where serverless functions should be able to deliver really substantial cost benefits. But there is no guarantee you're going to save money on compute by adopting a serverless architecture.
  • Connect your code to the services you want to be locked into. Serverless products today have many convenient, but proprietary, hooks into other cloud products.
  • Spend time in the proof-of-concept phase, and see whether serverless actually seems like something that would be meaningfully better for things you are working on.

Add serverless computing to the growing list of options developers have when building software. Serverless products—more accurately referred to as Functions-as-a-Service—offer incredible simplicity, but at a cost. What should developers know before moving to this stateless, asynchronous paradigm? How should companies evaluate any perceived or actual platform lock-in that accompanies these services? To learn more about this exciting space and the practical implications, InfoQ reached out to three experienced technologists:

  • Joe Emison, the CIO of Xceligent
  • Kenny Bastani, Spring Advocate at Pivotal
  • Fintan Ryan, industry analyst at RedMonk

InfoQ: Thanks to each of you for joining this virtual panel. Let's jump right in. If we assume that most enterprises do not (yet) have sophisticated event-driven architectures in place, are serverless functions a drop-in replacement for many existing components? What realistic use cases should developers and operators look at when introducing functions to their environment?

Kenny Bastani: Based on what I've seen, serverless functions are not a drop-in replacement for existing components. I see two roads to adoption for FaaS: greenfield applications and supplemental on-demand workloads.

For greenfield applications, building serverless applications dramatically lowers the cost of experimentation. A business or IT department might have many ideas they would like to try out, and may have the development resources to execute on them. But until now, the blocker for trying out many ideas has been the misallocation of critical resources. By critical resources, I mean infrastructure and operations. The infrastructure and support costs of tapping into existing operations pipelines for hosting enterprise applications are usually too high to support continuous experimentation. Serverless applications offer enterprises the ability to try out new ideas that they would normally say no to. If some of the new ideas end up failing, the failures don't come at the cost of derailing other ongoing business-critical projects. On the other hand, if one of the ideas turns out to be a big success, the business can more easily justify investing additional resources.

The other road to adoption for serverless is supplemental on-demand workloads. These are workloads that are not meant to be long-lived and do not need a permanent home on-premises. One example could be a function that is responsible for moving data between a third-party SaaS product and an application that is hosted on-premises. A serverless function could be invoked only when needed and perform an integration task that bridges the two systems. The function would be able to use the developer APIs that are provided by a SaaS vendor to insert or update data in a third-party system using a trigger that is hosted on-premises.
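
To make Kenny's integration example concrete, here is a minimal sketch of such a bridging function, written as a Node.js Lambda-style handler. The SaaS hostname, token variable, and payload shape are illustrative assumptions, not any particular vendor's API.

```javascript
// A sketch of an on-demand integration function: an on-premises trigger
// invokes it, and it forwards a record into a hypothetical third-party
// SaaS API. Only Node core modules are used.
const https = require('https');

exports.handler = function (event, context, callback) {
  const body = JSON.stringify({ record: event.record });

  const req = https.request({
    hostname: 'api.example-saas.com',   // hypothetical SaaS host
    path: '/v1/records',
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + process.env.SAAS_API_TOKEN, // injected, not hard-coded
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body)
    }
  }, function (res) {
    // Treat any 2xx response as a successful sync.
    if (res.statusCode >= 200 && res.statusCode < 300) {
      callback(null, { synced: true });
    } else {
      callback(new Error('SaaS API returned ' + res.statusCode));
    }
  });

  req.on('error', callback);
  req.write(body);
  req.end();
};
```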

Joe Emison: I do think that if you have a microservices-based architecture, you can replace an entire microservice with a single serverless function or a cluster of them, and depending upon how big the microservice is, it might feel fairly trivial. For example, I have built server-based microservices that are called by single-page web apps (SPAs) to pull in content for the SPA, and the only reasons they were server-based were (a) needing to keep a credential hidden, and/or (b) CORS issues with the service endpoint. In these cases, replacing the microservice with a serverless function was trivial. But outside of this type of situation, I agree with Kenny that you're not going to migrate a monolithic application to serverless functions meaningfully.

I disagree with Kenny that serverless functions lower experimentation cost dramatically.  I think he both overstates the cost of experimentation without serverless functions, as well as understates the pain of at least the most dominant serverless function service, AWS Lambda. 

Most experimentation (and development) happens on developer workstations (laptops). I (and my teams) try out ideas all the time, and it never involves dependencies upon other departments just to test these things properly. And this is true with basically all of the enterprises I work with, which includes a decent number in the Fortune 1000 and government agencies. Evaluation speed has not been a huge issue in my experience over the past 3-4 years at least.

I also find Lambda to be painful in the extreme from a startup standpoint—which is exactly where you want speed in testing an idea.  The insane number of services you have to use (IAM, API Gateway, DynamoDB, SNS, SQS, and on and on) is the opposite of fast startup and testing. And if your enterprise allows you to use Lambda, it presumably also allows you to use Heroku or similar services (Digital Ocean? Lightsail if you’re AWS only?), and those are much better known and easier to launch code and experiment on than Lambda.

I think the sweet spot for serverless functions is where your operational costs are unnecessarily high (either as a direct cost in money or time, or as an opportunity cost), and by eliminating server maintenance, you dramatically reduce those costs. If you have an application (probably a small one) that is getting stalled or costs too much because of what your organization requires for operational buy-in and support to get it live to customers, then serverless functions may be a great way to handle things.

I would be very wary of writing large greenfield applications in serverless functions today. And to be clear—I mean where the serverless function set is large. If you're writing a large native app or SPA, and it relies upon a small set of serverless functions—that's what I do today and have been doing for more than 18 months, so I certainly recommend that. But if I were planning on building an application that will likely have more than 10,000 lines of code on the back end / middle tier (which would therefore be in serverless functions), I would absolutely avoid serverless functions for that application. We just don't know enough about the development and architectural patterns and how we should be structuring those applications as they get big. The likelihood that we build something that is unmaintainable is too high.

Bastani: I'm not one to disagree with the experience and viewpoints of others; we all bring our own perspective. I'm basing my opinion on what I've learned from industry leaders who are experts in their field. After hosting this year's Serverless track at GOTO Chicago, I was able to hear the first-hand experiences of those who have been successful with using serverless for continuous experimentation. One of them was Mike Roberts, who writes about the definition of Serverless and FaaS on Martin Fowler's website. In his talk at GOTO Chicago, he discussed how serverless enables continuous experimentation, not just for developers to test things on their local machines, but to stand up production applications that actual users will interact with. There is no value in experimentation if there are no users to provide feedback. Experimenting with technology is one thing; continuously delivering experimental products and features to customers is what matters, and serverless enables that.

With regards to AWS Lambda, I've used the system extensively myself and I do realize there are some competencies that you must pick up. That's why I have experimented and created reference architectures that allow you to completely bypass using AWS's services with your serverless functions. In this case, you would be able to create a "micro-repo" of serverless functions that live in the source repository of your microservice application, in this case Spring Boot. You can use Concourse to continuously deliver changes to your serverless functions, and inject credentials of third-party services that are managed by other platforms, such as Cloud Foundry.
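
A rough sketch of what that looks like from the function's side, assuming the pipeline stages credentials as environment variables at deploy time (the variable names here are invented for illustration):

```javascript
// The "bypass AWS services" idea in miniature: the function's only external
// dependency is a third-party service whose credentials were staged as
// environment variables by the CI/CD pipeline (Concourse, in Kenny's setup),
// so the function body stays provider-neutral.
exports.handler = function (event, context, callback) {
  const serviceConfig = {
    uri: process.env.THIRD_PARTY_URI,
    user: process.env.THIRD_PARTY_USER,
    password: process.env.THIRD_PARTY_PASSWORD
  };

  if (!serviceConfig.uri) {
    return callback(new Error('Service credentials were not injected'));
  }

  // ...connect to the third-party service and do the actual work here...
  callback(null, { configuredFor: serviceConfig.uri });
};
```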

Finally, I have had the opportunity to talk to developers working in the Fortune 100 who are building experimental applications with AWS Lambda. Just because what you've seen doesn't match the perspective of others does not mean it is not happening.

Emison: I didn’t say that the Fortune 100 isn’t experimenting with Lambda.  What I said is that I don’t see a huge experimentation pain in those in the Fortune 1000 who aren’t using Lambda.  That is: I don’t see Lambda (or serverless functions) as a way to get dramatically better at experimentation. Experimentation happens on development machines, which developers have. Serverless functions—as far as I have seen—don’t give a huge boost to experimentation and testing beyond what we already have with Heroku and just plain old EC2 on AWS. And I see plenty of great testing and experimentation in Fortune 1000 companies without dependency pain on operations.

It would certainly be interesting to hear Mike Roberts talking about what he thinks Lambda gets one above and beyond traditional VM- and container-based ways of deploying code as far as experimentation goes. His article limits his “experimentation” benefit to simple functions where the organization is already using core AWS services.  Of note, where he talks about the development agility benefit of serverless, he specifically mentions my Serverlessconf talk where I focused on Serviceful architectures (as opposed to serverless functions).

Bastani: I think the misunderstanding here is that you may not know what continuous experimentation is. It's a term that was spun out of Etsy, and it means using continuous delivery to test assumptions (A/B tests, experimental features, etc.). If the customer doesn't find a feature valuable, it can be rolled back.

Here is a good talk about continuous experimentation at Etsy. Also, another good talk by Mike Roberts.

Emison: Just watched it.  I find it to be incredibly similar to the speech I gave at the first Serverlessconf on the same topic (and Mike cites this talk of mine in his article that I linked below).

In particular, his examples are more on the Serviceful track than on the serverless function track (he gives my same examples: Firebase, Auth0, etc.). He’s very concerned about doing product/product management correctly (which is also my central focus), and proper CI/CD.  By my rough count, he only spent about 1-2 minutes of the 30-minute talk addressing serverless functions at all. He talks about blameless postmortems more than serverless functions.

So, I don’t really see how his speech addresses the core issue of how serverless functions really change the game vs. Heroku or similar container-based/container-like options for rapid testing/rapid idea-to-deployment.

I would reiterate my argument that where serverless functions really change the game is in situations where you have significant "costs"—and this includes opportunity costs and dependency costs—related to production operations.

Bastani: I should have watched your talk before today. Thanks for linking it.

I'm in violent agreement with your last two statements.

In many ways, platforms like Heroku and Cloud Foundry are similar to serverless. Each of these platforms gives developers a way to deploy their applications without worrying about setting up and managing servers. The big difference, of course, is the cost model. Now developers can create workloads that are long-lived but not always on, yet still available. That opens up a range of use cases that developers are just now starting to think about in an enterprise setting.

Fintan Ryan: Most of the FaaS development we have seen has been greenfield applications that require some form of basic event handling and initial offload. There are numerous examples of FaaS being used in other ways, but the majority of applications at scale fall into this relatively simple use case.

As of now we have not seen FaaS appearing as a replacement for existing components in any significant manner for legacy apps. We do see some replacement of components in newer applications that are already adopting a microservices approach.

Ultimately, though, FaaS is just another tool to be used for certain types of workloads; to make proper use of it, teams need a lot of other pieces in place. The current user base is still very technically savvy, and reasonably well versed in good software practices.

InfoQ: Let's poke further at Joe's comment about costs. One of the characteristics of serverless that people talk about a lot is "pay only for what you use." Do the compute cost savings (versus using a virtual machine or container instance) come into play more for infrequently called services, or high volume ones? Or is this an overrated talking point, and do the "costs" that matter relate more to operational cost, as Joe states?

Emison: I don't think that infrequently-called services get any cost benefit from serverless functions (and I don't think they get architectural or many time-to-live benefits from serverless functions either). Digital Ocean, Vultr, and even AWS Lightsail are really cheap options for running code that has a small amount of traffic. Even from an availability standpoint, it's pretty cheap and simple to make these highly available (think multi-AZ RDS and two app servers behind a load balancer). We know systems like this are easy to maintain, and we can bring in consultants or new developers to manage them fairly easily.

High-volume compute, especially unpredictably high-volume compute, is where serverless functions should be able to give really substantial cost benefits, especially where the actual amount of processing you need to do is fairly simple to encapsulate (e.g., a single function) and doesn't take more than a couple of minutes at most. Many people have been using Hadoop streaming for these cases, but serverless functions are a much better and simpler fit than having to cram something into map/reduce logic just to get the failover and scheduling benefits that Hadoop gives.
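
As a hedged illustration of that encapsulation point, here is what the "single function" of work can look like against a Kinesis-style batch event, with transformRecord standing in as a placeholder for the actual processing:

```javascript
// High-volume processing as a serverless function: the per-record work is
// simple to encapsulate, and the platform supplies the scaling and retry
// behavior that Hadoop streaming would otherwise provide. Kinesis delivers
// record payloads base64-encoded.
exports.handler = function (event, context, callback) {
  try {
    const results = event.Records.map(function (record) {
      const payload = JSON.parse(
        Buffer.from(record.kinesis.data, 'base64').toString('utf8')
      );
      return transformRecord(payload); // the actual unit of work
    });
    callback(null, { processed: results.length });
  } catch (err) {
    callback(err); // a failed batch is retried by the platform
  }
};

// Placeholder for the real per-record computation.
function transformRecord(payload) {
  return payload;
}
```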

And note that scheduling itself can have a really high cost: if you run into situations where you have to schedule (i.e., you have fewer compute resources than demand for those resources), managing scheduling is usually very expensive (people, opportunity cost, uncertainty cost, and the actual time to develop and deploy a solution). Serverless functions give you an incredibly cheap way to "pay only for your compute" *along with* an effectively unlimited amount of resources, so you don't have to worry about scheduling anything, and thus all scheduling costs go away.

All that said, I do generally believe that the costs of compute are overstated, and the costs of people are under-considered in the extreme. I can't tell you how many times I've been in conversations about "how do we lower the Amazon bill" while also being told that we can't cut IT jobs, even though the cuts to the Amazon bill would be, at best, half an FTE. We live in a day and age where you just don't need many (if any) sysadmins or network admins or DBAs, and the vast majority of organizations would be better off investing and spending more on cloud and cloud automation in the interest of removing some of the people in those jobs, since those who remain can now handle orders of magnitude more infrastructure (if done to modern best practices).

Bastani: I think that talking points around "costs" as an adoption criterion for serverless should be nuanced. I've actually heard stories from AWS Lambda early adopters where a serverless architecture ended up costing more in compute than the architecture they migrated away from. The compute costs of the new system became prohibitive, and because of this, they ended up migrating back to the old system. So there is no guarantee you're going to save money on compute by adopting a serverless architecture. If anything, it becomes harder to predictably measure compute costs over time, in the same way that it becomes difficult to accurately predict demand over time. If there is a spike in traffic, it may translate into a spike in compute costs.

The discussion around costs for adopting serverless should be more about how it complements the qualities of your current architecture. It's good to ask yourself questions like "How will adopting serverless help me deliver features faster to production?" or "Will serverless make it easier to maintain my architecture over time and eliminate sources of technical debt?". The answer to most of these questions will really depend on a modular design. If you're bad at architecture, everything will be more expensive in the long run.

InfoQ: The growing consensus about "lock-in" and serverless platforms seems to be that the code itself is portable, but it's the MANY ancillary services that pin you to a certain platform. Do you agree? Does it matter? And what should those who aren't ready to pick winners and losers in the serverless space do in order to maintain host flexibility?

Emison: I think the ancillary services are only part of the issue, and probably unique to Amazon right now.  Amazon has baked Lambda so thoroughly into its ecosystem that you can’t actually use Lambda without connecting it to several (maybe even half a dozen) different AWS services.  And it works best / is meant to be used within that ecosystem.  But the same thing is also somewhat true with something like Google Cloud Functions, which works really well with Firebase, and there really is not a good Firebase equivalent at AWS (Dynamo is a poor competitor to Firebase right now), so that has a lock-in element at Google.

Beyond ancillary services, it’s certainly possible to get quite locked into a FaaS framework by the actual code you have, framework(s) you are using, how you have it organized in your code repository, and how you deploy it.  For example, many people on Lambda are choosing the Serverless framework, and may be choosing to execute arbitrary Linux code on Lambda. Switching costs from Lambda and the Serverless framework will be high—assuming that the target FaaS platform even supports the language / code you want to run on it.

I think the vendor lock-in issue with FaaS is much less scary than the architectural lock-in you’ll have that will likely go hand-in-hand with the vendor lock-in.  That is: the application will get increasingly harder to add on to, to debug, and to train new developers on.  FaaS essentially requires a microservices architecture, and to a large extent, each function should be viewed as its own microservice. So instead of reading all of the functions in a monolithic code base (for which we have good style guides on how to organize, and with which developers are generally familiar), you will have a large collection of functions that likely don’t have coherent overarching organization or documentation that explains how you’re supposed to work with all of them. (See, e.g., this Register piece on BA’s 200 systems in the critical path.)

As a side note, I think that FaaS will follow a path similar to database normalization fifteen years ago, when everyone kept seeking true third-normal-form database structures as a best practice.  We don’t do that anymore, because it created unmaintainable garbage that was incredibly hard to write good interfaces for, or import data into, and so on.  Instead, we selectively denormalize pieces of databases for sanity (at the very least, using enumerated values in tables instead of lookup tables). We’re about to see the over-function-ification of software development with FaaS, where—just as you’d end up with 350 tables in a database structure—you’ll end up with 350 functions in an application that would have had only 75-100 in a monolithic application (or in well-constructed microservices not on FaaS).

All this is to say that the best way to maintain flexibility in this world is to reduce the actual amount of code (and certainly your cyclomatic complexity of applications) within FaaS platforms.  I don’t see huge problems with using great services like RDS or Firebase—they give you competitive advantage; who cares if they’re proprietary? But you can and should certainly avoid building large applications that call tons of different functions within any FaaS platform today—I think it is a near certainty that you’ll regret it in 18-24 months.

Ryan: I think Joe hits the nail on the head here regarding architecture, which I'll return to in a moment.

There are certainly concerns about ancillary services, but to Joe's point earlier, for most people that are using FaaS at scale this is far from a concern; if anything, it is an advantage. Products like RDS, Firebase, etc. simplify longer-term operational maintenance, albeit at the price of lock-in and cost. Many of the heavy users of FaaS we have spoken with are more than happy to make that trade-off, at least in the medium term, for the efficiency benefits they are currently gaining.

On the architecture aspect, a jumble of code is still a jumble of code if it is not well thought out, written in a coherent and consistent manner, and structured properly. There is already a tendency toward overcomplicating things, with tonnes of different functions making up an application and all the application-management overhead that entails.

Bastani: For a while now I've thought that the idea that you have to use the ancillary services of a provider with FaaS is a myth. A while ago, I decided to do some compatibility experiments with AWS Lambda and Cloud Foundry. One of the things that I've found is that you can absolutely use services provisioned using Cloud Foundry with your functions deployed to AWS Lambda.

I disagree with Joe that you have to use AWS's services with Lambda. But I do understand why most people think that this is the case. First of all, AWS makes it quite easy for you to use their services inside of a Lambda function. Secondly, the event sources that are available to Lambda are mostly all attached to AWS services. But the truth is that it's entirely possible to use a CI/CD tool like Concourse to stage service credentials provided by Cloud Foundry inside of the Lambda function that you're deploying. You can even cache the connection details of a third-party service between function invocations, which keeps things performant.
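
The caching Kenny mentions relies on the fact that state declared outside the handler survives across "warm" invocations of the same container. A minimal sketch, with connect() and the environment variable standing in for a real driver and real injected credentials:

```javascript
// Connection reuse between Lambda invocations: anything outside the handler
// persists while the container stays warm, so the expensive handshake with
// a Cloud Foundry-provisioned service happens only on cold starts.
let cachedConnection = null;

exports.handler = function (event, context, callback) {
  if (!cachedConnection) {
    // Cold start: build the connection from credentials the CI/CD
    // pipeline injected as environment variables at deploy time.
    cachedConnection = connect(process.env.SERVICE_URI);
  }
  // Warm invocations reuse the cached connection.
  cachedConnection.query(event.query, callback);
};

// Hypothetical helper standing in for a real driver's connect call.
function connect(uri) {
  return {
    query: function (q, cb) { cb(null, { uri: uri, query: q }); }
  };
}
```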

As for portability, with Cloud Foundry you're given a portable abstraction that can run on top of multiple cloud providers. That's because under the hood there is a tool called BOSH that speaks to the IaaS of the different providers. If you want to switch vendors, BOSH will allow you to move your VMs to a different provider. That is where portability with FaaS becomes tricky.

At face value, Lambda appears to be a platform service. That is, Lambda is not a resource abstraction at the IaaS layer. I think there is a good reason for this. Because the execution model for FaaS has a unique cost model, functions as a resource cannot be a part of the IaaS. That is the same reason that we don't use containers as a core abstraction of an IaaS, but instead use VMs. What makes FaaS unique is that it is a service that only becomes possible to provide when you control the pricing model of the underlying IaaS. Other than that, the code that you host with a provider is portable, just like the code that sits inside of a container is portable. Your functions are portable as long as the services that you connect to them are also portable.

My recommendation is to connect your code to the services you want to be locked into. If you use a platform solution like Cloud Foundry, you're still locked into whatever sits above the IaaS layer, but you're not necessarily locked-in to the underlying provider.

Emison: Just to clarify, when I say you have to use AWS’s services with Lambda, I mean that in order to make a web-accessible endpoint, you at least have to use IAM and API Gateway—otherwise your code is unreachable.  And if you want to have any kind of authenticated access across a set of users, you’ll have to use Cognito as well.  If you want to have any kind of state, you’ll need to write and read data, likely from either S3 or Dynamo or RDS.  And you’ll almost certainly end up using both SNS and SQS if you’re using multiple functions. And if you want to debug, you’ll likely end up at least testing X-Ray.  And this is by design: AWS is essentially a collection of IaaS microservices, and AWS (with help from its partners) spends a lot of time and money making life much more functional if you just embrace and adopt lots of them.

Bastani: Thanks for clarifying. Yes, IAM and API Gateway are required. Authentication can be handled by the invoking application. If you're using Spring Boot, you can invoke the function through API Gateway while still managing security as a part of the Spring Boot application. I do, however, think that security should be handled inside the function, which is being made possible by a new project called Spring Cloud Function. As for SNS and SQS, you can instead use your own message broker by injecting access credentials into the function from Cloud Foundry. The main problem with this approach is that you'll have to use an always-on event source, such as a Spring Boot application, that will invoke Lambda functions in response to messages being inserted into a queue. In this kind of architecture, the Spring Boot application becomes the event source.
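
For illustration, here is a sketch of that always-on event source. Kenny describes it as a Spring Boot application; this version is in Node.js for consistency with the other examples here, consuming from a RabbitMQ-style broker and invoking the function asynchronously (the queue and function names are invented):

```javascript
// An always-on process acting as the event source: each broker message
// becomes an asynchronous Lambda invocation, replacing SNS/SQS with a
// broker you run yourself.
const amqp = require('amqplib/callback_api'); // RabbitMQ client
const AWS = require('aws-sdk');

const lambda = new AWS.Lambda({ region: 'us-east-1' });

amqp.connect(process.env.BROKER_URI, function (err, conn) {
  if (err) throw err;
  conn.createChannel(function (err, channel) {
    if (err) throw err;
    channel.consume('integration-queue', function (msg) {
      lambda.invoke({
        FunctionName: 'integration-function',
        InvocationType: 'Event', // fire-and-forget
        Payload: msg.content.toString()
      }, function (err) {
        // Only acknowledge the message once the invocation was accepted.
        if (!err) channel.ack(msg);
      });
    });
  });
});
```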

InfoQ: What programming language do you prefer to use in serverless platforms--keeping in mind that only Node.js is universally supported in AWS Lambda, Google Cloud Functions, and Azure Functions? Are there attributes of one language or another that complement the serverless mindset? Are there languages you'd like to see more readily available in serverless platforms?

Emison: I’m increasingly of the belief that JavaScript—for all of its awfulness—has won, and since we can’t beat it, we should join it. We could choose to write in a language like CoffeeScript that transpiles to JavaScript to avoid some of the awfulness. Or just have strict style guides and linters and do your best.

JavaScript also lends itself to serverless platforms, because it has functional elements (although it is not a functional language)—at least more so than other commonly-used languages for web development like PHP and Ruby and Python.  By this, I mean that JavaScript feels more natural than PHP or Ruby or Python to write stateless functions in.

I suppose that true functional languages might make even more sense to support on FaaS platforms, and I know that at least Lambda has some tutorials for running Clojure code, but you’d have to have a pretty compelling reason to pick a language that has so many fewer people who can write in it…

Ryan: JavaScript is by far the most popular at the moment, but Python is definitely making some inroads. We also see Java appearing in different guises, be it on AWS or in newer platforms such as Funcatron.

I think the fact that we now have first-class functions in Java 8 could ease the transition to serverless for many enterprise developers. For the hipsters, there is definitely some interest in using Go (e.g., with the Sparta framework), but like all things Go-related, the skillsets are not exactly widely available, and this will slow any shift in that direction.

Finally, we shouldn't discount C# - Amazon didn't invest in making it a first-class citizen for no reason.

Bastani: I think Joe and Fintan have covered the best points. I would add that JavaScript is a bit of a pain when you have complex functions that perform a multi-step sequential database transaction where the results of the previous step affect the outcome of the next step. That's because functions, at least on AWS Lambda, require that you call back to a function provided as a parameter to the entry point when done.

Let's think about that for a second. With asynchronous flow control in Node.js, you're going to need to keep track of the entry point's callback function and then pass that function along to each asynchronous method call. You can of course assign the callback method to a global variable, but that's a bit of an anti-pattern in JavaScript (or in any language, for that matter). Each database transaction is also an asynchronous method, so if you have two transactions that are serial, then you're going to need to nest each callback function as a parameter without losing the one you started with. I've found this to be a severe deficit in practice, as you'll end up trying to debug the flow control through log statements scattered throughout an asynchronous cascade of callback functions. The symptom of this is that your Lambda function will just time out, without any information about where the flow control was before the invocation ended.

There are ways to work around the callback problems in JavaScript, but each one is a tradeoff that sacrifices the quality of an easy-to-maintain, modular function.
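
A short sketch of the nesting Kenny describes, with db standing in for a connected driver such as the mysql module's connection object; note how the handler's callback must be carried, and remembered, at every level:

```javascript
// Two serial database steps, each asynchronous, with the handler's own
// `callback` threaded through every level by closure. Forget to call it
// on any branch and the function simply times out, with no indication of
// where the flow control stopped.
exports.handler = function (event, context, callback) {
  db.query('SELECT balance FROM accounts WHERE id = ?', [event.id],
    function (err, rows) {
      if (err) return callback(err);        // must remember to exit here...
      const newBalance = rows[0].balance - event.amount;
      // The second step depends on the first step's result, so it nests,
      // carrying the original callback along with it.
      db.query('UPDATE accounts SET balance = ? WHERE id = ?',
        [newBalance, event.id],
        function (err) {
          if (err) return callback(err);    // ...and here
          callback(null, { balance: newBalance });
        });
    });
};
```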

InfoQ: For developers and system admins getting started with serverless functions, is there pre-work that you recommend? Should they dig into event-driven architecture patterns? Decide on an instrumentation strategy? Set up a CI/CD chain?

Emison: I would sign up for webtask.io and do their super-basic tutorial (which, if you have done Node.js or really any Linux development, will take you less than five minutes to build and deploy Hello World), and then also their Slack tutorial, which lets you build a chatbot in about five minutes as well. All of this is free, you can easily do it while you're waiting for people to join a conference call, and you'll get the idea and power behind serverless immediately.
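
For reference, the webtask Hello World really is about this small; a webtask is just a module that exports a function and calls back with its result (signature as in webtask's basic programming model, deployed with the wt CLI):

```javascript
// hello.js -- deploy with: wt create hello.js
// The simplest webtask signature: a function taking only a callback.
module.exports = function (callback) {
  callback(null, 'Hello, world');
};
```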

I would then recommend reading Mike Roberts’ great page on serverless that is on martinfowler.com.  Mike does the best job out there at talking about pros and cons and concerns you should have going into serverless.  This won’t take long, and is well worth looking at before you commit tons of time to serverless.

Once you’ve done those things, which are quick and (honestly) delightful, then I’d recommend checking out serverless.com and all of its great quick starts and demos and extensive documentation (starting here).  Most of the “serious” deployments will look more like this and have to deal with the complexities herein (although the Serverless framework does a good job of trying to make it all a lot better).

I would then spend a decent amount of time in the proof-of-concept phase, and see whether serverless actually seems like something that would be meaningfully better for things you are working on, as opposed to something cool and in the upward swing on a hype cycle.

Ryan: The only thing I can add to Joe's comments is to ensure you have CI/CD set up when getting started.

Instrumentation is definitely something that should be thought about from the outset for any serious production deployment, but right now we are very limited in options.

Bastani: Joe's list of resources is an excellent starting point.

I do recommend to anyone who wants to get started with serverless to experiment with an application you can have fun with. One of the more exciting use cases of serverless functions I've come across is conversational interfaces (or chatbots). Amazon Lex is a conversational interface for building chatbots using either voice or text. What I really love about this service is that it takes something that was relatively inaccessible on the client-side and bakes it into a programming model that thrives in a serverless environment. Lex is great because it frames a set of problems that are well-suited to a serverless architecture. Lex's programming model is an excellent playground for those who are interested in figuring out where else serverless is a valuable fit.

My last piece of advice is that serverless is not a panacea. Things like CI/CD and testing are not easy when your execution model is truly "cloud-first." Rapid feedback loops in a development environment remain critical to developer productivity. When you shift your execution model to the cloud and start needing to coordinate a deployment of multiple serverless functions – just to test a small change – you can lose your ability to iterate in your local environment rapidly.

Serverless can help you deploy code to production much faster, but can also lengthen the time it takes to iterate on changes locally.

About the panelists

Kenny Bastani works at Pivotal as a Spring Developer Advocate. As an open source contributor and blogger, Kenny engages a community of passionate developers on topics ranging from graph databases to microservices. He is also a co-author of O’Reilly’s Cloud Native Java: Designing Resilient Systems with Spring Boot, Spring Cloud, and Cloud Foundry.

Joe Emison is a serial technical entrepreneur, most recently founding BuildFax in 2008, and has consulted with many other companies on their own cloud launches and migrations. Joe oversees all technology and product management for Xceligent, and regularly contributes articles to The New Stack and InformationWeek on software development and the cloud. Joe graduated with degrees in English and Mathematics from Williams College and has a law degree from Yale Law School.

Fintan Ryan is an industry analyst at RedMonk, the developer focused industry analyst firm. Fintan’s research focuses on all things related to developers, from tooling to methodologies and the organizational aspects of software development. His primary research areas for 2016 include cloud native computing architectures, data and analytics, software defined storage, DevOps and machine learning. He is an accomplished keynote speaker and panel moderator.
