The Fast Track to AI with JavaScript and Serverless

Summary

Peter Elger explores how to get started building AI-enabled platforms and services using full stack JavaScript and serverless technologies. With practical examples drawn from real-world projects, he shows how to get up and running with AI using basic Node.js knowledge - no PhD required.

Bio

Peter Elger is co-founder and CEO of fourTheorem, a company providing expertise on next generation cloud architecture, Agile development, AI and machine learning. He was previously co-founder and CTO of Stitcher Ads, a social advertising platform, and of nearForm, a Node.js consultancy. He is a co-author of the Node Cookbook as well as several academic papers on software methodology.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Elger: I'm going to talk to you about AI as a service, the fast track to AI with serverless. I'm not going to go deep into training models and all of that stuff. Really, the takeaway I hope you get from this talk is that adopting machine learning in your day-to-day work is really not as difficult as you might think, and that you maybe come away from this talk able to go and start experimenting at a low cost with AI as a service. Because in a lot of cases, the ability to do machine learning or to run inferences is, these days, just an API call away. At fourTheorem, we do work in the serverless space, obviously, and we work in machine learning. My own particular area of research with regard to machine learning is, how do we apply machine learning to the process of software transformation, decomposing monolithic systems into microservices? Although, as we heard, the monolith is not the enemy. Can that be treated as a big data problem? It's a very interesting area, but I'm not going to talk to you about that.

Commoditization

This is breakfast. Some sausages, some rashers, black pudding, eggs, washed down with some coffee and some orange juice. That's a good traditional Irish or English breakfast. I want to go to the shops. I want to buy myself that breakfast. It's going to cost me about £15, £16 to buy these. I'll probably get two breakfasts out of it as well, and I can reuse the coffee. Let's say that I wanted to DIY my own breakfast. I want to build it from scratch. What does that look like? It looks significantly more expensive. I've got to get into animal husbandry. I need the associated equipment. If I want to do black pudding, I'm going to need some oats, and I need to grow those oats, maybe with a scythe if I'm not doing it at industrial scale, and a grinder. If I want some eggs, I've got to purchase a henhouse. I've got to keep my hens happy while they lay eggs. Get a juicer. It turns out that oranges don't grow so well in the UK. Maybe I'll pop over to Spain to get my oranges. For my nice Colombian roast, return flights to Bogota are not cheap. That comes in at a whopping £3,500 compared to my shop-bought breakfast. What's the point of all that? It's commoditization. My shop-bought breakfast: I still get it faster, it's certainly going to be cheaper, and often it's going to be better than what I can do from scratch.

If you've been around this industry long enough, as I have - I'm a little long in the tooth these days - you'll be familiar with commoditization in our industry. Back at the turn of the century, we still cared about hardware. We still racked kit. Probably some people still do rack kit, but it's happening at a smaller scale. We cared about the entire application stack, right the way from the hardware, through the OS, right up to the client. As we moved through the first decade, virtualization was all the rage. A lot of that was going on on-premise, maybe in a data center as well. We were managing virtual servers. We'd come up a level of abstraction. As we moved from 2010 through the first part of that decade, we gave away some of that control, and commoditized that part of the stack out to the cloud vendors. Infrastructure as a Service became very popular. No one really wanted to rack their own kit unless they had specialist applications that needed it. The stack commoditized up. We came up a level of abstraction. What people are really running at the moment is a lot of container workloads, Kubernetes and so on. That's commoditizing. I don't necessarily want to run my own Kubernetes cluster. I'm more than happy to use a managed service to take away some of that pain for me. As we move on through 2020 and beyond, we're going to see much more commoditization. The stack is going to commoditize again, up and up. We're going to go up another level of abstraction. We just care about running code. We just want to get our functions into production quickly. We can let someone else take care of all the heavy lifting, taking away all that work.

If you want to get started with machine learning, artificial intelligence, and so on, do you necessarily have to do it yourself? Or do you even have the desire to do it yourself? Because there's an awful lot to understand. I thought Jay's talk was excellent in trying to explain the concepts at an easy level. When you get into the detail, there's an awful lot to learn. My suggestion to you is, don't be like poor, old Webster here. Don't blow your cerebral cortex if there's an easier way to do this. It turns out there is, which is why we've just written this book. It's called "AI as a Service." In the same way that the rest of the stack is commoditizing out, AI and machine learning are commoditizing as well. What we wanted to do was to share some of our experience of building these types of systems with other developers. This is really an engineer's guide to how you get on board with commoditized AI services.

It's Just Code, and It's Only an API Call Away

It's just code, and it's only an API call away. That's the key message I'd like you to take from this talk. Pick your poison, AWS, Google, or Azure. Sorry if your cloud isn't on here. I know there are other clouds. These are the three major vendors, obviously. You can pick your language as well. Because at the end of the day, generally, what you're able to do is just call APIs, so Python, C#, Java, JavaScript, whatever language you like. You don't necessarily have to learn Python to be effective with machine learning services. The examples I'm going to give will be using AWS, for no other reason than AWS is the market leader. I'm pretty cloud neutral myself. JavaScript, purely because JavaScript is my favorite language at the moment. Although I'm fond of Python as well.

Forces behind Commoditization

What are some of the forces behind this commoditization? Obviously, growth of compute power is continuing exponentially. We're still seeing Moore's Law apply. Availability of data is another huge thing that's driven machine learning and the power of machine learning. Improved model building techniques as well. If you couple those with the business pressure to reduce release cycles, to shorten iteration times, and to scale down the unit of deployment so that we can get features into production quicker, and with cloud economics, that leads to two things: commoditized AI and the rise of serverless. I think those are two forces that we should all be paying a lot of attention to in our careers going forward.

I think the consequences are that serverless will increase and will become a standard approach for enterprise development, moving to full utility computing through increased commoditization. Increasingly, we're going to see more AI and machine learning services, and the range and capability of those is going to grow. Increasingly, as developers, we'll be incorporating those into the business systems and business solutions that we build for our clients, or for our internal clients. It was good to hear Newton referenced. One of my favorite quotes from Newton is that we stand on the shoulders of giants. My suggestion is that we stand on the shoulders of these giants in order to rapidly build systems and solutions.

Some evidence behind that: I did this survey a couple of times, most recently in December 2019. I didn't get a chance to update it for this talk, unfortunately, so this is almost certainly out of date. I haven't checked. That's the pace of innovation here. If you look across these three major vendors, they all have a huge range of services available, consumed through APIs. These are serverless services: compute, data and storage, network, developer support, and of course AI and machine learning. The green numbers there are the increase over 2018. Of course, Microsoft is the leader there because the marketing machine is ever present. Some of the services here are a little bit wafer thin. There's real innovation and a bit of an arms race going on that we can take advantage of.

If we dig in to the AI and machine learning set, you'll see that all the vendors provide a similar range of services, so AWS, Google, and Azure, across image recognition, recommender systems, speech to text, text to speech, chatbot, predictive analytics, language and NLP, support for training your own models, custom search, and then developer support services. Pick your poison. They'll all do similar things for you.

When Should It Be Used?

When should you use this? How many people here train their own models? Any data scientists? Not so many. Some people, when they see this, say, actually, I can do better. Maybe you can in certain instances. For solved, commoditized problems that are fairly well known, you're probably not going to do better than what the large vendors can do, because they have large teams of people building these services, and time to collect data, and time to train all of these models. When the problem is commodity and well understood, then we should be looking to adapt, combine, and consume these services to solve our problems. We can also cross-train some of these services for our own particular domain. Where the problem is not well understood, for example in Susan's talk, where she had some very specific needs, the data was shaped in such a specific way that you probably couldn't have used the commodity services. Then you need to go back and start applying tools like TensorFlow, Python, and so on to build custom models for your solution. Look at the commoditized solutions first.

Of course, building a model is only a small part of building a system. You may have got your data, trained your model, and it's working and so on. Those are almost bits lying around on a lab desk. You've still got to host it somewhere. You've still got to get data in and results out. Presumably, you're going to want some form of user interface on it. You've got to worry about scaling, security, monitoring, and optimizing performance. Doing machine learning is a small part of the overall delivery of an ML system. Of course, you want to deploy updates. In the same way that we want CI/CD pipelines for the software components that we build, you need some form of pipeline to push updates to your model as you iteratively and continuously improve it. That can all be done through serverless and AI services.

Architectural Context

When we were writing the book, we wanted to put down some form of architectural context that most developers would understand, to bring it close to what most people work on in a web development environment. This is our take on it. At the top, we have a web application tier: tools like CloudFront for content delivery, API Gateway, and some form of firewalling. Then a layer of synchronous and asynchronous services. By synchronous service, I mean responding, typically over HTTP, to calls that are made in through the API gateway. Asynchronous is usually something like message passing, some form of message bus, maybe Kafka, maybe EventBridge, or another bus. Underpinning that are our AI services that we can call to generate business results. Of course, under that we need some datastore. In the systems we build, we tend to use serverless databases, so DynamoDB, Aurora, or S3. We also need development support services, things like CloudFormation and CodePipeline, because we want to apply infrastructure-as-code paradigms to all of this. Those tools allow us to do that, and also allow us to use the services to build CI/CD pipelines. Of course, we need to monitor it. We need things like CloudWatch and CloudTrail for monitoring, alerting, and observability.
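The talk shows this as a diagram rather than code, but to make the infrastructure-as-code point concrete, here is a rough, hypothetical sketch of one slice of such a stack using the AWS CDK, which synthesizes down to CloudFormation. None of these names come from the book; they are placeholders:

```javascript
// Hypothetical sketch only: one synchronous service, its datastore,
// and the web tier entry point, expressed as infrastructure as code.
const cdk = require('@aws-cdk/core')
const lambda = require('@aws-cdk/aws-lambda')
const apigateway = require('@aws-cdk/aws-apigateway')
const dynamodb = require('@aws-cdk/aws-dynamodb')

class AiPlatformStack extends cdk.Stack {
  constructor (scope, id, props) {
    super(scope, id, props)

    // Serverless datastore underpinning the services
    const results = new dynamodb.Table(this, 'Results', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING }
    })

    // A synchronous service: a function answering HTTP calls
    const sync = new lambda.Function(this, 'SyncService', {
      runtime: lambda.Runtime.NODEJS_12_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('services/sync') // hypothetical path
    })
    results.grantReadWriteData(sync)

    // The web application tier's entry point
    new apigateway.LambdaRestApi(this, 'Api', { handler: sync })
  }
}

module.exports = { AiPlatformStack }
```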

Build a Cat Detector System in a Day

As we were writing the book, we set ourselves a challenge: could we build a cat detector system in a day? Because, after all, cat detector systems are really the Hello World of AI. Can we take all this technology and, in a fairly long day, build a cat detector system? I'm a big fan of JavaScript, but I figured I had to get some Python into this talk somewhere.

Yes, it turns out you can build it in a day. API Gateway at the top. This is the user interface for it here. We pass in the URL, which gets kicked off to a work queue, into a crawler service that goes and harvests the images from the site, then enqueues each of those images for analysis. That calls our AI service, which is Rekognition, AWS's image recognition service, and returns a bunch of results that we can render on the frontend.

I can show you this running. This is just a little example site with some images on it. This is our cat detector system. I've put a couple of searches in here. This is the response from just that simple site. There we go. It's a little flaky. Word cloud and word density. Some of my friends who are big into image recognition systems tell me that giraffes can often fool models. I did a Google search for giraffes. Sure enough, it's actually able to identify giraffes, which is good. Probably because the people that build these services know that giraffes are difficult, and therefore they make sure they train for giraffes. We're able to leverage their skills and expertise through calling these services. Anybody want to kick out a search term and we'll see how it does? Anyone want to suggest something?

Participant 1: Pineapples.

Elger: Pineapples. Copy that. It's just going to run an analysis, so it shouldn't take very long. It's just running the analysis now. It's analyzed. Let's see how we do. Pineapple plant, fruit, so not too bad. The other one I tried earlier was Terminator, of course. Let me throw that in there. Analyze that again. It'll take a second. We've analyzed that. There we go. We've got human, person, armor, clothing. Seems reasonable. What seems unreasonable to me is that this is a piece of sculpture apparently, and not a killer robot. I wonder what they're hiding. Of course there's a reason for that, and we'll come to that later.

What else can you do? If we just move over to my JupyterLab here. Many of you might be familiar with Jupyter notebooks. This is JupyterLab using an IJavaScript extension, because I like JavaScript. I can build my notebook in a similar way as I would just writing Python; I simply prefer JavaScript. I've pushed some images up onto an S3 bucket. I can just run this piece of code here. This will show me my images. First off, the cat detection system you just saw uses a service called Rekognition. At the core of that, it's really as simple as a single API call. I create an instance of Rekognition here. I point it at the image that I want to run the detection on. I call detectLabels. It's as simple as that. It's something that most software developers do on a day-to-day basis: consume APIs. If I run that, you'll see that we get a response back. I've told it that I only want 10 labels and I only want anything with a confidence level of over 80. The response I get back is cat, mammal, animal. You'll see that I also get a confidence level. It's important when you're consuming these services that you look at the confidence levels coming out and determine whether that confidence level is enough to allow you to use the result, or whether you need to kick out to a human. In this case, 93% is a fairly high confidence.
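For reference, here's a minimal sketch of what that call looks like with the Node.js aws-sdk; the region, bucket, and image names are hypothetical placeholders:

```javascript
// Minimal sketch of the Rekognition detectLabels call (aws-sdk v2)
const AWS = require('aws-sdk')
const rekognition = new AWS.Rekognition({ region: 'eu-west-1' })

async function detectLabels () {
  const result = await rekognition.detectLabels({
    Image: { S3Object: { Bucket: 'my-images-bucket', Name: 'cat.png' } }, // hypothetical
    MaxLabels: 10,     // only want 10 labels back
    MinConfidence: 80  // ignore anything under 80% confidence
  }).promise()
  // Each label comes back with a Name and a Confidence score
  result.Labels.forEach(l => console.log(`${l.Name}: ${l.Confidence.toFixed(1)}%`))
}

detectLabels().catch(console.error)
```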

What else can we do off the shelf with this service? Here's a bunch of very happy looking people at an office meeting. Detecting faces is a commodity service now. Feed the image in, call detectFaces, and add a little bit of code to draw some boxes. Off we go. Here we go. It gives me some rectangles, confidence levels, and I can box those faces. Celebrity detection is an off-the-shelf service. Who knew? When I first looked at this, I thought, why would that be important? For news organizations, it's not just celebrities. It's people who are newsworthy: can we detect them in this image? Here's me hanging out with my celebrity buddies, of whom I have none, I'm afraid, at a recent Oscar ceremony. We're going to kick this into the recognizeCelebrities API. This is off the shelf. One call. Kick that through and some boxing. The response we get is that it's identified Daniel Day-Lewis, Marion Cotillard, Tilda Swinton. Unfortunately, not the nice, bald chap at the end. I'll have to try harder, I'm afraid. In just the same way that we can do string searching, we can now do face searching as a commodity. Here's Jean-Luc Picard. I agree with him here, actually, I think. I do think Kirk was a better captain, not a better actor. It was the cross acting that actually made it worth watching. What we're going to do is take this image here, and we're going to see, just calling a commodity API, can we detect Jean-Luc in this picture here? Let's kick that off, and have a little think. You should be able to see there that it's found him and we've drawn a box around his face. Face search as a commodity, in the same way as you can do string searches. A lot of power.
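A hedged sketch of those three calls with the Node.js aws-sdk; the bucket, image, and collection names are hypothetical, and the face search assumes a face collection was created and indexed beforehand:

```javascript
// Off-the-shelf Rekognition calls: faces, celebrities, face search
const AWS = require('aws-sdk')
const rekognition = new AWS.Rekognition()

async function demo () {
  const image = { S3Object: { Bucket: 'my-images-bucket', Name: 'meeting.png' } } // hypothetical

  // Face detection: bounding boxes plus a confidence level per face
  const { FaceDetails } = await rekognition.detectFaces({ Image: image }).promise()
  FaceDetails.forEach(f => console.log(f.BoundingBox, f.Confidence))

  // Celebrity recognition: one call, no training required
  const { CelebrityFaces } = await rekognition.recognizeCelebrities({ Image: image }).promise()
  CelebrityFaces.forEach(c => console.log(c.Name, c.MatchConfidence))

  // Face search: assumes a collection was indexed beforehand (hypothetical id)
  const { FaceMatches } = await rekognition.searchFacesByImage({
    CollectionId: 'captains',
    Image: { S3Object: { Bucket: 'my-images-bucket', Name: 'picard.png' } }
  }).promise()
  FaceMatches.forEach(m => console.log(m.Face.FaceId, m.Similarity))
}

demo().catch(console.error)
```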

Customization. Training a model takes a lot of work and effort. It also takes a lot of infrastructure. Using AI as a service, we can have that infrastructure provided for us and kick off our training with an API call. This is an example of how you would do that. I call a single Rekognition API, createProjectVersion, passing it a test dataset and a training dataset. Of course, your problem now lies in getting the dataset and labeling it. That's always a problem. At least if I can have my infrastructure provided for me, I can do this with very little effort. Then it's just a simple case of calling the custom model. Of course, with cloud services you need to be aware of the costs. Image recognition costs 0.1 cents per image, $1 per hour of training, and $4 per custom inference hour. It sounds cheap. It's fairly cheap. The problem is when you start to do this at scale, the costs can mount up. Always keep an eye on your cost base. We had a case recently of one client that left a large training compute cluster running over the weekend, and came back in on Monday morning to a bill for $6,000. Keep an eye on the cost is the lesson there. That's the cat detector.
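A sketch of what that training call can look like with Rekognition Custom Labels; the project ARN, buckets, and manifest names here are all hypothetical:

```javascript
// Kick off custom-label training, then query the trained model.
// All ARNs, buckets, and manifest names are hypothetical.
const AWS = require('aws-sdk')
const rekognition = new AWS.Rekognition()

async function trainCustomModel () {
  const { ProjectVersionArn } = await rekognition.createProjectVersion({
    ProjectArn: 'arn:aws:rekognition:eu-west-1:123456789012:project/my-project/1',
    VersionName: 'v1',
    OutputConfig: { S3Bucket: 'my-training-output', S3KeyPrefix: 'results' },
    TrainingData: { Assets: [{ GroundTruthManifest: { S3Object: { Bucket: 'my-training-data', Name: 'train.manifest' } } }] },
    TestingData: { Assets: [{ GroundTruthManifest: { S3Object: { Bucket: 'my-training-data', Name: 'test.manifest' } } }] }
  }).promise()

  // Once training completes (poll describeProjectVersions, then
  // startProjectVersion), inference against the custom model is one call too
  const { CustomLabels } = await rekognition.detectCustomLabels({
    ProjectVersionArn,
    Image: { S3Object: { Bucket: 'my-images-bucket', Name: 'test.png' } },
    MinConfidence: 80
  }).promise()
  CustomLabels.forEach(l => console.log(l.Name, l.Confidence))
}
```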

That's fine because we're challenging ourselves to build a Hello World system. It's all nice and greenfield. There's no legacy or anything. Of course, the real world is a bit messy. In fact, it's very messy. I'm sure most people in this room have to deal with technology estates that look a bit like this, with ETL jobs flying all over the place, lots of different databases, no source of truth, and so on. How do you start to apply these machine learning algorithms and techniques to the real world? As with all good computer science problems, we just introduce another level of abstraction.

Synchronous and Asynchronous API

If we treat our technology estate, or our legacy estate, as a black box, we can just bridge into our cloud system, send the appropriate data over, and get a response back through an API. If the process is fast enough, like a quick image recognition or extracting some text, a request-response pattern is fine. If it takes a little bit longer to run, then you might consider an asynchronous pattern. In this case, we're making a call to our API, but we're not expecting any response back. How do we get the data back out? Maybe we'll build another line-of-business system, put another client on that can be integrated into our workflow. Maybe we call out to external APIs or some other asynchronous communication mechanism, maybe Slack, maybe email.
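As a rough sketch of the asynchronous edge of such a bridge, here's what the accepting Lambda handler might look like, assuming SQS as the work queue; the environment variable name is a hypothetical placeholder:

```javascript
// A Lambda handler behind API Gateway that just enqueues the work
// and acknowledges; a separate worker delivers results later via
// another channel (a client app, Slack, email...).
const AWS = require('aws-sdk')
const sqs = new AWS.SQS()

module.exports.submit = async event => {
  await sqs.sendMessage({
    QueueUrl: process.env.ANALYSIS_QUEUE_URL, // hypothetical
    MessageBody: event.body
  }).promise()
  // 202 Accepted: the caller gets an acknowledgement, not a result
  return { statusCode: 202, body: JSON.stringify({ status: 'accepted' }) }
}
```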

Streaming

The third approach, of course, is to do a much tighter integration using streaming. I'm a big fan of Kafka. Put in a Kafka cluster, maybe on-premise, and then stream up to a managed Kafka service to pass data back and forth. There are options, by abstracting this out and treating this as a clean build.

For a second example, in the book, we wanted to take a much larger piece of functionality and see if we could put that together in a few days using commodity AI services. We took the domain of social CRM. This was an area where a lot of money was invested to do brand monitoring and brand control across various social channels. Four, five years ago, when people were building these systems, it cost a lot of money to do, and a lot of hardware as well. Can you do that off the shelf? We have a company. We've got multiple product departments operating in multiple territories. How are we going to triage that and figure out that we've got some review data or something that is negative, and which department we should be sending it to? We need to detect the language, first of all. We maybe want to translate that into English. We want to run some sentiment detection. Incidentally, this type of data is quite amenable to off-the-shelf sentiment detection. Then we want to figure out which department we should send it to, and then push it onwards. Is it possible to build that, off the shelf, fairly quickly? It is. It looks a bit like this. On this end, we've got our various input channels, so maybe Facebook, Twitter, web forms, email, pushing into our API gateway. That then goes in through a Kinesis stream. Then we call detect language. We then translate, forward through another stream, run our sentiment detection, then run our classification. In this case, we just output into a data bucket.

AWS Comprehend

Let me show you that working. This is our pipeline. This is using a service called Comprehend, which is an NLP service from AWS. Calling the language detection is a single API call: detect language, which will figure out the dominant language and return me a language code. Doing translation is a single API call. I'm going to call a service called Translate, tell it to translate text, giving it the detected language code and the target that I want to translate to. Sentiment analysis is a single API call. You need to figure out how you process the results. This is a case for classification, where we actually cross-trained this service, Comprehend, onto a specific domain, because the document classification that came out of the box wasn't quite there. In order to do that, what we did is said, let's take some open data from Stanford. There are a lot of huge open source datasets there. Let's look at the Amazon product review data across a few categories, so automotive, beauty, office, and pet. We downloaded about half a million records. That comes in JSON format. Split that into test and training, convert to CSV, push it up to a training bucket, and then call an API to train. That takes about two hours to run your training, and you're done. You now have a custom classifier. It's as simple as that API call to run your training. Then to execute, you just call classify document. Of course, the problem is getting the right data, labeling the data, making sure you have accuracy, and eliminating bias. The point I'm trying to make here is, of course, that this infrastructure is there, and you can take advantage of it to experiment very rapidly and at fairly low cost.
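Pulling those single API calls together, a hedged sketch of the pipeline steps with the Node.js aws-sdk; the classifier endpoint ARN is hypothetical and assumes the custom classifier has already been trained (one createDocumentClassifier call pointed at the CSV in the training bucket) and deployed behind an endpoint:

```javascript
// One review through the pipeline: language, translation, sentiment,
// then routing with the cross-trained classifier (aws-sdk v2).
const AWS = require('aws-sdk')
const comprehend = new AWS.Comprehend()
const translate = new AWS.Translate()

async function processReview (text) {
  // 1. Detect the dominant language - one call
  const { Languages } = await comprehend.detectDominantLanguage({ Text: text }).promise()
  const code = Languages[0].LanguageCode

  // 2. Translate to English if needed - one call
  let english = text
  if (code !== 'en') {
    const { TranslatedText } = await translate.translateText({
      Text: text,
      SourceLanguageCode: code,
      TargetLanguageCode: 'en'
    }).promise()
    english = TranslatedText
  }

  // 3. Sentiment analysis - one call
  const sentiment = await comprehend.detectSentiment({
    Text: english,
    LanguageCode: 'en'
  }).promise()

  // 4. Route to a department with the custom classifier
  const { Classes } = await comprehend.classifyDocument({
    Text: english,
    EndpointArn: process.env.CLASSIFIER_ENDPOINT_ARN // hypothetical
  }).promise()

  return { sentiment, Classes }
}
```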

What we're going to do now is I'm going to pump some data through the pipeline. This is just the HTTP API up here. It's sitting up on AWS. I have a little bit of code here, which is just going to look at our test dataset, pick out some random reviewed data and post it up to that API. We kick about 10 of those up there. We'll do that now. There's office negative, positive beauty, positive, negative. We kick another bunch up as well. There we go. Let's have a look at how it's done. The end result of piping all of that through our streams, it's ended up in data buckets. Each bucket is keyed on auto, or beauty, or whatever the classification is. You can see there that we've got some categorizations, and we've also got some unclassified as well.

Let's have a quick look at how we interpret these results. Let me print that classification there. This piece of code is just going to pull that back and render it, so we can see how it's done. That looks fairly automotive. Notice that the sentiment there is positive. I'll explain why it's positive in a moment. That looks like beauty. Yes, that looks fairly good. We got a positive there, and some unclassified as well. This is a case where the classifier looked at it and couldn't figure out which department to put it into. A note on handling results: when you make a call to this API to detect sentiment, you're going to get this JSON back, or something very similar to it. It's going to come back with an overall sentiment indicator, so negative, positive, neutral, or mixed. Then it's going to come back with a scoring confidence level for each of those. In this case, the way we're handling that is to say, if it's neutral or mixed or negative, then we're going to treat it as negative. We're going to err on the side of caution. If it's positive, but the confidence level is less than 85%, we're just going to put it in the negative bucket as well. And if you can't get a classification score, if your classification confidence is less than 95%, leave it as unclassified. The point of this is, what you're able to do now is handle the bulk of the workload through these automated techniques, and then have a human in the loop for the cases that you can't cope with automatically.
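Those thresholding rules are easy to capture in code; a small sketch using the 85% and 95% cut-offs from the talk, applied to the shapes that detectSentiment and classifyDocument return:

```javascript
// Err on the side of caution: anything not confidently positive is
// treated as negative, and weak classifications go to a human.
function routeSentiment ({ Sentiment, SentimentScore }) {
  if (Sentiment !== 'POSITIVE') return 'negative'       // NEUTRAL, MIXED, NEGATIVE
  if (SentimentScore.Positive < 0.85) return 'negative' // positive but not confident
  return 'positive'
}

function routeClassification (classes) {
  const top = [...classes].sort((a, b) => b.Score - a.Score)[0]
  return top && top.Score >= 0.95 ? top.Name : 'unclassified' // human in the loop
}
```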

Some other things that you can do with Comprehend. Entity detection: we heard earlier about entity detection. This is a piece from the BBC recently on Virgin Galactic. I'm just going to run this through entity detection. Let's kick that through there. It's figured out that the entities in this block of text are Virgin Galactic and SpaceX, we've got Richard Branson as a person in there, some numeric quantities, some dates, and locations. Of course, that type of processing is very valuable when you want to figure out quickly what a document is about. News organizations want to get summaries of the text that's going through. That's available as a commodity. I'm sure some of you can figure out how that might be appropriate to where you work. Another API is key phrase detection. Here's another article, in this case on the Mars rover. We'll just kick that through. We can see that we've got things like bump and Mars. It's actually on Mars quakes, isn't it? It's about the Mars rover detecting Mars quakes. Costs are fairly reasonable: fractions of a cent for sentiment detection, language detection, and key phrase detection; about $15 USD per million characters for language translation; and about $3 per training hour. Beware, because while these costs look very low, once you start to do it at scale, they start to rack up. Always bear that in mind.
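Both of those are again single calls; a minimal sketch, assuming the article text is already in a string:

```javascript
// Entity and key-phrase detection on a block of text (aws-sdk v2)
const AWS = require('aws-sdk')
const comprehend = new AWS.Comprehend()

async function analyzeArticle (articleText) {
  const params = { Text: articleText, LanguageCode: 'en' }

  // Entities: organizations, people, quantities, dates, locations...
  const { Entities } = await comprehend.detectEntities(params).promise()
  Entities.forEach(e => console.log(e.Type, e.Text, e.Score))

  // Key phrases: a quick sense of what the document is about
  const { KeyPhrases } = await comprehend.detectKeyPhrases(params).promise()
  KeyPhrases.forEach(p => console.log(p.Text, p.Score))
}
```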

I'm going to close now with a few real-world examples. This is a project that we worked on way back in 2017, when the world was much different. How do you automatically process bills and extract the information from them? Back then, we had to build this ourselves using an open source OCR library that gives you a block, line, and word structure. Do some math to figure out the boxes, given that you've got the coordinates of the text. Then feed that text through your own custom classifiers to say, this looks like a name, this looks like an address. We've now automatically extracted that information from the form. Roll forward two years, and this has been commoditized out. What we built in four, five weeks could be built in a day, because it's all been commoditized into a service. Textract, Form Recognizer, or Google Cloud Vision OCR will do exactly that job without us needing to charge clients a lot of money to do it.

Textract

Here's an example of that, using Textract. What I've got here is a passport image that I've uploaded onto S3. Obviously, it's a fake passport. I'm just going to kick that through the Textract analyzeDocument API. I'm telling it that I want it to treat it as a form. We'll just run that now. It's come back. It's a similar pattern every time: you've got your data and a confidence level. It's pretty confident that we've got a passport number, at 99%. In fact, the surname is correct, but it's not so confident there, only a 75% confidence level. Maybe you might need to bring a human into the loop, depending on what those confidence levels are. This automated processing certainly would replace what we did a couple of years ago. Just increased commoditization.
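A sketch of that call; the bucket and document name are hypothetical:

```javascript
// Analyze an S3 document as a form with Textract and read back the
// key-value blocks, each of which carries a confidence score.
const AWS = require('aws-sdk')
const textract = new AWS.Textract()

async function analyzePassport () {
  const result = await textract.analyzeDocument({
    Document: { S3Object: { Bucket: 'my-docs-bucket', Name: 'passport.png' } }, // hypothetical
    FeatureTypes: ['FORMS']
  }).promise()
  // KEY_VALUE_SET blocks hold the form fields; use Confidence to
  // decide whether a human needs to review the extraction
  result.Blocks
    .filter(b => b.BlockType === 'KEY_VALUE_SET')
    .forEach(b => console.log(b.EntityTypes, b.Confidence))
}

analyzePassport().catch(console.error)
```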

Another example is room rate pricing. We work with some guys that actually do price optimization using people - people with tacit knowledge, who know the industry. They tend to use things like historical room rates and historical occupancy levels. What's the weather like? Are there any local events on? Is there a big concert on in town this week? That's fine, but it's very human intensive. Using a service called Forecast from AWS, we can actually start to automate that process. We can feed in these different types of data, do some cross-training in Forecast to apply it to our specific domain, and then push recommendations back into the system. That's removing a lot of slow, laborious human work and replacing it with automation. Obviously, the key reason for doing that is that it allows the company to scale up. They don't have to bring so many people in and train them up. They can automate this and scale.

The third example is in the agritech space. This is a case of taking a custom model that these guys have trained - very smart chaps. They trained this model to help farmers with nitrate spreading: when is the optimal time to spread fertilizer on a field? There are very strict EU regulations on when you can spread nitrates. Obviously, you want to optimize this for your best result. These guys place a sensor in the field, literally in a muddy field with grass in it. It takes nitrate levels, temperature, rainfall, and so on. It also takes images. That then gets fed up to serverless, commoditized IoT services, stored, and then fed through their deep learning models, which are running on SageMaker. Custom-trained models, but using this serverless infrastructure to scale and run the solution. We were able to take a prototype from bits on the bench to production in two weeks, and we're obviously continuing to help update and drive the system forward. You get the benefits of scaling and all of those kinds of things. Again, standing on the shoulders of giants.

Summary

I believe that serverless computing will increasingly become a standard enterprise development approach, incorporating lots of AI components. Developers will increasingly consume and combine these components to produce business results rapidly, without necessarily needing a PhD in machine learning to be effective. AI is not just about the model: you've got to operationalize it, and I believe the fastest, most economic route to do that is through serverless technologies.

Questions and Answers

Participant 2: Usually, a business problem doesn't exactly match the APIs. I don't have any of the background knowledge on machine learning, I'm just an ordinary Dev, where do I get started if I still have to customize or do some training myself? How do I do that?

Elger: One of the key things here is that you can actually start to experiment at very low cost. If you're just running at a small scale, experimenting with these services doesn't cost an awful lot. You can start to get a feel for how these services work, how they interact, and how they might align with your problem domain. If you want to start doing cross-training, have a look at some of the public datasets and just try things out, experiment. It's only going to cost you a few dollars to do some training.

Participant 2: $6,000.

Elger: That was a mistake. Keep an eye on the cost. You can do fairly low cost experimentation here. That might start to help bootstrap you up into how you can consume or combine these services.

Participant 3: A lot of examples you talk about come from deep learning based models, but a lot of the models that can be applied in business are more traditional neural nets. In those cases, do you think the commodities still make sense, or do you think compiling your trained model from your neural net into some target language and then just chucking that into a Lambda function makes more sense? Or, would you just go straight to the commodity?

Elger: It depends. If I can fit a commodity solution to my problem, I would start there. If it doesn't quite fit, if I can maybe adapt it with some transfer learning, then I would go there. The third step would be actually doing custom training. Then I would be starting to use some of the tools that are available. All three providers have a great suite of tools to help you with training. It would probably be in that order, 1, 2, 3, if that makes sense.

Participant 4: In your experimentation, did you come across any of these services that you felt aren't really commoditized yet or felt half baked?

Elger: To date, not really. I know there's been some recent releases from AWS, for example, fraud detection is a new service that has gone out. I haven't tried that. One would imagine that it would be at a reasonable enough level. It's a curse in the sense that if you're adopting version 1 of any technology, it's never going to be perfect. One of the other benefits, of course, is that if I'm consuming a service, under the hood, that's going to be constantly improved. I get the benefit of those constant improvements under the hood without needing to do anything. For me, that's a real selling point. If it's not perfect, it doesn't fit right now, do I go off and invest an enormous amount of money in training my own solutions or do I just wait for version 1.1 to come along and solve the problem for me? It's just another way of thinking about the problem.

 


Recorded at:

Jul 31, 2020
