
Machine Learning and Cognitive Computing



This article first appeared in IEEE IT Professional magazine. IEEE IT Professional offers solid, peer-reviewed information about today's strategic technology issues. To meet the challenges of running reliable, flexible enterprises, IT managers and technical leads rely on IT Pro for state-of-the-art solutions.


Machine learning and cognitive computing are today’s newest buzzwords, and there is a lot of hype surrounding them in the market. This article is based on a recent webinar on analytics produced by IT Professional, the Journal of Applied Marketing Analytics, and consultancy Technology Business Research (TBR), along with the Content Wrangler; it was hosted by Earley Information Science (EIS). Video of the webinar is available online. The goal is to help organizations understand what’s practical and what’s possible in the fast-growing fields of machine learning and cognitive computing, and how these fields are related to artificial intelligence (AI).

Earley: Let’s first talk a little bit about machine learning, cognitive computing, the AI that underlies them, and where those fields have been going. Machine learning is a technique for detecting patterns and surfacing information, using many different mechanisms based on statistics and mathematical models. One good example is search technology, which provides entity extraction, clustering, and classification. Machine learning algorithms pull back information that’s relevant to the user by looking for patterns, improving the search results. For example, Amazon uses machine learning algorithms to provide suggestions about other products that are related to the ones a user has viewed or selected.
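The recommendation pattern Earley describes can be sketched in a few lines. The following is a minimal, hypothetical item-to-item co-occurrence recommender, purely illustrative and not Amazon's actual algorithm; all names and data are invented:

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(baskets):
    """Count how often each pair of products appears together in a session."""
    counts = defaultdict(int)
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            counts[(a, b)] += 1
    return counts

def recommend(product, counts, top_n=3):
    """Rank other products by how often they co-occur with `product`."""
    scores = defaultdict(int)
    for (a, b), n in counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [p for p, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]

baskets = [
    ["camera", "tripod", "sd-card"],
    ["camera", "sd-card"],
    ["tripod", "camera"],
    ["laptop", "mouse"],
]
counts = build_cooccurrence(baskets)
print(recommend("camera", counts))  # tripod and sd-card co-occur most with camera
```

Production recommenders use far richer signals (views, ratings, embeddings), but the core idea of surfacing patterns from behavioral data is the same.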

Cognitive computing is a newer, emerging field. It’s about making computers more user friendly, with an interface that understands more of what the user wants. It takes signals about what the user is trying to do and provides an appropriate response. Siri, for example, can answer questions but also understands context—whether the user is in a car or at home, moving quickly and therefore driving, or more slowly while walking. This information contextualizes the potential range of responses, which are therefore more personalized.

AI encompasses all of these tools and solves a wide variety of problems—everything from writing articles to driving cars, detecting fraud, and diagnosing diseases. A lot of decisions that have typically been made by human beings can now be made by means of AI. So, there are many developments in the AI field that are practical and actionable. With this background in mind, what are your thoughts on this technology?

Daley: The fundamental concepts of AI have been around for 60 years, including the idea of modeling machines to mimic human intelligence. What’s really changed in the last few years is the improvement in methods based on probability and statistics. The algorithms for deep learning, in particular—which is really a form of backward-chaining neural nets [an approach that uses learning algorithms to progressively infer a pattern about a body of data by starting with the goal and then determining the rules that are inferred, which can then be used to reach other goals]—have advanced significantly. Second is the recent discovery that using graphical processing unit [GPU] chips to do calculations in parallel allows results that would have taken weeks to be obtained in just a few hours. Finally, and I think probably most significantly, is the amount of data that’s available to train AI systems. They need a lot of data, and it’s available digitally in all forms. In Google’s famous cat experiment, an AI system was exposed to millions of thumbnails from YouTube, and without being supervised, the system learned to recognize a cat.

Downs: Many machine learning solutions have already been developed, and they are continually being improved. I spent some time at Microsoft Research doing some early work in Bayesian reasoning and machine learning. We built a solution for traffic modeling that was spun out as Microsoft Research’s first startup company, called INRIX, which now provides real-time and predicted traffic information around the world.

I see three tiers of commercial engagement with these types of technologies. For one group of companies, such as Google, Amazon, Facebook, Microsoft, and Apple, these technologies are strategic, and their investment is a hundredfold or more what it would be for a more conventional business. Second, for some companies, the impact of these technologies could be strategic, but they cannot make the investment in terms of human resources and tools. Finally, we come across companies that have had some of the tools become more accessible to them, but they lack the know-how to use them. Some of them say, “We’ve tried machine learning, and it doesn’t work.”

One of the key propositions for machine learning and cognitive computing to really cross the chasm in terms of driving business value is the ability to package up some of these capabilities, perhaps with specific applications in mind, so that they can add value without requiring hands-on intervention. The tools can act without human expertise and report what the technology is discovering.

Earley: How is this technology being applied in marketing?

Downs: Without technological solutions, marketing is a relatively slow process that engages at the enterprise level and uses a large number of resources. It isn’t unusual to have 50 or a hundred people involved in planning, campaign formulation, and testing. At Globys, we allow the marketer to drive discovery and continual optimization, automatically testing and comparing offer components such as price and value sensitivity. The machine learning actually discovers, and reports back to the marketers, the right target audience for an offer or new product. It makes the process more efficient and less labor-intensive.

Earley: Of course, there is no magic bullet; we need to start with offers and a hypothesis of some sort, as well as have clean data inputs.

Downs: Yes, that’s correct. Certain foundational elements need to be in place.

Earley: What cautions would you propose for companies in using machine learning?

Shuster: Machine learning is a tremendously powerful tool for extracting information from data. But as Seth said, it’s not a magic wand. You can’t simply say “machine learning” and make your problems go away. You have to frame the questions in a way that really allows the algorithms to answer them. Data needs to be set up in the appropriate way, and that can be complex. Sometimes, the data needed to answer the questions may not be available, which can be another barrier. Once the results come out, they need to be interpreted. It’s very important to understand the context. The marketing algorithm can tell a marketer what’s working the best, but the marketer still needs to know how to leverage the information.

Earley: What concerns do companies have about providing machine learning services?

Heffernan: I see several levels of concern in the machine learning field; some are from the vendors, some are from the prospective customers, and some are from the IT staff. The IBMs and Accentures of the world are afraid of getting behind their competitors. As much as they understand the potential, they aren’t sure they can make a compelling business case or that they have the resources to provide the services. For the IT services vendors who truly rely on IT for their business success, having the right people makes all the difference. Without the right staff, they are also-rans.

The second type of concern comes from potential clients, the businesses that could benefit from machine learning and cognitive computing. The real fear here is pretty simple. Can I afford it, or can I afford not to do this? Am I already competitively behind? Am I throwing good money at a pointless project? I think when you take a step back, everyone truly appreciates the potential of groundbreaking technology. Every C-suite decision maker has an iPhone and understands the truly radical nature of IT in our personal and business lives now, compared to 15 years ago. You don’t need to make the case anymore about how important technology is and what it can do. The challenge still remains, though—is this the right investment?

The third fear is from employees who are wondering what happens to their work when the machines take over. We’ve done some research into robotic process automation in business process outsourcing, which sounds a lot like word salad; but what it basically means is the robots doing the work in a call center, for example. We’ve seen that one robot can replace three people and have an immediate impact on cost, a reduction of as much as 25 percent. It’s not just call center staff, but IT staff as well. As we’ve seen in digital and cloud and business intelligence, it’s the lower-skilled IT services folks who are in the most trouble.

Earley: What about the higher-end skills? How does that issue fit into the machine learning picture?

Heffernan: At that end of the skills scale, you have the data scientist with an MBA and seven years of industry experience—what we sometimes call a “people unicorn.” That’s the person who can translate business needs into machine learning possibilities, then to creating the analytics, and from there, back to the business. That person is in huge demand right now. Being able to find that top talent, retain it, and manage it is really what makes all the difference. One company that’s embedding cognitive computing into its business process outsourcing offerings is in fact laying off people at the lower levels, or they’re being retrained for jobs that require higher-level services. The employees who are staying with the cognitive computing-enabled BPO offerings are the ones who can formulate a strategy that can be tested, translate the business problems into analytics, and communicate the results. We’re looking at an increased premium on IT staff at the more advanced levels. The premium was already high, and it’s only getting higher. These companies have to make a choice. Are they going to invest in cognitive computing? IBM’s Watson is one example of a major commitment. Wipro has made cognitive computing an important pillar going forward, and they are truly building out their own capabilities. Other companies are partnering to provide those high-skilled, high-priced, high-margin, high-value consultants and leaving the more basic IT aspects to others. We’re looking at this space as one that might completely upend existing business models and how companies are run.

Earley: So how do organizations get started, if their business models are going to change so dramatically?

Daley: The first step is almost always to get the people who have business problems to be solved into a room to discuss their needs. In the past, typical line-of-business leaders didn’t really care what the IT solution was—they just wanted the problem solved. I think that’s changed now, and a lot of business leaders are much more familiar with the technological possibilities. What they need is guidance on how to apply the technology to their business problems and an understanding of what is practical to actually get done.

One possible strategy is to start working with an open source AI package to understand the technology. With “black box” AI solutions, you may not know why someone got turned down for credit. It’s just, “The machine turned it down.” If the IT department gets some experience with open source tools, they can get that transparency, understand a little about how the AI works, and be better positioned to address questions about how the software behaves in the future. There is an opportunity out there, and at least at this juncture, open source seems to be the way to go.
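To illustrate the transparency point, here is a minimal, hypothetical interpretable scorer: a plain weighted sum whose per-feature contributions explain each decision. The features, weights, and threshold are invented for illustration and are not any real lender's model:

```python
# Illustrative weights and threshold -- not a real credit model.
WEIGHTS = {"income_k": 0.4, "years_employed": 0.3, "past_defaults": -2.0}
THRESHOLD = 5.0

def score(applicant):
    """Return (approved, contributions) so the 'why' behind a decision is visible."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()) >= THRESHOLD, contributions

approved, why = score({"income_k": 5, "years_employed": 1, "past_defaults": 3})
print(approved)  # False: the breakdown in `why` shows past_defaults (-6.0) drove the denial
```

A linear model like this trades predictive power for exactly the kind of explainability Daley describes: every denial can be traced to the features that caused it, which a black-box model cannot offer.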

Earley: How do you tell what’s practical and achievable for a given organization?

Shuster: A large array of problems can be addressed by machine learning. I wouldn’t even say there are any issues that can’t be helped by this technology. It comes down to what kinds of data you have and what kinds of questions you want to ask. Are the data sources internal or external, and how much do you trust them? How clean and well organized is the data, and what kind of governance is around it? Whether you can trust the answer when you ask the question is, I think, a bigger potential barrier than the type of question that can be asked.

Earley: So the table stakes are really a core architecture and clean data to work with. Some would argue that their tools work with messy data; however, I would not think many work well with bad data. Most AI experts will say that no matter the type of system, it will perform better when it’s given the products, services, processes, and unique terminology of the business. A domain model is an important part of the puzzle. What kind of culture change is needed to make this transition?

Downs: There needs to be a shift to decide that the business is going to be oriented around analytics, and a kind of “no excuses” approach to understanding at least some of the output. In terms of getting going and understanding what’s feasible, I think a lot of the first-stab efforts today get bogged down in the infrastructure and other challenges with getting the data in place. They may focus too much on getting IT efficiencies in data warehousing rather than trying to solve a relatively simple but soup-to-nuts problem that can show ROI at the very highest level, where it impacts business functions. That is a better approach than just having a technology activity that impacts technology stakeholders. It’s always a challenge, because you have to have a cross-functional initiative to pull that off. But it makes the business value much more apparent to the high-level stakeholders. It also forces the company to become more analytically minded across the business, because you engage multiple functions. You are more likely to get buy-in because you are actually driving some appreciated high-level business value.

Earley: Right. We can’t walk into the business and talk about data and architectures. We need to speak to the benefits and how we support specific objectives and solve problems. How do organizations get the education they need to tackle these analytics issues?

Daley: To be honest, I haven’t seen a lot of unicorns in my career who can span multiple disciplines. AI is really based on a strong mathematical foundation. I think that’s absolutely indispensable. But I’m not sure there is any specific roadmap. Certainly one of the things I’ve been surprised about in my research about AI is how many papers have been written on the topic—over 150,000 research papers on neural nets over the last 60 years. As a result, what I see is that applications are springing up all over the world, independently of one another. Some will be very successful, and some will not be successful at all. Organizations need to be willing to accept failure as a part of learning, and start experimenting.

Earley: So learning by doing can provide practical lessons; however, the organization has to have some tolerance for exploration. In our experience, the exploration can be done in the context of the problem being addressed, which grounds the exploration in practical outcomes. Otherwise, it’s a science project. That’s okay if there’s budget for pure experimentation; however, there better be some clear application eventually, or that certainly won’t last. What other applications have you seen in which organizations have tried projects that provided good learning experiences?

Downs: The US Postal Service actually has one of the oldest deep learning implementations in existence: a dedicated piece of hardware for handwritten zip code recognition. They deployed the system in 1987. Of course now, deep learning is a very topical technology, with a lot of focus on image recognition, and researchers are branching out into domains such as medicine, with medical imaging as a developing area.

There are many applications in language, both spoken and written, with applications in areas that have a lot of inefficiency, such as medical billing, re-admittance, bill prediction, and fraud detection.

Some of the early neural network applications were in finance. The advertising auction marketplace is another good example. An ad is actually bid on in a second-price auction in real time, while the page renders in your browser, within a time span of about 300 milliseconds. Some additional work has been sparked by European privacy laws. This has produced a wave of innovation in how to understand the qualities of individuals, and be cognizant of them, without knowing their specific identities. A market economy has developed around advertising of all types, whether it is in games, in apps on a mobile device, or online. A better understanding of the customer drives efficiency in the marketplace around advertising and display. It is highly focused, particularly as we push more and more toward anonymized information, which requires increasingly sophisticated analytics.
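The second-price mechanism Downs mentions can be sketched in a few lines. This is a simplified model; real ad exchanges add reserve prices, floors, and quality adjustments on top of the basic rule:

```python
def second_price_auction(bids):
    """Award the impression to the highest bidder, who pays the
    second-highest bid (simplified Vickrey auction)."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1]  # winner pays the runner-up's bid, not its own
    return winner, clearing_price

winner, price = second_price_auction({"adv_a": 2.10, "adv_b": 1.75, "adv_c": 0.90})
print(winner, price)  # adv_a wins but pays 1.75
```

The design choice matters: because the winner pays the runner-up's price, each advertiser's best strategy is simply to bid its true value, which keeps the real-time market efficient.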

Earley: This is a fascinating space. The ROI is clear, and it’s easy to see the impact of analytics for businesses today and how developments on the horizon will continue to disrupt. Digital marketing is a tremendous application area. In terms of what needs to be in place to make this technology work, you still need an architecture. These technologies are not magic. You do need to understand the data structure, what the rules are, and have the right components assembled for digital marketing to work. You need to understand your customer attributes and have a mechanism for pulling them from multiple sources. The tools do provide unsupervised learning approaches that show patterns, but you will still need to know what you are looking for and what you are trying to get. This aspect of machine learning, along with many other potential applications, provides a tremendous opportunity both over the near term and farther down the road.

Roundtable Participants

Bruce Daley is a principal analyst contributing to Tractica’s Automation & Robotics practice. He focuses on artificial intelligence and machine learning for enterprise applications. Daley has extensive experience as an industry analyst, writer, and publisher focused on the global IT market; has been widely quoted as an industry expert in major publications, including the Wall Street Journal, the New York Times, the Financial Times, the International Herald Tribune, IEEE Spectrum, and the San Jose Mercury News; and is the author of a soon-to-be-published book on data storage, Where Knowledge is Power, Data is Wealth. He received a BA from Tufts University. Contact him at bruce.

Olly Downs is chief scientist at Globys. He is responsible for the analytics strategy, technical approach, and algorithm design and development for Globys’s marketing personalization technology platform (Amplero). Downs is a machine learning scientist and serial technology entrepreneur, credited with bringing advanced analytics and machine learning methods to bear as the creative spark behind numerous early stage technology companies. He has a PhD in applied and computational mathematics from Princeton University. Contact him at odowns@

Mitchell Shuster is an informationist and data scientist at Knowledgent Group. He specializes in applying advanced analytics and data science concepts and techniques, including machine learning (regression, neural nets, support vector machines, clustering, PCA, anomaly detection, and so on), to help client organizations gain actionable insights and competitive advantage. Shuster previously designed and developed the basis for Intel’s worldwide high-volume manufacturing at the newest technology node and was recognized for computational modeling and process implementation. He received a PhD in physics and multiple research fellowships from Penn State University, where he authored research published in multiple prominent peer-reviewed scientific journals. Contact him at mitchell.

Patrick Heffernan is practice manager and principal analyst in the Professional Services Practice for Technology Business Research (TBR). He covers the areas of IT services, management consulting, global delivery, strategy and operations, cloud, intelligence cycle, project management, and client engagement. Heffernan received an MA in foreign affairs from the University of Virginia. Contact him at

Seth Earley (moderator) is CEO of Earley Information Science. He’s an expert in knowledge processes, enterprise data architecture, and customer experience management strategies. His interests include customer experience analytics, knowledge management, structured and unstructured data systems and strategy, and machine learning. Contact him at


