Key Takeaways
- Generative AI will transform communication and information sharing in all business processes, across all industries
- The full impact of generative AI won’t be clear for many years
- AI transformation won’t be consistent across all areas, even in the same business
- AI integration will work best when workers are allowed to determine how to augment their own roles with AI
- We need to build AI literacy into our organizations
- Human workers bring innovation, reasoning, and empathy to their jobs – we can’t lose that
This is a summary of a talk I gave at QCon SF in October 2023. There’s a lot of scary stuff out there about generative AI, but if we approach its rapid development with a little perspective, we’ll be able to help shape the process. I work with AI Lab, and we’ve been talking a lot about the future of generative AI with executives at private equity firms. I’ll share a bit of those discussions here.
Computers are dumb
As a software engineer who grew up in the early days of computing, I came up with one simple rule that’s helped me throughout my programming career: computers are dumb. They don’t know what to do unless you tell them, and then they execute exactly what you say. They’re like puppets, with a programmer behind them driving the interaction. Computers are also really bad at interacting with humans, which is sad, because that’s exactly what we want to use them for.
Then about 10 years ago, conversational user interfaces came out, like Alexa, Siri, and Google Home. Computers started getting better at interacting with humans. But there was still a programmer behind them, manipulating the interaction – programming in the sentence structure, the synonyms, and the finite number of responses. It's still a puppet. You still have to program it.
Generative AI
This year, things changed. Now that puppet is talking on its own. There's no longer a programmer manipulating the system; rather the system is talking back, and it's interacting with humans relatively well. It's not entirely dumb. As a programmer, I find this amazing, and exciting, and a little terrifying. It's certainly changing the way we think about writing software.
Almost all of our work communication is mediated by a computer in some way, and almost all of our business processes rely on some form of that communication. It isn’t a huge leap to imagine improving all those processes and all that communication by augmenting them with generative AI to make them more efficient. Then you start to get an idea of the transformation we’re facing over the next few years. I think we should all be amazed and excited, and a little terrified.
The threat, or transformation
Analysts and economists are predicting up to a 3.3% annual increase in global productivity from generative AI across the entire economy. McKinsey predicts that generative AI will increase automation in most jobs, regardless of the educational level those jobs require.
Of course, this also makes the future much harder to predict and raises new concerns. We're looking ahead at a mountain of change as these technologies proliferate across industries. Every organization will need to grapple with integrating them into processes and workflows. This could involve everything from automating customer support and market research to generating content and analyzing data.
The potential scope is vast because generative AI has implications for how we communicate and share information – the core of all business operations. Any place there is communication within an enterprise, there are now opportunities to optimize, augment or even automate that process with generative AI. It will touch everything from internal messaging and documentation to client reports and product interfaces. No department, role or project will be entirely exempt from its effects.
Rather than a big-bang transformation, the disruption from generative AI will likely be death by a thousand cuts: small changes enacted across all corners of a company, each modest on its own but collectively amounting to a revolution over time. The skills needed, the controls required, and the effects on staffing will be complex to manage. Every industry will have its own challenges and applications as well.
In the software industry, we’re already seeing how GitHub Copilot can improve developer performance. At West Monroe, we did our own study and found a 22% gain in productivity. Designers may eventually be augmented by AI that can churn out webpages and apps to spec. And generative AI makes testing and QA much more difficult, because outputs can vary with each run.
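One way teams cope with that variability is to test for properties of the output rather than exact strings. The sketch below is a minimal illustration of that idea, with a hypothetical `summarize_ticket` function standing in for whatever model call you actually make; it isn’t tied to Copilot or any particular API.

```python
import json

def summarize_ticket(ticket_text: str) -> str:
    # Hypothetical stand-in: replace this body with a real call to your model API.
    # The canned response below just lets the test run end to end; a real model's
    # output will differ from run to run.
    return json.dumps({"summary": "Password reset fails after the latest release.",
                       "priority": "high"})

def test_summary_has_required_shape():
    # Because outputs vary with each run, assert on properties, not exact text.
    raw = summarize_ticket("Customer cannot reset their password after the latest release.")
    data = json.loads(raw)                       # output must be valid JSON
    assert set(data) >= {"summary", "priority"}  # required fields are present
    assert data["priority"] in {"low", "medium", "high"}
    assert 0 < len(data["summary"]) <= 500       # non-empty, bounded summary

test_summary_has_required_shape()
```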
Other industries like finance and healthcare are exploring how generative AI can improve decision-making, predict outcomes, generate detailed content and enhance customer experiences. And in fields like marketing, generative AI can churn out mountains of copy, social posts and ad creatives, perhaps squeezing out human jobs down the line.
The timeline
William Gibson said, "The future is already here – it's just not evenly distributed." We’re definitely seeing this with generative AI. The exact timeline for these broad transformations remains unclear. History provides some perspective: previous general-purpose technologies like electricity, computers, and the internet took decades to realize their full potential. As you can see in the chart describing the rise of the internet, the core technologies were often developed and available long before they transformed society.
We're likely to see a similar trajectory with generative AI, spanning 10 or more years (see the chart below). Though the foundations with neural networks and transformers were laid years ago, applications only exploded in 2022 with models like DALL-E 2 and ChatGPT showing the possibilities. Ten years from now, we may look back on 2022 as a distant, quaint era before AI assimilation.
Staying Resilient
As business leaders, how do we build organizational resilience in the face of such monumental change ahead? The key is maintaining flexibility, with a balanced response – neither too conservative nor too aggressive. Completely resisting or banning generative AI out of fear is unrealistic, as competitors will eagerly adopt it and outpace you over time. But hastily reengineering every process to be "AI-optimized" is also risky, as we don't yet know where it will or won't add value. I’d recommend several ideas for facing this transformation in the best possible way.
Let workers automate their own jobs – they know best which parts of their roles make sense to automate with generative AI and which don't. Give them the independence to decide what to augment, and how to do it. Generative AI is one of the most democratic technologies to come along, at least since the spreadsheet; it’s not hard to use.
Maintain an open perspective on integrating generative AI, rather than banning it out of fear. Allow teams to experiment with ways to incorporate it responsibly. Setting some guardrails while encouraging learning will help you adopt ahead of the curve. While there may be security concerns, it’s worth noting that all the major cloud platforms now offer generative AI services that run in the same cloud where you already store your sensitive information.
Build organizational AI literacy by training people on what you’ve learned so far: which tasks are suited to AI and which aren't, how to use it most effectively, and how to keep it from hallucinating.
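One grounding pattern worth teaching early is to hand the model the source material it should rely on and give it explicit permission to say it doesn't know. The sketch below is minimal and assumes a hypothetical `call_llm` wrapper around whatever model API your organization uses; the prompt wording is only illustrative.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical wrapper around whatever model API your organization uses.
    raise NotImplementedError("connect this to your model of choice")

def answer_from_policy(question: str, policy_excerpts: list[str]) -> str:
    """Ground the model in retrieved source text to reduce hallucination."""
    context = "\n\n".join(policy_excerpts)
    prompt = (
        "Answer the question using ONLY the policy excerpts below. "
        "If the excerpts do not contain the answer, reply exactly: "
        "'I don't know based on the documents provided.'\n\n"
        f"Policy excerpts:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)
```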
Set up an internal wiki or repository for collecting and sharing prompts, to let people build on colleagues' work. Consider having a prompt librarian to curate the most effective ones and tweak others to improve performance. Sharing prompts will save staff time, and you’ll essentially be documenting some obscure business processes that people really want automated.
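To make that concrete, here is a minimal sketch of what one curated entry in such a library might look like; the `PromptEntry` class and its fields are purely illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    """One curated entry in a shared prompt library (illustrative fields only)."""
    name: str            # short, searchable title
    owner: str           # who maintains it and answers questions about it
    business_task: str   # the process this prompt helps automate
    template: str        # the prompt text, with {placeholders} for inputs
    known_issues: list[str] = field(default_factory=list)

weekly_status = PromptEntry(
    name="weekly-status-summary",
    owner="pmo@example.com",
    business_task="Condense raw project notes into a client-ready status update",
    template=("Summarize the following project notes into five bullet points "
              "for an executive audience:\n\n{notes}"),
    known_issues=["Tends to overstate progress; ask it to flag open risks explicitly"],
)
```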
Try out tools like GitHub Copilot, which we've found boosts productivity while also improving retention and morale, since developers enjoy the experience. I already noted the 22% productivity gain – that’s like getting an extra developer for every four you pay.
Leaning In
We’re all in different places in our AI journey. What if your organization is ready to take advantage of the productivity increase now?
Simply having an AI absorb manuals and job descriptions is not sufficient to replicate most roles. The human contributions around communication, problem-solving, innovation, and empathy are impossible to codify. Organizations should start with a deep understanding of the current work, through observation, workflow analysis, and behavioral research. At AI Lab, we’re seeing clients run into a handful of recurring problems, so here are some tips and tricks so you can learn from our trials.
What a human brings to their job description
Human workers add innovation, reasoning, and empathy to their jobs, and you don’t always see that in a job description. AI just isn’t capable of these behaviors right now, and the job description alone is often not enough to truly do the job. For example, I was working on a chatbot for an insurance company that gave coverage advice. How do you explain to a chatbot when to congratulate, or commiserate with, a person who thinks they’re pregnant?
Product approach to automation
As I said before, it’s dangerous to automate in a big-bang approach. The image below shows a product approach to automating jobs: first, learn from the workers what they’re doing and what makes sense to automate; then build tools and let the team use them. Take small steps toward automation.
Generative AI will most likely affect the job market, but rather than eliminating entire positions, it will be really good at certain tasks while others will still need a human, with their more nuanced reasoning and empathy. There will also be some rote tasks that ChatGPT will never do. I was talking with a paralegal about the impact on their role, and they said generative AI will probably not eliminate the most tedious thing they do – making copies.
Guardrails
As with any tool we use, it’s important to understand its limits and to put guardrails in place. The best way to keep the AI from saying something offensive is to stay out of territory where offense is likely. Don't ask it to be funny; that will almost always come out offensive. Just ask it to respond to a question as succinctly as it can. The big platforms do their own fine-tuning to keep their generative AI tools within ethical bounds, but you can add your own layer of protection. One option is to put an intent filter in front of the model: every time someone asks your generative AI for something, classify the request first, and if it isn't appropriate, steer them away from that question.
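Here is a minimal sketch of that idea; the `classify_intent` helper, the `call_llm` wrapper, and the list of allowed intents are assumptions for illustration, not any particular platform's API.

```python
ALLOWED_INTENTS = {"coverage_question", "billing_question", "claim_status"}

def classify_intent(user_message: str) -> str:
    # Hypothetical classifier: a keyword rule set, a small model, or another
    # LLM call that returns exactly one label from a fixed list.
    raise NotImplementedError("plug in your own classifier")

def call_llm(prompt: str) -> str:
    # Hypothetical wrapper around whatever model API you use.
    raise NotImplementedError("plug in your own model call")

def handle_request(user_message: str) -> str:
    """Screen every request before it reaches the generative model."""
    intent = classify_intent(user_message)
    if intent not in ALLOWED_INTENTS:
        # Steer the user back to supported topics instead of answering.
        return ("We can help with coverage, billing, and claim-status questions. "
                "Could you rephrase your request in one of those areas?")
    return call_llm(f"Answer succinctly and professionally: {user_message}")
```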
Beware the uncanny valley
The concept of the uncanny valley comes from animation and robotics: as a humanoid character becomes more human-like, it becomes more relatable up to a point, and then suddenly it flips and becomes really creepy. A similar thing happens when you use a chatbot, so for companies looking to optimize more aggressively with generative AI, avoiding this pitfall will be critical. Beware of unintended negatives, like customer dissatisfaction when an AI interaction feels impersonal. Tweaks like using collective pronouns, so the chatbot represents the company rather than posing as a person, can avoid the "uncanny valley" effect.
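That pronoun tweak can be as simple as a line in the chatbot's system prompt; the wording below is only illustrative, and "Acme Insurance" is a made-up company name.

```python
# Illustrative system prompt: the chatbot speaks for the company ("we")
# rather than posing as an individual person ("I").
SYSTEM_PROMPT = (
    "You are the support assistant for Acme Insurance. "
    "Always speak as 'we', representing the company, never as 'I'. "
    "Do not claim to have feelings or personal experiences. "
    "Answer coverage questions succinctly and hand complex cases to a human agent."
)
```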
Conclusion
Generative AI is going to change everything, but it's going to take years. We should prepare by making sure all our employees are AI literate. When we automate our jobs, we should keep the human element in mind, ensuring that humans can focus on what they do best. With deliberate, human-centric planning, we can build the organizational resilience needed to weather the changes ahead and thrive. Generative AI won't cause an apocalypse or make humans obsolete overnight, but it may slowly and substantially change how work gets done within companies.