
Q&A: Relevant Search with Elasticsearch and Solr


Key takeaways

  • It’s important to determine what “relevant” actually means in each search use case.
  • Engage in a virtuous cycle of improving and reevaluating relevance against user expectations.
  • Focus on search evaluation over search solutions.
  • The future of design is Information Retrieval.
  • Improvements in Elasticsearch and Solr will make developing recommendations a far less daunting prospect.

In their book, Relevant Search, Doug Turnbull and John Berryman show how to tackle the challenges of search engine relevance tuning in a fun and approachable manner.

The book focuses on Elasticsearch and Solr and how relevance engineers can use those tools to build search applications that appropriately suit the needs of the business and customer. In a good search engine, results are ranked based not just on factual criteria but on their relevance to the end user. Success and failure largely depend on improving that ranking number. The book puts it a little differently: "The majority of this book is about the best way to take mastery over this one number!"
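
To make "this one number" concrete, here is a minimal sketch, assuming a local Elasticsearch instance and a hypothetical movie index named tmdb, that prints the relevance score the engine assigns to each hit:

    import requests

    # Search a hypothetical "tmdb" movie index on a local Elasticsearch node
    # and print the relevance score (the "one number") for each hit.
    resp = requests.post(
        "http://localhost:9200/tmdb/_search",
        json={"query": {"match": {"title": "star trek"}}},
    )
    for hit in resp.json()["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["title"])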

Unfortunately, there's no one right answer.

Using comfortable examples that Star Trek fans everywhere will enjoy, the book lays out the difficulty of determining what the user actually wants to search for. This book isn't really meant for absolute beginners; those with some experience will get more out of the examples and theory.

InfoQ spoke with Turnbull to discuss what it means to be a relevance engineer.

InfoQ: What is a typical day like for a relevance engineer?

Doug Turnbull: When I'm focused on a relevance project, a couple of tasks tend to dominate my time.

First and foremost, I'm trying to figure out what "relevant" means in this application. Every search use case is different. Is this application or use case focused on research where users will review every search result in-depth? Or is this more of a single item lookup, where the top result must be the right answer? Are the users experts in this domain, willing to use complicated search syntax? Or are they the average public, expecting Google-like magic? Sometimes this means digging into user analytics. How often are users abandoning searches? Do they achieve goals/conversions? Other times this means collaborating with domain experts to understand what users are doing.

Second, before I get to improving search, I need to ensure the impact of my changes can be measured in real time. This means turning the intelligence I've gathered in that first step into some kind of test-driven workbench such as Quepid. Using such a tool, I can play with a broad range of solutions, from simple relevance tweaks to advanced NLP or machine learning, and get an instant sense of whether they help across my most important searches.
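
To give a flavor of what such a workbench does under the hood, here is a toy sketch, not Quepid's actual implementation, that replays judged queries against a local Elasticsearch index and reports precision; the index name, field, and judgment ids are all hypothetical:

    import requests

    # Hypothetical judgments: for each query, the document ids that domain
    # experts rated as relevant.
    JUDGMENTS = {
        "star trek": {"tt0079945", "tt0084726"},
        "wrath of khan": {"tt0084726"},
    }

    def precision_at_k(query, relevant_ids, k=10):
        # Run the query and count how many of the top k hits were judged relevant.
        resp = requests.post(
            "http://localhost:9200/tmdb/_search",
            json={"size": k, "query": {"match": {"title": query}}},
        )
        hits = [h["_id"] for h in resp.json()["hits"]["hits"]]
        return sum(1 for h in hits if h in relevant_ids) / float(k)

    # Rerun after every relevance tweak to see if the important searches improved.
    for query, relevant in JUDGMENTS.items():
        print(query, precision_at_k(query, relevant))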

Third, there's the actual hands-on relevance work. There's no "one size fits all" here. Some problems can be fixed by just tweaking how the search engine is queried. Other problems require more complex work. Perhaps it's important to understand parts of speech? Or perhaps you're searching Twitter and hashtags need their own special treatment? Perhaps that magic solution a vendor is selling is just the ticket -- perhaps not!
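
As one example of "tweaking how the search engine is queried": boosting one field over another is often the first knob to turn. A minimal sketch against a hypothetical index:

    import requests

    # Weight title matches 10x more heavily than overview matches.
    # The index and field names here are hypothetical.
    query = {
        "query": {
            "multi_match": {
                "query": "star trek",
                "fields": ["title^10", "overview"],
            }
        }
    }
    resp = requests.post("http://localhost:9200/tmdb/_search", json=query)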

Fourth, there are issues around user experience, outside the strict ranking of search results, that help people find what they need. One big mistake people make is focusing on actual relevance -- that is, just the ranking of results -- and ignoring perceived relevance. By perceived relevance I mean helping the user understand why a result is relevant. This involves ensuring your content has descriptive titles and that you are highlighting the matched keywords in context. Other features like faceting, autocomplete, and spell checking also help users find their way to what they need without needing 100% perfect actual relevance.
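
Highlighting is one of the cheaper ways to add perceived relevance. A minimal sketch, again with hypothetical index and field names:

    import requests

    # Ask Elasticsearch to return the matched keywords in context; by
    # default it wraps matched terms in <em> tags.
    query = {
        "query": {"match": {"overview": "space exploration"}},
        "highlight": {"fields": {"overview": {}}},
    }
    resp = requests.post("http://localhost:9200/tmdb/_search", json=query)
    for hit in resp.json()["hits"]["hits"]:
        print(hit["highlight"]["overview"])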

Finally, and this may surprise people, there's ensuring that changes can be released quickly. Is search deployed in a way that relevance changes can be rolled out incrementally? These are ops concerns, but they matter quite a lot to relevance. You need to get your changes out quickly and then reevaluate whether the changes you suspected would have a positive impact actually did.

InfoQ: How does the work of a relevance engineer change over time? Is it ever possible to "set it and forget it"?

Turnbull: In some cases, you might "set it and forget it." For example, you might reach a good-enough point for search against the corpus of William Shakespeare. The documents don't change. The queries rarely do.

More importantly, there's only so much business incentive to have a tremendous search experience for the works of William Shakespeare.

More typically, things change: everything from user expectations to the design of your application. The kids start using new lingo. There are different products in your online store. Old products are no longer for sale.

We like to talk about engaging in a virtuous cycle of constantly improving and reevaluating relevance against your user expectations. That's the more typical case: the case where search matters and you're constantly adjusting.

Most importantly, you want to adjust incrementally and quickly, not slowly and whole hog. Ship changes in small batches and reevaluate your analytics. Be OK with rolling back if a change didn't work out. In other words, prefer being "agile" over a "big bang" relevance strategy. You pretty much have to: relevance tweaks based on last summer's catalog and users won't really help if released this winter.

InfoQ: What are some of the most common mistakes?

Turnbull: To me the biggest mistake is focusing on solutions over search evaluation.

In my work, the hardest part of relevance has been measuring whether search is relevant. Answering questions like: Are these the right results? Are users happy with them? Are these results what users expect from this search application? In this context? Are we making progress with our tweaks?

The relevance engineer isn't really equipped to know whether or not search results are relevant. Instead they need to work with non-technical colleagues and domain experts to analyze user data and evaluate search correctness. It's really, really hard, and even the best analytics available can be misleading in the wrong context. Sometimes the application is so specialized that analytics are entirely useless!

Instead of putting evaluation first, unfortunately, many organizations go straight to silver-bullet solutions or exciting new technologies. They don't invest the time to set up a practice of evaluation. For example, one popular technology that might apply to some forms of search is word2vec, applied to your search documents. Word2vec is a machine-learning approach to understanding the semantic relationships behind words: understanding that "prince" and "king" are really closely related, or that "Anakin Skywalker" and "Darth Vader" are the same person. It perks our ears as engineers to think "oh, if I search for 'Darth Vader' then I'll match 'Anakin Skywalker'." But it may either be "just the thing" that solves this particular search problem, or it may be completely irrelevant to the job at hand.
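
For a rough sense of the technique, here is a toy word2vec sketch using gensim; the corpus below is a placeholder far too small to learn anything real:

    from gensim.models import Word2Vec

    # Placeholder corpus: in practice you'd feed many thousands of
    # tokenized documents.
    sentences = [
        ["anakin", "skywalker", "became", "darth", "vader"],
        ["the", "prince", "became", "the", "king"],
    ]
    model = Word2Vec(sentences, vector_size=50, window=5, min_count=1)

    # Terms that appear in similar contexts end up with similar vectors,
    # which could feed synonym expansion at index or query time.
    print(model.wv.most_similar("vader", topn=3))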

Organizations that take search really seriously can never outsource search evaluation. In the book, we write about ways to interpret analytics and techniques like test-driven relevance that can help address these problems. And when you're really good at measuring relevance, then you can begin to use advanced techniques like learning to rank.
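
As a taste of what learning to rank builds on, here is a toy pointwise sketch with scikit-learn; the features and judgments are entirely made up, and real systems use richer features and pairwise or listwise methods:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is a (query, document) pair described by illustrative
    # features: [title match score, document recency].
    X = np.array([
        [12.3, 0.9],
        [1.1, 0.2],
        [8.7, 0.1],
        [0.4, 0.8],
    ])
    # 1 = judged relevant, 0 = judged irrelevant, from human evaluation.
    y = np.array([1, 0, 1, 0])

    model = LogisticRegression().fit(X, y)
    # The learned weights can then drive a ranking function in the engine.
    print(model.coef_)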

InfoQ: Is the problem of search too hard for generalists to handle?

Turnbull: Another way to think about this question is through the evolution of Web design. In the early Web, very few held the title "Web Designer." I was a generalist programmer, for example, and in the early 2000s I made an HTML page look nice by trying to organize img tags within table elements. I didn't stop to think to hire a designer. Why? Those were still the early days of the Web. Web interaction was new. Over time, consumers developed a taste for what a good user interaction with a Website looked like. So today any reasonable Website invests as much in design as it does in "generalist programming."

I see search in a similar vein. Google has given us high expectations for good search interaction. Yet there's so much that's domain or app specific. Searching through medical articles to help doctors diagnose patients looks nothing like searching your e-commerce catalog for good deals. Which looks nothing like researching the news of the late 19th century. Some of the widgets look the same: the autocomplete, the search bar, the highlighted search results. But the small details behind every keystroke in the search bar can change a great deal. The way search results are ordered can change a great deal. This is a new kind of "designer" that focuses on relevance and the conversational interactivity behind search.

And this is the tip of the iceberg. A lot of the machine learning craze today is really about problems of Information Retrieval: some kind of process that returns results ranked by relevance personalized for a user. Many startups drive chat bots through really smart search. Recommendation systems are increasingly being built with Elasticsearch.

These are the interaction paradigms of tomorrow. The "future" of design is Information Retrieval. Solr and Elasticsearch have all the tools in their extensive toolbelt to lead the next generation of design and interactivity.

InfoQ: Right now, Elasticsearch and Solr seem to have vast feature sets. What's on the horizon? How is the technology changing over time?

Turnbull: Elasticsearch and Solr are both making strides as simpler frameworks for building recommendation systems. I think this will make developing recommendations a far less daunting prospect for many medium-sized businesses. In the same way Elasticsearch and Solr made it easier to implement search, being able to take a single open source tool off the shelf that already knows how to rank results based on relevance means avoiding complex and expensive machine-learning solutions. My coauthor, John Berryman, for instance, actually does this: he's built a recommendation system using Elasticsearch for Eventbrite. But there's more being done to help. This includes Trey Grainger's knowledge graph for Solr and Elastic's graph product.
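
One concrete way Elasticsearch can already act as a lightweight recommender is its built-in more_like_this query. A minimal sketch, with a hypothetical events index and document id:

    import requests

    # "Recommend events similar to one a user attended" via more_like_this.
    # The index name, fields, and document id are hypothetical.
    query = {
        "query": {
            "more_like_this": {
                "fields": ["title", "description"],
                "like": [{"_index": "events", "_id": "event-123"}],
                "min_term_freq": 1,
                "min_doc_freq": 1,
            }
        }
    }
    resp = requests.post("http://localhost:9200/events/_search", json=query)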

InfoQ: Other than reading the book, what resource should budding relevance engineers seek out?

Turnbull: Well first, don't hesitate to seek me out. I'm a consultant, and I really enjoy speaking -- so I seek to be sought out! In particular, I do free one-hour knowledge-sharing/lunch-and-learn events in case you're having trouble staffing your company's events.

For great search blogs, I write a lot on my company's blog. Sujit Pal also writes a ton of material on his blog that you'd find interesting. We should all get on my coauthor's case and get him to write more, because when we worked together he contributed tremendous content to our blog.

For relevance/information retrieval books, you'll definitely want to read Introduction to Information Retrieval. Taming Text is another great read that blends search and Natural Language Processing. I also have another book that builds on Relevant Search to teach a business-level flow to implementing relevance.

For search engine specific books, I'd recommend my colleague's Apache Solr Enterprise Search Server, Solr in Action, and Elasticsearch in Action.

About the Book Authors

Doug Turnbull is an Elasticsearch consultant for OpenSource Connections and author of Relevant Search. Doug helps clients deliver smarter, contextually aware, and personalized search applications. Doug loves building tools to help with search relevancy, including Quepid, Splainer, and Elyzer.

 

John Berryman began his career as an aerospace engineer, but quickly discovered that his true interest lay at the intersection of software and math. John soon moved into technology consulting, specializing in full-text search. Highlights to this point include prototyping a patent examination search system with the US Patent and Trademark Office, speaking internationally about search technologies and their application, and integrating semantic search and recommendations into the Solr search engine.

 
