
eBay New Recommendations Model with Three Billion Item Titles

eBay developed a new recommendations model based on Natural Language Processing (NLP) techniques, in particular the BERT model. The new model, called the "ranker," uses the distance score between title embeddings as a feature, so that product titles are compared at the semantic level rather than as bags of tokens. Compared with the previous model in production, the ranker increased purchases by 3.76%, clicks by 2.74%, and ad revenue by 4.06% on the native apps (Android and iOS) and the web platform.

The eBay Promoted Listing Similar Recommendation Model (PLSIM) works in three stages: first, it retrieves the most relevant Promoted Listings Similar items (the "recall set"); second, it applies the ranker, trained on offline historical data, to order the recall set by likelihood of purchase; third, it re-ranks the listings by incorporating the seller ad-rate. The features of the model include the recommended item's historical data, recommended-item-to-seed-item similarity, product category, country, and user personalization features. The model is continuously trained as a Gradient Boosted Tree that ranks items according to their relative purchase probability, and incorporating deep-learning-based similarity features significantly increases its performance.
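The three-stage flow above can be sketched as follows. This is a minimal illustration, not eBay's implementation: the features, labels, and ad-rate blend are synthetic stand-ins, and a scikit-learn `GradientBoostingClassifier` stands in for the production Gradient Boosted Tree ranker.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical features per candidate listing: [historical CTR,
# seed-to-item similarity, price ratio] -- stand-ins for the
# feature families named in the article.
X_train = rng.random((500, 3))
# Synthetic purchase labels loosely tied to the similarity feature.
y_train = (X_train[:, 1] + 0.2 * rng.random(500) > 0.7).astype(int)

# Stage 2: train the ranker on (offline) historical data.
model = GradientBoostingClassifier().fit(X_train, y_train)

# Stage 1 (stand-in): a recall set of 10 candidate listings.
recall_set = rng.random((10, 3))

# Stage 2: score candidates by purchase probability.
purchase_prob = model.predict_proba(recall_set)[:, 1]

# Stage 3: re-rank by blending in the seller ad-rate
# (the blending formula here is illustrative, not eBay's).
ad_rate = rng.random(10)
final_score = purchase_prob * (1 + 0.1 * ad_rate)
ranking = np.argsort(-final_score)  # best candidate first
```

The key design point is that purchase likelihood and ad-rate are kept as separate signals until the final re-ranking step.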

The previous version of the recommendation ranking model evaluated product titles using Term Frequency-Inverse Document Frequency (TF-IDF) as well as the Jaccard similarity. This token-based approach has fundamental limitations: it ignores both sentence context and synonyms. BERT, a deep-learning approach, offers much stronger language understanding. Since eBay's corpora differ from books and Wikipedia, eBay engineers introduced eBERT, a BERT variant pre-trained on eBay item titles; it was trained on 250 million sentences from Wikipedia and 3 billion eBay titles in several languages. In offline evaluations, eBERT significantly outperformed out-of-the-box BERT models on a collection of eBay-specific tagging tasks, with an F1 score of 88.9.
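To make the limitation of the token-based baseline concrete, here is a minimal sketch of the two similarity measures the previous model relied on, computed over two hypothetical item titles. Note that neither measure can tell that "smartphone" and "iphone" are related.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over the sets of title tokens."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

titles = ["apple iphone 13 pro 128gb blue",
          "iphone 13 pro smartphone 128 gb"]

# TF-IDF vectors over the two titles, compared with cosine similarity.
tfidf = TfidfVectorizer().fit_transform(titles)
cos = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

print(jaccard(titles[0], titles[1]), cos)
```

Both scores depend only on exact token overlap ("128gb" and "128 gb" count as different tokens), which is exactly the gap a semantic encoder like eBERT closes.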

The eBERT architecture is too heavy for high-throughput inference, meaning recommendations could not be delivered on time. To address this issue, eBay developed microBERT, a smaller version of BERT optimized for CPU inference. microBERT is trained via a knowledge distillation process with eBERT as the teacher; in this way, microBERT retains 95%-98% of eBERT's quality while reducing inference time by roughly a factor of three.
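The core of knowledge distillation is training the student to match the teacher's softened output distribution. The standard formulation (temperature-scaled softmax plus KL divergence) can be sketched in a few lines; this is the generic technique, not eBay's specific training recipe.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer targets."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the teacher's and student's
    temperature-softened distributions."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student reproduces the teacher exactly and grows as the distributions diverge; in practice it is usually combined with a hard-label loss during training.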

Finally, microBERT is fine-tuned with a contrastive loss function called InfoNCE. Item titles are encoded as embedding vectors, and the model is trained to increase the cosine similarity between embeddings of titles known to be related to each other, while decreasing the cosine similarity of all other title pairings in the mini-batch.
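The InfoNCE objective described above can be sketched as follows: for each anchor title, the same-index row in the batch is its related (positive) title, and every other row serves as a negative. This is a generic NumPy sketch of the loss, not eBay's training code.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE over a mini-batch: pulls each anchor toward its
    same-index positive, pushes it from all other rows."""
    # L2-normalize so dot products are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(a)
    # Cross-entropy with the matching pair on the diagonal as target.
    return float(-log_probs[np.arange(n), np.arange(n)].mean())
```

Minimizing this loss simultaneously raises the cosine similarity of related pairs and lowers it for every other pairing in the batch, which is exactly the behavior the article describes.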

The new ranking model achieves a 3.5% improvement in purchase rank (the average rank of the sold item), but its complexity makes it hard to run the recommendations in real time. This is why the title embeddings are generated by a daily batch job and stored in NuKV (eBay's cloud-native key-value store), with item titles as keys and embeddings as values. With this approach, eBay is able to meet its latency requirements.
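The batch-precompute-then-lookup pattern can be sketched as below. A plain dictionary stands in for NuKV, and `fake_embed` is a hypothetical placeholder for the fine-tuned microBERT encoder; the point is only that the serving path becomes a key-value read instead of a model forward pass.

```python
import hashlib

# Stand-in for NuKV: title -> embedding. In the real system a
# daily batch job populates the store.
embedding_store: dict[str, list[float]] = {}

def fake_embed(title: str) -> list[float]:
    """Deterministic toy 'embedding' (hypothetical stand-in for
    the microBERT encoder)."""
    digest = hashlib.sha256(title.encode()).digest()
    return [b / 255 for b in digest[:8]]

def batch_job(titles):
    """Offline step: precompute and store embeddings for all titles."""
    for t in titles:
        embedding_store[t] = fake_embed(t)

def lookup(title):
    """Online step: a fast key-value read, no model inference."""
    return embedding_store.get(title)

batch_job(["apple iphone 13 pro", "samsung galaxy s22"])
```

Trading a day of embedding staleness for constant-time lookups is what lets the heavier model meet real-time latency budgets.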
