LinkedIn and Twitter Contribute Machine Learning Libraries to Open Source

Twitter’s engineering group, known for open source contributions ranging from streaming MapReduce to the front-end framework Bootstrap, recently announced it has open sourced an algorithm that can efficiently recommend content. This is an important problem for Twitter, as it underpins promoting the right ads to the right users and recommending which users to follow. The algorithm, named DIMSUM, pre-processes similarity data and feeds the actual recommendation algorithm only the subset of users calculated to be above a similarity threshold.
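
To make the pre-processing step concrete, the toy sketch below (an illustration, not Twitter’s code) computes exact all-pairs cosine similarity and keeps only the pairs above a threshold. This quadratic pass over every pair is the computation DIMSUM approximates by sampling.

```scala
object AllPairsSimilarity {
  // Cosine similarity between two feature vectors.
  def cosine(a: Array[Double], b: Array[Double]): Double = {
    val dot   = a.zip(b).map { case (x, y) => x * y }.sum
    val normA = math.sqrt(a.map(x => x * x).sum)
    val normB = math.sqrt(b.map(x => x * x).sum)
    if (normA == 0.0 || normB == 0.0) 0.0 else dot / (normA * normB)
  }

  def main(args: Array[String]): Unit = {
    val threshold = 0.8
    // Toy user vectors; in practice there are hundreds of millions of rows.
    val users = Vector(
      "alice" -> Array(1.0, 0.0, 3.0),
      "bob"   -> Array(0.0, 2.0, 1.0),
      "carol" -> Array(2.0, 0.0, 5.0)
    )

    // The exact computation touches every pair: O(n^2) comparisons,
    // which is what DIMSUM's sampling avoids at Twitter's scale.
    for {
      i <- users.indices
      j <- (i + 1) until users.size
      sim = cosine(users(i)._2, users(j)._2)
      if sim >= threshold
    } println(s"${users(i)._1} ~ ${users(j)._1}: $sim")
  }
}
```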

As former Twitter engineer Reza Zadeh explains, DIMSUM samples the problem space to weed out the pairs of items that are not similar enough to matter. The algorithm offers little benefit on small datasets, but its strength comes into play with big datasets, where one cannot brute-force the problem and calculate all possible similarity pairs. The algorithm has been integrated into Scalding and Spark.
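
In Spark, this work surfaces in MLlib as RowMatrix.columnSimilarities, whose thresholded variant uses DIMSUM sampling. The minimal sketch below assumes a local Spark context and a tiny user-by-item matrix purely for illustration.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

object DimsumExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("dimsum-sketch").setMaster("local[*]"))

    // Each row is a user, each column an item; entries are interaction weights.
    val rows = sc.parallelize(Seq(
      Vectors.dense(1.0, 0.0, 3.0, 0.0),
      Vectors.dense(0.0, 2.0, 0.0, 1.0),
      Vectors.dense(4.0, 0.0, 5.0, 0.0)
    ))

    val matrix = new RowMatrix(rows)

    // Passing a threshold switches to DIMSUM sampling: column pairs unlikely
    // to exceed 0.5 cosine similarity are sampled away rather than computed exactly.
    val similarities = matrix.columnSimilarities(0.5)

    similarities.entries
      .filter(_.value >= 0.5)
      .collect()
      .foreach(e => println(s"columns ${e.i} and ${e.j}: ${e.value}"))

    sc.stop()
  }
}
```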

LinkedIn has also open sourced a machine learning library of its own, ml-ease, which is focused on model fitting and training. Currently supporting ADMM (Alternating Direction Method of Multipliers), ml-ease can apply logistic regression in a highly parallelized fashion and converge to a solution theoretically close to what a single-machine execution of the algorithm would have produced.
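
ml-ease’s own APIs are not shown here; the following is only a minimal, single-process sketch of consensus-form ADMM for logistic regression, with made-up names (Example, solveLocal, fit), to illustrate the structure being parallelized: each partition fits a local model against its own data, the partial solutions are averaged into a global model, and dual variables pull the partitions back toward consensus.

```scala
object AdmmLogisticRegressionSketch {
  type Vec = Array[Double]

  case class Example(features: Vec, label: Double) // label in {-1.0, +1.0}

  private def dot(a: Vec, b: Vec): Double =
    a.zip(b).map { case (x, y) => x * y }.sum

  // Gradient of the local augmented objective:
  // logistic loss on this partition + (rho/2) * ||w - z + u||^2
  private def localGradient(data: Seq[Example], w: Vec, z: Vec, u: Vec, rho: Double): Vec = {
    val grad = Array.fill(w.length)(0.0)
    for (ex <- data) {
      val margin = ex.label * dot(w, ex.features)
      val coeff  = -ex.label / (1.0 + math.exp(margin))
      for (d <- w.indices) grad(d) += coeff * ex.features(d)
    }
    for (d <- w.indices) grad(d) += rho * (w(d) - z(d) + u(d))
    grad
  }

  // A few plain gradient steps stand in for the exact local minimization.
  private def solveLocal(data: Seq[Example], w0: Vec, z: Vec, u: Vec, rho: Double): Vec = {
    var w = w0.clone()
    val stepSize = 0.1
    for (_ <- 1 to 20) {
      val g = localGradient(data, w, z, u, rho)
      w = w.zip(g).map { case (wi, gi) => wi - stepSize * gi }
    }
    w
  }

  def fit(partitions: Seq[Seq[Example]], dim: Int, rho: Double = 1.0, iterations: Int = 50): Vec = {
    val m = partitions.size
    var z = Array.fill(dim)(0.0)                // global consensus weights
    val x = Array.fill(m)(Array.fill(dim)(0.0)) // per-partition weights
    val u = Array.fill(m)(Array.fill(dim)(0.0)) // per-partition (scaled) duals

    for (_ <- 1 to iterations) {
      // x-update: each partition fits against its own data, pulled toward z.
      // In a real deployment this loop runs in parallel, one task per partition.
      for (k <- 0 until m) x(k) = solveLocal(partitions(k), x(k), z, u(k), rho)

      // z-update: average the per-partition solutions (a single reduce step).
      z = Array.tabulate(dim)(d => (0 until m).map(k => x(k)(d) + u(k)(d)).sum / m)

      // u-update: duals accumulate the disagreement between local and global weights.
      for (k <- 0 until m; d <- 0 until dim) u(k)(d) += x(k)(d) - z(d)
    }
    z
  }

  def main(args: Array[String]): Unit = {
    // Two toy partitions of a linearly separable problem.
    val part1 = Seq(Example(Array(1.0, 2.0), 1.0), Example(Array(2.0, 1.5), 1.0))
    val part2 = Seq(Example(Array(-1.0, -1.0), -1.0), Example(Array(-2.0, -0.5), -1.0))
    val weights = fit(Seq(part1, part2), dim = 2)
    println(weights.mkString("weights = [", ", ", "]"))
  }
}
```

In a real Hadoop or Spark deployment the local update would run as one task per data partition followed by a single aggregation step, which is where both the parallel speedup and the small accuracy tradeoff come from.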

Logistic regression is one of the most popular machine learning algorithms, and not an easy one to parallelize. Mahout’s implementation of logistic regression using Stochastic Gradient Descent is one example of an inherently sequential algorithm applied to a problem that would benefit from parallelism. An evaluation of parallel logistic regression models has shown that, given enough computing resources, the problem can be parallelized for massive datasets by trading some precision for speed. LinkedIn’s implementation focuses on scalability, speed and ease of use, accepting a small margin of error as the price of that speed. This could be a good proposition for several commercially focused problems. The code is available on GitHub.
