Recommender system

User 3200 | 7/11/2016, 5:28:54 PM

Hi

What makes the Turi recommender system algorithms different from other frameworks that are available (aside from open-source ones), and have you taken any other framework or algorithm as a reference or benchmark to compare against GraphLab Create?

Comments

User 1207 | 7/12/2016, 12:56:25 AM

Hello @Venki,

Great question! The main thing that makes our algorithms unique is their ability to scale. Since all our data structures are disk-backed, we can handle large problems typical of those we see in industry. We've tested all of our algorithms on a mid-sized AWS machine with 1B observations, 10M+ users, and 10M items; the recommender models train in an hour or two and generate recommendations at a rate of 20-30 milliseconds per user.
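
For context, that workflow looks roughly like the sketch below in GraphLab Create. This is just an illustration under assumed inputs -- the file name and column names are placeholders, not the actual benchmark script:

```python
import graphlab as gl

# SFrames are disk-backed, so datasets larger than RAM can be loaded and used directly.
# 'ratings.csv' and the column names below are placeholders for your own data.
data = gl.SFrame.read_csv('ratings.csv')

# Let GraphLab Create pick a reasonable default recommender for the data.
model = gl.recommender.create(data,
                              user_id='user_id',
                              item_id='item_id',
                              target='rating')

# Generate top-10 recommendations for every user in the dataset.
recs = model.recommend(k=10)
```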

Now, our recommender algorithms are based mainly on ML techniques, so we handle item similarity and factorization approaches well. Rule-based methods and other techniques would need to be hand-coded to work well in our framework.
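
For example, those two families correspond to separate model constructors. A rough sketch (again, the dataset and column names are placeholders):

```python
import graphlab as gl

data = gl.SFrame.read_csv('ratings.csv')  # placeholder dataset

# Neighborhood-style model: scores items by similarity to items the user has interacted with.
item_sim = gl.item_similarity_recommender.create(data,
                                                 user_id='user_id',
                                                 item_id='item_id')

# Matrix factorization model: learns latent factors for users and items to predict ratings.
mf = gl.factorization_recommender.create(data,
                                         user_id='user_id',
                                         item_id='item_id',
                                         target='rating')
```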

We have run benchmarks against a few other libraries -- mostly open source, plus one or two proprietary ones with free trials -- and all of them either crashed on much smaller datasets or, on the smaller benchmarks we care about like the Netflix challenge (90M observations, 2-3M users, 17K items), took much longer to train and performed much worse in terms of precision/recall. (There are techniques that beat ours in accuracy, but we haven't seen them show up outside of custom systems or academic papers, or with available code aimed at industry applications.) We're continually looking for ways to improve performance on these datasets to keep our recommender cutting edge, and since we have worked with a lot of customers on very large industry datasets, our models have proven to be quite reliable and well optimized. But please let us know what your experience is -- hopefully you find it reliable too.
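
If you want to run that kind of precision/recall comparison on your own data, the built-in evaluation utilities look roughly like this (a sketch with placeholder names, not the exact benchmark code):

```python
import graphlab as gl

data = gl.SFrame.read_csv('ratings.csv')  # placeholder dataset

# Hold out some items per user so precision/recall reflects ranking quality for known users.
train, test = gl.recommender.util.random_split_by_user(data,
                                                       user_id='user_id',
                                                       item_id='item_id')

model = gl.ranking_factorization_recommender.create(train,
                                                    user_id='user_id',
                                                    item_id='item_id')

# Precision and recall at several cutoffs on the held-out interactions.
pr = model.evaluate_precision_recall(test, cutoffs=[5, 10, 20])
print(pr['precision_recall_overall'])
```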

Thanks! -- Hoyt