From the course: Building Recommender Systems with Machine Learning and AI


Experiment with different KNN parameters

- Maybe we can improve on these results somehow by tweaking the parameters of the algorithm. As an exercise, try out the different similarity measures Surprise lib offers. Right now we're using cosine, but how about MSD and Pearson? Give it a go, then compare the results you get from each: are they substantially different? After you've experimented with these different metrics, continue to the next slide, and I'll show you my results.

So I tabulated my results using different similarity metrics with KNN for both the user-based and item-based cases. Let's start with user-based. If we look at the RMSE error scores, we might conclude that mean squared distance, or MSD, significantly outperforms the cosine metric: it's .97 as opposed to .99. But look at the actual Top-N recommendations for our test user. They are exactly the same. This means that the math behind MSD leads to more accurate rating predictions, but the ranking of those predictions is more or less the same. So from a…