From the course: Building Recommender Systems with Machine Learning and AI
Experiment with different KNN parameters - Python Tutorial
- Maybe we can improve on these results somehow by tweaking the parameters of the algorithm. As an exercise, try out the different similarity measures Surprise lib offers. Right now we're using cosine, but how about MSD and Pearson? Give it a go, and compare the results you get from each: are they substantially different? After you've experimented with these different metrics, continue to the next slide, and I'll show you my results. So I tabulated my results using different similarity metrics with KNN, for both the user-based and item-based cases. Let's start with user-based. If we look at the RMSE error scores, we might conclude that mean squared distance, or MSD, significantly outperforms the cosine metric: it's .97 as opposed to .99. But look at the actual Top-N recommendations for our test user. They are exactly the same. This means that the math behind MSD leads to more accurate rating predictions, but the ranking of those predictions is more or less the same. So from a…
Contents
- Measuring similarity and sparsity (4m 49s)
- Similarity metrics (8m 32s)
- User-based collaborative filtering (7m 25s)
- User-based collaborative filtering: Hands-on (4m 59s)
- Item-based collaborative filtering (4m 14s)
- Item-based collaborative filtering: Hands-on (2m 23s)
- Tuning collaborative filtering algorithms (3m 31s)
- Evaluating collaborative filtering systems offline (1m 28s)
- Measure the hit rate of item-based collaborative filtering (2m 17s)
- KNN recommenders (4m 4s)
- Running user- and item-based KNN on MovieLens (2m 26s)
- Experiment with different KNN parameters (4m 25s)
- Bleeding edge alert: Translation-based recommendations (2m 29s)