From the course: Building Recommender Systems with Machine Learning and AI
Train/test and cross-validation - Python Tutorial
- A big part of why recommender systems are as much art as science is that it's difficult to measure how good they are. There's a certain aesthetic quality to the results they give you, and it's hard to say whether a person considers a recommendation good or not, especially if you're developing your algorithms offline. People have come up with many different ways to measure the quality of a recommender system, and different measurements can often be at odds with each other, but let's go through the more popular metrics for recommender systems, as they all have their own uses. First, let's talk about the methodology for testing recommender systems offline. If you've done machine learning before, you're probably familiar with the concept of train/test splits. A recommender system is a machine learning system: you train it on prior user behavior and then use it to make predictions about items new users might like. So, on paper at least, you can evaluate a recommender…
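The train/test idea described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the course's own code: the random ratings, the `train_test_split` helper, and the global-mean "recommender" used as a baseline are all hypothetical stand-ins, chosen only to show how held-out ratings give you an offline accuracy number (here, RMSE).

```python
import random

# Hypothetical ratings data: (user_id, item_id, rating) triples.
# In a real system these would come from logged user behavior.
ratings = [(u, i, random.Random(u * 10 + i).uniform(1.0, 5.0))
           for u in range(20) for i in range(10)]

def train_test_split(data, test_fraction=0.25, seed=42):
    """Shuffle the ratings and hold out a fraction for testing."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(ratings)

# A trivial baseline "model": predict every rating as the global
# mean of the TRAINING ratings only -- the test set stays unseen.
global_mean = sum(r for _, _, r in train) / len(train)

# Evaluate on the held-out ratings the model never trained on.
rmse = (sum((r - global_mean) ** 2 for _, _, r in test) / len(test)) ** 0.5
print(f"train={len(train)} test={len(test)} RMSE={rmse:.3f}")
```

Cross-validation extends the same idea: instead of one split, you rotate which fold of the data is held out and average the metric across folds, which gives a more stable estimate when data is limited.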
Contents
- Train/test and cross-validation (3m 49s)
- Accuracy metrics (RMSE and MAE) (4m 6s)
- Top-N hit rate: Many ways (4m 35s)
- Coverage, diversity, and novelty (4m 55s)
- Churn, responsiveness, and A/B tests (5m 6s)
- Review ways to measure your recommender (2m 55s)
- Walkthrough of RecommenderMetrics.py (6m 53s)
- Walkthrough of TestMetrics.py (5m 8s)
- Measure the performance of SVD recommendations (2m 24s)