Import and run a notebook that demonstrates how to use a set of libraries to create a pipeline with hyperparameters via Spark ML.
- [Instructor] So, continuing on, we got a result of only a little over 29% accuracy, and that's just not good enough for us to use this as a model. So, do we have some capability we can use to evaluate other solutions to this problem? And we do: we're going to use a class called RegressionMetrics to get more insight into our model performance. Just of note, RegressionMetrics requires input formatted as tuples of doubles, where the first item is the prediction and the second item is the observation.

In this case, the observation is the count: how many markets. Once you have mapped these values from the holdout set, you can pass them directly to the RegressionMetrics constructor. Now, the idea with this is that we have this holding area so that we can see what it is we want to get out of this model, and then we apply some new methods against it. So, the key line of this is mapped equals holdout select, with the prediction and the count, from the RDD, mapped into a lambda, and then pass the values.
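The (prediction, observation) format the instructor describes can be sketched without a Spark cluster. The following is a minimal plain-Python illustration of the metrics that Spark's `RegressionMetrics` derives from an RDD of (prediction, observation) double tuples; the sample pairs here are made up for illustration, and in the course they would instead come from mapping the holdout DataFrame's "prediction" and "count" columns to tuples.

```python
import math

# Hypothetical (prediction, observation) pairs -- in the course these come
# from mapping the holdout DataFrame's "prediction" and "count" columns.
pairs = [(11.0, 10.0), (19.5, 20.0), (31.0, 30.0), (42.0, 40.0)]

n = len(pairs)
errors = [pred - obs for pred, obs in pairs]

# Mean squared error and its root -- the headline numbers RegressionMetrics reports.
mse = sum(e * e for e in errors) / n
rmse = math.sqrt(mse)

# Mean absolute error.
mae = sum(abs(e) for e in errors) / n

# R^2: 1 minus (residual sum of squares / total sum of squares).
mean_obs = sum(obs for _, obs in pairs) / n
ss_tot = sum((obs - mean_obs) ** 2 for _, obs in pairs)
r2 = 1 - sum(e * e for e in errors) / ss_tot

print(f"RMSE: {rmse:.4f}  MAE: {mae:.4f}  R^2: {r2:.4f}")
```

With PySpark available, the same figures come from `pyspark.mllib.evaluation.RegressionMetrics`, constructed directly from an RDD of such tuples, via properties like `rootMeanSquaredError`, `meanAbsoluteError`, and `r2`.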
Author
Released
7/5/2017
- Relate which file system is typically used with Hadoop.
- Explain the differences between Apache and commercial Hadoop distributions.
- Cite how to set up an IDE: VS Code + the Python extension.
- Relate the value of Databricks community edition.
- Compare YARN vs. Standalone.
- Review various streaming options.
- Recall how to select your programming language.
- Describe the Databricks environment.
Skill Level Intermediate
Duration
Views
Related Courses
- Apache Spark Essential Training with Ben Sullins, 1h 27m, Intermediate

Introduction
- Welcome 53s

1. Hadoop Core Fundamentals
- Modern Hadoop 1m 53s
- Hadoop libraries 1m 23s
- Run Hadoop job on GCP 1m 52s
- Databricks on AWS 2m 32s

2. Setting Up a Hadoop Dev Environment
- Load data into tables 1m 51s

3. Hadoop Batch Processing
- Processing options 1m 2s
- Resource coordinators 1m 30s
- Compare YARN vs. Standalone 1m 30s

4. Fast Hadoop Options
- Big data streaming 1m 57s
- Streaming options 1m 10s
- Apache Spark basics 1m 46s
- Spark use cases 1m 2s

5. Spark Basics
- Apache Spark libraries 3m 24s
- Spark shell 1m 53s

6. Using Spark
- Tour the notebook 5m 29s
- Import and export notebooks 2m 56s
- Calculate pi on Spark 8m 19s
- Import data 2m 50s
- Transformations and actions 4m 43s
- Caching and the DAG 6m 49s

7. Spark Libraries
- Spark SQL 8m 34s
- SparkR 6m 11s
- Spark ML: Preparing data 4m 21s
- Spark ML: Building the model 3m 50s
- MXNet or TensorFlow 2m 30s
- Spark with GraphX 2m 12s

8. Spark Streaming
- Spark streaming 4m 21s

9. Hadoop Streaming
- Pub/Sub on GCP 3m 59s
- Apache Kafka 1m 26s
- Kafka architecture 1m 6s
- Apache Storm 1m 30s
- Storm architecture 1m 36s

10. Modern Hadoop Architectures

Conclusion
- Next steps 26s
Video: Spark ML: Evaluating the model