From the course: Cloud Hadoop: Scaling Apache Spark
Tour the Databricks Environment - Apache Spark Tutorial
- [Instructor] As we get started working in the notebook environment, I think it's helpful to tour around a bit. We've looked at a few things already, but we're going to tour the environment first and then the notebook, or working area. In terms of the environment, this is where we left off in the last movie, so I'm going to go back to the Databricks screen and click Home, which gives us our menu. You'll remember that the workspace is our container for stored information, Recent shows what we've been working on, and Tables lists the tables we have created; you may remember this from a previous movie. Now, the reason we get this, and I did this on purpose, is: do you remember how this environment works? You get it for free, and your metadata is stored, but your compute turns off and on because it's running on Amazon, so yes, this is…
Contents
- Tour the Databricks Environment (4m 36s)
- Tour the notebook (5m 29s)
- Import and export notebooks (2m 56s)
- Calculate Pi on Spark (8m 30s)
- Run WordCount of Spark with Scala (4m 59s)
- Import data (2m)
- Transformations and actions (3m 21s)
- Caching and the DAG (6m 49s)
- Architecture: Streaming for prediction (3m 51s)