From the course: Big Data Analytics with Hadoop and Apache Spark


Storing intermediate results


- [Narrator] As we have seen in the previous examples of execution plans, every time an action is performed, Spark goes all the way back to the data source and reads the data. This happens even if the data was read before and some actions were already performed on it. While this works fine for automated jobs, it is a problem during interactive analytics. Every time a new action command is executed on an interactive shell, Spark goes back to the source. It is better to cache intermediate results, so we can derive further analytics from those results without starting all over. Spark has two modes of caching: in memory and on disk. The cache method is used to cache in memory only. The persist method is used to cache in memory, on disk, or both. In this example, we first cache the words RDD into memory using the cache function. Spark does lazy evaluation, so we need to execute an action to trigger the caching. Next, we will compare execution…
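A minimal sketch of the caching pattern described above, assuming PySpark and a small in-memory word list rather than the course's exercise files (the RDD name and data here are illustrative only):

```python
from pyspark.sql import SparkSession
from pyspark import StorageLevel

# Assumed local SparkSession for illustration; the course environment may differ.
spark = SparkSession.builder.appName("CachingExample").getOrCreate()
sc = spark.sparkContext

# Hypothetical words RDD standing in for the one used in the example.
words = sc.parallelize(["spark", "hadoop", "spark", "cache", "persist"])

# cache() marks the RDD for in-memory caching only.
words.cache()

# Caching is lazy: an action such as count() is needed to materialize it.
print(words.count())

# persist() accepts other storage levels, e.g. memory plus disk.
# The RDD must be unpersisted before changing its storage level.
words.unpersist()
words.persist(StorageLevel.MEMORY_AND_DISK)
print(words.count())

spark.stop()
```

After the first action materializes the cache, subsequent actions on the same RDD read from memory (or disk, with persist) instead of going back to the data source, which is what makes interactive analysis faster.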
