Chapter 2 Getting Started

From R, getting started with Spark using sparklyr and a local cluster is as easy as installing and loading the sparklyr package followed by running:

spark_install()
sc <- spark_connect(master = "local")

To make sure we can all run the code above and understand it, this section walks you through the prerequisites, installing sparklyr and Spark, connecting to a local Spark cluster, and a brief overview of how to use Spark.

However, if a Spark cluster and an R environment have already been made available to you, you do not need to install the prerequisites or Spark yourself. Instead, ask for the Spark master parameter and connect as follows; this parameter will be formally introduced in the Connections chapter.

sc <- spark_connect(master = "<cluster-master>")

2.1 Prerequisites

R runs on many platforms and environments; therefore, whether you use Windows, Mac, or Linux, the first step is to install R from r-project.org; detailed instructions are provided in the Installing R appendix.

Most people use programming languages with tools that make them more productive; for R, RStudio is such a tool. Strictly speaking, RStudio is an integrated development environment (IDE), which also happens to support many platforms and environments. We strongly recommend you install RStudio if you haven’t done so already; see the details under the Installing RStudio appendix.

Additionally, since Spark is built in the Scala programming language, which runs on the Java Virtual Machine, you also need to install Java 8 on your system. It is likely that your system already has Java installed, but you should still check the version and update or downgrade as described in the Installing Java appendix.
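
If you want to quickly confirm from R which Java version is active, the following one-liner works; this is a minimal sketch that assumes the java binary is available on your system path:

# prints the active Java version to the console; requires java on the PATH
system("java -version")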

2.2 Installing sparklyr

Like many other R packages, sparklyr is available on CRAN and can be easily installed as follows:

install.packages("sparklyr")

The CRAN release of sparklyr contains the most stable version and is the recommended version to use; however, to try out features still under development, you can install sparklyr directly from GitHub using the remotes package. First, install the remotes package and then install sparklyr as follows:

install.packages("remotes")
remotes::install_github("rstudio/sparklyr")

The examples in this book assume you are using the latest version of sparklyr; you can verify that your version is at least as new as the one we are using by running:

packageVersion("sparklyr")
[1] '1.0.0'

2.3 Installing Spark

Start by loading sparklyr,

library(sparklyr)

This makes all sparklyr functions available in R, which is really helpful; otherwise, we would have to prefix each sparklyr command with sparklyr::.
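
For example, without attaching the package you could still call the same functions through the explicit prefix; a brief illustration:

# equivalent to running spark_install() after library(sparklyr)
sparklyr::spark_install()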

Spark can be easily installed by running spark_install(); this installs the latest version of Spark locally on your computer. Go ahead and run it now, but notice that this command requires internet connectivity to download Spark.

spark_install()

All the versions of Spark that are available for installation can be displayed by running:

spark_available_versions()
##   spark
## 1   1.6
## 2   2.0
## 3   2.1
## 4   2.2
## 5   2.3
## 6   2.4

A specific version can be installed using the Spark version and, optionally, by also specifying the Hadoop version. For instance, to install Spark 1.6, we would run:

spark_install(version = "1.6")
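
If you also want to pin the Hadoop build, spark_install() accepts a hadoop_version argument; for example, matching the installed version shown below:

spark_install(version = "2.3", hadoop_version = "2.7")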

You can also check which versions are installed by running:

spark_installed_versions()
  spark hadoop                              dir
7 2.3.1    2.7 /spark/spark-2.3.1-bin-hadoop2.7

The path where Spark is installed is referenced as Spark’s home, which is defined in R code and system configuration settings with the SPARK_HOME identifier. When using a local Spark cluster installed with sparklyr, this path is already known and no additional configuration needs to take place.
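
If you ever need to confirm which Spark home a session is using, you can inspect this setting yourself; a minimal sketch using base R (the value may be empty when you rely on sparklyr’s own installation directory, as listed by spark_installed_versions() above):

# returns the SPARK_HOME environment variable, or "" if it is not set
Sys.getenv("SPARK_HOME")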

Finally, in order to uninstall a specific version of Spark, you can run spark_uninstall() specifying the Spark and Hadoop versions; for instance:

spark_uninstall(version = "1.6.3", hadoop = "2.6")

Note: The default installation paths are ~/spark for OS X and Linux, and %LOCALAPPDATA%/spark for Windows. To customize the installation path, you can run options(spark.install.dir = "<installation-path>") before spark_install() and spark_connect().

2.4 Connecting to Spark

It’s important to mention that, so far, we’ve only installed a local Spark cluster. A local cluster is really helpful to get started, test code, and troubleshoot with ease. Later chapters will explain where to find, install, and connect to real Spark clusters with many machines, but for the first few chapters we will focus on local clusters.

To connect to this local cluster we simply run:

sc <- spark_connect(master = "local")

The master parameter identifies the “main” machine in the Spark cluster; this machine is often called the driver node. When working with real clusters of many machines, most machines will be workers and one will be the master. Since we only have a local cluster with a single machine, we will use "local" for now.
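
Spark also accepts master values like "local[2]" to cap the number of local worker threads, and spark_connect() lets you request a specific Spark version; a hedged sketch combining both, assuming that version was installed with spark_install():

# connect to a local cluster limited to two worker threads, using Spark 2.3
sc <- spark_connect(master = "local[2]", version = "2.3")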

If connection fails, the Connections chapter contains a troubleshooting section which can help you resolve your connection issue.

2.5 Using Spark

Now that you are connected, we can run a few simple commands. For instance, let’s start by copying the mtcars dataset into Apache Spark using copy_to().

cars <- copy_to(sc, mtcars)

The data was copied into Spark, but we can still access it from R using the cars reference. To print its contents, we can simply type cars:

cars
# Source: spark<mtcars> [?? x 11]
     mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
 1  21       6  160    110  3.9   2.62  16.5     0     1     4     4
 2  21       6  160    110  3.9   2.88  17.0     0     1     4     4
 3  22.8     4  108     93  3.85  2.32  18.6     1     1     4     1
 4  21.4     6  258    110  3.08  3.22  19.4     1     0     3     1
 5  18.7     8  360    175  3.15  3.44  17.0     0     0     3     2
 6  18.1     6  225    105  2.76  3.46  20.2     1     0     3     1
 7  14.3     8  360    245  3.21  3.57  15.8     0     0     3     4
 8  24.4     4  147.    62  3.69  3.19  20       1     0     4     2
 9  22.8     4  141.    95  3.92  3.15  22.9     1     0     4     2
10  19.2     6  168.   123  3.92  3.44  18.3     1     0     4     4
# … with more rows

Congrats! You have successfully connected and loaded your first dataset into Spark.

Let’s explain what’s going on in copy_to(). The first parameter, sc, gives the function a reference to the active Spark connection that was earlier created with spark_connect(). The second parameter specifies a dataset to load into Spark. Now, copy_to() returns a reference to the dataset in Spark, which R automatically prints. Whenever a Spark dataset is printed, Spark collects some of the records and displays them for you. In this particular case, that dataset contains only a few rows describing automobile models and some of their specifications, like horsepower and expected miles per gallon.
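
By default, the Spark table takes the name of the R data frame; copy_to() also accepts an explicit name and an overwrite flag, which is handy when re-running code against the same connection. A minimal sketch (the table name used here is purely illustrative):

# register mtcars under a custom table name, replacing it if it already exists
cars <- copy_to(sc, mtcars, name = "cars_table", overwrite = TRUE)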

2.5.1 Web Interface

Most of the Spark commands are executed from the R console; however, monitoring and analyzing execution is done through Spark’s web interface (see Figure 2.1). This interface is a web application provided by Spark that can be accessed by running:

spark_web(sc)

FIGURE 2.1: Apache Spark Web Interface

Printing the cars dataset collected a few records to be displayed in the R console. In the Spark web interface, you can see that a job was started to collect this information back from Spark. You can also select the Storage tab to see the mtcars dataset cached in memory in Spark (Figure 2.2).

FIGURE 2.2: Apache Spark Web Interface - Storage Tab

Notice that this dataset is fully loaded into memory, since the fraction cached is 100%; the Size in Memory column tells you exactly how much memory this dataset is using.
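
As a quick cross-check from the R side, you can also list the tables registered with this connection; a small sketch, assuming the dplyr generic src_tbls() is available for your Spark connection:

# lists the table names Spark knows about, e.g. "mtcars"
dplyr::src_tbls(sc)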

The Executors tab (Figure 2.3) provides a view of your cluster resources. For local connections, you will find only one executor active, with only 2GB of memory allocated to Spark and 384MB available for computation. In the Tuning chapter, you will learn how to request more compute instances and resources, and how memory is allocated.

FIGURE 2.3: Apache Spark Web Interface - Executors Tab

The last tab to explore is the Environment tab (Figure 2.4), which lists all the settings for this Spark application; the Tuning chapter will also introduce these in detail. As you will learn, most settings don’t need to be configured explicitly, but in order to run properly at scale, you will eventually have to become familiar with some of them.

FIGURE 2.4: Apache Spark Web Interface - Environment Tab

Next, you will make use of a small subset of the practices that the Analysis chapter will cover.

2.5.2 Analysis

When using Spark from R to analyze data, you can use SQL (Structured Query Language) or dplyr (a grammar of data manipulation). SQL can be used through the DBI package; for instance, to count how many records are available in our cars dataset we can run:

library(DBI)
dbGetQuery(sc, "SELECT count(*) FROM mtcars")
  count(1)
1       32
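
The same interface accepts arbitrary SQL; for instance, a brief sketch aggregating fuel efficiency by number of cylinders against the same mtcars table:

dbGetQuery(sc, "SELECT cyl, AVG(mpg) AS avg_mpg FROM mtcars GROUP BY cyl")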

When using dplyr, you write less code, and it’s often easier to write than SQL, which is why we won’t make use of SQL in this book; however, if you are proficient in SQL, it remains a viable option for you. For instance, counting records in dplyr is more compact and easier to understand:

library(dplyr)
count(cars)
# Source: spark<?> [?? x 1]
      n
  <dbl>
1    32

In general, we usually start analyzing data in Spark with dplyr, followed by sampling rows and selecting a subset of the available columns; the last step is to collect data from Spark to perform further data processing in R, like data visualization. Let’s perform a very simple data analysis example by selecting, sampling, and plotting the cars dataset in Spark:

select(cars, hp, mpg) %>%
  sample_n(100) %>%
  collect() %>%
  plot()

FIGURE 2.5: Horse Power vs Miles per Gallon

The plot in Figure 2.5 shows that, as we increase a vehicle’s horsepower, its fuel efficiency measured in miles per gallon decreases. While this is insightful, it’s hard to predict numerically how increased horsepower would affect fuel efficiency; modeling can help us overcome this.

2.5.3 Modeling

While data analysis can take you quite far in understanding data, building a mathematical model that describes and generalizes the dataset is quite powerful. In the Introduction chapter you learned that the fields of machine learning and data science make use of mathematical models to perform predictions and find additional insights.

For instance, we can use a linear model to approximate the relationship between fuel efficiency and horse power:

model <- ml_linear_regression(cars, mpg ~ hp)
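
To inspect the fitted model from R, for instance to look at its coefficients, you can print or summarize it; a brief sketch:

# prints model details such as the intercept and the hp coefficient
summary(model)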

This model can now be used to predict values that are not in the original dataset. For instance, we can add entries for cars with horsepower beyond 250 and also visualize the predicted values, as shown in Figure 2.6.

model %>%
  ml_predict(copy_to(sc, data.frame(hp = 250 + 10 * 1:10))) %>%
  transmute(hp = hp, mpg = prediction) %>%
  full_join(select(cars, hp, mpg)) %>%
  collect() %>%
  plot()

FIGURE 2.6: Horse power vs miles per gallon with predictions

In addition, we can refine our insights using the broom package to retrieve additional statistics from our model that can help us assess its quality.

broom::glance(model)
# A tibble: 1 x 5
  explained.varia… mean.absolute.e… mean.squared.er… r.squared
             <dbl>            <dbl>            <dbl>     <dbl>
1             21.2             2.91             14.0     0.602
# … with 1 more variable: root.mean.squared.error <dbl>

While the previous example lacks many of the appropriate techniques you should use when modeling, it is a simple example that briefly introduces the modeling capabilities of Spark. All the Spark models, techniques, and best practices will be properly introduced in the Modeling chapter.

2.5.4 Data

For simplicity, we copied the mtcars dataset into Spark; however, data is usually not copied into Spark. Instead, data is read from existing data sources in a variety of formats, like plain text, CSV, JSON, JDBC, and many more, which the Data chapter will introduce in detail. For instance, we can export our cars dataset as a CSV file:

spark_write_csv(cars, "cars.csv")

In practice, we would read an existing dataset from a distributed storage system like HDFS, but we can also read back from the local file system:

cars <- spark_read_csv(sc, "cars.csv")
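
Other formats follow the same naming convention; for instance, a hedged sketch writing and reading back the same data as Parquet, a columnar format commonly used with Spark:

spark_write_parquet(cars, "cars.parquet")
cars <- spark_read_parquet(sc, "cars.parquet")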

2.5.5 Extensions

In the same way that R is known for its vibrant community of package authors, at a smaller scale, many extensions for Spark and R have been written and are available to you. The Extensions chapter will introduce many interesting ones to perform advanced modeling, graph analysis, preprocessing of datasets for deep learning, and so on.

For instance, the sparklyr.nested extension is an R package that extends sparklyr to help you manage values that contain nested information. A common use case arises when dealing with JSON files that contain nested lists, which require preprocessing before doing meaningful data analysis. To use this extension, we first have to install it as follows:

install.packages("sparklyr.nested")

Then we can use this extension to group all the horse power data points over the number of cylinders:

sparklyr.nested::sdf_nest(cars, hp) %>%
  group_by(cyl) %>%
  summarize(data = collect_list(data))
# Source: spark<?> [?? x 2]
    cyl data       
  <int> <list>     
1     6 <list [7]> 
2     4 <list [11]>
3     8 <list [14]>

While nesting data makes it harder to read, it is a requirement when dealing with nested data formats like JSON using the spark_read_json() and spark_write_json() functions.

2.5.6 Distributed R

For those few cases where a particular functionality is not available in Spark and no extension has been developed, you can consider distributing your own R code across the Spark cluster. This is a powerful tool, but it comes with additional complexity, so you should only use it as a last resort.

Suppose that we need to round all the values across all the columns in our dataset; one approach would be running custom R code that makes use of R’s round() function:

cars %>% spark_apply(~round(.x))
# Source: spark<?> [?? x 11]
     mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
 1    21     6   160   110     4     3    16     0     1     4     4
 2    21     6   160   110     4     3    17     0     1     4     4
 3    23     4   108    93     4     2    19     1     1     4     1
 4    21     6   258   110     3     3    19     1     0     3     1
 5    19     8   360   175     3     3    17     0     0     3     2
 6    18     6   225   105     3     3    20     1     0     3     1
 7    14     8   360   245     3     4    16     0     0     3     4
 8    24     4   147    62     4     3    20     1     0     4     2
 9    23     4   141    95     4     3    23     1     0     4     2
10    19     6   168   123     4     3    18     1     0     4     4
# … with more rows

If you are a proficient R user, it can be quite tempting to use spark_apply() for everything, but please, don’t! spark_apply() was designed for advanced use cases where Spark falls short; instead, you will learn how to do proper data analysis and modeling without having to distribute custom R code across your cluster.

2.5.7 Streaming

While processing large static datasets is the most typical use case for Spark, processing dynamic datasets in real time is also possible and, for some applications, a requirement. You can think of a streaming dataset as a static data source with new data arriving continuously, like stock market quotes. Streaming data is usually read from Kafka (an open source stream-processing software platform) or from distributed storage that receives new data continuously.

To try out streaming, let’s first create an input/ folder with some data that we will use as the input for this stream:

dir.create("input")
write.csv(mtcars, "input/cars_1.csv", row.names = F)

Then we will define a stream that processes incoming data from the input/ folder, performs a custom transformation in R, and pushes the output into an output/ folder:

stream_read_csv(sc, "input/") %>%
  spark_apply(~sapply(.x, jitter)) %>%
  stream_write_csv("output/")
Stream: 720aac2a-d4aa-4e6c-828d-325d8b017fdb
Status: Waiting for next trigger
Active: TRUE

As soon as the stream of real-time data starts, the input/ folder is processed and turned into a set of new files under the output/ folder containing the transformed data. Since the input contained only one file, the output folder will also contain a single file resulting from applying the custom spark_apply() transformation.

dir("output", pattern = ".csv")
[1] "part-00000-eece04d8-7cfa-4231-b61e-f1aef8edeb97-c000.csv"

Up to this point, this resembles static data processing; however, we can keep adding files to the input/ location, and Spark will parallelize and process the data automatically. Let’s add one more file and validate that it’s automatically processed:

# Write more data into the stream source
write.csv(mtcars, "input/cars_2.csv", row.names = F)

# Wait for the input stream to be processed
Sys.sleep(1)

# Check the contents of the stream destination
dir("output", pattern = ".csv")
[1] "part-00000-2d8e5c07-a2eb-449d-a535-8a19c671477d-c000.csv"
[2] "part-00000-eece04d8-7cfa-4231-b61e-f1aef8edeb97-c000.csv"

You can use dplyr, SQL, Spark models, or distributed R to analyze streams in real time; the Streaming chapter will properly introduce you to all the interesting transformations you can perform to analyze real-time data.

2.5.8 Logs

Logging is definitely less interesting than real-time data processing; however, it’s a tool you should be familiar with. A log is just a text file to which Spark appends information relevant to the execution of tasks in the cluster. For local clusters, we can retrieve all the recent log entries by running:

spark_log(sc)
18/10/09 19:41:46 INFO Executor: Finished task 0.0 in stage 5.0 (TID 5)...
18/10/09 19:41:46 INFO TaskSetManager: Finished task 0.0 in stage 5.0...
18/10/09 19:41:46 INFO TaskSchedulerImpl: Removed TaskSet 5.0, whose...
18/10/09 19:41:46 INFO DAGScheduler: ResultStage 5 (collect at utils...
18/10/09 19:41:46 INFO DAGScheduler: Job 3 finished: collect at utils...

Or we can retrieve specific log entries containing, say, sparklyr by using the filter parameter as follows:

spark_log(sc, filter = "sparklyr")
## 18/10/09 18:53:23 INFO SparkContext: Submitted application: sparklyr
## 18/10/09 18:53:23 INFO SparkContext: Added JAR...
## 18/10/09 18:53:27 INFO Executor: Fetching spark://localhost:52930/...
## 18/10/09 18:53:27 INFO Utils: Fetching spark://localhost:52930/...
## 18/10/09 18:53:27 INFO Executor: Adding file:/private/var/folders/...

Most of the time, you won’t need to worry about Spark logs, except when you need to troubleshoot a failed computation; in those cases, logs are an invaluable resource to be aware of. Now you know.

2.6 Disconnecting

For local clusters (really, any cluster), once you are done processing data, you should disconnect by running:

spark_disconnect(sc)

This will terminate the connection to the cluster as well as the cluster tasks. If multiple Spark connections are active, or if the connection instance sc is no longer available, you can also disconnect all your Spark connections by running:

spark_disconnect_all()

Notice that exiting R or RStudio, or restarting your R session, will also cause the Spark connection to terminate, which in turn terminates the Spark cluster and any cached data that is not explicitly persisted.

2.7 Using RStudio

Since it’s very common to use RStudio with R, sparklyr provides RStudio extensions to help simplify your workflows and increase your productivity while using Spark in RStudio. If you are not familiar with RStudio, take a quick look at the Using RStudio appendix. Otherwise, there are a couple of extensions worth highlighting.

First, instead of starting a new connection using spark_connect() from RStudio’s R console, you can use the New Connection action from the Connections pane and then select the Spark connection, which will open the dialog shown in Figure 2.7. You can then customize the versions and connect to Spark, which will simply generate the right spark_connect() command and execute it in the R console for you.

FIGURE 2.7: RStudio New Spark Connection

Second, once connected to Spark, either from the R console or through RStudio’s Connections pane, RStudio will display the datasets available in the Connections pane (see Figure 2.8). This is a useful way to track your existing datasets and provides an easy way to explore each of them.

FIGURE 2.8: RStudio Connections Pane

Additionally, an active connection provides the following custom actions:

Spark
Opens the Spark web interface, a shortcut to spark_web(sc).
Log
Opens the Spark web logs, a shortcut to spark_log(sc).
SQL
Opens a new SQL query, see DBI and SQL support in the Analysis chapter.
Help
Opens the reference documentation in a new web browser window.
Disconnect
Disconnects from Spark, a shortcut to spark_disconnect(sc).

The rest of this book will use plain R code; it is up to you to execute it in the R console, RStudio, Jupyter Notebooks, or any other tool that supports executing R code, since the code provided in this book runs in any R environment.

2.8 Resources

While we’ve put significant effort into simplifying the onboarding process, there are many additional resources that can help you troubleshoot particular issues while getting started and, in general, introduce you to the broader Spark and R communities, where you can get specific answers, discuss topics, and connect with many users actively using Spark with R.

2.9 Recap

In this chapter you learned about the prerequisites required to work with Spark; how to connect to Spark using spark_connect() and install a local cluster using spark_install(); how to load a simple dataset, launch the web interface, and display logs using spark_web(sc) and spark_log(sc), respectively; and how to disconnect from Spark using spark_disconnect(). We closed the chapter by presenting the RStudio extensions that sparklyr provides.

It is our hope that this chapter helps anyone interested in learning cluster computing with Spark and R to get started, ready to experiment on your own and to tackle the actual data analysis and modeling problems that the next two chapters will introduce. The next chapter, Analysis, presents data analysis as the process of inspecting, cleaning, and transforming data with the goal of discovering useful information. Modeling can be considered part of data analysis; however, it deserves its own chapter to truly understand and take advantage of the modeling functionality available in Spark.