
Applying SQL table joins
In order to examine table joins, we have created some additional test data based on a banking scenario. We have an account table called account.json and a customer table called client.json. So let's take a look at the two JSON files.
First, let's look at client.json:

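As a rough sketch, a line-delimited client.json of this kind could look as follows. Only the id field is relied on by the join later in this section; the name field is purely illustrative:
{"id":1,"name":"Alice"}
{"id":2,"name":"Bob"}
{"id":3,"name":"Carol"}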
Next, let's look at account.json:

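Along the same lines, here is a sketch of what account.json could contain, assuming each account record carries the clientId foreign key discussed next, plus an illustrative amount field of the kind the aggregation at the end of this section would sum up:
{"id":100,"clientId":1,"amount":1500.0}
{"id":101,"clientId":1,"amount":450.0}
{"id":102,"clientId":2,"amount":300.0}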
As you can see, the clientId field in account.json refers to the id field in client.json, so the two tables can be joined. Before we can do this, however, we have to load them:
val client = spark.read.json("client.json")
val account = spark.read.json("account.json")
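Optionally, we can verify that Spark inferred the expected columns from the JSON files by printing the schema of each DataFrame:
// inspect the automatically inferred JSON schemas
client.printSchema()
account.printSchema()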
Then we register these two DataFrames as temporary tables:
client.createOrReplaceTempView("client")
account.createOrReplaceTempView("account")
Let's query these individually, client first:

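Using the SQL interface on the temporary view registered above, a query along these lines selects everything from client (a sketch; the output depends on the actual file contents):
// select all client records through the SQL interface
spark.sql("SELECT * FROM client").show()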
Then follow it up with account:

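The equivalent query against the account view looks like this:
// select all account records
spark.sql("SELECT * FROM account").show()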
Now we can join the two tables:

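Expressed against the two temporary views, the join matches account.clientId to client.id, which is the foreign-key relationship described above. A sketch, assuming those column names:
// inner join accounts to their owning clients on the foreign key
spark.sql("""
  SELECT *
  FROM client c
  JOIN account a ON a.clientId = c.id
""").show()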
Finally, let's calculate an aggregation: the total amount of money that each client holds across all of their accounts:

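One way to express this aggregation is to group the joined rows by client and sum the per-account balances. This sketch assumes the account table exposes an amount column, which is an assumption rather than something spelled out above:
// total balance per client across all of that client's accounts
spark.sql("""
  SELECT c.id, sum(a.amount) AS totalAmount
  FROM client c
  JOIN account a ON a.clientId = c.id
  GROUP BY c.id
""").show()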