
Managing temporary views with the catalog API

Since Apache Spark 2.0, the catalog API is used to create and remove temporary views in an internal metastore. This is necessary if you want to use SQL, because a temporary view provides the mapping between a virtual table name and a DataFrame or Dataset, which the SQL parser needs in order to resolve table references.
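
The following is a minimal sketch of that round trip, assuming a local session and a hypothetical view name clients: register a DataFrame under a virtual table name, query it with SQL, and remove the mapping again through the catalog API:

```scala
import org.apache.spark.sql.SparkSession

object TempViewExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CatalogApiExample")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // A small example DataFrame; "clients" is a hypothetical view name
    val df = Seq((1, "Alice"), (2, "Bob")).toDF("id", "name")

    // Map the virtual table name "clients" to the DataFrame so that
    // SQL statements can resolve it
    df.createOrReplaceTempView("clients")
    spark.sql("SELECT name FROM clients WHERE id = 1").show()

    // Remove the temporary view from the session-local catalog again
    spark.catalog.dropTempView("clients")

    spark.stop()
  }
}
```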

Internally, Apache Spark uses the org.apache.spark.sql.catalyst.catalog.SessionCatalog class to manage temporary views as well as persistent tables.

Temporary views are stored in the SparkSession object, whereas persistent tables are stored in an external metastore. The abstract base class org.apache.spark.sql.catalyst.catalog.ExternalCatalog is extended for the various metastore providers. One implementation already exists for an in-memory catalog and another one for the Apache Hive metastore (which, out of the box, is backed by an embedded Apache Derby database), but anyone can extend this class and make Apache Spark use another metastore as well.
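
To see where the two kinds of objects live, you can list the catalog contents: temporary views report isTemporary = true and no database, while persistent tables carry the name of the metastore database they belong to. A short sketch, reusing the spark session from the previous example; the Hive variant assumes the spark-hive module is on the classpath:

```scala
// Temporary views show up with isTemporary = true and no database;
// persistent tables name the metastore database (for example, "default")
spark.catalog.listTables().show()

// To back the external catalog with the Apache Hive metastore instead,
// enable Hive support on the builder *before* the first SparkSession of
// the application is created (requires spark-hive on the classpath)
val hiveSpark = SparkSession.builder()
  .appName("HiveCatalogExample")
  .enableHiveSupport()
  .getOrCreate()
```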