---
title: "Connectors"
date: 2019-03-13T18:28:09-07:00
draft: false
---
Connectors provide integration with any SQL database or job execution engine. Here is a list of the existing connectors.
Connectors are pluggable and new engines can be added. Feel free to contact the community.
SqlAlchemy is the preferred way if the HiveServer2 API is not supported by the database. The implementation is in sql_alchemy.py and depends on the respective SqlAlchemy dialects.
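For context, the connector ultimately relies on a dialect-specific SQLAlchemy connection URL. A minimal sketch of that dependency (the URL, credentials, and query below are placeholders):

```python
# Minimal sketch of what a SqlAlchemy-based connector relies on:
# a dialect-specific connection URL handled by SQLAlchemy itself.
# The URL, credentials, and query are placeholder examples.
from sqlalchemy import create_engine, text

# Any database with a SQLAlchemy dialect can be targeted this way,
# e.g. "mysql://...", "postgresql://...", "oracle://...".
engine = create_engine("mysql://user:password@localhost:3306/hue")

with engine.connect() as connection:
    for row in connection.execute(text("SELECT 1")):
        print(row)
```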
With the JDBC proxy, you can use the query editor with any JDBC-compatible database. View the JDBC connector.
Note: In the long term, SqlAlchemy is preferred as it is more "Python native".
If the built-in HiveServer2 (Hive, Impala, Spark SQL), RDBMS (MySQL, PostgreSQL, Oracle, SQLite), and JDBC interfaces don’t meet your needs, you can implement your own connector to the notebook app: Notebook Connectors. Each connector API subclasses the Base API and must implement the methods defined within; refer to the JdbcApi or RdbmsApi for representative examples.
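As a rough sketch of the shape of such a connector (the base class is real, but the payloads and the `_submit` helper below are illustrative, not the exact contract):

```python
# Illustrative sketch of a custom notebook connector. The method names
# mirror the Base API in notebook/connectors/base.py, but the payloads
# are simplified examples; check JdbcApi or RdbmsApi for real ones.
from notebook.connectors.base import Api, QueryError


class MyEngineApi(Api):

    def execute(self, notebook, snippet):
        # Submit the statement to the engine and return a handle
        # that later calls can use to poll and fetch results.
        try:
            handle = self._submit(snippet['statement'])  # hypothetical helper
        except Exception as e:
            raise QueryError(e)
        return {'guid': handle, 'has_result_set': True}

    def check_status(self, notebook, snippet):
        # Poll the engine; 'available' tells the UI results can be fetched.
        return {'status': 'available'}

    def fetch_result(self, notebook, snippet, rows, start_over):
        # Return a page of results in the format the UI expects.
        return {'data': [[1]], 'meta': [{'name': 'col', 'type': 'INT_TYPE'}],
                'type': 'table', 'has_more': False}
```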
The Spark connector is based on the Livy REST API.
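For context, a minimal interaction with Livy looks like this; the connector wraps calls of this shape (host, port, and the snippet code are placeholders):

```python
# Sketch of talking to the Livy REST API directly.
import time
import requests

LIVY = 'http://localhost:8998'

# 1. Create an interactive session.
session = requests.post(LIVY + '/sessions', json={'kind': 'pyspark'}).json()
session_url = '%s/sessions/%s' % (LIVY, session['id'])

# 2. Wait until the session is idle, then submit a statement.
while requests.get(session_url).json()['state'] != 'idle':
    time.sleep(1)

statement = requests.post(session_url + '/statements',
                          json={'code': '1 + 1'}).json()

# 3. Poll the statement until it completes and print its output.
statement_url = '%s/statements/%s' % (session_url, statement['id'])
while True:
    result = requests.get(statement_url).json()
    if result['state'] == 'available':
        print(result['output'])
        break
    time.sleep(1)
```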
The Oozie connector covers MapReduce, Pig, Java, Shell, Sqoop, and DistCp jobs.
The Job Browser is generic: it can list any type of job or query, and provides bulk operations (kill, pause, delete...) as well as access to logs and recommendations.
Here is its API.
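A loose sketch of its shape (the method names approximate the base API in jobbrowser/apis/base_api.py; the payloads here are simplified, illustrative examples):

```python
# Illustrative sketch of a Job Browser API implementation. The base
# class lives in apps/jobbrowser/src/jobbrowser/apis/base_api.py; the
# return values below are made-up examples, not the exact contract.
from jobbrowser.apis.base_api import Api


class MyJobsApi(Api):

    def apps(self, filters):
        # Return the list of jobs matching the filters.
        return {'apps': [{'id': 'job_1', 'name': 'my job',
                          'status': 'RUNNING', 'type': 'mytype'}],
                'total': 1}

    def app(self, appid):
        # Return the details of a single job.
        return {'id': appid, 'name': 'my job', 'status': 'RUNNING'}

    def action(self, app_ids, operation):
        # Apply a bulk operation like kill or pause to a set of jobs.
        return {'status': 0, 'updated': app_ids}

    def logs(self, appid, app_type, log_name, is_embeddable=False):
        # Return the logs of a job.
        return {'logs': '...'}
```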
Hue can interact with various storage systems like Hadoop HDFS, AWS S3, and Azure ADLS. fsmanager.py is the main router to each API.
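Each backend exposes a common filesystem-like interface that the router dispatches to. A loose sketch of that shape (the method names below are illustrative; the actual contract is whatever the HDFS, S3, and ADLS client classes implement):

```python
# Sketch of the common filesystem shape that fsmanager.py routes to.
# Method names are illustrative, not the exact Hue interface.

class ExampleFileSystem(object):

    def stats(self, path):
        # Return metadata (size, owner, modification time...) for a path.
        raise NotImplementedError

    def listdir(self, path):
        # Return the names of the entries under a directory.
        raise NotImplementedError

    def open(self, path, mode='r'):
        # Return a file-like object for reading or writing.
        raise NotImplementedError

    def remove(self, path):
        raise NotImplementedError


# fsmanager-style dispatch: pick a backend from the URL scheme,
# e.g. 'hdfs://...', 's3a://...', 'adl://...'.
def get_filesystem(scheme, registry):
    return registry[scheme]
```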
Note: Ceph can be used via the S3 browser.
Dashboards are generic and support Apache Solr and SQL:
The API was influenced by Solr but is now generic:
Implementations: Solr, SQL.
A connector similar to the Solr or SqlAlchemy bindings would need to be developed (HUE-7828).
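As a loose sketch of what such a dashboard engine binding looks like (the class and method names below are made up for illustration; the real interface is whatever the existing Solr and SQL implementations share):

```python
# Illustrative sketch of a dashboard engine binding. Names and payloads
# are hypothetical; see the Solr and SQL implementations for the real API.

class ExampleDashboardApi(object):

    def fields(self, collection):
        # Describe the dimensions available for faceting and filtering.
        return [{'name': 'country', 'type': 'string'},
                {'name': 'amount', 'type': 'double'}]

    def query(self, collection, facets, filters):
        # Translate the generic facet/filter spec into the engine's
        # native query language and return aggregated results.
        return {'response': {'numFound': 0, 'docs': []},
                'facets': []}
```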