
HUE-8888 [docs] Refactoring of the connector API section

Romain, 6 years ago
commit b16bdde9f8

+ 1 - 1
desktop/libs/azure/src/azure/adls/webhdfs.py

@@ -102,4 +102,4 @@ class WebHdfs(HadoopWebHdfs):
     return UPLOAD_CHUCK_SIZE
 
   def filebrowser_action(self):
-    return self._filebrowser_action
+    return self._filebrowser_action

+ 6 - 12
docs/docs-site/content/developer/api/_index.md

@@ -50,21 +50,11 @@ Once the request is successful then capture headers and cookies for subsequent r
     # check metadata output
     print r.text
 
-
-### SQL Querying
-### SQL Risk Optimization
-### Data Browsing
-### Workflow scheduling
-
 ### Data Catalog
 
-The [metadata API](https://github.com/cloudera/hue/tree/master/desktop/libs/metadata) is powering [Search and Tagging here](http://gethue.com/improved-sql-exploration-in-hue-4-3/) and the [Query Assistant with Navigator Optimizer Integration](http://gethue.com/hue-4-sql-editor-improvements/).
+The [metadata API](https://github.com/cloudera/hue/tree/master/desktop/libs/metadata) powers [Search and Tagging](/user/browsing/#data-catalogs).
 
-The backends is pluggable by providing alternative [client interfaces](https://github.com/cloudera/hue/tree/master/desktop/libs/metadata/src/metadata/catalog):
-
-* Cloudera Navigator (default)
-* Apache Atlas ([HUE-8749](https://issues.cloudera.org/browse/HUE-8749))
-* Dummy (skeleton for integrating new catalogs)
+See the backend APIs in the [data catalog connector](/developer/connectors/#data-catalog) section.
 
 #### Searching for entities
 
@@ -192,6 +182,10 @@ Adding/updating a comment with the dummy backend:
       console.log(ko.mapping.toJSON(data));
     });
 
+### SQL Querying
+### SQL Risk Optimization
+### Data Browsing
+### Workflow scheduling
 
 ## Python
 

+ 44 - 1
docs/docs-site/content/developer/connectors/_index.md

@@ -63,12 +63,42 @@ The Job Browser is generic and can list any type of jobs, queries and provide bu
 
 Here is its [API](https://github.com/cloudera/hue/tree/master/apps/jobbrowser/src/jobbrowser/apis).
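+
+As a rough sketch, a new listing can be plugged in by implementing the common interface shared by the APIs in that directory (the class and method names below are illustrative assumptions, not the exact base class):
+
+    # Hypothetical skeleton of a Job Browser connector.
+    class MyJobsApi(object):
+
+      def apps(self, filters):
+        # List the jobs matching the filters (text, time range, states...).
+        return {'apps': [], 'total': 0}
+
+      def app(self, appid):
+        # Details of a single job, used by the job page.
+        return {'id': appid, 'status': 'RUNNING', 'progress': 50}
+
+      def action(self, app_ids, operation):
+        # Bulk operations, e.g. kill or pause the selected jobs.
+        return {}
+
+      def logs(self, appid, app_type, log_name):
+        # Fetch one of the job logs for troubleshooting.
+        return {'logs': ''}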
 
+#### SQL Queries
+
+The API currently supports:
+
+* [Apache Impala](https://github.com/cloudera/hue/blob/master/apps/jobbrowser/src/jobbrowser/apis/query_api.py)
+* [Apache Hive](https://github.com/cloudera/hue/blob/master/apps/jobbrowser/src/jobbrowser/apis/beeswax_query_api.py)
+
+#### Spark / Livy
+
+* [Livy API](https://github.com/cloudera/hue/blob/master/apps/jobbrowser/src/jobbrowser/apis/livy_api.py)
+
+#### Oozie
+
+* [Workflow API](https://github.com/cloudera/hue/blob/master/apps/jobbrowser/src/jobbrowser/apis/workflow_api.py)
+* [Coordinators API](https://github.com/cloudera/hue/blob/master/apps/jobbrowser/src/jobbrowser/apis/schedule_api.py)
+* [Bundles API](https://github.com/cloudera/hue/blob/master/apps/jobbrowser/src/jobbrowser/apis/bundle_api.py)
+
 ## File Browser
 
-Various storage systems like Hadoop HDFS, AWS S3 and Azure [ADLS](https://issues.cloudera.org/browse/HUE-7248) can be interacted with. The [`fsmanager.py`](https://github.com/cloudera/hue/blob/master/desktop/core/src/desktop/lib/fsmanager.py) is the main router to each API.
+Various storage systems can be interacted with. [`fsmanager.py`](https://github.com/cloudera/hue/blob/master/desktop/core/src/desktop/lib/fsmanager.py) is the main router to each of their APIs.
 
 **Note** Ceph can be used via the S3 browser.
 
+### Hadoop HDFS
+
+* [WebHdfs API](https://github.com/cloudera/hue/blob/master/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py)
+
+### AWS S3
+
+* [S3 API](https://github.com/cloudera/hue/blob/master/desktop/libs/aws/src/aws/s3)
+
+### Azure ADLS
+
+* [ADLS v2](https://github.com/cloudera/hue/blob/master/desktop/libs/azure/src/azure/abfs)
+* [ADLS v1](https://github.com/cloudera/hue/blob/master/desktop/libs/azure/src/azure/adls)
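+
+As a sketch of how these clients are reached, callers go through the `fsmanager` router rather than instantiating a client directly (the `get_filesystem` helper and the method names below are assumptions for illustration, based on the links above):
+
+    from desktop.lib import fsmanager
+
+    # Resolve the configured filesystem client (HDFS, S3, ADLS...) by name.
+    fs = fsmanager.get_filesystem('default')
+
+    # The clients expose a common file-like interface.
+    print(fs.exists('/user/demo'))
+    print(fs.listdir('/user/demo'))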
+
 ## Dashboard
 
 [Dashboards](/user/querying/#dashboards) are generic and support Apache Solr and SQL:
@@ -94,8 +124,21 @@ Implementations:
 
 A connector similar to Solr or SQL Alchemy binding would need to be developed [HUE-7828](https://issues.cloudera.org/browse/HUE-7828).
 
+## Data Catalog
+
+The backend is pluggable by providing alternative [client interfaces](https://github.com/cloudera/hue/tree/master/desktop/libs/metadata/src/metadata/catalog):
+
+* Cloudera Navigator (default)
+* Dummy (skeleton for integrating new catalogs)
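+
+As a starting point for a new catalog, a client exposing the same surface as the existing ones can be dropped into that directory (the method names below are illustrative, loosely modeled on the dummy skeleton rather than copied from it):
+
+    # Hypothetical minimal catalog client.
+    class MyCatalogClient(object):
+
+      def __init__(self, user, api_url):
+        self.user = user
+        self.api_url = api_url
+
+      def search_entities_interactive(self, query_s=None, limit=100):
+        # Free-text search of databases, tables, columns, saved queries...
+        return {'results': []}
+
+      def get_entity(self, entity_id):
+        # Metadata of a single entity (tags, description, properties).
+        return {'identity': entity_id, 'tags': [], 'properties': {}}
+
+      def add_tags(self, entity_id, tags):
+        # Attach one or more tags to an entity.
+        return {'identity': entity_id, 'tags': tags}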
+
+### Apache Atlas
+
+* [Client API](https://github.com/cloudera/hue/blob/master/desktop/libs/metadata/src/metadata/catalog/atlas_client.py)
+
 ## Scheduling
 
+### Oozie
+
Currently only Apache Oozie is supported for your Data Warehouse, but the API is being made generic with [HUE-3797](https://issues.cloudera.org/browse/HUE-3797), which brings Celery Beat integration.
 
 * [API](https://github.com/cloudera/hue/blob/master/desktop/core/src/desktop/lib/scheduler/lib/beat.py)
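+
+As an illustration of where the generic API is heading, a scheduler backend boils down to a small submit/list/control interface (the class and method names below are hypothetical, not the actual Celery Beat binding):
+
+    # Hypothetical shape of a pluggable scheduler backend.
+    class MySchedulerApi(object):
+
+      def submit_schedule(self, user, task, cron_expression):
+        # Register a task to run on the given cron schedule.
+        return {'id': 'schedule-1'}
+
+      def list_tasks(self, user):
+        # List the schedules owned by the user.
+        return []
+
+      def action(self, schedule_id, operation):
+        # Pause, resume or kill an existing schedule.
+        return {}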

+ 2 - 2
docs/docs-site/content/user/browsing/_index.md

@@ -525,7 +525,7 @@ tasks in layers for quick access to the logs and troubleshooting.
 
 Any job running on the Resource Manager will be automatically listed. The information will be fetched accordingly if the job got moved to one of the history servers.
 
-### Impala Queries
+### SQL Queries
 
 There are three ways to access the Query browser:
 
@@ -553,6 +553,6 @@ List submitted workflows, schedules and bundles. See more in details in the [Sch
 
 ![Oozie jobs](https://cdn.gethue.com/uploads/2016/04/hue-dash-oozie.png)
 
-### Livy / Spark
+### Spark / Livy
 
 List Livy sessions and submitted statements.

+ 9 - 7
docs/docs-site/content/user/querying/_index.md

@@ -19,7 +19,9 @@ The configuration of the connectors is currently done by the [Administrator](/ad
 
 ![Editor](https://cdn.gethue.com/uploads/2019/08/hue_4.5.png)
 
-### Running Queries
+### Running
+
+#### Queries
 
 SQL query execution is the primary use case of the Editor.
 
@@ -43,12 +45,12 @@ When you have multiple statements it's enough to put the cursor in the statement
 
 **Note**: On top of the logs panel, there is a link to open the query profile in the [Query Browser](/user/browsing/#impala-queries).
 
-### Running Jobs
+#### Jobs
 
 In addition to SQL, these types of jobs are supported:
 
-* [Apache Pig](https://pig.apache.org/) Latin instructions to load/merge data to perform ETL or Analytics.
-* Running an SQL import from a traditional relational database via an [Apache Sqoop](https://sqoop.apache.org/) command.
+* [Apache Pig](https://pig.apache.org/) Latin instructions to load/merge data to perform ETL or analytics.
+* Running an [SQL import](/user/browsing/#relational-databases) from a traditional relational database via an [Apache Sqoop](https://sqoop.apache.org/) command.
 * Regular Java, MapReduce, [shell script](http://gethue.com/use-the-shell-action-in-oozie/).
 * [Spark](http://gethue.com/use-the-spark-action-in-oozie/) Jar or Python script to trial and error them in YARN via [Oozie](http://gethue.com/how-to-schedule-spark-jobs-with-spark-on-yarn-and-oozie/) or Livy.
 
@@ -194,7 +196,7 @@ While editing, Hue will run your queries through Navigator Optimizer in the back
 
 #### During execution
 
-The Query Browser details the plan of the query and the bottle necks. When detected, "Health" risks are listed with suggestions on how to fix them.
+The [Query Browser](/user/browsing/#sql-queries) details the plan of the query and the bottlenecks. When detected, "Health" risks are listed with suggestions on how to fix them.
 
 ![Pretty Query Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-11.40.24-AM.png)
 
@@ -218,7 +220,7 @@ To toggle the dark mode you can either press `Ctrl-Alt-T` or `Command-Option-T`
 
 Scheduling is detailed in its [own section](/user/scheduling/).
 
-## Dashboards
+## Dashboard
 
 Dashboards provide an interactive way to query indexed data quickly and easily. No programming is required and the analysis is done by drag & drops and clicks.
 
@@ -314,7 +316,7 @@ The main advantage is to be able to add snippets of different dialects (e.g. PyS
 
 ![Notebook mode](https://cdn.gethue.com/uploads/2015/10/notebook-october.png)
 
-Any configured language of the Editor will be available as a dialect. Each snippet has a code editor, wih autocomplete, syntax highlighting and other feature like shortcut links to HDFS paths and Hive tables have been added.
+Any configured language of the Editor will be available as a dialect. Each snippet has a code editor, with autocomplete, syntax highlighting and other features like shortcut links to HDFS paths and Hive tables.
 
 ![Notebook Screen](https://cdn.gethue.com/uploads/2015/08/notebook.png)