
HUE-8888 [docs] Large revamp of the browser section

Romain · 6 years ago · commit b243e14308

+ 5 - 0
docs/docs-site/content/developer/connectors/_index.md

@@ -99,6 +99,11 @@ Various storage systems can be interacted with. The [`fsmanager.py`](https://git
 * [ADLS v2](https://github.com/cloudera/hue/blob/master/desktop/libs/azure/src/azure/abfs)
 * [ADLS v1](https://github.com/cloudera/hue/blob/master/desktop/libs/azure/src/azure/adls)

+### HBase / Key Value Stores
+
+With just a few changes in the [Python API](https://github.com/cloudera/hue/blob/master/apps/hbase/src/hbase/api.py),
+the HBase browser could be compatible with Apache Kudu or Google Big Table.
+
 ## Dashboard

 [Dashboards](/user/querying/#dashboards) are generic and support Apache Solr and SQL:

+ 72 - 21
docs/docs-site/content/user/browsing/_index.md

@@ -344,6 +344,27 @@ The File Browser application lets you interact with these file systems HDFS, S3
 -   View and edit files as text or binary.
 -   Create external tables or export query results

+**Exploring ADLS in Hue’s file browser**
+
+Once Hue is successfully configured to connect to ADLS, we can view all accessible folders within the account by clicking on the ADLS root. From here, we can view the existing keys (both directories and files) and create, rename, move, copy, or delete existing directories and files. Additionally, we can directly upload files to ADLS.
+
+![Browse files](https://cdn.gethue.com/uploads/2016/08/image2.png)
+
+**Create Hive Tables Directly From ADLS**
+
+Hue’s table browser import wizard can create external Hive tables directly from files in ADLS. This allows ADLS data to be queried via SQL from Hive or Impala, without moving or copying the data into HDFS or the Hive Warehouse. To create an external Hive table from ADLS, navigate to the table browser, select the desired database and then click the plus icon in the upper right. Select a file using the file picker and browse to a file on ADLS.
+
+Choose your input files’ delimiter and press next. Leave “Store in Default location” unchecked if you want the file to stay intact on ADLS, update the column definition options, and finally click “Submit” when you’re ready to create the Hive table. Once created, you should see the newly created table details in the table browser.
+
+![Create tables from external files](https://cdn.gethue.com/uploads/2017/11/image4-1.png)
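+
+Under the hood, the wizard issues a `CREATE EXTERNAL TABLE` statement whose location points at the ADLS directory. Here is a minimal sketch of what such a statement can look like (the table name, columns, delimiter and ADLS account/path are illustrative):
+
+    CREATE EXTERNAL TABLE web_logs (
+      log_date STRING,
+      request STRING,
+      bytes_sent INT
+    )
+    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
+    LOCATION 'adl://youraccount.azuredatalakestore.net/data/web_logs/';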
+
+**Save Query Results to ADLS**
+
+Now that we have created external Hive tables from our ADLS data, we can jump into either the Hive or Impala editor and start querying the data directly from ADLS seamlessly. These queries can join tables and objects that are backed either by ADLS, HDFS, or both. Query results can then easily be saved back to ADLS.
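+
+For instance, a single query can combine an ADLS-backed table with an HDFS-backed one (the table and column names below are illustrative):
+
+    SELECT logs.request, customers.name
+    FROM web_logs_adls logs
+    JOIN customers_hdfs customers ON logs.customer_id = customers.id
+    LIMIT 100;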
+
+![Save results to storage](https://cdn.gethue.com/uploads/2017/11/image1-1.png)
+
+
 ### HDFS

 Hue is fully compatible with HDFS and is handy for browsing, peeking at file content, upload or downloading data.
@@ -372,11 +393,6 @@ Topics, Streams can be listed via the [`ksql` connector](/administrator/configur
 
 
 ### HBase

-We'll take a look at the [HBase Browser App](http://gethue.com/the-web-ui-for-hbase-hbase-browser).
-
-**Note**: With just a few changes in the [Python API](https://github.com/cloudera/hue/blob/master/apps/hbase/src/hbase/api.py),
-the HBase browser could be compatible with Apache Kudu or Google Big Table.
-
 
 
 #### Smart View

@@ -388,35 +404,41 @@ amount of standard database operations. To explore a row, simple scroll
 to the right. By scrolling, the row should continue to lazily-load cells
 until the end.

+![HBase](https://lh4.googleusercontent.com/rSmhp0hTq4xtod8SsoIn1A8tp7omHB46j0xtpnmtOQAHzn1PHw1C0rN7Yq8CBq0WOeSh_GVfFWB1P0mKsGGWIpAnGr-mxxJRIR3uW4exevkS5_mKBG0xIbJW)
+
 #### Adding Data

 To initially populate the table, you can insert a new row or bulk upload
 CSV/TSV/etc. type data into your table.

+![HBase](https://lh4.googleusercontent.com/3aMhyC8qDYdNf98Ge8qbD2EPXzCiL62lCWxHpzhfiYfZPj1F-nAgu3IhbuDYQpTVz1OCqaMDC1WDZ617YfiTsZDafbhHjXufv_f9yyXJbk95fMLNlywLZkHS)
+
+On the right hand side of a row is a '+' sign that lets you insert columns into your row.
 
 
-On the right hand side of a row is a '+' sign that lets you insert
-columns into your
-row
 
 
 #### Mutating Data

 To edit a cell, simply click to edit inline.

+![HBase](https://lh4.googleusercontent.com/ADTmywVLvEGPordZoEdsOIFkzCWlgc6lG6hrQdtAzT74nHgXqmyto4tPEqqrNmwk0pu709EnP_VIPAgvFPhlPT7NYSDj4LCbApRmw1z-mPyad2jMehWXiZAb)
+
 If you need more control or data about your cell, click “Full Editor” to
 edit.

+![HBase](https://lh4.googleusercontent.com/irYJEB6muPCT5Oj3x-LJvMZIhSskXJhIJUsnYL00VpaoYKNTI8NnL09WsmzkxuryFWQpETnUb6EfRkT3ZrrTu7-yAXRDmDCG940Ssh-wbJhaGYt3Sj4txn4T)
+
 In the full editor, you can view cell history or upload binary data to
 the cell. Binary data of certain MIME Types are detected, meaning you
 can view and edit images, PDFs, JSON, XML, and other types directly in
 your browser!

+![HBase](https://lh5.googleusercontent.com/N5MqnAhIPQ5D7KSU-ulHTLS0mGFZqC22ciwKGeWhntzpYx4bvqCSvcTc3xCYfCCP6HuxNTr7FlEVMowbSIJ_1nOt36wOXzNpvC-Bhy3gRXve4rIS-Ei6t_By)
+
 Hovering over a cell also reveals some more controls (such as the delete
 button or the timestamp). Click the title to select a few and do batch
 operations:

-If you need some sample data to get started and explore, check out this
-howto create [HBase table
-tutorial](http://gethue.com/hadoop-tutorial-how-to-create-example-tables-in-hbase).
+![HBase](https://lh3.googleusercontent.com/ECcsG6M0zGESG4vuHO8KvgsxrGPbZ5cEhbFxjq2uPhgKzUS-8eTaPq3W2P-rSm13fLxEnEMJY1yFJ8pb2IBmy2KwhGgdFjqQUOTQhQV0sWsxnPFPxpjvoe3T)
 
 
 
 
 #### Smart Searchbar
@@ -430,6 +452,8 @@ two row keys with:
     domain.100, domain.200


+![HBase](https://lh4.googleusercontent.com/2swltMjM0iwMfsN5oL4CAGJvg_2ZEow_swIfUbUqfugC6WfwY7zSlCBeejTTH9u7ixy5w01KKJv4YEoh3ipGTQQrm0PZGgRxXyuqlD4XKS39w3NMVxSHGrx5)
+
 Submitting this query gives me the two rows I was looking for. If I want
 to fetch rows after one of these, I have to do a scan. This is as easy
 as writing a '+' followed by the number of rows you want to fetch.
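+
+For example, a scan that starts at one of these keys and fetches the next five rows could look like this (the row key is illustrative):
+
+    domain.100 +5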
@@ -531,17 +555,44 @@ There are three ways to access the Query browser:
 
 
 ![Pretty Query Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-11.40.24-AM.png)

-Query capabilities
+There are three ways to access the new browser:
+
+* Best: Click on the query ID after executing a SQL query in the editor. This will open the mini job browser overlay at the current query. Having the query execution information side by side with the SQL editor is especially helpful for understanding the performance characteristics of your queries.
+* Open the mini job browser overlay and navigate to the queries tab.
+* Open the job browser and navigate to the queries tab.
+
+
+**Query capabilities**
+
+Display the list of currently running queries on the user’s current Impala coordinator and a certain number of completed queries based on your configuration (25 by default).
+
+![Pretty Query Profile](https://cdn.gethue.com/uploads/2017/12/JB.png)
+
+
+Display the explain plan which outlines the logical execution steps. You can verify here that the execution will not proceed in an unexpected way (e.g. wrong join type, join order, projection order). This can happen if the statistics for the table are out of date, as shown in the image below by the mention of “cardinality: unavailable”. You can obtain statistics by running:
+
+    COMPUTE STATS <TABLE_NAME>
+
+![Pretty Query Profile](https://cdn.gethue.com/uploads/2017/11/Explain.png)
+
+Display the summary report which shows physical timing and memory information of each operation of the explain plan. You can quickly find bottlenecks in the execution of the query which you can resolve by replacing expensive operations, repartitioning, changing file format or moving data.
+
+![Pretty Query Profile](https://cdn.gethue.com/uploads/2017/11/Summary.png)
+
+Display the query plan which is a condensed version of the summary report in graphical form.
+
+![Pretty Query Profile](https://cdn.gethue.com/uploads/2017/12/Plan.png)
+
+Display the memory profile which contains information about the memory usage during the execution of the query. You can use this to determine if the memory available to your query is sufficient.
+
+![Pretty Query Profile](https://cdn.gethue.com/uploads/2017/11/Memory.png)
+
+Display the profile which gives you the physical execution of the query in great detail. This view is used to analyze the data exchange between the various operators and the performance of the IO (disk, network, CPU). You can use this to reorganize the location of your data (on disk, in memory, different partitions or file formats).
+
+![Pretty Query Profile](https://cdn.gethue.com/uploads/2017/12/Profile.png)
 
 
-* Display the list of currently running queries on the user's current Impala coordinator and a certain number of completed queries based on your configuration (25 by default).
-* Display the summary report which shows physical timing and memory information of each operation of the explain plan. You can quickly find bottlenecks in the execution of the query which you can resolve by replacing expensive operations, repartitioning, changing file format or moving data.
-* Display the query plan which is a condensed version of the summary report in graphical form
-* Display the memory profile which contains information about the memory usage during the execution of the query. You can use this to determine if the memory available to your query is sufficient.
-* Display the profile which gives you physical execution of the query in great detail. This view is used to analyze data exchange between the various operator and the performance of the IO (disk, network, CPU). You can use this to reorganize the location of your data (on disk, in memory, different partitions or file formats).
-* Manually close an opened query.
+Finally, you can manually close an open query.
 
 
-Read more about it on [Browsing Impala Query Execution within the SQL Editor
-](http://gethue.com/browsing-impala-query-execution-within-the-sql-editor/).
 
 
 ### YARN (Spark, MapReduce, Tez)

@@ -556,4 +607,4 @@ List submitted workflows, schedules and bundles. See more in details in the [Sch
 
 
 ### Spark / Livy

-List Livy sessions and submitted statements.
+List [Spark Livy](/user/querying/#spark) sessions and submitted statements.

+ 152 - 0
docs/docs-site/content/user/querying/_index.md

@@ -188,6 +188,158 @@ The [Query Browser](/user/browsing/#sql-queries) details the plan of the query a
 
 
 ![Pretty Query Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-11.40.24-AM.png)

+#### Tutorial
+
+After finding data in the Catalog and using the Query Assistant, end users might wonder why their queries are taking a lot of time to execute. Built on top of the Impala profiler, this feature educates them and surfaces more information so that they can be more productive by themselves. Here is a scenario that showcases the flow:
+
+**Execution Timeline**
+
+To give you a feel for the new features, we’ll execute a few queries.
+
+    SELECT *
+    FROM
+      transactions1g s07 left JOIN transactions1g s08
+    ON ( s07.field_1 = s08.field_1) limit 100
+
+transactions1g is a 1GB table and the self join with no predicates will force a network transfer of the whole table.
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-06-at-4.08.01-PM.png)
+
+Looking at the profile, you can see a number on the top right of each node that represents its IO and CPU time. There’s also a timeline that gives an estimated representation of when that node was processed during execution. The dark blue color is the CPU time, while the lighter blue is the network or disk IO time. In this example, we can see that the hash join ran for 2.5s. The exchange node, which does the network transfer between 2 hosts, was the most expensive node at 7.2s.
+
+**Detail pane**
+
+On the right hand side, there is now a pane that is closed by default. To open or close it, press on the header of the pane. There, you will find a list of all the nodes sorted by execution time, which makes it easier to navigate larger execution graphs. The list is clickable and navigates to the corresponding node.
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-06-at-4.12.38-PM.png)
+
+**Events**
+
+Pressing on the exchange node, we find the execution timeline with a bit more detail.
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-06-at-4.13.40-PM.png)
+
+We see that the IO was the most significant portion of the exchange.
+
+**Statistics by host**
+
+The detail pane also contains detailed statistics aggregated per host per node such as memory consumption and network transfer speed.
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-06-at-4.16.11-PM.png)
+
+**Risks**
+
+In the detail pane, for each node, you will find a section titled risks. This section will contain hints on how to improve performance for this operator. Currently, this is not enabled by default. To enable it, go to your Hue ini file and enable this flag:
+
+    [notebook]
+    enable_query_analysis=true
+
+**CodeGen**
+
+Let’s look at a few queries and some of the risks that can be identified.
+
+    SELECT s07.description, s07.salary, s08.salary,
+      s08.salary - s07.salary
+    FROM
+      sample_07 s07 left outer JOIN sample_08 s08
+    ON ( s07.code = s08.code)
+    where s07.salary > 100000
+
+sample_07 & sample_08 are small sample tables that come with Hue.
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-9.35.23-AM.png)
+
+Looking at the graph, we see that the timelines are mostly empty. If we open one of the nodes, we see that all the time is taken by “CodeGen”.
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-9.40.50-AM.png)
+
+Impala compiles SQL requests to native code to execute each node in the graph. On queries with large tables this gives a large performance boost. On smaller tables, we can see that CodeGen is the main contributor to execution time. Normally, Impala disables CodeGen for tables of small sizes, but here Impala doesn’t know it’s a small table, as is pointed out in the risks section by the statement “Statistics missing”. Two solutions are available here:
+
+The first is to add the missing statistics. One way to do this is to execute the following commands:
+
+    compute stats sample_07;
+    compute stats sample_08;
+
+This is usually the right thing to do, but on larger tables it can be quite expensive.
+
+The second is to disable codegen for the query via:
+
+    set DISABLE_CODEGEN=true
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-9.52.37-AM.png)
+
+After rerunning the query, we see that CodeGen is now gone.
+
+**Join Order**
+
+If we open the join node, there’s a warning for wrong join order.
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-4.50.54-PM.png)
+
+Impala prefers having the table with the larger size on the right hand side of the graph, but in this case the reverse is true. Normally, Impala would optimize this automatically, but we saw that the statistics were missing for the tables being joined. There are a few ways we could fix this:
+
+* Add the missing statistics as described earlier.
+* Rewrite the query to change the join order:
+
+    ```
+    SELECT s07.description, s07.salary, s08.salary,
+      s08.salary - s07.salary
+    FROM
+      sample_08 s08 left outer JOIN sample_07 s07
+    ON ( s07.code = s08.code)
+    where s07.salary > 100000
+    ```
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-9.57.14-AM.png)
+
+The warning is gone and the execution time for the join is down.
+
+**Spilling**
+
+Impala will execute all of its operators in memory if enough is available. If the execution does not all fit in memory, Impala will use the available disk to store its data temporarily. To see this in action, we’ll use the same query as before, but we’ll set a memory limit to trigger spilling:
+
+    set MEM_LIMIT=1g;
+    select *
+    FROM
+      transactions1g s07 left JOIN transactions1g s08
+    ON ( s07.field_1 = s08.field_1);
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-11.40.24-AM.png)
+
+Looking at the join node, we can see that there’s an entry in the risk section about a spilled partition. Typically, the join only has CPU time, but in this case it also has IO time due to the spill.
+
+**Kudu Filtering**
+
+Kudu is one of the supported storage backends for Impala. While standalone Impala can query a variety of file data formats, Impala on Kudu allows fast updates and inserts on your data, and is also a better choice if small files are involved. When using Impala on Kudu, Impala will push down some of the operations to Kudu to reduce the data transfer between the two.
+
+However, Kudu does not support all the operators that Impala supports. For example, at the time of writing, Impala supports the ‘like’ operator, but Kudu does not. In those cases, all the data that cannot be natively filtered in Kudu is transferred to Impala where it will be filtered. Let’s look at a behavior difference between the two.
+
+    SELECT * FROM transactions1g_kudu s07 left JOIN transactions1g_kudu s08 on s07.field_1 = s08.field_1
+    where s07.field_5 LIKE '2000-01%';
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-5.00.59-PM.png)
+
+When we look at the graph, we see that on the Kudu node we have both IO, which represents the time spent in Kudu, and CPU, which represents the time spent in Impala, for a total of 2.1s. In the risk section, we can also find a warning that Kudu could not evaluate the predicate.
+
+    SELECT * FROM transactions1g_kudu s07 left JOIN transactions1g_kudu s08 on s07.field_1 = s08.field_1
+    where s07.field_5 <= '2000-01-31' and s07.field_5 >= '2000-01-01';
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-4.02.33-PM.png)
+
+When we look at the graph, we see that the Kudu node now mostly has IO, for a total time of 727ms.
+
+**Others**
+
+You might also have queries where the nodes have short execution times, but the total duration is long. Using the same query, we see that all the nodes have sub-10ms execution times, but the query execution took 7.9s.
+
+![Impala Profile](https://cdn.gethue.com/uploads/2019/03/Screen-Shot-2019-03-07-at-10.56.07-AM.png)
+
+Looking at the global timeline, we see that the planning phase took 3.8s with most of the time in metadata load. When Impala doesn’t have metadata about a table, which can happen after a user executes:
+
+    invalidate metadata;
+
+Impala has to refetch the metadata from the metastore. Furthermore, we see that the second most expensive item, at 4.1s, is “first row fetched”. This is the time it took the client, Hue in this case, to fetch the results. While both of these events are not things that a user can change, it’s good to see where the time is spent.
+
 #### Post-query execution

 A new experimental panel when enabled can offer post risk analysis and recommendation on how to tweak the query for better speed.

+ 67 - 2
docs/docs-site/content/user/scheduling/_index.md

@@ -9,14 +9,79 @@ Scheduling of queries or jobs (e.g. run this SQL query everyday at 5am) is curre
 
 
 ## Editor

-Workflows can be built by pointing to query scripts on the file systems or saved queries. A workflow can then be scheduled to run regularly via a schedule.
+Workflows can be built by pointing to query scripts on the file systems or just selecting one of your saved queries. A workflow can then be scheduled to run regularly via a schedule.
 
 
-![Oozie workflows](https://cdn.gethue.comuploads/2016/04/hue-workflows.png)
+![Oozie workflows](https://cdn.gethue.com/uploads/2016/04/hue-workflows.png)
 
 
 Many users leverage the workflow editor to get the Oozie XML configuration of their workflows.

+### Tutorial
+
+How to run Spark jobs with Spark on YARN? This often requires trial and error in order to make it work.
+
+Hue is leveraging Apache Oozie to submit the jobs. It focuses on the yarn-client mode, as Oozie is already running the spark-submit command in a MapReduce2 task in the cluster. You can read more about the Spark deployment modes in the Spark documentation.
+
+Here is how to get started successfully:
+
+#### PySpark
+
+Simple script with no dependency.
+
+![Oozie workflows](https://cdn.gethue.com/uploads/2016/08/oozie-pyspark-simple.png)
+
+Script with a dependency on another script (e.g. hello imports hello2).
+
+![Oozie workflows](https://cdn.gethue.com/uploads/2016/08/oozie-pyspark-dependencies.png)
+
+For more complex dependencies, like Pandas, have a look at the documentation.
+
+
+#### Jars (Java or Scala)
+
+Add the jars as a File dependency and specify the name of the main jar:
+
+![Oozie workflows](https://cdn.gethue.com/uploads/2016/08/spark-action-jar.png)
+
+Another solution is to put your jars in the ‘lib’ directory in the workspace (‘Folder’ icon on the top right of the editor).
+
+![Oozie workflows](https://cdn.gethue.com/uploads/2016/08/oozie-spark-lib2.png)
+
+#### Shell
+
+If the executable is a standard Unix command, you can directly enter it in the `Shell Command` field and click the Add button.
+
+![Shell action](https://cdn.gethue.com/uploads/2015/10/1.png)
+
+Arguments to the command can be added by clicking the `Arguments+` button.
+
+![Shell action](https://cdn.gethue.com/uploads/2015/10/2.png)
+
+The `${VARIABLE}` syntax will allow you to dynamically enter the value via the Submit popup.
+
+![Shell action](https://cdn.gethue.com/uploads/2015/10/31.png)
+![Shell action](https://cdn.gethue.com/uploads/2015/10/4.png)
+
+If the executable is a script instead of a standard UNIX command, it needs to be copied to HDFS and the path can be specified by using the File Chooser in the Files+ field.
+
+    #!/usr/bin/env bash
+    sleep 60
+
+![Shell action](https://cdn.gethue.com/uploads/2015/10/5.png)
+
+Additional Shell-action properties can be set by clicking the settings button at the top right corner.
+
 ## Browser

 Submitted workflows, schedules and bundles can be managed directly via an interface:

 ![Oozie jobs](https://cdn.gethue.com/uploads/2016/04/hue-dash-oozie.png)
+
+### Extra Coordinator actions
+
+Update the Concurrency and PauseTime of a running Coordinator.
+
+![Oozie jobs](https://cdn.gethue.com/uploads/2015/08/edit-coord.png)
+
+Ignore a terminated Coordinator action.
+
+![Oozie jobs](https://cdn.gethue.com/uploads/2015/08/ignore.png)