
HUE-8756 [docs] Another pass of clean-up on the user section

Romain Rigaux 6 years ago
parent commit 4814746bcc

+ 1 - 5
docs/docs-site/content/user/_index.md

@@ -6,8 +6,4 @@ chapter = false
 pre = "<b>2. </b>"
 +++
 
-Hue consists in 4 apps in a single page interface that allow the users to perform data
-analyses without losing any context. The goal is to promote self service and stay simple like Excel
-so that 80% of the user can find, explore and query data and become more data driven.
-
-Here are the main functionalities.
+This section describes the main functionalities from an end-user point of view.

+ 65 - 311
docs/docs-site/content/user/browsers/_index.md

@@ -1,120 +1,17 @@
 ---
-title: "Browser"
+title: "Catalog"
 date: 2019-03-13T18:28:09-07:00
 draft: false
 weight: 4
 ---
 
-## Data Importer
-
-The goal of the importer is to allow ad hoc queries on data not yet in the clusters thereby expedite self-service analytics.
-
-If you want to import your own data instead of installing the sample
-tables, open the importer from the left menu or from the little `+` in the left assist.
-
-If you've ever struggled with creating new SQL tables from files, you'll be happy to learn that this is now much easier. The wizard has been revamped to two simple steps and also offers more formats. Now users just need to:
-
-1. Select a source type
-2. Select the type of object for the destination
-
-And that's it!
-
-To learn more, watch the video on [Data Import Wizard](http://gethue.com/import-data-to-be-queried-via-the-self-service-drag-drop-create-table-wizard/).
+Hue Browsers power the Data Catalog. They let you easily search, glance at, and perform actions on data or jobs in Cloud or on-premise clusters.
 
 ## SQL Tables
 
-Although you can create tables by executing the appropriate Hive HQL DDL
-query commands, it is easier to create a table using the create table wizard.
-
-**From a File**
-
-If you've ever struggled with creating new SQL tables from files, you'll be happy to learn that this is now much easier. With the latest Hue release, you can now create these in an ad hoc way and thereby expedite self-service analytics. The wizard has been revamped to two simple steps and also offers more formats. Now users just need to:
-
-1. In the Importer Manager selects source from a 'File'
-1. Select the type of table
-
-Files can be dragged & dropped, selected from HDFS or S3 (if configured), and their formats are automatically detected. The wizard also assists when performing advanced functionalities like table partitioning, Kudu tables, and nested types.
-
-
-**Manually**
-
-1.  In the Importer Manager selects 'Manually'
-2.  Follow the instructions in the wizard to create the table. The basic
-    steps are:
-    -   Name the table.
-    -   Choose the record format.
-    -   Configure record serialization by specifying delimiters for
-        columns, collections, and map keys.
-    -   Choose the file format.
-    -   Specify the location for your table's data.
-    -   Specify the columns, providing a name and selecting the type for
-        each column.
-    -   Specify partition columns, providing a name and selecting the
-        type for each column.
-
-
-## Indexing
-
-In the past, indexing data into Solr to then explore it with a [Dynamic Dashboard](http://gethue.com/search-dashboards/) has been quite difficult. The task involved writing a Solr schema and a Morphlines file then submitting a job to YARN to do the indexing. Often times getting this correct for non trivial imports could take a few days of work. Now with Hue's new feature you can start your YARN indexing job in minutes. This tutorial offers a step by step guide on how to do it.
-
-[Read more about it here](http://gethue.com/easy-indexing-of-data-into-solr/).
-
-## Traditional Databases
-
-Read more about [ingesting data from traditional databases](http://gethue.com/importing-data-from-traditional-databases-into-hdfshive-in-just-a-few-clicks/).
-
-
-# Dashboards
-Dashboards are an interactive way to explore your data quickly and easily. No programming is required and the analysis is done by drag & drops and clicks.
-
-Read more about [Dashboards](http://gethue.com/search-dashboards/).
+The Table Browser enables you to manage the databases, tables, and partitions of the metastore shared by Hive and Impala. You can perform the following operations:
 
-## Concepts
-
-Simply drag & drop widgets that are interconnected together. This is great for exploring new datasets or monitoring without having to type.
-
-### Importing
-
-Any CSV file can be dragged & dropped and ingested into an index in a few clicks via the Data Import Wizard [link]. The indexed data is immediately queryable and its facets/dimensions will be very fast to explore.
-
-### Browsing
-
-The Collection browser got polished in the last releases and provide more information on the columns. The left metadata assist of Hue 4 makes it handy to list them and peak at their content via the sample popup.
-
-### Querying
-
-The search box support live prefix filtering of field data and comes with a Solr syntax autocomplete in order to make the querying intuitive and quick. Any field can be inspected for its top values of statistic. This analysis happens very fast as the data is indexed.
-
-## Databases
-
-### Solr
-
-#### Autocomplete
-
-The top search bar offers a [full autocomplete](http://gethue.com/intuitively-discovering-and-exploring-a-wine-dataset-with-the-dynamic-dashboards/) on all the values of the index.
-
-#### More Like This
-The “More like This” feature lets you selected fields you would like to use to find similar records. This is a great way to find similar issues, customers, people... with regard to a list of attributes.
-
-### SQL
-
-## Reports
-
-This is work in progress but dashboards will soon offer a classic reporting option.
-
-## SDK
-Read more about extending [connectors](../sdk/sdk.html#dashboard).
-
-
-# Browsers
-Hue's Browsers powers your Data Catalog. They let you easily search, glance and perform actions on data or jobs in Cloud or on premise clusters.
-
-## Tables
-
-The Table Browser enables you to manage the databases,
-tables, and partitions of the metastore shared by
-the Hive and Impala. You can use Metastore
-Manager to perform the following operations:
+-   Search and display metadata such as tags and additional descriptions from [Catalog backends](/administrator/configuration/external/).
 
 -   Databases
     -   Select a database
@@ -130,207 +27,92 @@ Manager to perform the following operations:
     -   [Filter, Sort and Browse Partitions](http://gethue.com/filter-sort-browse-hive-partitions-with-hues-metastore/)
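
For reference, the Table Browser surfaces roughly the same metadata as the following SQL statements (database and table names are illustrative):

    -- List databases and tables known to the shared metastore
    SHOW DATABASES;
    SHOW TABLES IN default;

    -- Columns, storage location and basic statistics of a table
    DESCRIBE FORMATTED default.web_logs;

    -- Browse the partitions of a partitioned table
    SHOW PARTITIONS default.web_logs;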
 
 
-## Files
-
-The File Browser application lets you browse and manipulate files and
-directories in the Hadoop Distributed File System (HDFS), S3 or ADLS.
-With File Browser, you can:
-
--   Create files and directories, upload and download files, upload zip
-    archives, and rename, move, and delete files and directories. You
-    can also change a file's or directory's owner, group, and
-    permissions. See [Files and Directories](#filesAndDirectories).
--   Search for files, directories, owners, and groups. See [Searching
-    for Files and Directories](#searching).
--   View and edit files as text or binary. See [Viewing and Editing
-    Files](#viewAndEdit).
-
-### File systems
-#### HDFS
-#### S3
-
-Hue can be setup to read and write to a configured S3 account, and users get autocomplete capabilities and can directly query from and save data to S3 without any intermediate moving/copying to HDFS.
-
-[Read more about it](http://gethue.com/introducing-s3-support-in-hue/).
-
-**Create Hive Tables Directly From S3**
-Hue's Metastore Import Data Wizard can create external Hive tables directly from data directories in S3. This allows S3 data to be queried via SQL from Hive or Impala, without moving or copying the data into HDFS or the Hive Warehouse.
-
-To create an external Hive table from S3, navigate to the Metastore app, select the desired database and then click the “Create a new table from a file” icon in the upper right.
-
-Enter the table name and optional description, and in the “Input File or Directory” filepicker, select the S3A filesystem and navigate to the parent directory containing the desired data files and click the “Select this folder” button. The “Load Data” dropdown should automatically select the “Create External Table” option which indicates that this table will directly reference an external data directory.
-
-Choose your input files' delimiter and column definition options and finally click “Create Table” when you're ready to create the Hive table. Once created, you should see the newly created table details in the Metastore.
-
-**Save Query Results to S3**
-
-Now that we have created external Hive tables created from our S3 data, we can jump into either the Hive or Impala editor and start querying the data directly from S3 seamlessly. These queries can join tables and objects that are backed either by S3, HDFS, or both. Query results can then easily be saved back to S3.
-
-
-**S3 Configuration**
-
-[Hue S3 Documentation](/administrator/connectors/files/#s3).
-
-
-#### ADLS
-
-Learn more about it on the [ADLS integration post](http://gethue.com/browsing-adls-data-querying-it-with-sql-and-exporting-the-results-back-in-hue-4-2/).
-
- Users gets autocomplete capabilities and more:
-
-**Exploring ADLS in Hue's file browser**
-Once Hue is successfully configured to connect to ADLS, we can view all accessible folders within the account by clicking on the ADLS root. From here, we can view the existing keys (both directories and files) and create, rename, move, copy, or delete existing directories and files. Additionally, we can directly upload files to ADLS.
-
-**Create Hive Tables Directly From ADLS**
-Hue's table browser import wizard can create external Hive tables directly from files in ADLS. This allows ADLS data to be queried via SQL from Hive or Impala, without moving or copying the data into HDFS or the Hive Warehouse. To create an external Hive table from ADLS, navigate to the table browser, select the desired database and then click the plus icon in the upper right. Select a file using the file picker and browse to a file on ADLS.
-
-**Save Query Results to ADLS**
-Now that we have created external Hive tables created from our ADLS data, we can jump into either the Hive or Impala editor and start querying the data directly from ADLS seamlessly. These queries can join tables and objects that are backed either by ADLS, HDFS, or both. Query results can then easily be saved back to ADLS.
-
-
-**ADLS Configuration**
-
-[Hue ADLS Documentation](/administrator/connectors/files/#adls).
-
-<a id="fileAndDirectories"></a>
-### Files and Directories
-
-You can use File Browser to view the input and output files of your
-MapReduce jobs. Typically, you can save your output files in /tmp or in
-your home directory if your system administrator set one up for you. You
-must have the proper permissions to manipulate other user's files.
-
-#### Creating Directories
-
-1.  In the File Browser window, select **New > Directory**.
-2.  In the **Create Directory** dialog box, enter a directory name and
-    then click **Submit**.
+## Data Importer
 
-#### Changing Directories
+The goal of the importer is to allow ad-hoc queries on data not yet in the clusters and to simplify self-service analytics.
 
--   Click the directory name or parent directory dots in the **File
-    Browser** window.
--   Click the Edit icon, type a directory name, and
-    press **Enter**.
+If you want to import your own data instead of installing the sample
+tables, open the importer from the left menu or from the little `+` in the left assist.
 
-To change to your home directory, click **Home** in the path field at
-the top of the **File Browser** window.
 
-**Note**:
+To learn more, watch the video on [Data Import Wizard](http://gethue.com/import-data-to-be-queried-via-the-self-service-drag-drop-create-table-wizard/).
 
-The **Home** button is disabled if you do not have a home directory. Ask
-a Hue administrator to create a home directory for you.
+**Note** Files can be dragged & dropped, selected from HDFS or S3 (if configured), and their formats are automatically detected. The wizard also assists when performing advanced functionalities like table partitioning, Kudu tables, and nested types.
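
As a rough sketch, the wizard typically ends up generating DDL along these lines for a delimited file (the table, column and partition names are hypothetical):

    -- A partitioned table created from an imported CSV-like file
    CREATE TABLE web_logs_import (
      ip STRING,
      request STRING,
      bytes INT
    )
    PARTITIONED BY (ymd STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE;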
 
-#### Creating Files
+### Traditional Databases
 
-1.  In the File Browser window, select **New > File**.
-2.  In the **Create File** dialog box, enter a file name and then click
-    **Submit**.
+Read more about [ingesting data from traditional databases](http://gethue.com/importing-data-from-traditional-databases-into-hdfshive-in-just-a-few-clicks/).
 
+### Indexing
 
-#### Uploading Files
+In the past, indexing data into Solr to then explore it with a [Dynamic Dashboard](http://gethue.com/search-dashboards/) has been quite difficult. The task involved writing a Solr schema and a Morphlines file, then submitting a job to YARN to do the indexing. Oftentimes getting this correct for non-trivial imports could take a few days of work. Now with Hue's new feature you can start your YARN indexing job in minutes. This tutorial offers a step-by-step guide on how to do it.
 
-You can upload text and binary files to the HDFS.
+[Read more about it here](http://gethue.com/easy-indexing-of-data-into-solr/).
 
-1.  In the **File Browser** window, browse to the directory where you
-    want to upload the file.
-2.  Select **Upload \> Files**.
-3.  In the box that opens, click **Upload a File** to browse to and
-    select the file(s) you want to upload, and then click **Open**.
 
-#### Downloading Files
+## Dashboards
+Dashboards are an interactive way to explore your data quickly and easily. No programming is required and the analysis is done by drag & drops and clicks.
 
-You can download text and binary files to the HDFS.
+Read more about [Dashboards](http://gethue.com/search-dashboards/).
 
-1.  In the **File Browser** window, check the checkbox next to the file
-    you want to download.
-2.  Click the **Download** button.
+### Concepts
 
-### Uploading Zip Archives
+Simply drag & drop widgets that are interconnected together. This is great for exploring new datasets or monitoring without having to type.
 
-You can extract zip archives to the HDFS. The archive is
-extracted to a directory named archivename.
+### Querying
 
-1.  In the **File Browser** window, browse to the directory where you
-    want to upload the archive.
-2.  Select **Upload > Zip file**.
-3.  In the box that opens, click **Upload a zip file** to browse to and
-    select the archive you want to upload, and then click **Open**.
+The search box supports live prefix filtering of field data and comes with a Solr syntax autocomplete to make querying intuitive and quick. Any field can be inspected for its top values or statistics. This analysis happens very fast as the data is indexed.
 
-### Trash Folder
+### Autocomplete
 
-File Browser supports the HDFS trash folder (*home directory*/.Trash) to
-contain files and directories before they are permanently deleted. Files
-in the folder have the full path of the deleted files (in order to be
-able to restore them if needed) and checkpoints. The length of time a
-file or directory stays in the trash depends on HDFS properties.
+The top search bar offers a [full autocomplete](http://gethue.com/intuitively-discovering-and-exploring-a-wine-dataset-with-the-dynamic-dashboards/) on all the values of the index.
 
-In the **File Browser** window, click the Trash icon.
+### More Like This
+The “More like This” feature lets you select the fields you would like to use to find similar records. This is a great way to find similar issues, customers, people... with regard to a list of attributes.
 
 
-### Changing Owner, Group, or Permissions
+## Files
 
-**Note**:
+The File Browser application lets you interact with the HDFS, S3 and ADLS file systems:
 
-Only the Hadoop superuser can change a file's or directory's owner,
-group, or permissions. The user who starts Hadoop is the Hadoop
-superuser. The Hadoop superuser account is not necessarily the same as a
-Hue superuser account. If you create a Hue user (in User Admin) with the
-same user name and password as the Hadoop superuser, then that Hue user
-can change a file's or directory's owner, group, or permissions.
+-   Create files and directories, upload and download files, upload zip
+    archives and extract them, rename, move, and delete files and directories.
+-   Change a file's or directory's owner, group, and
+    permissions.
+-   View and edit files as text or binary.
+-   Create external tables or export query results
 
-**Owner or Group**
+### HDFS
 
-1.  In the **File Browser** window, check the checkbox next to the
-    select the file or directory whose owner or group you want to
-    change.
-2.  Choose **Change Owner/Group** from the Options menu.
-3.  In the **Change Owner/Group** dialog box:
-    -   Choose the new user from the **User** drop-down menu.
-    -   Choose the new group from the **Group** drop-down menu.
-    -   Check the **Recursive** checkbox to propagate the change.
+Hue is fully compatible with HDFS and is handy for browsing, peeking at file content, and uploading or downloading data.
 
-4.  Click **Submit** to make the changes.
+### S3
 
-**Permissions**
+Hue can be set up to read and write to a configured S3 account, and users get autocomplete capabilities and can directly query from and save data to S3 without any intermediate moving/copying to HDFS.
 
-1.  In the **File Browser** window, check the checkbox next to the file
-    or directory whose permissions you want to change.
-2.  Click the **Change Permissions** button.
-3.  In the **Change Permissions** dialog box, select the permissions you
-    want to assign and then click **Submit**.
+[Read more about it](http://gethue.com/introducing-s3-support-in-hue/).
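
Data can stay in place on S3 and still be queried from the editor. A minimal sketch, assuming a hypothetical bucket and path:

    -- External table that reads directly from S3, no copy to HDFS
    CREATE EXTERNAL TABLE web_logs_s3 (
      ip STRING,
      request STRING,
      bytes INT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 's3a://my-bucket/web_logs/';

    -- Such a table can be joined with HDFS-backed tables and the
    -- results exported back to S3 from the editor
    SELECT ip, SUM(bytes) AS total_bytes
    FROM web_logs_s3
    GROUP BY ip;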
 
-### Viewing and Editing Files
+### ADLS
 
-You can view and edit files as text or binary.
+Learn more about it on the [ADLS integration post](http://gethue.com/browsing-adls-data-querying-it-with-sql-and-exporting-the-results-back-in-hue-4-2/).
 
+**Note** ADLS gen2 is currently not supported.
 
-**View**
+### GFS
 
-1.  In the **File Browser** window, click the file you want to view.
-    File Browser displays the first 4,096 bytes of the file in the
-    **File Viewer** window.
-    -   If the file is larger than 4,096 bytes, use the Block navigation
-        buttons (First Block, Previous Block, Next Block, Last Block) to
-        scroll through the file block by block. The **Viewing Bytes**
-        fields show the range of bytes you are currently viewing.
-    -   To switch the view from text to binary, click **View as Binary**
-        to view a hex dump.
-    -   To switch the view from binary to text, click **View as Text**.
+Google file system is currently not supported.
 
-**Edit**
 
-1.  If you are viewing a text file, click **Edit File**. File Browser
-    displays the contents of the file in the **File Editor** window.
-2.  Edit the file and then click **Save** or **Save As** to save the
-    file.
+## Solr Indexes / Collections
 
-## Indexes / Collections
+Solr indexes can be created and are listed in the interface.
 
 ## Sentry Permissions
 
 Sentry roles and privileges can directly be edited in the Security interface.
 
+**Note** Sentry is going to be replaced by Apache Ranger in [HUE-8748](https://issues.cloudera.org/browse/HUE-8748).
+
 ### SQL
 
 [Hive UI](http://gethue.com/apache-sentry-made-easy-with-the-new-hue-security-app/).
@@ -346,58 +128,30 @@ For listing collections, query and creating collection:
     Schema=*->action=*
     Config=*->action=*
 
+### Kafka
 
-## Jobs
-
-The Job Browser application lets you to examine multiple types of jobs
-jobs running in the cluster. Job Browser presents the job and
-tasks in layers. The top layer is a list of jobs, and you can link to a
-list of that job's tasks. You can then view a task's attempts and the
-properties of each attempt, such as state, start and end time, and
-output size. To troubleshoot failed jobs, you can also view the logs of
-each attempt.
+Kafka topics can be listed.
 
+**Note** This is currently an experimental feature.
 
-If there are jobs running, then the Job Browser list appears.
 
+## Jobs
 
-### Dashboard
-
--   To filter the jobs by their state (such as **Running** or
-    **Completed**), choose a state from the **Job status** drop-down
-    menu.
--   To filter by a user who ran the jobs, enter the user's name in the
-    **User Name** query box.
--   To filter by job name, enter the name in the **Text** query box.
--   To clear the filters, choose **All States** from the **Job status**
-    drop-down menu and delete any text in the **User Name** and **Text**
-    query boxes.
-
-### Viewing Job Information
-
-**Note**: At any level you can view the log
-for an object by clicking the Log icon in the Logs
-column.
-
-**To view job information for an individual job:**
+The Job Browser application lets you examine multiple types of jobs
+running in the cluster. Job Browser presents the job and
+tasks in layers for quick access to the logs and troubleshooting.
 
-1.  In the **Job Browser** window, click **View** at the right of the
-    job you want to view. This shows the **Job** page for the job, with
-    the recent tasks associated with the job are displayed in the
-    **Tasks** tab.
-2.  Click the **Logs** tab to view the logs for this job.
-3.  Click the **Counters** tab to view the counter metrics for the job.
+### YARN (Spark, MapReduce, Tez)
 
+Any job running on the Resource Manager will be automatically listed. If a job has been moved to one of the history servers, its information is fetched from there accordingly.
 
-### Types
-#### YARN (Spark, MapReduce)
-#### Impala Queries
+### Impala Queries
 
-There are three ways to access the new browser:
+There are three ways to access the Query browser:
 
-Best: Click on the query ID after executing a SQL query in the editor. This will open the mini job browser overlay at the current query. Having the query execution information side by side the SQL editor is especially helpful to understand the performance characteristics of your queries.
-Open the mini job browser overlay and navigate to the queries tab.
-Open the job browser and navigate to the queries tab.
+* Best: Click on the query ID after executing a SQL query in the editor. This will open the mini job browser overlay at the current query. Having the query execution information side by side with the SQL editor is especially helpful to understand the performance characteristics of your queries.
+* Open the mini job browser overlay and navigate to the queries tab.
+* Open the job browser and navigate to the queries tab.
 
 Query capabilities
 
@@ -411,10 +165,10 @@ Query capabilities
 Read more about it on [Browsing Impala Query Execution within the SQL Editor
 ](http://gethue.com/browsing-impala-query-execution-within-the-sql-editor/).
 
-#### Workflow / Schedules (Oozie)
+### Workflow / Schedules (Oozie)
 
 List submitted workflows, schedules and bundles.
 
-#### Livy / Spark
+### Livy / Spark
 
-List Livy sessions and submitted statements.
+List Livy sessions and submitted statements.

+ 23 - 20
docs/docs-site/content/user/concept/_index.md

@@ -29,38 +29,28 @@ Each app of Hue can be extended to support your own languages or apps as detaile
 
 ## Interface
 
-The layout simplifies the interface and is now single page app, and this makes things snappier and unifies the apps together.
-
+The layout simplifies the interface and is a snappy single page app.
 
 ![image]({{% param baseURL %}}images/hue-4-interface-concept.png)
 
 From top to bottom we have:
 
-* A completely redesigned top bar, with a quick action (big blue button), a global search and a notification area on the right
+* A quick action (big blue button), a global search and a notification area on the right
 * A collapsible hamburger menu that offers links to the various apps and a quick way to import data
 * An extended quick browse on the left
 * The main app area, where the fun is ;)
-* A right Assistant panel for the current application. It's now enabled for the editors, and in case of Hive for instance, it offers you a live help, a quick browse for the used tables in your query, and much more: if your Hue instance is connected to a SQL Optimizer service like Cloudera Navigator Optimizer, it can offer suggestions on your queries!
-* Various applications have been grouped into 4 main conceptual areas:
+* A right Assistant panel for the current application. It offers live help and depends on the currently selected application. For example in the Hive Editor, it shows a quick browse of the tables used in your query, suggestions on how to write better queries, and built-in SQL language and UDF documentation.
 
 Learn more on the [The Hue 4 user interface in detail](http://gethue.com/the-hue-4-user-interface-in-detail/).
 
 
 ## Top search
 
-The new search bar is always accessible on the top of screen, and it offers a document search and metadata search too if Hue is configured to access a metadata server like Cloudera Navigator.
-
-### Embedded Search & Tagging
-
 Have you ever struggled to remember table names related to your project? Does it take much too long to find those columns or views? Hue now lets you easily search for any table, view, or column across all databases in the cluster. With the ability to search across tens of thousands of tables, you're able to quickly find the tables that are relevant for your needs for faster data discovery.
 
-In addition, you can also now tag objects with names to better categorize them and group them to different projects. These tags are searchable, expediting the exploration process through easier, more intuitive discovery.
+The new search bar is always accessible at the top of the screen, and it also offers a document search and a metadata search if Hue is configured to access a metadata server.
 
-Through an integration with Cloudera Navigator, existing tags and indexed objects show up automatically in Hue, any additional tags you add appear back in Cloudera Navigator, and the familiar Cloudera Navigator search syntax is supported.
-
-A top search bar now appears. The autocomplete offers a list of facets and prefills the top values. Pressing enter lists the available objects, which can be opened and explored further in the sample popup, the assist or directly into the table browser app.
-
-### Granular Search
+Existing tags and indexed objects show up automatically, any additional tags you add appear back in the metadata server, and the familiar metadata server search syntax is supported.
 
 By default, only tables and views are returned. To search for columns, partitions, databases use the ‘type:' filter.
 
@@ -71,15 +61,21 @@ Example of searches:
 * owner:admin type:field usage → List all the fields created by the admin user that matches the usage string
 * parentPath:"/default/web_logs" type:FIELD  originalName:b* → List all the columns starting with `b` of the table `web_logs` in the database `default`.
 
-Learn more on the [Search and Tagging](https://blog.cloudera.com/blog/2017/05/new-in-cloudera-enterprise-5-11-hue-data-search-and-tagging/).
+Learn more on the [Tagging](https://blog.cloudera.com/blog/2017/05/new-in-cloudera-enterprise-5-11-hue-data-search-and-tagging/).
+
+## Tagging
+
+You can also tag objects with names to better categorize them and group them into different projects. These tags are searchable, expediting the exploration process through easier, more intuitive discovery.
+
 
 ## Left assist
 
-Data where you need it when you need it
+Data where you need it when you need it.
 
-You can now find your Hue documents, HDFS and S3 files and more in the left assist panel, right-clicking items will show a list of actions, you can also drag-and-drop a file to get the path in your editor and more.
+Find your documents, HDFS and S3 files and more in the left assist panel. Right-clicking items shows a list of actions, and you can also drag and drop a file to get its path in your editor.
 
 ## Right assist
+
 This assistant content depends on the context of the application selected and will display the current tables or available UDFs.
 
 ## Sample popup
@@ -88,11 +84,18 @@ This popup offers a quick way to see sample of the data and other statistics on
 
 ## Documents
 
-Similarly to Google Document, queries, workflows... can be saved and shared with other users.
+Similarly to Google Documents, any document (e.g. SQL Query, Workflow, Dashboard...) opened in the Hue apps can be saved.
 
 ### Sharing
 
-Sharing happens on the main page or via the top right menu of the application. Users and groups with Read or Write permissions can be selected.
+Sharing happens on the main page or via the top right menu of the selected application.
+
+Two types of sharing permissions exist:
+
+- read only
+- can modify
+
+Shared documents will show up with a little blue icon on the home page.
 
 ### Import / Export
 

+ 2 - 4
docs/docs-site/content/user/dashboards/_index.md

@@ -38,10 +38,8 @@ The “More like This” feature lets you selected fields you would like to use
 
 ### SQL
 
+Any configured SQL source can be queried via the dashboards.
+
 ## Reports
 
 This is work in progress but dashboards will soon offer a classic reporting option.
-
-## SDK
-Read more about extending [connectors](../developer/index.html).
-

+ 34 - 128
docs/docs-site/content/user/editor/_index.md

@@ -16,20 +16,14 @@ Configuration of the connectors is currently done by the [Administrator](/admini
 ## Concepts
 ### Running Queries
 
-**Note**: To run a query, you must be logged
-in to Hue as a user that also has a Unix user account on the remote
-server.
-
-1.  To execute a portion of the query, highlight one or more query
+1.  The currently selected statement has a blue left border. To execute a portion of a query, highlight one or more query
     statements.
-2.  Click **Execute**. The Query Results window appears with the results
-    of your query.
-    -   To view a log of the query execution, toggle the **Log** caret on the
-        left of the progress bar. You can use the information in this tab
-        to debug your query.
+2.  Click **Execute**. The Query Results window appears.
+    -   There is a **Log** caret on the left of the progress bar.
     -   To view the columns of the query, expand the **Columns** icon. Clicking
         on the column label will scroll to the column. Names and types can be filtered.
-    -   To expand a row, double click on it or click on the row number.
+    -   Select the chart icon to plot the results
+    -   To expand a row, click on the row number.
     -   To lock a row, click on the lock icon in the row number column.
     -   Search either by clicking on the magnifier icon on the results tab, or pressing Ctrl/Cmd + F
     -   [See more how to refine your results](http://gethue.com/new-features-in-the-sql-results-grid-in-hive-and-impala/).
@@ -56,73 +50,18 @@ Two of them offer limited scalability:
 1.  Export to a file on your cluster's file systems. This exports the results to a single file. In the export icon, choose Export and then First XXX.
 2.  Download to your computer as a CSV or XLS. This exports the results to a single file in comma-separated values or Microsoft Office Excel format. In the export icon, choose Download as CSV or Download as XLS.
 
-
-<a id="advancedQuerySettings"></a>
 ### Advanced Query Settings
 
 The pane to the top of the Editor lets you specify the following
 options:
 
-
-<table>
-<tr><td>DATABASE</td><td>The database containing the table definitions.</td></tr>
-<tr><td>SETTINGS</td><td>Override the Hive and Hadoop default settings. To configure a new
-setting:
-
-<ol>
-<li> Click Add.
-<li> For Key, enter a Hive or Hadoop configuration variable name.
-<li> For Value, enter the value you want to use for the variable.
-
-For example, to override the directory where structured Hive query logs
-are created, you would enter hive.querylog.location for Key, and a
-path for Value.
-</ol>
-
-To view the default settings, click the Settings tab at the top of
-the page. For information about Hive configuration variables, see:
-[http://wiki.apache.org/hadoop/Hive/AdminManual/Configuration](http://wiki.apache.org/hadoop/Hive/AdminManual/Configuration).
-For information about Hadoop configuration variables, see:
-[http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml](http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml).</td></tr>
-<tr><td>FILE RESOURCES</td><td>Make files locally accessible at query execution time available on the
-Hadoop cluster. Hive uses the Hadoop Distributed Cache to distribute the
-added files to all machines in the cluster at query execution time.
-
-<ol>
-<li>  Click Add to configure a new setting.
-<li>   From the Type drop-down menu, choose one of the following:
-<ul>
-   <li>jar - Adds the specified resources to the Java classpath.
-   <li>archive - Unarchives the specified resources when
-        distributing them.
-    <li>file - Adds the specified resources to the distributed
-        cache. Typically, this might be a transform script (or similar)
-        to be executed.
-
-<li>   For Path, enter the path to the file or click browse and select the file.
-</ol>
-
-Note: It is not necessary to specify files
-used in a transform script if the files are available in the same path
-on all machines in the Hadoop cluster.</td></tr>
-<tr><td>USER-DEFINED FUNCTIONS</td><td>Specify user-defined functions. Click Add to configure a new
-setting. Specify the function name in the Name field, and specify
-the class name for Classname.
-
-You *must* specify a JAR file for the user-defined functions in FILE RESOURCES.
-
-To include a user-defined function in a query, add a $ (dollar sign)
-before the function name in the query. For example, if MyTable is a
-user-defined function name in the query, you would type: SELECT $MyTable
-</td></tr>
-<tr><td>PARAMETERIZATION</td><td>Indicate that a dialog box should display to enter parameter values when
-a query containing the string $parametername is executed. Enabled by
-default.</td></tr>
-</table>
+* Settings: depend on the query engine. For Hive, see the [Hive configuration variables](http://wiki.apache.org/hadoop/Hive/AdminManual/Configuration).
+* Files: load a JAR or other file resources to use in UDFs
+* UDFs: register a custom function (see the sketch below)
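
A minimal HiveQL sketch of these last two options, assuming a hypothetical JAR path, UDF class and table:

    -- Make the JAR available and register the function for the session
    ADD JAR hdfs:///user/demo/udfs/my_udfs.jar;
    CREATE TEMPORARY FUNCTION my_upper AS 'com.example.hive.MyUpperUDF';

    -- Use the registered UDF like any built-in function
    SELECT my_upper(name) FROM customers LIMIT 10;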
 
 ### Autocomplete
 
-To make your SQL editing experience better we've created a  new autocompleter for Hue 3.11. The old one had some limitations and was only aware of parts of the statement being edited. The new autocompleter knows all the ins and outs of the Hive and Impala SQL dialects and will suggest keywords, functions, columns, tables, databases, etc. based on the structure of the statement and the position of the cursor.
+To improve your SQL editing experience, Hue comes with one of the best SQL autocompletes on the planet. The autocompleter knows all the ins and outs of the Hive and Impala SQL dialects and will suggest keywords, functions, columns, tables, databases, etc. based on the structure of the statement and the position of the cursor.
 
 The result is improved completion throughout. We now have completion for more than just SELECT statements, it will help you with the other DDL and DML statements too, INSERT, CREATE, ALTER, DROP etc.
 
@@ -133,7 +72,7 @@ If multiple tables appear in the FROM clause, including derived and joined table
 
 **Smart keyword completion**
 
-The new autocompleter suggests keywords based on where the cursor is positioned in the statement. Where possible it will even suggest more than one word at at time, like in the case of IF NOT EXISTS, no one likes to type too much right? In the parts where order matters but the keywords are optional, for instance after FROM tbl, it will list the keyword suggestions in the order they are expected with the first expected one on top. So after FROM tbl the WHERE keyword is listed above GROUP BY etc.
+The autocompleter suggests keywords based on where the cursor is positioned in the statement. Where possible it will even suggest more than one word at a time, like in the case of IF NOT EXISTS, no one likes to type too much right? In the parts where order matters but the keywords are optional, for instance after FROM tbl, it will list the keyword suggestions in the order they are expected with the first expected one on top. So after FROM tbl the WHERE keyword is listed above GROUP BY etc.
 
 
 **UDFs**
@@ -147,18 +86,13 @@ When editing subqueries it will only make suggestions within the scope of the su
 
 **All about quality**
 
-We've fine-tuned the live autocompletion for a better experience and we've introduced some options under the editor settings where you can turn off live autocompletion or disable the autocompleter altogether (if you're adventurous). To access these settings open the editor and focus on the code area, press CTRL + , (or on Mac CMD + ,) and the settings will appear.
-
-The autocompleter talks to the backend to get data for tables and databases etc. by default it will timeout after 5 seconds but once it has been fetched it's cached for the next time around. The timeout can be adjusted in the Hue server configuration.
-
-We've got an extensive test suite but not every possible statement is covered, if the autocompleter can't interpret a statement it will be silent and no drop-down will appear. If you encounter a case where you think it should suggest something but doesn't or if it gives incorrect suggestions then please let us know.
+The live autocompletion is fine-tuned for a better experience. Advanced settings can be accessed via CTRL + , (or on Mac CMD + ,) or by clicking on the '?' icon.
 
-Learn more about it in [Autocompleter for Hive and Impala](http://gethue.com/brand-new-autocompleter-for-hive-and-impala/).
+The autocompleter talks to the backend to get data for tables, databases, etc. and caches it to keep it quick. Clicking on the refresh icon in the left assist will clear the cache. This can be useful if a new table was created outside of Hue and is not yet showing up (Hue will regularly clear its cache to automatically pick up metadata changes done outside of Hue).
 
 ### Variables
 Variables are used to easily configure parameters in a query. They can be of two types:
 
-
 <b>Single Valued</b>
 <pre>
 select * from web_logs where country_code = "${country_code}"
@@ -215,7 +149,7 @@ Turns a list of semi-colon separated queries into an interactive presentation. I
 
 ## SQL Databases
 
-Use the query editor with any database.
+Use the query editor with any database. Those databases need to be configured by the [administrator](/administrator/configuration/editor/).
 
 ### Apache Hive
 ### Apache Impala
@@ -261,53 +195,7 @@ Extend with SQL Alchemy, JDBC or build your own [connectors](../../developer/).
 
 ## Jobs
 
-The Editor application enables you to create and submit jobs to
-the cluster. You can include variables with your jobs to enable
-you and other users to enter values for the variables when they run your
-job.
-
-All job design settings except Name and Description support the use of
-variables of the form $variable\_name. When you run the job, a dialog
-box will appear to enable you to specify the values of the variables.
-
-<table>
-<tr><td>Name</td><td>Identifies the job and its collection of properties and parameters.</td></tr>
-<tr><td>Description</td><td>A description of the job. The description is displayed in the dialog box
-that appears if you specify variables for the job.</td></tr>
-<tr><td>Advanced</td><td>Advanced settings:<ul><li>Is shared- Indicate whether to share the action with all users.<li>Oozie parameters - parameters to pass to Oozie</td></tr>
-<tr><td>Prepare</td><td>Specifies paths to create or delete before starting the workflow job.</td></tr>
-<tr><td>Params</td><td>Parameters to pass to a script or command. The parameters are expressed
-using the <a href="http://jcp.org/aboutJava/communityprocess/final/jsr152/">JSP 2.0 Specification (JSP.2.3) Expression
-Language</a>,
-allowing variables, functions, and complex expressions as parameters.</td></tr>
-<tr><td>Job Properties</td><td>Job properties. To set a property value, click <b>Add Property</b>.<ol><li>Property name -  a configuration property name. This field provides autocompletion, so you can type the first few characters of a property name and then select the one you want from the drop-down
-    list.<li>Valuethe property value.</td></tr>
-<tr><td>Files</td><td>Files to pass to the job. Equivalent to the Hadoop -files option.</td></tr>
-<tr><td>Archives</td><td>Files to pass to the job. Archives to pass to the job. Equivalent to the Hadoop -archives option.</td></tr></table>
-
-### MapReduce
-
-A MapReduce job design consists of MapReduce functions written in Java.
-You can create a MapReduce job design from existing mapper and reducer
-classes without having to write a main Java class. You must specify the
-mapper and reducer classes as well as other MapReduce properties in the
-Job Properties setting.
-
-<table>
-<tr><td>Jar path</td><td>The fully-qualified path to a JAR file containing the classes that
-implement the Mapper and Reducer functions.</td></tr>
-</table>
-
-### Java
-
-A Java job design consists of a main class written in Java.
-
-<table>
-<tr><td>Jar path</td><td>The fully-qualified path to a JAR file containing the main class.</td></tr>
-<tr><td>Main class</td><td>The main class to invoke the program.</td></tr>
-<tr><td>Args</td><td>The arguments to pass to the main class.</td></tr>
-<tr><td>Java opts</td><td>The options to pass to the JVM.</td></tr>
-</table>
+In addition to SQL queries, the Editor application enables you to create and submit batch jobs to the cluster.
 
 ### Pig
 
@@ -323,10 +211,16 @@ Type or specify a path to a regular shell script.
 
 [Read more about it here](http://gethue.com/use-the-shell-action-in-oozie/).
 
-### DistCp
+### Java
 
-A DistCp job design consists of a DistCp command.
+A Java job design consists of a main class written in Java.
 
+<table>
+<tr><td>Jar path</td><td>The fully-qualified path to a JAR file containing the main class.</td></tr>
+<tr><td>Main class</td><td>The main class to invoke the program.</td></tr>
+<tr><td>Args</td><td>The arguments to pass to the main class.</td></tr>
+<tr><td>Java opts</td><td>The options to pass to the JVM.</td></tr>
+</table>
 
 ### Spark
 
@@ -379,3 +273,15 @@ Make sure that the Notebook and interpreters are set in the hue.ini, and Livy is
         [[[pyspark]]]
           name=PySpark
           interface=livy
+
+### MapReduce
+
+A MapReduce job design consists of MapReduce functions written in Java.
+You can create a MapReduce job design from existing mapper and reducer
+classes without having to write a main Java class. You must specify the
+mapper and reducer classes as well as other MapReduce properties in the
+Job Properties setting.
+
+### DistCp
+
+A DistCp job design consists of a DistCp command.