| SETTINGS | Override the Hive and Hadoop default settings. To configure a new
setting:
- Click Add.
- For Key, enter a Hive or Hadoop configuration variable name.
- For Value, enter the value you want to use for the variable.
For example, to override the directory where structured Hive query logs
are created, you would enter hive.querylog.location for Key, and a
path for Value.
To view the default settings, click the Settings tab at the top of
the page. For information about Hive configuration variables, see:
[http://wiki.apache.org/hadoop/Hive/AdminManual/Configuration](http://wiki.apache.org/hadoop/Hive/AdminManual/Configuration).
For information about Hadoop configuration variables, see:
[http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml](http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml).
| FILE RESOURCES | Make locally accessible files available on the Hadoop cluster at query
execution time. Hive uses the Hadoop Distributed Cache to distribute the
added files to all machines in the cluster.
- Click Add to configure a new setting.
- From the Type drop-down menu, choose one of the following:
- jar - Adds the specified resources to the Java classpath.
- archive - Unarchives the specified resources when
distributing them.
- file - Adds the specified resources to the distributed
cache. Typically, this might be a transform script (or similar)
to be executed.
- For Path, enter the path to the file, or browse to and select the file.
 Note: It is not necessary to specify files
used in a transform script if the files are available in the same path
on all machines in the Hadoop cluster.
| USER-DEFINED FUNCTIONS | Specify user-defined functions. Click Add to configure a new
setting. Specify the function name in the Name field, and specify
the class name for Classname.
You *must* specify a JAR file for the user-defined functions in FILE RESOURCES.
To include a user-defined function in a query, add a $ (dollar sign)
before the function name in the query. For example, if MyTable is a
user-defined function name, you would type: SELECT $MyTable. (A plain
HiveQL sketch of these fields appears after this table.) |
| PARAMETERIZATION | Display a dialog box that prompts for parameter values when
a query containing the string $parametername is executed. Enabled by
default. |
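For reference, the four fields above roughly correspond to the following plain HiveQL, shown here only as a hedged sketch; every path, JAR, class, function, and table name in it is a hypothetical example, not a value shipped with Hue.

```sql
-- SETTINGS: override a Hive or Hadoop configuration variable.
SET hive.querylog.location=/tmp/hive/querylogs;

-- FILE RESOURCES: ship a local JAR or script to the cluster through the
-- Hadoop Distributed Cache (hypothetical paths).
ADD JAR /user/demo/udfs/my_udfs.jar;
ADD FILE /user/demo/scripts/transform.py;

-- USER-DEFINED FUNCTIONS: register a function name against its class
-- (hypothetical class name).
CREATE TEMPORARY FUNCTION my_lower AS 'com.example.hive.udf.MyLower';

-- PARAMETERIZATION: in the Hue editor the query itself would contain
--   WHERE dt = '$date'
-- and a dialog box would prompt for the value of $date at execution time.
SELECT my_lower(msg) FROM logs WHERE dt = '2013-01-01';
```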
## Autocomplete
## Syntax checker
## Query Assist
## Risk Alerts
## Presentation
### Jobs
The Job Designer application enables you to create and submit jobs to
the Hadoop cluster. You can include variables with your jobs to enable
you and other users to enter values for the variables when they run your
job. The Job Designer supports the actions supported by
[Oozie](http://archive.cloudera.com/cdh4/cdh/4/oozie/):
[MapReduce](/content/cloudera-content/cloudera-docs/HadoopTutorial/CDH4/index.html),
Streaming, Java, Pig, Hive, Sqoop, Shell, Ssh, DistCp, Fs, and Email.
Job Designer Installation and Configuration
-------------------------------------------
Job Designer is one of the applications installed as part of Hue. For
information about installing and configuring Hue, see the Hue Installation
manual.
In order to run DistCp, Streaming, Pig, Sqoop, and Hive jobs, Oozie must
be configured to use the Oozie ShareLib. See the Oozie Installation manual.
Starting Job Designer
---------------------
Click the **Job Designer** icon in
the navigation bar at the top of the Hue web page. The **Job Designs**
page opens in the browser.
Installing the Example Job Designs
----------------------------------
 **Note**: You must be a superuser to perform
this task.
1. Open the Quick Start Wizard.
2. Click **Step 2: Examples**.
3. Click **Job Designer**.
Job Designs
-----------
A job design specifies several meta-level properties of a job, including
the job design name, description, the executable scripts or classes, and
any parameters for those scripts or classes.
### Filtering Job Designs
You can filter the job designs that appear in the list by owner, name,
type, and description.
**To filter the Job Designs list:**
1. In the **Job Designs** window, click **Designs**.
2. Enter text in the Filter text box at the top of the **Job Designs**
window. When you type in the Filter field, the designs are
dynamically filtered to display only those rows containing text that
matches the specified substring.
### Creating a Job Design
1. In the **Job Designs** window, click New Action \> Action, where
Action is MapReduce, Streaming, Java, Pig, Hive, Sqoop, Shell, Ssh,
DistCp, Fs, or Email.
2. In the **Job Design (Action type)** window, specify the common and
job type specific information.
3. Click **Save** to save the job settings.
### Deleting and Restoring Job Designs
You can move job designs to the trash and later restore or permanently
delete them.
#### Deleting Job Designs
1. In a Manager screen, check the checkbox next to one or more job
designs.
2. Choose one of the following:
- Delete \> Move to trash
- Delete \> Delete forever
#### Restoring Job Designs
1. In a Manager screen, click **Trash**.
2. Check the checkbox next to one or more job designs.
3. Click Restore.
### Job Design Settings
#### Job Design Common Settings
Most job design types support all the settings listed in the following
table. For job type specific settings, see:
[MapReduce](#mapreduce),
[Streaming](#streaming),
[Java](#java),
[Pig](#pig),
[Hive](#hive),
[Sqoop](#sqoop),
[Shell](#shell),
[Ssh](#ssh),
[DistCp](#distcp),
[Fs](#fs), and
[Email](#email).
All job design settings except Name and Description support the use of
variables of the form $variable\_name. When you run the job, a dialog
box will appear to enable you to specify the values of the variables.
| Name | Identifies the job and its collection of properties and parameters. |
| Description | A description of the job. The description is displayed in the dialog box
that appears if you specify variables for the job. |
| Advanced | Advanced settings: Is shared - indicate whether to share the action with all users; Oozie parameters - parameters to pass to Oozie. |
| Prepare | Specifies paths to create or delete before starting the workflow job. |
| Params | Parameters to pass to a script or command. The parameters are expressed
using the [JSP 2.0 Specification (JSP.2.3) Expression
Language](http://jcp.org/aboutJava/communityprocess/final/jsr152/),
allowing variables, functions, and complex expressions as parameters. |
| Job Properties | Job properties. To set a property value, click Add Property. Property name - a configuration property name; this field provides autocompletion, so you can type the first few characters of a property name and then select the one you want from the drop-down list. Value - the property value. |
| Files | Files to pass to the job. Equivalent to the Hadoop -files option. |
| Archives | Archives to pass to the job. Equivalent to the Hadoop -archives option. |
#### MapReduce Job Design
A MapReduce job design consists of MapReduce functions written in Java.
You can create a MapReduce job design from existing mapper and reducer
classes without having to write a main Java class. You must specify the
mapper and reducer classes as well as other MapReduce properties in the
Job Properties setting.
| Jar path | The fully-qualified path to a JAR file containing the classes that
implement the Mapper and Reducer functions. |
#### Streaming Job Design
Hadoop streaming jobs enable you to create MapReduce functions in any
non-Java language that reads standard Unix input and writes standard
Unix output. For more information about Hadoop streaming jobs, see
[Hadoop
Streaming](http://archive.cloudera.com/cdh/3/hadoop-0.20.2+320/streaming.html).
| Mapper | The path to the mapper script or class. If the mapper file is not on the
machines on the cluster, use the Files option to pass it as a part
of job submission. Equivalent to the Hadoop -mapper option. |
| Reducer | The path to the reducer script or class. If the reducer file is not on
the machines on the cluster, use the Files option to pass it as a
part of job submission. Equivalent to the Hadoop -reducer option. |
#### Java Job Design
A Java job design consists of a main class written in Java.
| Jar path | The fully-qualified path to a JAR file containing the main class. |
| Main class | The main class to invoke the program. |
| Args | The arguments to pass to the main class. |
| Java opts | The options to pass to the JVM. |
#### Pig Job Design
A Pig job design consists of a Pig script.
| Script name | Script name or path to the Pig script. |
#### Hive Job Design
A Hive job design consists of a Hive script.
| Script name | Script name or path to the Hive script. |
#### Sqoop Job Design
A Sqoop job design consists of a Sqoop command.
| Command | The Sqoop command. |
#### Shell Job Design
A Shell job design consists of a shell command.
| Command | The shell command. |
| Capture output | Indicate whether to capture the output of the command. |
#### Ssh Job Design
An Ssh job design consists of an ssh command.
| User | The name of the user to run the command as. |
| Host | The name of the host to run the command on. |
| Command | The ssh command. |
| Capture output | Indicate whether to capture the output of the command. |
#### DistCp Job Design
A DistCp job design consists of a DistCp command.
#### Fs Job Design
An Fs job design consists of a command that operates on HDFS.
| Delete path | The path to delete. If it is a directory, it deletes recursively all its
content and then deletes the directory. |
| Create directory | The path of a directory to create. |
| Move file | The source and destination paths to the file to be moved. |
| Change permissions | The path whose permissions are to be changed, the permissions, and an
indicator of whether to change permission recursively. |
#### Email Job Design
An Email job design consists of an email message.
| To addresses | The recipient of the email message. |
| CC addresses (optional) | The cc recipients of the email message. |
| Subject | The subject of the email message. |
| Body | The body of the email message. |
### Submitting a Job Design
 **Note**:
A job's input files must be uploaded to the cluster before you can
submit the job.
**To submit a job design:**
1. In the **Job Designs** window, click **Designs** in the upper left
corner. Your jobs and other users' jobs are displayed in the **Job
Designs** window.
2. Check the checkbox next to the job you want to submit.
3. Click the **Submit** button.
1. If the job contains variables, enter the information requested
in the dialog box that appears. For example, the sample grep
MapReduce design displays a dialog where you specify the output
directory.
2. Click **Submit** to submit the job.
After the job is complete, the Job Designer displays the results of the
job. For information about displaying job results, see [Displaying the
Results of Submitting a Job](#submitJob).
### Copying, Editing, and Deleting a Job Design
If you want to edit and use a job but you don't own it, you can make a
copy of it and then edit and use the copied job.
**Copy**
1. In the **Job Designs** window, click **Designs**. The jobs are
displayed in the **Job Designs** window.
2. Check the checkbox next to the job you want to copy.
3. Click the **Copy** button.
4. In the **Job Design Editor** window, change the settings and then
click **Save** to save the job settings.
**Edit**
1. In the **Job Designs** window, click **Designs**. The jobs are
displayed in the **Job Designs** window.
2. Check the checkbox next to the job you want to edit.
3. Click the **Edit** button.
4. In the **Job Design** window, change the settings and then click
**Save** to save the job settings.
**Delete**
1. In the **Job Designs** window, click **Designs**. The jobs are
displayed in the **Job Designs** window.
2. Check the checkbox next to the job you want to delete.
3. Click the **Delete** button.
4. Click **OK** to confirm the deletion.
Displaying Results of Submitting a Job
--------------------------------------
**To display the Job Submission History:**
In the **Job Designs** window, click the **History** tab. The jobs are
displayed in the **Job Submission History**, listed by Oozie job ID.
**To display Job Details:**
In the **Job Submission History** window, click an Oozie Job ID. The
results of the job display:
- Actions - a list of actions in the job.
- Click the icon to display the action configuration.
In the action configuration for a MapReduce action, click the value
of the mapred.output.dir property to display the job output.
- In the root-node row, click the Id in the External Id column to view
the job in the Job Browser.
- Details - the job details. Click the icon to
display the Oozie application configuration.
- Definition - the Oozie application definition.
- Log - the output log.
## SDK
# Dashboards
Dashboards are an interactive way to explore your data quickly and easily. No programming is required, and analysis is done with drag-and-drop and clicks.
### SDK
# Browsers
Hue's Browsers power your Data Catalog. They let you easily search, browse, and perform actions on data or jobs in cloud or on-premise clusters.
## Tables
The Table Browser enables you to manage the databases,
tables, and partitions of the metastore shared by
the [Beeswax](../beeswax.html) and [Cloudera Impala Query
UI](../impala.html) applications. You can use Metastore
Manager to perform the following operations:
- Databases
- [Select a database](#selectDatabase)
- [Create a database](#createDatabase)
- [Drop databases](#dropDatabase)
- Tables
- [Create tables](#createTables)
- [Browse tables](#browseTables)
- [Import data into a table](#importDataIntoTables)
- [Drop tables](#dropTables)
- [View the location of a table](#viewTableLocation)
Metastore Manager Installation and Configuration
------------------------------------------------
Metastore Manager is one of the applications installed as part of Hue.
For information about installing and configuring Hue, see the Hue Installation
manual.
Starting Metastore Manager
--------------------------
Click the **Metastore Manager** icon
in the navigation bar at
the top of the Hue browser page.
### Installing Sample Tables
 **Note**: You must be a superuser to perform
this task.
1. Open the Quick Start Wizard.
2. Click **Step 2: Examples**.
3. Click **Beeswax (Hive UI)** or **Cloudera Impala Query UI**.
### Importing Data
If you want to import your own data instead of installing the sample
tables, follow the procedure in [Creating Tables](#createTables).
Selecting a Database
--------------------
1. In the pane on the left, select the database from the DATABASE
drop-down list.
Creating a Database
-------------------
1. Navigate to the database list.
2. Click **Create a new database**.
1. Specify a database name and optional description. Database names
are not case-sensitive. Click **Next**.
2. Do one of the following:
- Keep the default location in the Hive warehouse folder.
- Specify an external location within HDFS:
1. Uncheck the **Location** checkbox.
2. In the External location field, type a path to a folder
on HDFS, or browse to a folder and click **Select this folder**.
3. Click the **Create Database** button.
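The wizard above roughly corresponds to HiveQL such as the following hedged sketch; the database name, comment, and HDFS location are hypothetical examples.

```sql
-- Rough HiveQL equivalent of the Create Database wizard
-- (hypothetical name, comment, and HDFS location).
CREATE DATABASE sales_db
COMMENT 'Sales data for reporting'
LOCATION '/user/demo/warehouse/sales_db';

-- Equivalent of picking the database from the DATABASE drop-down list.
USE sales_db;
```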
Dropping Databases
------------------
1. Navigate to the database list.
2. In the list of databases, check the checkbox next to one or more
databases.
3. Click the **Drop** button.
4. Confirm whether you want to delete the databases.
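Dropping a database this way is roughly equivalent to the following HiveQL; the name is hypothetical, and CASCADE is needed only if the database still contains tables.

```sql
-- Rough HiveQL equivalent of dropping a database (hypothetical name).
-- CASCADE also removes any tables the database still contains.
DROP DATABASE IF EXISTS sales_db CASCADE;
```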
Creating Tables
---------------
Although you can create tables by executing the appropriate Hive HQL DDL
query commands, it is easier to create a table using the Metastore
Manager table creation wizard.
There are two ways to create a table: from a file or manually. If you
create a table from a file, the format of the data in the file will
determine some of the properties of the table, such as the record and
file formats. The data from the file you specify is imported
automatically upon table creation. When you create a table manually, you
specify all the properties of the table, and then execute the resulting
query to actually create the table. You then import data into the table
as an additional step.
**From a File**
1. In the ACTIONS pane in the Metastore Manager window, click **Create
a new table from a file**. The table creation wizard starts.
2. Follow the instructions in the wizard to create the table. The basic
steps are:
- Choose your input file. The input file you specify must exist.
Note that you can choose to have Beeswax create only the table
definition, based on the import file you select, without
actually importing data from that file.
- Specify the column delimiter.
- Define your columns, providing a name and selecting the type.
3. Click **Create Table** to create the table. The new table's metadata
displays on the right side of the **Table Metadata** window. At this
point, you can view the metadata or a sample of the data in the
table. From the ACTIONS pane you can import new data into the table,
browse the table, drop it, or go to the File Browser to see the
location of the data.
**Manually**
1. In the ACTIONS pane in the Metastore Manager window, click **Create
a new table manually**. The table creation wizard starts.
2. Follow the instructions in the wizard to create the table. The basic
steps are:
- Name the table.
- Choose the record format.
- Configure record serialization by specifying delimiters for
columns, collections, and map keys.
- Choose the file format.
- Specify the location for your table's data.
- Specify the columns, providing a name and selecting the type for
each column.
- Specify partition columns, providing a name and selecting the
type for each column.
3. Click **Create table**. The Table Metadata window displays.
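For reference, the manual wizard above generates DDL along the lines of the following hedged sketch; the table name, columns, delimiters, and location are hypothetical examples.

```sql
-- Hedged sketch of the DDL the manual table-creation wizard produces
-- (hypothetical table name, columns, delimiters, and location).
CREATE TABLE web_logs (
  ip     STRING,
  url    STRING,
  status INT
)
PARTITIONED BY (dt STRING)              -- partition column
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'             -- column delimiter
  COLLECTION ITEMS TERMINATED BY ','    -- collection delimiter
  MAP KEYS TERMINATED BY ':'            -- map key delimiter
STORED AS TEXTFILE                      -- file format
LOCATION '/user/demo/warehouse/web_logs';
```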
Browsing Tables
---------------
**To browse table data:**
In the Table List window, check the checkbox next to a table name and
click **Browse Data**. The table's data displays in the Query Results
window.
**To browse table metadata:**
Do one of the following:
- In the Table List window, click a table name.
- Check the checkbox next to a table name and click **View**.
- The table's metadata displays in the **Columns** tab. You can view
the table data by selecting the **Sample** tab.
- If the table is partitioned, you can view the partition columns by
clicking the **Partition Columns** tab, and display the partitions by
clicking **Show Partitions (n)** (where n is the number of partitions)
in the ACTIONS pane on the left. (A HiveQL equivalent is sketched below.)
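If you prefer the query editor, the same data and metadata can be inspected with standard HiveQL; the table name below is a hypothetical example.

```sql
-- HiveQL equivalents of Browse Data, the Columns tab, and Show Partitions
-- (hypothetical table name).
SELECT * FROM web_logs LIMIT 100;
DESCRIBE web_logs;
SHOW PARTITIONS web_logs;
```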
Importing Data into a Table
---------------------------
When importing data, you can choose to append or overwrite the table's
data with data from a file.
1. In the Table List window, click the table name. The Table Metadata
window displays.
2. In the ACTIONS pane, click **Import Data**.
3. For **Path**, enter the path to the file that contains the data you
want to import.
4. Check **Overwrite existing data** to replace the data in the
selected table with the imported data. Leave unchecked to append to
the table.
5. Click **Submit**.
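The import step above corresponds roughly to a HiveQL LOAD DATA statement; the path and table name are hypothetical examples.

```sql
-- Rough HiveQL equivalent of Import Data with "Overwrite existing data"
-- checked (hypothetical path and table name).
LOAD DATA INPATH '/user/demo/staging/web_logs.tsv'
OVERWRITE INTO TABLE web_logs;

-- Leave out OVERWRITE to append to the table instead.
```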
Dropping Tables
---------------
1. In the Table List window, click the table name. The Table Metadata
window displays.
2. In the ACTIONS pane, click **Drop Table**.
3. Click **Yes** to confirm the deletion.
Viewing a Table's Location
--------------------------
1. In the Table List window, click the table name. The Table Metadata
window displays.
2. Click **View File Location**. The file location of the selected
table displays in its directory in the File Browser window.
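The same location is also reported if you query the metastore directly; the table name below is a hypothetical example.

```sql
-- The Location field in this output is the directory that
-- View File Location opens in File Browser (hypothetical table name).
DESCRIBE FORMATTED web_logs;
```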
## File Browser
The File Browser application lets you browse and manipulate files and
directories in the Hadoop Distributed File System (HDFS), S3 or ADLS.
With File Browser, you can:
- Create files and directories, upload and download files, upload zip
archives, and rename, move, and delete files and directories. You
can also change a file's or directory's owner, group, and
permissions. See [Files and Directories](#filesAndDirectories).
- Search for files, directories, owners, and groups. See [Searching
for Files and Directories](#searching).
- View and edit files as text or binary. See [Viewing and Editing
Files](#viewAndEdit).
### File systems
#### HDFS
#### S3
#### ADLS
File Browser Installation and Configuration
-------------------------------------------
File Browser is one of the applications installed as part of Hue. For
information about installing and configuring Hue, see the Hue Installation
manual.
Starting File Browser
---------------------
Files and Directories
---------------------
You can use File Browser to view the input and output files of your
MapReduce jobs. Typically, you can save your output files in /tmp or in
your home directory if your system administrator set one up for you. You
must have the proper permissions to manipulate other users' files.
### Creating Directories
1. In the File Browser window, select **New > Directory**.
2. In the **Create Directory** dialog box, enter a directory name and
then click **Submit**.
### Changing Directories
- Click the directory name or parent directory dots in the **File
Browser** window.
- Click the edit icon, type a directory name, and
press **Enter**.
To change to your home directory, click **Home** in the path field at
the top of the **File Browser** window.
 **Note**:
The **Home** button is disabled if you do not have a home directory. Ask
a Hue administrator to create a home directory for you.
### Creating Files
1. In the File Browser window, select **New > File**.
2. In the **Create File** dialog box, enter a file name and then click
**Submit**.
### Uploading Files
You can upload text and binary files to the HDFS.
1. In the **File Browser** window, browse to the directory where you
want to upload the file.
2. Select **Upload \> Files**.
3. In the box that opens, click **Upload a File** to browse to and
select the file(s) you want to upload, and then click **Open**.
### Copying a File
1. In the **File Browser** window, check the checkbox next to the file
you want to copy.
2. Click the **Copy** button.
### Downloading Files
You can download text and binary files from HDFS.
1. In the **File Browser** window, check the checkbox next to the file
you want to download.
2. Click the **Download** button.
### Uploading Zip Archives
You can upload zip archives to the HDFS. The archive is uploaded and
extracted to a directory named archivename.
1. In the **File Browser** window, browse to the directory where you
want to upload the archive.
2. Select **Upload > Zip file**.
3. In the box that opens, click **Upload a zip file** to browse to and
select the archive you want to upload, and then click **Open**.
### Trash Folder
File Browser supports the HDFS trash folder (*home directory*/.Trash),
which holds files and directories before they are permanently deleted.
Entries in the folder keep the full path of the deleted files (so that
they can be restored if needed) and checkpoints. The length of time a
file or directory stays in the trash depends on HDFS properties.
**Open**
1. In the **File Browser** window, click the **Trash** icon.
**Move Files and Directories To**
1. In the **File Browser** window, check the checkbox next to one or
more files and directories.
2. Select **Delete > Move to trash**.
**Empty**
1. In the **File Browser** window, click the **Trash** icon.
2. Click **Empty**.
### Renaming, Moving, Deleting, and Restoring Files and Directories
**Rename**
1. In the **File Browser** window, check the checkbox next to the file
or directory you want to rename.
2. Click the **Rename** button.
3. Enter the new name and then click **Submit**.
**Move**
1. In the **File Browser** window, check the checkbox next to the file
or directory you want to move.
2. Click the **Move** button.
3. In the **Move** dialog box, browse to or type the new directory, and
then click **Submit**.
**Delete**
1. In the **File Browser** window, check the checkbox next to the file
or directory you want to delete. If you select a directory, all of
the files and subdirectories contained within that directory are
also deleted.
2. Choose one of the following:
- **Delete > Move to trash**
- **Delete > Delete forever**
3. Click **Yes** to confirm. When you move a file to trash it is stored
in the .Trash folder in your home directory.
**Restore**
1. In the **File Browser** window, open the .Trash folder.
2. Navigate to the folder containing the file you want to restore.
3. Check the checkbox next to the file.
4. Click **Restore**.
### Changing a File's or Directory's Owner, Group, or Permissions
 **Note**:
Only the Hadoop superuser can change a file's or directory's owner,
group, or permissions. The user who starts Hadoop is the Hadoop
superuser. The Hadoop superuser account is not necessarily the same as a
Hue superuser account. If you create a Hue user (in User Admin) with the
same user name and password as the Hadoop superuser, then that Hue user
can change a file's or directory's owner, group, or permissions.
**Owner or Group**
1. In the **File Browser** window, check the checkbox next to the file
or directory whose owner or group you want to change.
2. Choose **Change Owner/Group** from the Options menu.
3. In the **Change Owner/Group** dialog box:
- Choose the new user from the **User** drop-down menu.
- Choose the new group from the **Group** drop-down menu.
- Check the **Recursive** checkbox to propagate the change.
4. Click **Submit** to make the changes.
**Permissions**
1. In the **File Browser** window, check the checkbox next to the file
or directory whose permissions you want to change.
2. Click the **Change Permissions** button.
3. In the **Change Permissions** dialog box, select the permissions you
want to assign and then click **Submit**.
Searching for Files and Directories
-----------------------------------
To search for files or directories by name using the query search box,
enter the name of the file or directory in the query search box. File
Browser lists the files or directories matching the search criteria.
Viewing and Editing Files
-------------------------
You can view and edit files as text or binary.
**View**
1. In the **File Browser** window, click the file you want to view.
File Browser displays the first 4,096 bytes of the file in the
**File Viewer** window.
- If the file is larger than 4,096 bytes, use the Block navigation
buttons (First Block, Previous Block, Next Block, Last Block) to
scroll through the file block by block. The **Viewing Bytes**
fields show the range of bytes you are currently viewing.
- To switch the view from text to binary, click **View as Binary**
to view a hex dump.
- To switch the view from binary to text, click **View as Text**.
**Edit**
1. If you are viewing a text file, click **Edit File**. File Browser
displays the contents of the file in the **File Editor** window.
2. Edit the file and then click **Save** or **Save As** to save the
file.
**View Location in HDFS**
Click **View File Location**. File Browser displays the file's location
in the **File Browser** window.
Job Browser
===========
The Job Browser application lets you examine the Hadoop MapReduce
jobs running on your Hadoop cluster. Job Browser presents the job and
tasks in layers. The top layer is a list of jobs, and you can link to a
list of that job's tasks. You can then view a task's attempts and the
properties of each attempt, such as state, start and end time, and
output size. To troubleshoot failed jobs, you can also view the logs of
each attempt.
Job Browser Installation and Configuration
------------------------------------------
Job Browser is one of the applications installed as part of Hue. For
information about installing and configuring Hue, see the Hue Installation
manual.
Job Browser can display both MRv1 and MRv2 jobs, but must be configured
to display one type at a time.
Starting Job Browser
--------------------
Click the **Job Browser** icon
in the navigation bar at the top of the Hue web page.
If there are no jobs that have been run, the **Welcome to the Job
Browser** page opens, with links to the Job Designer and Beeswax.
If there are jobs running, then the Job Browser list appears.
Filtering the Job Browser List
------------------------------
- To filter the jobs by their state (such as **Running** or
**Completed**), choose a state from the **Job status** drop-down
menu.
- To filter by a user who ran the jobs, enter the user's name in the
**User Name** query box.
- To filter by job name, enter the name in the **Text** query box.
- To clear the filters, choose **All States** from the **Job status**
drop-down menu and delete any text in the **User Name** and **Text**
query boxes.
- To display retired jobs, check the **Show retired jobs** checkbox.
Retired jobs show somewhat limited information – for example,
information on maps and reduces and job duration is not available.
Jobs are designated as Retired by the JobTracker based on the value
of mapred.jobtracker.retirejob.interval. The retired jobs no longer
display after the JobTracker is restarted.
Viewing Job Information and Logs
--------------------------------
 **Note**: At any level you can view the log
for an object by clicking the icon in the Logs
column.
**To view job information for an individual job:**
1. In the **Job Browser** window, click **View** at the right of the
job you want to view. This shows the **Job** page for the job;
the recent tasks associated with the job are displayed in the
**Tasks** tab.
2. Click the **Metadata** tab to view the metadata for this job.
3. Click the **Counters** tab to view the counter metrics for the job.
**To view details about the tasks associated with the job:**
1. In the **Job** window, click the **View All Tasks** link at the
right just above the **Recent Tasks** list. This lists all the tasks
associated with the job.
2. Click **Attempts** to the right of a task to view the attempts for
that task.
**To view information about an individual task:**
1. In the **Job** window, click the **View** link to the right of the
task. The attempts associated with the task are displayed.
2. Click the **Metadata** tab to view metadata for this task. The
metadata associated with the task is displayed.
3. To view the Hadoop counters for a task, click the **Counters** tab.
The counters associated with the task are displayed.
4. To return to the **Job** window for this job, click the job number
in the status panel at the left of the window.
**To view details about a task attempt:**
1. In the **Job Task** window, click the **View** link to the right of
the task attempt. The metadata associated with the attempt is
displayed under the **Metadata** tab.
2. To view the Hadoop counters for the task attempt, click the
**Counters** tab. The counters associated with the attempt are
displayed.
3. To view the logs associated with the task attempt, click the
**Logs** tab. The logs associated with the task attempt are
displayed.
4. To return to the list of tasks for the current job, click the task
number in the status panel at the left of the window.
### Scheduler
The application lets you build workflows and then schedule them to run automatically on a regular basis. A monitoring interface shows progress and logs, and allows actions such as pausing or stopping jobs.
The Oozie Editor/Dashboard application allows you to define Oozie
workflow, coordinator, and bundle applications, run workflow,
coordinator, and bundle jobs, and view the status of jobs. For
information about Oozie, see [Oozie
Documentation](http://archive.cloudera.com/cdh4/cdh/4/oozie/).
A workflow application is a collection of actions arranged in a directed
acyclic graph (DAG). It includes two types of nodes:
- Control flow - start, end, fork, join, decision, and kill
- Action - [MapReduce](../jobdesigner.html#mapreduce),
[Streaming](../jobdesigner.html#streaming),
[Java](../jobdesigner.html#java),
[Pig](../jobdesigner.html#pig),
[Hive](../jobdesigner.html#hive),
[Sqoop](../jobdesigner.html#sqoop),
[Shell](../jobdesigner.html#shell),
[Ssh](../jobdesigner.html#ssh),
[DistCp](../jobdesigner.html#distcp),
[Fs](../jobdesigner.html#fs), and
[Email](../jobdesigner.html#email).
In order to run DistCp, Streaming, Pig, Sqoop, and Hive jobs, Oozie
must be configured to use the Oozie ShareLib. See the Oozie Installation
manual.
A coordinator application allows you to define and execute recurrent and
interdependent workflow jobs. The coordinator application defines the
conditions under which the execution of workflows can occur.
A bundle application allows you to batch a set of coordinator
applications.
Oozie Editor/Dashboard Installation and Configuration
-----------------------------------------------------
Oozie Editor/Dashboard is one of the applications installed as part of
Hue. For information about installing and configuring Hue, see the Hue Installation
manual.
Starting Oozie Editor/Dashboard
-------------------------------
Click the **Oozie Editor/Dashboard** icon
in the navigation bar at the top of
the Hue browser page. **Oozie Editor/Dashboard** opens with the
following screens:
- [Dashboard](#dashboard) - shows the running and completed workflow,
coordinator, and bundle jobs and information about Oozie
instrumentation and configuration. The screen is selected and opened
to the Workflows page.
- [Workflow Manager](#workflowManager) - shows available workflows and
allows you to create and import workflows.
- [Coordinator Manager](#coordinatorManager) - shows available coordinators and
allows you to create coordinators.
- [Bundle Manager](#bundleManager) - shows available bundles and
allows you to create bundles.
Installing Oozie Editor/Dashboard Examples
------------------------------------------
 **Note**: You must be a superuser to perform
this task.
1. Open the Quick Start Wizard.
2. Click **Step 2: Examples**.
3. Click **Oozie Editor/Dashboard**.
Filtering Lists in Oozie Editor/Dashboard
-----------------------------------------
Many screens contain lists. When you type in the Filter field on
screens, the lists are dynamically filtered to display only those rows
containing text that matches the specified substring.
Permissions in Oozie Editor/Dashboard
-------------------------------------
In the Dashboard, workflows, coordinators, and bundles can only be
viewed, submitted, and modified by their owner or a superuser.
Editor permissions for performing actions on workflows, coordinators,
and bundles are summarized in the following table:
| Action | Superuser or Owner | All |
| --- | --- | --- |
| View | Y | Only if "Is shared" is set |
| Submit | Y | Only if "Is shared" is set |
| Modify | Y | N |
Deleting and Restoring Workflows, Coordinators, and Bundles
-----------------------------------------------------------
You can move workflows, coordinators, and bundles to the trash and later
restore or permanently delete them.
### Deleting Workflows, Coordinators, and Bundles
1. In a Manager screen, check the checkbox next to one or more
workflows, coordinators or bundles.
2. Choose one of the following:
- Delete \> Move to trash
- Delete \> Delete forever
### Restoring Workflows, Coordinators, and Bundles
1. In a Manager screen, click **Trash**.
2. Check the checkbox next to one or more workflows, coordinators or
bundles.
3. Click Restore.
Dashboard
---------
The Dashboard shows a summary of the running and completed workflow,
coordinator, and bundle jobs.
You can view jobs for a period up to the last 30 days.
You can filter the list by date (1, 7, 15, or 30 days) or status
(Succeeded, Running, or Killed). The date and status buttons are
toggles.
### Workflows
Click the **Workflows** tab to view the running and completed workflow
jobs for the filters you have specified.
Click a workflow row in the Running or Completed table to view detailed
information about that workflow job.
The left pane contains a link to the workflow and the variable values
specified.
For the selected job, the following information is available in the
right area.
- **Graph** tab shows the workflow DAG.
- **Actions** tab shows you details about the actions that make up the
workflow.
- Click the **Id** link to see additional details about the
action.
- Click the **External Id** link to view the job in the Job
Browser.
- **Details** tab shows job statistics including start and end times.
- **Configuration** tab shows selected job configuration settings.
- **Logs** tab shows log output generated by the workflow job.
- **Definition** tab shows the Oozie workflow definition, as it
appears in the workflow.xml file (also linked under the application
path properties in the **Details** tab and the **Configuration**
tab).
For each action in the workflow you can:
- Click the icon to view the action screen,
which contains:
- **Details** tab shows job statistics including start and end
times.
- **Configuration** tab shows the action configuration settings.
- **Child Jobs** tab lists jobs generated by the action.
- Click the icon to view the log in the Job
Browser.
### Coordinators
Click the **Coordinators** tab to view the running and completed
coordinator jobs for the filters you have specified.
For the selected job, the following information is available.
- The **Calendar** tab shows the timestamp of the job. Click the
timestamp to open the workflow DAG.
- The **Actions** tab shows you details about the actions that make up
the coordinator.
- Click the **Id** link to see additional details about the
action.
- Click the **External Id** link to view the job in the Job
Browser.
- The **Configuration** tab shows selected job configuration settings.
- The **Logs** tab shows log output generated by the coordinator.
- The **Definition** tab shows the Oozie coordinator definition, as it
appears in the coordinator.xml file (also linked under the
oozie.coord.application.path property in the **Configuration** tab).
### Bundles
Click the **Bundles** tab to view the running and completed bundle jobs
for the filters you have specified.
### Oozie
The Oozie tab provides subtabs that give you access to Oozie
instrumentation and configuration settings.
#### Instrumentation
For information on the instrumentation metrics supported by Oozie, see
[Oozie
Monitoring](http://oozie.apache.org/docs/3.3.0/AG_Monitoring.html).
#### Configuration
For information on the configuration properties supported by Oozie, see
[Oozie
Configuration](http://oozie.apache.org/docs/3.3.0/AG_Install.html#Oozie_Configuration).
Workflow Manager
----------------
In Workflow Manager you create Oozie workflows and submit them for
execution.
Click the **Workflows** tab to open the Workflow Manager.
Each row shows a workflow: its name, description, timestamp of its last
modification. It also shows:
- **Steps** - the number of steps in the workflow execution path. This
is the number of execution steps between the start and end of the
workflow. This will not necessarily be the same as the number of
actions in the workflow, if there are control flow nodes in the
control path.
- **Status** - who can run the workflow. **shared** means users other
than the owner can access the workflow. **personal** means only the
owner can modify or submit the workflow. The default is personal.
- **Owner** - the user that created the workflow.
In Workflow Editor you edit workflows that include MapReduce, Streaming,
Java, Pig, Hive, Sqoop, Shell, Ssh, DistCp, Fs, Email, Sub-workflow, and
Generic actions. You can configure these actions in the Workflow Editor,
or you can import job designs from Job Designer to be used as actions in
your workflow. For information about defining workflows, see the
[Workflow
Specification](http://archive.cloudera.com/cdh4/cdh/4/oozie/WorkflowFunctionalSpec.html).
### Opening a Workflow
To open a workflow, in Workflow Manager, click the workflow. Proceed
with [Editing a Workflow](#editingWorkflow).
### Creating a Workflow
1. Click the **Create** button at the top right.
2. In the Name field, type a name.
3. Check the Is shared checkbox to allow all users to access the
workflow.
4. Click **advanced** to specify the deployment directory or a job.xml
file.
5. Click **Save**. The Workflow Editor opens. Proceed with [Editing a
Workflow](#editingWorkflow).
### Importing a Workflow
1. Click the **Import** button at the top right.
2. In the Name field, type a name.
3. In the **Local workflow.xml file** field, click **Choose File** and
select a workflow file.
4. Click **advanced** to specify whether the workflow is shared, the
deployment directory, or a job.xml file.
5. Click **Save**. The Workflow Editor opens. Proceed with [Editing a
Workflow](#editingWorkflow).
### Submitting a Workflow
To submit a workflow for execution, do one of the following:
- In the Workflow Manager, click the radio button next to the
workflow, and click the **Submit** button.
- In the Workflow Editor, click the **Submit** button.
The workflow job is submitted and the Dashboard displays the workflow
job.
To view the output of the job, click **View the
logs**.
#### Suspending a Running Job
1. In the pane on the left, click the **Suspend** button.
2. Verify that you want to suspend the job.
#### Resuming a Suspended Job
1. In the pane on the left, click the **Resume** button.
2. Verify that you want to resume the job.
#### Rerunning a Workflow
1. In the pane on the left, click the **Rerun** button.
2. Check the checkboxes next to the actions to rerun.
3. Specify required variables.
4. Click **Submit**.
### Scheduling a Workflow
To schedule a workflow for recurring execution, do one of the following:
- In the Workflow Manager, check the checkbox next to the workflow and
click the **Schedule** button.
- In the Workflow Editor, click the **Schedule** button.
A coordinator is created and opened in the Coordinator Editor. Proceed
with [Editing a Coordinator](#editingCoordinator).
### Editing a Workflow
In the Workflow Editor you can easily perform operations on Oozie action
and control nodes.
#### Action Nodes
The Workflow Editor supports dragging and dropping action nodes. As you
move the action over other actions and forks, highlights indicate active
areas. If there are actions in the workflow, the active areas are the
actions themselves and the areas above and below the actions. If you
drop an action on an existing action, a fork and join is added to the
workflow.
- Add an action to the workflow by clicking an action button and
dropping the action on the workflow. The Edit Node screen displays.
Set the action properties and click **Done**. Each action in a
workflow must have a unique name.
- Copy an action by clicking the **Copy** button. The action opens in
the Edit Node screen. Edit the action properties and click **Done**.
The action is added to the end of the workflow.
- Delete an action by clicking the delete button.
- Edit an action by clicking the edit button.
- Change the position of an action by left-clicking and dragging it to
a new location.
#### Control Nodes
- Create a fork and join by dropping an action on top of another
action.
- Remove a fork and join by dragging a forked action and dropping it
above the fork.
- Convert a fork to a decision by clicking the convert button.
- To edit a decision:
1. Click the edit button.
2. Fill in the predicates that determine which action to perform
and select the default action from the drop-down list.
3. Click **Done**.
### Uploading Workflow Files
In the Workflow Editor, click the **Upload** button.
The workspace of the workflow is opened in the File Browser application.
Follow the procedure in [Uploading
Files](../filebrowser.html#uploadingFiles) to upload the files. You must
put JAR files in a lib directory in the workspace.
### Editing Workflow Properties
1. In the Workflow Editor, click the link under the Name or Description
fields in the left pane.
2. To share the workflow with all users, check the **Is shared**
checkbox.
3. To set advanced execution options, click **advanced** and edit the
deployment directory, add parameters and job properties, or specify
a job.xml file.
4. Click **Save**.
### Displaying the History of a Workflow
1. Click the **Dashboard** tab.
2. Click the **Workflows** tab.
3. Click a workflow.
4. Click the **Actions** tab.
Coordinator Manager
-------------------
In Coordinator Manager you create Oozie coordinator applications and
submit them for execution.
Click the **Coordinators** tab to open the Coordinator Manager.
Each row shows a coordinator: its name, description, timestamp of its
last modification. It also shows:
- **Workflow** - the workflow that will be run by the coordinator.
- **Frequency** - how often the workflow referenced by the coordinator
will be run.
- **Status** - who can run the coordinator. **shared** means users
other than the owner can access the workflow. **personal** means
only the owner can modify or submit the workflow. The default is
personal.
- **Owner** - the user that created the coordinator.
In Coordinator Editor, you edit coordinators and the datasets required
by the coordinators. For information about defining coordinators and
datasets, see the [Coordinator
Specification](http://archive.cloudera.com/cdh4/cdh/4/oozie/CoordinatorFunctionalSpec.html).
### Opening a Coordinator
To open a coordinator, in Coordinator Manager, click the coordinator.
Proceed with [Editing a Coordinator](#editingCoordinator).
### Creating a Coordinator
To create a coordinator, in Coordinator Manager:
1. Click the **Create** button at the top right. The Coordinator wizard
opens. Proceed with [Editing a Coordinator](#editingCoordinator).
### Submitting a Coordinator
To submit a coordinator for execution, check the checkbox next to the
coordinator and click the **Submit** button.
### Editing a Coordinator
In the Coordinator Editor you specify coordinator properties and the
datasets on which the workflow scheduled by the coordinator will operate
by stepping through screens in a wizard. You can also advance to
particular steps and revisit steps by clicking the Step "tabs" above the
screens. The following instructions walk you through the wizard.
1. Type a name, select the workflow, check the **Is shared** checkbox
to share the job, and click **Next**. If the Coordinator Editor was
opened after scheduling a workflow, the workflow will be set.
2. Select how many times the coordinator will run for each specified
unit, the start and end times of the coordinator, the timezone of
the start and end times, and click **Next**. Times must be expressed
as UTC times. For example, to run at 10 pm PST, specify a start time
of 6 am UTC of the following day (+8 hours) and set the Timezone
field to America/Los\_Angeles.
3. Click **Add** to select an input dataset and click **Next**. If no
datasets exist, follow the procedure in [Creating a
Dataset](#creatingDataset).
4. Click **Add** to select an output dataset. Click **Save
coordinator** or click **Next** to specify advanced settings.
5. To share the coordinator with all users, check the **Is shared**
checkbox.
6. Fill in parameters to pass to Oozie, properties that determine how
long a coordinator will wait before timing out, how many
coordinators can run and wait concurrently, and the coordinator
execution policy.
7. Click **Save coordinator**.
### Creating a Dataset
1. In the Coordinator Editor, do one of the following:
- Click **here** in the Inputs or Outputs pane at the top of the
editor.
- In the pane at the left, click the **Create new** link. Proceed
with [Editing a Dataset](#editingDataset).
### Displaying Datasets
1. In the Coordinator Editor, click **Show existing** in the pane at
the left.
2. To edit a dataset, click the dataset name in the Existing datasets
table. Proceed with [Editing a Dataset](#editingDataset).
### Editing a Dataset
1. Type a name for the dataset.
2. In the Start and Frequency fields, specify when and how often the
dataset will be available.
3. In the URI field, specify a URI template for the location of the
dataset. To construct URIs and URI paths containing dates and
timestamps, you can specify the variables
${YEAR}, ${MONTH}, ${DAY}, ${HOUR}, and ${MINUTE}. For example:
hdfs://foo:9000/usr/app/stats/${YEAR}/${MONTH}/data.
4. In the Instance field, click a button to choose a default, single,
or range of data instances. For example, if frequency==DAY, a window
of the last rolling 5 days (not including today) would be expressed
as start: -5 and end: -1. Check the advanced checkbox to display a
field where you can specify a coordinator [EL
function](http://archive.cloudera.com/cdh4/cdh/4/oozie/CoordinatorFunctionalSpec.html).
5. Specify the timezone of the start date.
6. In the Done flag field, specify the file whose presence indicates
that an instance of the dataset is ready.
### Displaying the History of a Coordinator
1. Click the **Dashboard** tab.
2. Click the **Coordinators** tab.
3. Click a coordinator.
4. Click the **Actions** tab.
Bundle Manager
--------------
In Bundle Manager you create Oozie bundle applications and submit them
for execution.
Click the **Bundle** tab to open the Bundle Manager.
Each row shows a bundle: its name, description, timestamp of its last
modification. It also shows:
- **Coordinators** - the coordinators that will be run by the bundle.
- **Kick off** - the UTC time when the coordinators referenced by the
bundle will be started.
- **Status** - who can run the bundle. **shared** means users other
than the owner can access the workflow. **personal** means only the
owner can modify or submit the workflow. The default is personal.
- **Owner** - the user that created the bundle.
For information about defining bundles, see the [Bundle
Specification](http://archive.cloudera.com/cdh4/cdh/4/oozie/BundleFunctionalSpec.html).
### Opening a Bundle
To open a bundle, in Bundle Manager, click the bundle. Proceed with
[Editing a Bundle](#editingBundle).
### Creating a Bundle
1. Click the **Create** button at the top right.
2. In the Name field, type a name.
3. In the Kick off time field, choose a kick off time.
4. Check the Is shared checkbox to allow all users to access the
workflow.
5. Click **Save**. The Bundle Editor opens. Proceed with [Editing a
Bundle](#editingBundle).
### Submitting a Bundle
To submit a bundle for execution, check the checkbox next to the bundle
and click the **Submit** button.
### Editing a Bundle
In the Bundle Editor, you specify properties by stepping through screens
in a wizard. You can also advance to particular steps and revisit steps
by clicking the Step "tabs" above the screens. The following
instructions walk you through the wizard.
1. Click **Add** to select a coordinator that the bundle will kick off.
2. Choose the kick off time. The time must be expressed as a UTC time.
For example, to run at 10 pm PST, specify a start time of 6 am UTC
of the following day (+8 hours).
3. To share the bundle with all users, check the **Is shared**
checkbox.
4. Click **Next** to specify advanced settings or click **Save
bundle**.
5. Fill in parameters to pass to Oozie.
6. Click **Save bundle**.
Displaying the History of a Bundle
----------------------------------
1. Click the **Dashboard** tab.
2. Click the **Bundles** tab.
3. Click a bundle.
4. Click the **Actions** tab.
# Administration
Quick Start Wizard
------------------
The Quick Start wizard allows you to perform the following Hue setup
operations by clicking the tab of each step or sequentially by clicking
Next in each screen:
1. **Check Configuration** validates your Hue configuration. It will
note any potential misconfigurations and provide hints on how to
fix them. You can edit the configuration file described in the next
section or use Cloudera Manager, if installed, to manage your
changes.
2. **Examples** contains links to install examples into the Hive,
Impala, MapReduce, Spark, Oozie, Solr Dashboard and Pig Editor applications.
3. **Users** contains a link to the User Admin application to create or
import users and a checkbox to enable and disable collection of
usage information.
4. **Go!** - displays the Hue home screen, which contains links to the
different categories of applications supported by Hue: Query,
Hadoop, and Workflow.
Configuration
-------------
Displays a list of the installed Hue applications and their
configuration. The location of the folder containing the Hue
configuration files is shown at the top of the page. Hue configuration
settings are in the hue.ini configuration file.
Click the tabs under **Configuration Sections and Variables** to see the
settings configured for each application. For information on configuring
these settings, see Hue Configuration in the Hue installation manual.
Server Logs
-----------
Displays the Hue Server log and allows you to download the log to your
local system in a zip file.
## User management
The User Admin application lets a superuser add, delete, and manage Hue
users and groups, and configure group permissions. Superusers can add
users and groups individually, or import them from an LDAP directory.
Group permissions define the Hue applications visible to group members
when they log into Hue and the application features available to them.
Starting User Admin
-------------------
Click the **User Admin** icon
in the navigation bar at the top of the Hue browser page. The Hue Users
page opens.
Users
-----
The User Admin application provides two levels of user privileges:
superusers and users.
- Superusers — The first user who logs into Hue after its initial
installation becomes the first superuser. Superusers have
permissions to perform administrative functions:
- Add and delete users
- Add and delete groups
- Assign permissions to groups
- Change a user into a superuser
- Import users and groups from an LDAP server
- Users — can change their name, e-mail address, and password, and can
log in to Hue and run Hue applications, subject to the permissions
provided by the Hue groups to which they belong.
### Adding a User
1. In the **User Admin** page, click **Add User**.
2. In the **Credentials** screen, add required information about the
user. Once you provide the required information you can click the
wizard step tabs to set other information.
| Username | A user name that contains only letters, numbers, and underscores;
blank spaces are not allowed and the name cannot begin with a
number. The user name is used to log into Hue and in file
permissions and job submissions. This is a required field.
|
| Password and Password confirmation | A password for the user. This is a required field. |
| Create home directory | Indicate whether to create a directory named /user/username in HDFS.
For non-superusers, the user and group of the directory are
username. For superusers, the user and group are username and
supergroup. |
3. Click **Add User** to save the information you specified and close
the **Add User** wizard or click **Next**.
4. In the **Names and Groups** screen, add optional information.
| First name and Last name | The user's first and last name.
|
| E-mail address | The user's e-mail address. The e-mail address is used by the Job
Designer and Beeswax applications to send users an e-mail message
after certain actions have occurred. The Job Designer sends an
e-mail message after a job has completed. Beeswax sends a message
after a query has completed. If an e-mail address is not specified,
the application will not attempt to email the user. |
| Groups | The groups to which the user belongs. By default, a user is assigned
to the **default** group, which allows access to all applications.
See [Permissions](#permissions). |
5. Click **Add User** to save the information you specified and close
the **Add User** wizard or click **Next**.
6. In the **Advanced** screen, add status information.
| Active | Indicate that the user is enabled and allowed to log in. Default: checked. |
| Superuser status | Assign superuser privileges to the user. |
7. Click **Add User** to save the information you specified and close
the **Add User** wizard.
### Deleting a User
1. Check the checkbox next to the user name and click **Delete**.
2. Click **Yes** to confirm.
### Editing a User
1. Click the user you want to edit in the **Hue Users** list.
2. Make the changes to the user and then click **Update user**.
### Importing Users from an LDAP Directory
Hue must be configured to use an external LDAP directory (OpenLDAP or
Active Directory). See Hue Installation in [CDH4
Installation](http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Installation-Guide/CDH4-Installation-Guide.html).
 **Note**:
Importing users from an LDAP directory does not import any password
information. You must add passwords manually in order for a user to log
in.
To add a user from an external LDAP directory:
1. Click **Add/sync LDAP user**.
2. Specify the user properties:
| Username | The user name. |
| Distinguished name | Indicate that Hue should use a full distinguished name for the user.
This imports the user's first and last name, username, and email,
but does not store the user password. |
| Create home directory | Indicate that Hue should create a home directory for the user in
HDFS. |
3. Click **Add/sync user**.
If the user already exists in the User Admin, the user information
in User Admin is synced with what is currently in the LDAP
directory.
### Syncing Users and Groups with an LDAP Directory
You can sync the Hue user database with the current state of the LDAP
directory using the **Sync LDAP users/groups** function. This updates
the user and group information for the already imported users and
groups. It does not import any new users or groups.
1. Click **Sync LDAP users/groups**.
2. Check the **Create Home Directories** checkbox to create home directories in
HDFS for imported members that don't already have them.
3. In the **Sync LDAP users and groups** dialog, click **Sync** to
perform the sync.
Groups
------
Superusers can add and delete groups, configure group permissions, and
assign users to group memberships.
### Adding a Group
You can add groups, and delete the groups you've added. You can also
import groups from an LDAP directory.
1. In the **User Admin** window, click **Groups** and then click **Add
Group**.
2. Specify the group properties:
| Name | The name of the group. Group names can only be letters, numbers, and
underscores; blank spaces are not allowed. |
| Members | The users in the group. Check user names or check Select all. |
| Permissions | The applications the users in the group can access. Check
application names or check Select all. |
3. Click **Add group**.
### Adding Users to a Group
1. In the **User Admin** window, click **Groups**.
2. Click the group.
3. To add users to the group, check the names in the list provided or
check **Select All**.
4. Click **Update group**.
### Deleting a Group
1. Click **Groups**.
2. Check the checkbox next to the group and click **Delete**.
3. Click **Yes** to confirm.
### Importing Groups from an LDAP Directory
1. From the **Groups** tab, click **Add/sync LDAP group**.
2. Specify the group properties:
| Name | The name of the group. |
| Distinguished name | Indicate that Hue should use a full distinguished name for the
group. |
| Import new members | Indicate that Hue should import the members of the group. |
| Import new members from all subgroups |
Indicate that Hue should import the members of the subgroups. |
| Create home directories | Indicate that Hue should create home directories in HDFS for the
imported members. |
3. Click **Add/sync group**.
Permissions
-----------
Permissions for Hue applications are granted to groups, with users
gaining permissions based on their group membership. Group permissions
define the Hue applications visible to group members when they log into
Hue and the application features available to them.
1. Click **Permissions**.
2. Click the application for which you want to assign permissions.
3. Check the checkboxes next to the groups you want to have permission
for the application. Check **Select all** to select all groups.
4. Click **Update permission**. The new groups will appear in the
Groups column in the **Hue Permissions** list.
# Contrib
These modules are not active enough to be officially maintained in core Hue, but they can still
fit your needs. Any [contribution](https://github.com/cloudera/hue/wiki) is welcome!
## SDK
Check the SDK guide or contact the community about how to build your own custom app.
## HBase Browser
We'll take a look at the new [HBase Browser App](http://gethue.com/the-web-ui-for-hbase-hbase-browser).
Prerequisites before using the app:
1. Have HBase and the Thrift Service 1 started (the Thrift service can be configured)
2. Configure your list of HBase clusters in
[hue.ini](https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini#L467)
to point to your Thrift host and port (see the sketch below)
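A minimal sketch of both steps, assuming the Thrift 1 service runs on its
default port 9090 on the same host as Hue (adjust the host and port to your
cluster):

    # Start the HBase Thrift 1 service on the HBase cluster
    hbase thrift start

    # In hue.ini, list the Thrift servers as (name|host:port) pairs
    [hbase]
      hbase_clusters=(Cluster|localhost:9090)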
SmartView
---------
The SmartView is the view you land on when you first open a table.
On the left-hand side are the row keys, and hovering over a row reveals a
list of controls on the right. Click a row to select it; once
selected, you can perform batch operations, sort columns, or do any
number of standard database operations. To explore a row, simply scroll
to the right; the row continues to lazily load cells until the end.
### Adding Data
To initially populate the table, you can insert a new row or bulk upload
CSV/TSV/etc. type data into your table.
On the right-hand side of a row is a '+' sign that lets you insert
columns into the row.
### Mutating Data
To edit a cell, simply click to edit inline.
If you need more control or data about your cell, click “Full Editor” to
edit.
In the full editor, you can view cell history or upload binary data to
the cell. Binary data of certain MIME Types are detected, meaning you
can view and edit images, PDFs, JSON, XML, and other types directly in
your browser!
Hovering over a cell also reveals more controls (such as the delete
button or the timestamp). Click the title to select several cells and perform
batch operations.
If you need some sample data to get started and explore, check out this
[tutorial on creating example HBase
tables](http://gethue.com/hadoop-tutorial-how-to-create-example-tables-in-hbase).
### Smart Searchbar
The "Smart Searchbar" is a sophisticated tool that helps you zero-in on
your data. The smart search supports a number of operations. The most
basic ones include finding and scanning row keys. Here I am selecting
two row keys with:
domain.100, domain.200
Submitting this query gives me the two rows I was looking for. If I want
to fetch rows after one of these, I have to do a scan. This is as easy
as writing a '+' followed by the number of rows you want to fetch.
domain.100, domain.200 +5
This fetches domain.100 and domain.200 followed by the next 5 rows. If you're
ever confused about your results, you can look down below at the query
bar and click in to edit your query.
The Smart Search also supports column filtering. On any row, I can
specify the specific columns or families I want to retrieve. With:
domain.100[column_family:]
I can select a bare family, or mix columns from different families like
so:
domain.100[family1:, family2:, family3:column_a]
Doing this will restrict my results from one row key to the columns I
specified. If you want to restrict column families only, the same effect
can be achieved with the filters on the right. Just click to toggle a
filter.
Finally, let's try some more complex column filters. I can query for
bare columns:
domain.100[column_a]
This will multiply my query over all column families. I can also do
prefixes and scans:
domain.100[family: prefix* +3]
This will fetch all columns that start with 'prefix', limited to 3
results. I can also filter on a range:
domain.100[family: column1 to column100]
This will fetch all columns in 'family:' that are lexicographically >= column1
but <= column100. The first column ('column1') must be a valid column, but the
second can be any string for comparison.
The Smart Search also supports prefix filtering on rows. To select a
prefixed row, simply type the row key followed by a star \*. The prefix
should be highlighted like any other searchbar keyword. A prefix scan is
performed exactly like a regular scan, but with a prefixed row.
domain.10* +10
Finally, as a new feature, you can also take full advantage of the
HBase filtering language by typing your filter
string between curly braces. HBase Browser autocompletes your filters
for you so you don't have to look them up every time. You can apply
filters to rows or scans.
domain.1000 {ColumnPrefixFilter('100-') AND ColumnCountGetFilter(3)}
This doc covers only a few basic features of the Smart Search, such as
column prefixes, bare columns, and column ranges. You can take advantage
of the full querying language by referring to the help menu that pops up
while typing, which suggests next steps to complete your query.
## Sqoop 2 Editor
The Sqoop UI enables transferring data from a relational database
to Hadoop and vice versa. The UI uses Apache Sqoop to do this.
See the [Sqoop Documentation](http://sqoop.apache.org/docs/1.99.2/index.html) for more details on Sqoop.
Installation and Configuration
------------------------------
The Sqoop UI is one of the applications installed as part of
Hue. For information about installing and configuring Hue, see the Hue Installation
manual.
Starting
--------
Click the **Sqoop** icon
() in the navigation bar at the top of
the Hue browser page.
Sqoop Jobs
----------
Sqoop UI is oriented around jobs in Apache Sqoop.
### Creating a New Job
1. Click the **New job** button at the top right.
2. In the Name field, enter a name.
3. Choose the type of job: import or export.
The subsequent form fields change depending on which type is chosen.
4. Select a connection, or create one if it does not exist.
5. Fill in the rest of the fields for the job.
For importing, the "Table name", "Storage type", "Output format", and "Output directory" are necessary at a minimum.
For exporting, the "Table name" and "Input directory" are necessary at a minimum.
6. Click **save** to finish.
### Editing a Job
1. In the list of jobs, click on the name of the job.
2. Edit the desired form fields in the job.
### Copying a Job
1. In the list of jobs, click on the name of the job.
2. On the left hand side of the job editor, there should be a panel containing actions.
Click **Copy**.
### Removing a Job
1. In the list of jobs, click on the name of the job.
2. On the left hand side of the job editor, there should be a panel containing actions.
Click **Delete**.
### Running a Job
Each item in the job list shows a status indicating the last time the job was
run. The progress of the job updates dynamically, and there's a progress bar at
the bottom of each item in the job list as well.
1. In the list of jobs, click on the name of the job.
2. On the left hand side of the job editor, there should be a panel containing actions.
Click **Run**.
### Creating a New Connection
1. Click the **New job** button at the top right.
2. At the connection field, click the link titled **Add a new connection**.
3. Fill in the displayed fields (see the example below).
4. Click **save** to finish.
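A hypothetical example of connection values for a MySQL database (the exact
field names depend on your Sqoop 2 version, and all values below are
placeholders):

    Name:                    mysql-sales
    JDBC driver class:       com.mysql.jdbc.Driver
    JDBC connection string:  jdbc:mysql://db.example.com:3306/sales
    Username:                sqoop_user
    Password:                ********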
### Editing a Connection
1. Click the **New job** button at the top right.
2. At the connection field, select the connection by name that should be edited.
3. Click **Edit**.
4. Edit any of the fields.
5. Click **save** to finish.
### Removing a Connection
1. Click the **New job** button at the top right.
2. At the connection field, select the connection by name that should be deleted.
3. Click **Delete**.
NOTE: If this does not work, it's likely because a job is using that connection.
Make sure no jobs are using the connection that will be deleted.
### Filtering Sqoop Jobs
The text field in the top left corner of the Sqoop Jobs page enables fast filtering
of Sqoop jobs by name.
## ZooKeeper Browser
Its two main features are:
- Listing of the ZooKeeper cluster stats and clients
- Browsing and editing the ZNode hierarchy
ZooKeeper Browser requires the [ZooKeeper
REST](https://github.com/apache/zookeeper/tree/trunk/src/contrib/rest)
service to be running. Here is how to set it up:
First get and build ZooKeeper:
git clone https://github.com/apache/zookeeper
cd zookeeper
ant
Buildfile: /home/hue/Development/zookeeper/build.xml
init:
[mkdir] Created dir: /home/hue/Development/zookeeper/build/classes
[mkdir] Created dir: /home/hue/Development/zookeeper/build/lib
[mkdir] Created dir: /home/hue/Development/zookeeper/build/package/lib
[mkdir] Created dir: /home/hue/Development/zookeeper/build/test/lib
...
And start the REST service:
cd src/contrib/rest
nohup ant run&
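To check that the REST service is up, you can query it (this sketch assumes the
default port 9998 and that the contrib service exposes its znodes resource at
znodes/v1; verify the actual port in the nohup output):

    curl http://localhost:9998/znodes/v1/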
If ZooKeeper and the REST service are not on the same machine as Hue, go
update the [Hue
settings](https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini#L581)
and specify the correct hostnames and ports:
[zookeeper]
[[clusters]]
[[[default]]]
# Zookeeper ensemble. Comma separated list of Host/Port.
# e.g. localhost:2181,localhost:2182,localhost:2183
## host_ports=localhost:2181
# The URL of the REST contrib service
## rest_url=http://localhost:9998