  1. Hue Installation Guide
  2. ======================
  3. Introduction
  4. ------------
  5. Hue is a graphical user interface to operate and develop applications for
  6. Apache Hadoop. Hue applications are collected into a desktop-style environment
  7. and delivered as a Web application, requiring no additional installation for
  8. individual users.
  9. This guide describes how to install and configure a Hue tarball. For
  10. information about installing Hue packages, see
  11. https://ccp.cloudera.com/display/CDH4DOC/Hue+Installation[Installing Hue].
  12. There is also a companion SDK guide that describes how to develop
  13. new Hue applications:
  14. link:sdk/sdk.html[Hue SDK Documentation]
  15. IMPORTANT: Hue requires the Hadoop contained in
  16. https://ccp.cloudera.com/display/CDH4DOC/CDH4+Quick+Start+Guide[Cloudera's
  17. Distribution including Apache Hadoop (CDH4)]
  18. .Conventions Used in this Guide:
  19. * Commands that must be run with +root+ permission have a +#+ command prompt.
  20. * Commands that do not require +root+ permission have a +$+ command prompt.
  21. Hue Installation Instructions
  22. -----------------------------
The following instructions describe how to install the Hue tarball on a
multi-node cluster. You also need to install CDH and update some
Hadoop configuration files before running Hue.
  26. Install Hue
  27. ~~~~~~~~~~~
  28. Hue consists of a web service that runs on a special node in your cluster.
  29. Choose one node where you want to run Hue. This guide refers to that node as
  30. the _Hue Server_. For optimal performance, this should be one of the nodes
  31. within your cluster, though it can be a remote node as long as there are no
  32. overly restrictive firewalls. For small clusters of less than 10 nodes,
  33. you can use your existing master node as the Hue Server.
  34. You can download the Hue tarball here:
  35. http://github.com/cloudera/hue/downloads/
  36. Hue Dependencies
  37. ^^^^^^^^^^^^^^^^
Hue employs some Python modules which use native code and therefore requires
certain development libraries to be installed on your system. To install from the
tarball, you must have the following packages installed:
.Required Dependencies
[grid="rows"]
`---------------------.-----------------------------
Redhat                 Ubuntu
-----------------------------------------------------
gcc                    gcc
g++                    g++
libxml2-devel          libxml2-dev
libxslt-devel          libxslt-dev
cyrus-sasl-devel       libsasl2-dev
cyrus-sasl-gssapi      libsasl2-modules-gssapi-mit
mysql-devel            libmysqlclient-dev
python-devel           python-dev
python-setuptools      python-setuptools
python-simplejson      python-simplejson
sqlite-devel           libsqlite3-dev
ant                    ant
-----------------------------------------------------
  59. The full list is here: https://github.com/cloudera/hue#development-prerequisites
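For example, on a Debian or Ubuntu system you could pull in the packages from the
table above with `apt-get` (a sketch only; package names can differ between releases):

----
# apt-get install gcc g++ libxml2-dev libxslt-dev libsasl2-dev \
    libsasl2-modules-gssapi-mit libmysqlclient-dev python-dev \
    python-setuptools python-simplejson libsqlite3-dev ant
----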
  60. Build
  61. ^^^^^
  62. Configure `$PREFIX` with the path where you want to install Hue by running:
  63. $ PREFIX=/usr/share make install
  64. $ cd /usr/share/hue
  65. $ sudo chmod 4750 apps/shell/src/shell/build/setuid
You can install Hue anywhere on your system and run Hue as a non-root user.
The Shell application needs root privileges to launch various sub-processes as
the logged-in users.
It is a good practice to create a new user for Hue and install Hue either in
that user's home directory or in a directory within `/usr/share`.
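A minimal sketch of that practice, run from the unpacked tarball directory (the
`hue` account name and the `/usr/share` prefix are only examples):

----
# useradd -m hue                      # dedicated service account for Hue
# PREFIX=/usr/share make install      # installs into /usr/share/hue
# chown -R hue:hue /usr/share/hue     # let the hue user own its installation
----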
  71. Troubleshooting the Hue Tarball Installation
  72. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  73. .Q: I moved my Hue installation from one directory to another and now Hue no
  74. longer functions correctly.
  75. A: Due to the use of absolute paths by some Python packages, you must run a
  76. series of commands if you move your Hue installation. In the new location, run:
  77. ----
  78. $ rm app.reg
  79. $ rm -r build
  80. $ make apps
  81. ----
  82. .Q: Why does "make install" compile other pieces of software?
  83. A: In order to ensure that Hue is stable on a variety of distributions and
  84. architectures, it installs a Python virtual environment which includes its
  85. dependencies. This ensures that the software can depend on specific versions
  86. of various Python libraries and you don't have to be concerned about missing
  87. software components.
  88. Install Hadoop from CDH
  89. ~~~~~~~~~~~~~~~~~~~~~~~
  90. To use Hue, you must install and run Hadoop from CDH4 or later. If you
  91. are not running this version of CDH or later, upgrade your cluster before
  92. proceeding.
.Dependency on CDH Components
[grid="rows"]
`-------------.----------.----------------------------------.----------------------------------------------
Component      Required   Applications                       Notes
-------------------------------------------------------------------------------------------------------------
HDFS           Yes        Core, Filebrowser                  HDFS access through WebHdfs or HttpFS
MR1            No         JobBrowser, JobDesigner, Beeswax   Job information access through hue-plugins
Yarn           No         JobDesigner, Beeswax               Transitive dependency via Hive or Oozie
Oozie          No         JobDesigner, Oozie                 Oozie access through REST API
Hive           No         Beeswax                            Beeswax uses the Hive client libraries
Flume          No         Shell                              Optionally provides access to the Flume shell
HBase          No         Shell                              Optionally provides access to the HBase shell
Pig            No         Shell                              Optionally provides access to the Pig shell
-------------------------------------------------------------------------------------------------------------
  107. Hadoop Configuration
  108. ~~~~~~~~~~~~~~~~~~~~
  109. Configure WebHdfs
  110. ^^^^^^^^^^^^^^^^^
  111. You need to enable WebHdfs or run an HttpFS server. To turn on WebHDFS,
  112. add this to your `hdfs-site.xml` and *restart* your HDFS cluster.
  113. Depending on your setup, your `hdfs-site.xml` might be in `/etc/hadoop/conf`.
  114. <property>
  115. <name>dfs.webhdfs.enabled</name>
  116. <value>true</value>
  117. </property>
You also need to add this to `core-site.xml`.
  119. <property>
  120. <name>hadoop.proxyuser.hue.hosts</name>
  121. <value>*</value>
  122. </property>
  123. <property>
  124. <name>hadoop.proxyuser.hue.groups</name>
  125. <value>*</value>
  126. </property>
  127. If you place your Hue Server outside the Hadoop cluster, you can run
  128. an HttpFS server to provide Hue access to HDFS. The HttpFS service requires
  129. only one port to be opened to the cluster.
Also add this to `httpfs-site.xml`, which might be in `/etc/hadoop-httpfs/conf`.
  131. <property>
  132. <name>httpfs.proxyuser.hue.hosts</name>
  133. <value>*</value>
  134. </property>
  135. <property>
  136. <name>httpfs.proxyuser.hue.groups</name>
  137. <value>*</value>
  138. </property>
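Either endpoint can be sanity-checked with `curl` before you point Hue at it. The
host names below are placeholders; WebHDFS normally answers on the NameNode HTTP
port (50070 by default) and HttpFS on port 14000:

----
$ curl -s 'http://namenode-host:50070/webhdfs/v1/?op=GETFILESTATUS&user.name=hue'
$ curl -s 'http://httpfs-host:14000/webhdfs/v1/?op=GETFILESTATUS&user.name=hue'
----

A JSON response indicates the service is up and reachable from the Hue Server.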
  139. Configure MapReduce 0.20 (MR1)
  140. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Hue communicates with the JobTracker via the Hue plugins, a jar
file that you place in your MapReduce `lib` directory.
If your JobTracker and Hue are located on the same host, copy the jar over.
If you are using CDH3, your MapReduce library directory might be in `/usr/lib/hadoop/lib`.
  145. $ cd /usr/share/hue
  146. $ cp desktop/libs/hadoop/java-lib/hue-plugins-*.jar /usr/lib/hadoop-0.20-mapreduce/lib
If your JobTracker runs on a different host, you need to `scp` the Hue plugins
jar to the JobTracker host.
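For example, assuming the tarball installation in `/usr/share/hue` and the CDH4 MR1
library directory used above (the JobTracker host name is a placeholder):

----
$ scp /usr/share/hue/desktop/libs/hadoop/java-lib/hue-plugins-*.jar \
    jobtracker-host:/usr/lib/hadoop-0.20-mapreduce/lib/
----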
  149. Then add this to your `mapred-site.xml` and *restart* your JobTracker.
  150. Depending on your setup, your `mapred-site.xml` might be in `/etc/hadoop/conf`.
  151. <property>
  152. <name>jobtracker.thrift.address</name>
  153. <value>0.0.0.0:9290</value>
  154. </property>
  155. <property>
  156. <name>mapred.jobtracker.plugins</name>
  157. <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
  158. <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
  159. </property>
  160. You can confirm that the plugins are running correctly by tailing the daemon
  161. logs:
  162. $ tail --lines=500 /var/log/hadoop-0.20/hadoop*jobtracker*.log | grep ThriftPlugin
  163. 2009-09-28 16:30:44,337 INFO org.apache.hadoop.thriftfs.ThriftPluginServer: Starting Thrift server
  164. 2009-09-28 16:30:44,419 INFO org.apache.hadoop.thriftfs.ThriftPluginServer:
  165. Thrift server listening on 0.0.0.0:9290
  166. Configure Oozie
  167. ^^^^^^^^^^^^^^^
Hue submits MapReduce jobs to Oozie as the logged-in user. You need to
configure Oozie to accept the `hue` user as a proxyuser. Specify this in
your `oozie-site.xml` (even in a non-secure cluster), and restart Oozie:
  171. <property>
  172. <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
  173. <value>*</value>
  174. </property>
  175. <property>
  176. <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
  177. <value>*</value>
  178. </property>
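After the restart, a quick way to confirm that Oozie is healthy is the `oozie`
client's admin status call (the host name is a placeholder; 11000 is the default
Oozie port):

----
$ oozie admin -oozie http://oozie-host:11000/oozie -status
System mode: NORMAL
----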
  179. Further Hadoop Configuration and Caveats
  180. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  181. `HADOOP_CLASSPATH` Caveat
  182. ^^^^^^^^^^^^^^^^^^^^^^^^^
  183. If you are setting `$HADOOP_CLASSPATH` in your `hadoop-env.sh`, be sure
  184. to set it in such a way that user-specified options are preserved. For example:
  185. Correct:
  186. # HADOOP_CLASSPATH=<your_additions>:$HADOOP_CLASSPATH
  187. Incorrect:
  188. # HADOOP_CLASSPATH=<your_additions>
  189. This enables certain components of Hue to add to
  190. Hadoop's classpath using the environment variable.
  191. `hadoop.tmp.dir`
  192. ^^^^^^^^^^^^^^^^
If your users are likely to be submitting jobs both using Hue and from the
same machine via the command line interface, they will be doing so as the `hue`
user when using Hue and via their own user account on the command line.
This leads to some contention on the directory specified by `hadoop.tmp.dir`,
which defaults to `/tmp/hadoop-${user.name}`. Specifically, `hadoop.tmp.dir`
is used to unpack jars in `bin/hadoop jar`. One workaround is
to set `hadoop.tmp.dir` to `/tmp/hadoop-${user.name}${hue.suffix}` in the
`core-site.xml` file:
  201. <property>
  202. <name>hadoop.tmp.dir</name>
  203. <value>/tmp/hadoop-${user.name}${hue.suffix}</value>
  204. </property>
  205. Unfortunately, when the variable is unset, you'll end up
  206. with directories named `/tmp/hadoop-user_name-${hue.suffix}` in
  207. `/tmp`. Despite that, Hue will still work.
  208. IMPORTANT: The Beeswax server writes into a local directory on the Hue machine
  209. that is specified by `hadoop.tmp.dir` to unpack its jars. That directory
  210. needs to be writable by the `hue` user, which is the default user who starts
  211. Beeswax Server, or else Beeswax server will not start. You may also make that
  212. directory world-writable.
  213. Configuring Your Firewall for Hue
  214. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Hue currently requires that the machines within your cluster can connect to
each other freely over TCP. Machines outside your cluster must be able to
reach TCP port 8888 on the Hue Server (or the configured Hue web HTTP port)
to interact with the system.
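For example, on a Linux Hue Server protected by `iptables`, you might open the
Hue port like this (adjust the port if you changed `http_port`):

----
# iptables -I INPUT -p tcp --dport 8888 -j ACCEPT
----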
  219. Hive Configuration
  220. ~~~~~~~~~~~~~~~~~~
  221. Hue's Beeswax application helps you use Hive to query your data.
  222. It depends on a Hive installation on your system. Please read
  223. this section to ensure a proper integration.
  224. Your Hive data is stored in HDFS, normally under `/user/hive/warehouse`
  225. (or any path you specify as `hive.metastore.warehouse.dir` in your
  226. `hive-site.xml`). Make sure this location exists and is writable by
  227. the users whom you expect to be creating tables. `/tmp` (on the local file
  228. system) must be world-writable (1777), as Hive makes extensive use of it.
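As a sketch, the warehouse directory and `/tmp` permissions can be set up as
follows (run the HDFS commands as a user with sufficient HDFS privileges; the
paths are the defaults mentioned above):

----
$ hadoop fs -mkdir -p /user/hive/warehouse
$ hadoop fs -chmod g+w /user/hive/warehouse   # writable by the table-creating users
# chmod 1777 /tmp                             # local /tmp, world-writable with the sticky bit
----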
  229. [NOTE]
  230. If you used the embedded Hive MetaStore functionality of Beeswax in Hue from
  231. versions prior to Hue 1.2, read this section. Hue 1.2 includes changes in the
  232. Hive MetaStore schema that are part of the Hive 0.7 release. If you want to use
  233. Beeswax in Hue 1.2, it is imperative that you upgrade the Hive MetaStore schema
  234. by running the appropriate schema upgrade script located in the
  235. `apps/beeswax/hive/scripts/metastore/upgrade` directory in the Hue installation.
  236. Scripts for Derby and MySQL databases are available. If you are using a
  237. different database for your MetaStore, you will need to provide your own
  238. upgrade script.
  239. No Existing Hive Installation
  240. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  241. Familiarize yourself with the configuration options in
  242. `hive-site.xml`. See
  243. https://ccp.cloudera.com/display/CDH4DOC/Hive+Installation#HiveInstallation-ConfiguringtheHiveMetastore[Hive
  244. Installation and Configuration].
Having a `hive-site.xml` is optional but often useful, particularly for setting
up a http://wiki.apache.org/hadoop/Hive/AdminManual/MetastoreAdmin[metastore].
  247. You may store the `hive-site.xml` in `/etc/hue/conf`, or instruct
  248. Beeswax to locate it using the `hive_conf_dir` configuration variable. See
  249. `/etc/hue/conf/hue.ini`.
  250. Existing Hive Installation
  251. ^^^^^^^^^^^^^^^^^^^^^^^^^^
  252. In `/etc/hue/conf/hue.ini`, modify `hive_conf_dir` to point to the
  253. directory containing `hive-site.xml`.
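For example, if your Hive configuration lives in `/etc/hive/conf` (an assumption;
substitute your actual path), the `[beeswax]` section of `/etc/hue/conf/hue.ini`
would contain:

----
[beeswax]
  hive_conf_dir=/etc/hive/conf
----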
  254. Configuring Hue
  255. ---------------
Hue ships with a default configuration that will work for
pseudo-distributed clusters. If you are running on a real cluster, you must
make a few changes to the `/etc/hue/hue.ini` configuration file (or `pseudo-distributed.ini`
when in development mode). The following sections describe the key configuration changes you must make to
configure Hue.
  261. [TIP]
  262. .Listing all Configuration Options
  263. ============================================================
  264. To list all available configuration options, run:
  265. $ /usr/share/hue/build/env/bin/hue config_help | less
This command outlines the various sections and options in the configuration,
and provides help and information on the default values.
  268. ============================================================
  269. [TIP]
  270. .Viewing Current Configuration Options
  271. ============================================================
  272. To view the current configuration from within Hue, open:
  273. http://<hue>/dump_config
  274. ============================================================
  275. [TIP]
  276. .Using Multiple Files to Store Your Configuration
  277. ============================================================
  278. Hue loads and merges all of the files with extension `.ini`
  279. located in the `/etc/hue` directory. Files that are alphabetically later
  280. take precedence.
  281. ============================================================
  282. Web Server Configuration
  283. ~~~~~~~~~~~~~~~~~~~~~~~~
  284. These configuration variables are under the `[desktop]` section in
  285. the `/etc/hue/hue.ini` configuration file.
  286. Specifying the Hue HTTP Address
  287. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  288. Hue uses a Spawning or a CherryPy web server (configurable). You can use
  289. the following options to change the IP address and port that the web server
  290. listens on. The default setting is port 8888 on all configured IP addresses.
  291. # Webserver listens on this address and port
  292. http_host=0.0.0.0
  293. http_port=8888
  294. Specifying the Secret Key
  295. ^^^^^^^^^^^^^^^^^^^^^^^^^
  296. For security, you should also specify the secret key that is used for secure
  297. hashing in the session store. Enter a long series of random characters
  298. (30 to 60 characters is recommended).
  299. secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o
  300. NOTE: If you don't specify a secret key, your session cookies will not be
  301. secure. Hue will run but it will also display error messages telling you to
  302. set the secret key.
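One convenient way to generate a suitably long random value is with `openssl`
(any comparable random-string generator works just as well); paste the result as
the value of `secret_key`:

----
$ openssl rand -base64 45
----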
  303. Authentication
  304. ^^^^^^^^^^^^^^
  305. By default, the first user who logs in to Hue can choose any
  306. username and password and becomes an administrator automatically. This
  307. user can create other user and administrator accounts. User information is
  308. stored in the Django database in the Django backend.
  309. The authentication system is pluggable. For more information, see the
  310. link:sdk/sdk.html[Hue SDK Documentation].
  311. Configuring Hue for SSL
  312. ^^^^^^^^^^^^^^^^^^^^^^^
  313. You can configure Hue to serve over HTTPS. To do so, you must install
  314. "pyOpenSSL" within Hue's context and configure your keys.
To install `pyOpenSSL`, from the root of your Hue installation path,
complete the following steps:
  317. 1. Run this command:
  318. $ ./build/env/bin/easy_install pyOpenSSL
  319. 2. Configure Hue to use your private key by adding the following
  320. options to the `/etc/hue/hue.ini` configuration file:
  321. ssl_certificate=/path/to/certificate
  322. ssl_private_key=/path/to/key
  323. 3. Ideally, you would have an appropriate key signed by a Certificate Authority.
  324. If you're just testing, you can create a self-signed key using the `openssl`
  325. command that may be installed on your system:
  326. ### Create a key
  327. $ openssl genrsa 1024 > host.key
  328. ### Create a self-signed certificate
  329. $ openssl req -new -x509 -nodes -sha1 -key host.key > host.cert
  330. [NOTE]
  331. .Self-signed Certificates and File Uploads
  332. ============================================================
Uploading files using the Hue File Browser over HTTPS requires
a proper SSL Certificate. Self-signed certificates don't
work.
  336. ============================================================
  337. Hue Configuration for Hadoop
  338. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  339. These configuration variables are under the `[hadoop]` section in
  340. the `/etc/hue/hue.ini` configuration file.
  341. HDFS Cluster
  342. ^^^^^^^^^^^^
Hue currently supports only one HDFS cluster. That cluster should be defined
under the `[[[default]]]` sub-section.
  345. fs_defaultfs::
This is the equivalent of `fs.defaultFS` (aka `fs.default.name`) in the
Hadoop configuration.
  348. webhdfs_url::
You can also set this to the HttpFS URL. The default value is the HTTP
port on the NameNode.
  351. hadoop_hdfs_home::
  352. This is the home of your Hadoop HDFS installation. It is the
  353. root of the Hadoop untarred directory, or usually
  354. `/usr/lib/hadoop`.
  355. hadoop_bin::
  356. Use this as the HDFS Hadoop launcher script, which is usually
  357. `/usr/bin/hadoop`.
  358. hadoop_conf_dir::
  359. This is the configuration directory of the HDFS, typically
  360. `/etc/hadoop/conf`.
  361. MapReduce (MR1) Cluster
  362. ^^^^^^^^^^^^^^^^^^^^^^^
Hue currently supports only one MapReduce cluster. That cluster should be defined
under the `[[[default]]]` sub-section. Note that JobBrowser only works with MR1.
  365. jobtracker_host::
The host running the JobTracker. In a secured environment, this needs to
be the FQDN of the JobTracker host, and match the "host" portion of the
full `mapred` Kerberos principal name.
  369. jobtracker_port::
  370. The port for the JobTracker IPC service.
  371. submit_to::
If your Oozie is configured to talk to a 0.20 MapReduce service, then
set this to `true`. Hue will submit jobs to this MapReduce cluster.
  374. hadoop_mapred_home::
  375. This is the home of your Hadoop MapReduce installation. It is the
  376. root of the Hadoop MR1 untarred directory, or the root of the
  377. Hadoop 0.20 untarred directory, or `/usr/lib/hadoop-0.20-mapreduce` for
  378. CDH packages. If `submit_to` is true for this cluster, this
  379. config value becomes the `$HADOOP_MAPRED_HOME` for
  380. BeeswaxServer and child shell processes.
  381. hadoop_bin::
  382. Use this as the MR1 Hadoop launcher script, which is usually
  383. `/usr/bin/hadoop`.
  384. hadoop_conf_dir::
  385. This is the configuration directory of the MR1 service,
  386. typically `/etc/hadoop/conf`.
  387. Yarn (MR2) Cluster
  388. ^^^^^^^^^^^^^^^^^^
Hue currently supports only one Yarn cluster. That cluster should be defined
under the `[[[default]]]` sub-section.
  391. resourcemanager_host::
  392. The host running the ResourceManager.
  393. resourcemanager_port::
  394. The port for the ResourceManager IPC service.
  395. submit_to::
If your Oozie is configured to talk to a Yarn cluster, then
set this to `true`. Hue will submit jobs to this Yarn cluster.
Note, however, that JobBrowser will not be able to show MR2 jobs.
  399. hadoop_mapred_home::
  400. This is the home of your Hadoop MapReduce installation. It is the
  401. root of the Hadoop 0.23 untarred directory, or
  402. `/usr/lib/hadoop-mapreduce` for CDH packages. If `submit_to` is
  403. true for this cluster, this config value becomes the
  404. `$HADOOP_MAPRED_HOME` for BeeswaxServer and child shell
  405. processes.
  406. hadoop_bin::
  407. Use this as the Yarn/MR2 Hadoop launcher script, which is usually
  408. `/usr/bin/hadoop`.
  409. hadoop_conf_dir::
  410. This is the configuration directory of the Yarn/MR2 service,
  411. typically `/etc/hadoop/conf`.
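Putting the three cluster definitions together, the relevant part of
`/etc/hue/hue.ini` might look like the sketch below. The `[[hdfs_clusters]]`,
`[[mapred_clusters]]` and `[[yarn_clusters]]` sub-section names, host names and
ports are illustrative assumptions; run `hue config_help` for the authoritative
names and defaults.

----
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      fs_defaultfs=hdfs://namenode-host:8020
      webhdfs_url=http://namenode-host:50070/webhdfs/v1/
  [[mapred_clusters]]
    [[[default]]]
      jobtracker_host=jobtracker-host
      jobtracker_port=8021
      submit_to=true
  [[yarn_clusters]]
    [[[default]]]
      resourcemanager_host=resourcemanager-host
      resourcemanager_port=8032
      submit_to=false
----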
  412. Beeswax Configuration
  413. ~~~~~~~~~~~~~~~~~~~~~
  414. In the `[beeswax]` section of the configuration file, you can
  415. _optionally_ specify the following:
  416. beeswax_server_host::
  417. The hostname or IP that the Beeswax Server should bind to. By
  418. default it binds to `localhost`, and therefore only serves local
  419. IPC clients.
  420. hive_home_dir::
  421. The base directory of your Hive installation.
  422. hive_conf_dir::
  423. The directory containing your `hive-site.xml` Hive
  424. configuration file.
  425. beeswax_server_heapsize::
  426. The heap size (-Xmx) of the Beeswax Server.
  427. JobDesigner and Oozie Configuration
  428. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  429. In the `[liboozie]` section of the configuration file, you should
  430. specify:
  431. oozie_url::
  432. The URL of the Oozie service. It is the same as the `OOZIE_URL`
  433. environment variable for Oozie.
  434. UserAdmin Configuration
  435. ~~~~~~~~~~~~~~~~~~~~~~~
  436. In the `[useradmin]` section of the configuration file, you can
  437. _optionally_ specify the following:
  438. default_user_group::
  439. The name of a default group that is suggested when creating a
  440. user manually. If the LdapBackend or PamBackend are configured
  441. for doing user authentication, new users will automatically be
  442. members of the default group.
  443. Configuration Validation
  444. ~~~~~~~~~~~~~~~~~~~~~~~~
Hue can detect certain invalid configurations. It will show a red alert icon on
the top navigation bar. image:images/val.png[]
  447. To view the configuration of a running Hue instance, navigate to
  448. `http://myserver:8888/dump_config`, also accessible through the About
  449. application.
  450. Starting Hue from the Tarball
  451. -----------------------------
  452. After your cluster is running with the plugins enabled, you can start Hue on
  453. your Hue Server by running:
  454. # build/env/bin/supervisor
  455. This will start several subprocesses, corresponding to the different Hue
  456. components. Your Hue installation is now running.
  457. Administering Hue
  458. -----------------
  459. Now that you've installed and started Hue, you can feel free to skip ahead
  460. to the <<usage,Using Hue>> section. Administrators may want to refer to this
  461. section for more details about managing and operating a Hue installation.
  462. Hue Processes
  463. ~~~~~~~~~~~~~
  464. Process User
  465. ^^^^^^^^^^^^
  466. Filebrowser requires Hue to be running as the 'hue' user.
  467. Process Hierarchy
  468. ^^^^^^^^^^^^^^^^^
  469. A script called `supervisor` manages all Hue processes. The supervisor is a
  470. watchdog process -- its only purpose is to spawn and monitor other processes.
  471. A standard Hue installation starts and monitors the following processes:
  472. * `runcpserver` - a web server based on CherryPy that provides the core web
  473. functionality of Hue
  474. * `beeswax server` - a daemon that manages concurrent Hive queries
  475. If you have installed other applications into your Hue instance, you may see
  476. other daemons running under the supervisor as well.
  477. You can see the supervised processes running in the output of `ps -f -u hue`:
  478. UID PID PPID C STIME TTY TIME CMD
  479. hue 8685 8679 0 Aug05 ? 00:01:39 /usr/share/hue/build/env/bin/python /usr/share/hue/build/env/bin/desktop runcpserver
  480. hue 8695 8679 0 Aug05 ? 00:00:06 /usr/java/jdk1.6.0_14/bin/java -Xmx1000m -Dhadoop.log.dir=/usr/lib/hadoop-0.20/logs -Dhadoop.log.file=hadoop.log ...
  481. Note that the supervisor automatically restarts these processes if they fail for
  482. any reason. If the processes fail repeatedly within a short time, the supervisor
  483. itself shuts down.
  484. [[logging]]
  485. Hue Logging
  486. ~~~~~~~~~~~
  487. The Hue logs are found in `/var/log/hue`, or in a `logs` directory under your
  488. Hue installation root. Inside the log directory you can find:
  489. * An `access.log` file, which contains a log for all requests against the Hue
  490. web server.
  491. * A `supervisor.log` file, which contains log information for the supervisor
  492. process.
  493. * A `supervisor.out` file, which contains the stdout and stderr for the
  494. supervisor process.
  495. * A `.log` file for each supervised process described above, which contains
  496. the logs for that process.
  497. * A `.out` file for each supervised process described above, which contains
  498. the stdout and stderr for that process.
  499. If users on your cluster have problems running Hue, you can often find error
  500. messages in these log files. If you are unable to start Hue from the init
  501. script, the `supervisor.log` log file can often contain clues.
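For example, to look at the supervisor log and scan all logs for recent errors
(adjust the path if your logs live under the installation root instead):

----
$ tail -n 100 /var/log/hue/supervisor.log
$ grep -i error /var/log/hue/*.log | tail -n 50
----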
  502. Viewing Recent Log Messages Online
  503. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  504. In addition to logging `INFO` level messages to the `logs` directory, the Hue
  505. web server keeps a small buffer of log messages at all levels in memory. You can
  506. view these logs by visiting `http://myserver:8888/logs`. The `DEBUG` level
  507. messages shown can sometimes be helpful in troubleshooting issues.
  508. The Hue Database
  509. ~~~~~~~~~~~~~~~~
  510. Hue requires a SQL database to store small amounts of data, including user
  511. account information as well as history of job submissions and Hive queries.
  512. By default, Hue is configured to use the embedded database SQLite for this
  513. purpose, and should require no configuration or management by the administrator.
  514. However, MySQL is the recommended database to use. This section contains
  515. instructions for configuring Hue to access MySQL and other databases.
  516. Inspecting the Hue Database
  517. ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  518. The default SQLite database used by Hue is located in: `/usr/share/hue/desktop/desktop.db`.
You can inspect this database from the command line using the `sqlite3`
program or by typing `/usr/share/hue/build/env/bin/hue dbshell`. For example:
  521. # sqlite3 /usr/share/hue/desktop/desktop.db
  522. SQLite version 3.6.22
  523. Enter ".help" for instructions
  524. Enter SQL statements terminated with a ";"
  525. sqlite> select username from auth_user;
  526. admin
  527. test
  528. sample
  529. sqlite>
  530. It is strongly recommended that you avoid making any modifications to the
  531. database directly using SQLite, though this trick can be useful for management
  532. or troubleshooting.
  533. Backing up the Hue Database
  534. ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  535. If you use the default SQLite database, then copy the `desktop.db` file to
  536. another node for backup. It is recommended that you back it up on a regular
  537. schedule, and also that you back it up before any upgrade to a new version of
  538. Hue.
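A minimal backup sketch, assuming the default database location and that Hue has
been stopped first (the destination path is a placeholder):

----
# cp /usr/share/hue/desktop/desktop.db /backup/desktop.db.$(date +%Y%m%d)
----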
  539. Configuring Hue to Access Another Database
  540. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  541. Although SQLite is the default database type, some advanced users may prefer
  542. to have Hue access an alternate database type. Note that if you elect to
  543. configure Hue to use an external database, upgrades may require more manual
  544. steps in the future.
  545. The following instructions are for MySQL, though you can also configure Hue to
  546. work with other common databases such as PostgreSQL and Oracle.
  547. [NOTE]
  548. .Tested Database Backends
  549. ============================================================
  550. Note that Hue has only been tested with SQLite and MySQL database backends.
  551. ============================================================
  552. Configuring Hue to Store Data in MySQL
  553. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  554. To configure Hue to store data in MySQL:
  555. 1. Create a new database in MySQL and grant privileges to a Hue user to manage
  556. this database.
  557. mysql> create database hue;
  558. Query OK, 1 row affected (0.01 sec)
  559. mysql> grant all on hue.* to 'hue'@'localhost' identified by 'secretpassword';
  560. Query OK, 0 rows affected (0.00 sec)
  561. 2. Shut down Hue if it is running.
  562. 3. To migrate your existing data to MySQL, use the following command to dump the
  563. existing database data to a text file. Note that using the ".json" extension
  564. is required.
  565. $ /usr/share/hue/build/env/bin/hue dumpdata > <some-temporary-file>.json
  566. 4. Open the `/etc/hue/hue.ini` file in a text editor. Directly below the
  567. `[[database]]` line, add the following options (and modify accordingly for
  568. your MySQL setup):
  569. host=localhost
  570. port=3306
  571. engine=mysql
  572. user=hue
  573. password=secretpassword
  574. name=hue
  575. 5. As the Hue user, configure Hue to load the existing data and create the
  576. necessary database tables:
  577. $ /usr/share/hue/build/env/bin/hue syncdb --noinput
  578. $ mysql -uhue -psecretpassword -e "DELETE FROM hue.django_content_type;"
  579. $ /usr/share/hue/build/env/bin/hue loaddata <temporary-file-containing-dumped-data>.json
  580. Your system is now configured and you can start the Hue server as normal.
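To double-check that the data was migrated, you can count the user accounts
directly in MySQL (same credentials as created above):

----
$ mysql -uhue -psecretpassword -e "SELECT COUNT(*) FROM hue.auth_user;"
----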
  581. [[usage]]
  582. Using Hue
  583. ---------
  584. After installation, you can use Hue by navigating to `http://myserver:8888/`.
  585. The following login screen appears:
  586. image:images/login.png[]
  587. The Help application guides users through the various installed applications.
  588. Supported Browsers
  589. ~~~~~~~~~~~~~~~~~~
  590. Hue is primarily tested on Firefox, Chrome and Safari on Windows, Mac,
  591. and Linux.
Most of Hue should work in IE8+; Opera is not tested.
  593. Feedback
  594. ~~~~~~~~
  595. Your feedback is welcome. The best way to send feedback is to join the
https://groups.google.com/a/cloudera.org/group/hue-user[mailing list] and
send e-mail to mailto:hue-user@cloudera.org[hue-user@cloudera.org].
  598. Reporting Bugs
  599. ~~~~~~~~~~~~~~
If you find that something doesn't work, it'll often be helpful to include logs
from your server (see the <<logging,Hue Logging>> section). Please include the
logs as a zip (or cut and paste the ones that look relevant) and send those with
your bug reports.
  604. image:images/logs.png[]