---
title: Automatic High Availability with Hue and Cloudera Manager
author: admin
type: post
date: 2015-01-21T17:42:45+00:00
url: /automatic-high-availability-with-hue-and-cloudera-manager/
sf_thumbnail_type:
---
December 8th 2015 update: this post is now deprecated as of Hue 3.9: https://gethue.com/automatic-high-availability-and-load-balancing-of-hue-in-cloudera-manager-with-monitoring/
By default, Hue installs on a single machine, which constrains it to that machine’s CPU and memory and limits the number of active users it can serve before becoming unstable. Furthermore, even a lightly loaded machine can crash, taking Hue out of service. This tutorial demonstrates hue-lb-example, an example load balancer that can automatically configure NGINX and HAProxy for a Hue deployment managed by Cloudera Manager.
Before we demonstrate its use, we need to install a couple of things.
Hue should be set up on at least two nodes in Cloudera Manager and configured to use a database such as MySQL, PostgreSQL, or Oracle that is itself deployed in a highly available manner. Furthermore, the database must be accessible from all the Hue instances. You can find detailed instructions on setting up or migrating the database from SQLite here.
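For reference, on an unmanaged Hue these settings live in the [desktop] [[database]] section of hue.ini; with Cloudera Manager you fill in the equivalent database fields on the Hue service’s configuration page instead. A minimal MySQL sketch, where the host name and credentials are placeholders:
{{< highlight bash >}}# Illustrative hue.ini database section -- host and credentials are
# placeholders; under Cloudera Manager, set the equivalent fields in the
# Hue service configuration rather than editing hue.ini by hand.
[desktop]
[[database]]
engine=mysql
host=mysql-ha.example.com
port=3306
user=hue
password=secret
name=hue
{{< /highlight >}}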
Once the database has been set up, the following instructions describe a fresh install. If you already have an existing Hue, skip ahead to installing the load balancer dependencies below.
Go to “Add a Service -> Hue”, and follow the directions to create the first Hue instance.
Next, install the system dependencies for the load balancer. On a Red Hat/Fedora-based system:
{{< highlight bash >}}% sudo yum install git nginx haproxy python python-pip
% pip install virtualenv
{{< /highlight >}}
On a Debian/Ubuntu-based system:
{{< highlight bash >}}% sudo apt-get install git nginx haproxy python python-pip
% pip install virtualenv
{{< /highlight >}}
First, change into the load balancer’s directory:
{{< highlight bash >}}% cd $HUE_HOME_DIR/tools/load-balancer
{{< /highlight >}}
Next, we install the load-balancer-specific dependencies in a Python virtual environment, to keep them from affecting other projects on the system.
{{< highlight bash >}}% virtualenv build
% source build/bin/activate
% pip install -r requirements.txt
{{< /highlight >}}
Finally, modify etc/hue-lb.toml to point at your Cloudera Manager instance (e.g. "cloudera-manager.example.com", without the port or "http://"), and provide the username and password of an account that has read access to the Hue state.
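Once edited, it is worth sanity-checking the credentials against the Cloudera Manager API before starting anything. For example (the API version, v6 here, and port 7180 should match your Cloudera Manager installation):
{{< highlight bash >}}# Should return a JSON list of clusters if the host and credentials are right.
% curl -s -u USERNAME:PASSWORD "http://cloudera-manager.example.com:7180/api/v6/clusters"
{{< /highlight >}}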
Now we are ready to start the load balancers. Run:
{{< highlight bash >}}% ./bin/supervisord
% ./bin/supervisorctl status
haproxy RUNNING pid 36920, uptime 0:00:01
monitor-hue-lb RUNNING pid 36919, uptime 0:00:01
nginx RUNNING pid 36921, uptime 0:00:01
{{< /highlight >}}
You should now be able to access Hue at either http://HUE-LB-HOSTNAME:8000 for NGINX, or http://HUE-LB-HOSTNAME:8001 for HAProxy. To check that both front ends are up and balancing:
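For a quick check from the shell (substitute the host running the balancers for HUE-LB-HOSTNAME):
{{< highlight bash >}}# Both front ends should respond, typically with a redirect to Hue's login page.
% curl -s -o /dev/null -w "%{http_code}\n" http://HUE-LB-HOSTNAME:8000  # NGINX
% curl -s -o /dev/null -w "%{http_code}\n" http://HUE-LB-HOSTNAME:8001  # HAProxy
{{< /highlight >}}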
Finally, if you want to shut down the load balancers, run:
{{< highlight bash >}}% ./bin/supervisorctl shutdown
{{< /highlight >}}
The Hue load balancer uses Supervisor, a service that monitors and controls other services. It can be configured to automatically restart services if they crash, or to trigger scripts when certain events occur. The load balancer starts and monitors NGINX and HAProxy through another process named monitor-hue-lb, which uses the Cloudera Manager API to check the status of Hue in Cloudera Manager and automatically adds and removes Hue instances from the load balancers. If it detects that a Hue instance has been added or removed, it updates the configuration of all the active load balancers and triggers them to reload without dropping any connections.
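As a rough sketch of the kind of query monitor-hue-lb relies on, the Cloudera Manager API can list the Hue service’s roles and the hosts they run on. The cluster name ("Cluster 1"), service name ("hue"), and API version (v6) below are assumptions; adjust them for your deployment:
{{< highlight bash >}}# List the roles of the Hue service; each role entry includes its host,
# which is what a load balancer needs to build its backend list.
% curl -s -u USERNAME:PASSWORD \
    "http://cloudera-manager.example.com:7180/api/v6/clusters/Cluster%201/services/hue/roles"
{{< /highlight >}}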
Both NGINX and HAProxy are configured to route each user to the same backend, otherwise known as sticky sessions. This is done partly for performance, since the Hue instance a user last hit is more likely to have that user’s data cached, but also because Impala does not yet support native high availability (IMPALA-1653). This means that an Impala session opened through one Hue instance cannot be accessed through another. With sticky sessions, users are always routed to the same Hue instance, so they can still reach their Impala sessions. That is, of course, assuming that Hue instance is still active; if not, the user is routed to one of the other active Hue instances.
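If the shipped configuration implements stickiness with a balancer-inserted cookie (the exact cookie name depends on that configuration), you can watch it from the shell:
{{< highlight bash >}}# First request: save the cookies handed back by the balancer and Hue.
% curl -s -c cookies.txt -o /dev/null http://HUE-LB-HOSTNAME:8001/
% cat cookies.txt
# Replaying the cookies keeps subsequent requests on the same backend.
% curl -s -b cookies.txt -o /dev/null http://HUE-LB-HOSTNAME:8001/
{{< /highlight >}}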
The next step (for C6) is to make all of the above happen with one click in Cloudera Manager, by shipping a parcel with all the dependencies (or downloading them automatically) and adding a new 'HA' role to the Hue service.
Have any questions? Feel free to contact us on hue-user or @gethue!