
[gethue] Add highlight.js and port all the highlights to it, fix the list styling

Enrico Berti, 6 years ago
commit a2aae7041d
100 changed files with 859 additions and 859 deletions
  1. 2 2
      docs/gethue/content/posts/2012-12-15-how-to-manage-permissions-in-hue.md
  2. 3 3
      docs/gethue/content/posts/2013-03-11-tutorial-analyzing-data-with-hue-and-hive.md
  3. 5 5
      docs/gethue/content/posts/2013-08-19-hadoop-tutorial-hive-udf-in-1-minute.md
  4. 2 2
      docs/gethue/content/posts/2013-08-23-the-web-ui-for-hbase-hbase-browser.md
  5. 9 9
      docs/gethue/content/posts/2013-09-11-hadoop-tutorials-ii-2-execute-hive-queries-and.md
  6. 2 2
      docs/gethue/content/posts/2013-09-27-fast-sql-with-the-impala-query-editor.md
  7. 2 2
      docs/gethue/content/posts/2013-10-04-move-data-in-out-your-hadoop-cluster-with-the-sqoop.md
  8. 13 13
      docs/gethue/content/posts/2013-10-10-password-management-in-hue.md
  9. 7 7
      docs/gethue/content/posts/2013-10-23-tutorial-better-file-formats-for-impala-and-quick-sql.md
  10. 4 4
      docs/gethue/content/posts/2013-11-08-hadoop-tutorials-series-ii-8-how-to-transfer-data.md
  11. 2 2
      docs/gethue/content/posts/2013-11-11-dbquery-app-mysql-postgresql-oracle-and-sqlite-query.md
  12. 5 5
      docs/gethue/content/posts/2013-12-16-use-the-impala-app-with-sentry-for-real-security.md
  13. 4 4
      docs/gethue/content/posts/2013-12-30-jobtracker-high-availability-ha-in-mr1.md
  14. 11 11
      docs/gethue/content/posts/2014-01-02-a-new-spark-web-ui-spark-app.md
  15. 2 2
      docs/gethue/content/posts/2014-01-13-using-hadoop-mr2-and-yarn-with-an-alternative-job.md
  16. 18 18
      docs/gethue/content/posts/2014-02-03-how-to-manage-the-hue-database-with-the-shell.md
  17. 24 24
      docs/gethue/content/posts/2014-02-03-making-hadoop-accessible-to-your-employees-with-ldap.md
  18. 2 2
      docs/gethue/content/posts/2014-02-03-solving-the-hue-2-x-hanging-problem.md
  19. 3 3
      docs/gethue/content/posts/2014-03-14-how-to-fix-the-multipleobjectsreturned-error-in-hue.md
  20. 6 6
      docs/gethue/content/posts/2014-03-23-tutorial-live-demo-of-search-on-hadoop.md
  21. 12 12
      docs/gethue/content/posts/2014-04-02-hadoop-tutorial-oozie-workflow-credentials-with-a-hive-action-with-kerberos.md
  22. 8 8
      docs/gethue/content/posts/2014-04-03-hadoop-tutorial-monitor-and-get-alerts-for-your-workflows-with-the-oozie-slas.md
  23. 5 5
      docs/gethue/content/posts/2014-04-17-hadoop-tutorial-how-to-create-a-real-hadoop-cluster-in-10-minutes.md
  24. 4 4
      docs/gethue/content/posts/2014-05-19-hadoop-tutorial-how-to-distribute-impala-query-load.md
  25. 4 4
      docs/gethue/content/posts/2014-05-20-visualize-snappy-compressed-avro-files.md
  26. 2 2
      docs/gethue/content/posts/2014-05-29-hadoop-tutorial-make-hadoop-more-accessible-by-integrating-multiple-ldap-servers.md
  27. 4 4
      docs/gethue/content/posts/2014-05-30-hadoop-tutorial-how-to-integrate-unix-users-and-groups.md
  28. 6 6
      docs/gethue/content/posts/2014-06-12-i-put-a-proxy-on-hue.md
  29. 17 17
      docs/gethue/content/posts/2014-06-16-get-started-with-spark-deploy-spark-server-and-compute-pi-from-your-web-browser.md
  30. 2 2
      docs/gethue/content/posts/2014-06-18-hadoop-tutorial-yarn-resource-manager-high-availability-ha-in-mr2.md
  31. 15 15
      docs/gethue/content/posts/2014-07-17-rbtools-example-how-do-easily-do-code-reviews-with-review-board.md
  32. 14 14
      docs/gethue/content/posts/2014-07-24-tutorial-how-to-run-the-hue-integration-tests.md
  33. 21 21
      docs/gethue/content/posts/2014-09-11-how-to-build-hue-on-ubuntu-14-04-trusty.md
  34. 11 11
      docs/gethue/content/posts/2014-09-17-hadoop-tutorial-hive-and-impala-queries-life-cycle.md
  35. 15 15
      docs/gethue/content/posts/2014-09-17-hadoop-tutorial-kerberos-security-and-sentry-authorization-for-solr-search-app.md
  36. 14 14
      docs/gethue/content/posts/2014-09-22-hadoop-tutorial-ssl-encryption-between-hue-and-hive.md
  37. 26 26
      docs/gethue/content/posts/2014-10-02-how-to-configure-hue-in-your-hadoop-cluster.md
  38. 14 14
      docs/gethue/content/posts/2014-10-03-running-an-oozie-workflow-and-getting-split-class-org-apache-oozie-action-hadoop-oozielauncherinputformatemptysplit-not-found.md
  39. 18 18
      docs/gethue/content/posts/2014-10-07-apache-sentry-made-easy-with-the-new-hue-security-app.md
  40. 11 11
      docs/gethue/content/posts/2014-10-09-bay-area-bike-share-analysis-with-the-hadoop-notebook-and-spark-sql.md
  41. 6 6
      docs/gethue/content/posts/2014-12-03-hadoop-yarn-11-local-dirs-are-bad-varlibhadoop-yarncacheyarnnm-local-dir-11-log-dirs-are-bad-varloghadoop-yarncontainers.md
  42. 12 12
      docs/gethue/content/posts/2014-12-09-how-to-use-hcatalog-with-pig-in-a-secured-cluster.md
  43. 6 6
      docs/gethue/content/posts/2014-12-11-how-to-run-hue-with-the-apache-server.md
  44. 8 8
      docs/gethue/content/posts/2014-12-12-how-to-use-hue-with-hive-and-impala-configured-with-ldap-authentication-and-ssl.md
  45. 6 6
      docs/gethue/content/posts/2014-12-16-how-to-deploy-hue-on-hdp.md
  46. 14 14
      docs/gethue/content/posts/2015-01-16-configure-hue-with-https-ssl.md
  47. 12 12
      docs/gethue/content/posts/2015-01-21-automatic-high-availability-with-hue-and-cloudera-manager.md
  48. 8 8
      docs/gethue/content/posts/2015-02-06-export-and-import-your-search-dashboards.md
  49. 24 24
      docs/gethue/content/posts/2015-02-08-hue-api-execute-some-builtin-commands.md
  50. 9 9
      docs/gethue/content/posts/2015-02-12-hadoop-hue-3-on-hdp-installation-tutorial.md
  51. 6 6
      docs/gethue/content/posts/2015-03-10-fixing-the-yarn-invalid-resource-request-requested-memory-0-or-requested-memory-max-configured.md
  52. 15 15
      docs/gethue/content/posts/2015-03-11-export-and-import-your-oozie-workflows.md
  53. 5 5
      docs/gethue/content/posts/2015-03-12-solr-search-ui-only.md
  54. 15 15
      docs/gethue/content/posts/2015-03-23-start-developing-hue-on-a-mac-in-a-few-minutes.md
  55. 22 22
      docs/gethue/content/posts/2015-03-25-hbase-browsing-with-doas-impersonation-and-kerberos.md
  56. 6 6
      docs/gethue/content/posts/2015-03-26-add-a-top-banner-to-hue.md
  57. 6 6
      docs/gethue/content/posts/2015-03-26-using-nginx-to-speed-up-hue-3-8-0.md
  58. 24 24
      docs/gethue/content/posts/2015-04-08-developer-guide-on-upgrading-apps-for-django-1-6.md
  59. 2 2
      docs/gethue/content/posts/2015-04-10-hive-1-1-and-impala-2-2-support.md
  60. 8 8
      docs/gethue/content/posts/2015-04-23-new-notebook-application-for-spark-sql.md
  61. 4 4
      docs/gethue/content/posts/2015-05-21-build-a-real-time-analytic-dashboard-with-solr-search-and-spark-streaming.md
  62. 6 6
      docs/gethue/content/posts/2015-06-15-install-hue-3-on-pivotal-hd-3-0.md
  63. 2 2
      docs/gethue/content/posts/2015-07-07-bay-area-bikeshare-data-analysis-with-search-and-spark-notebook.md
  64. 2 2
      docs/gethue/content/posts/2015-07-08-analizziamo-i-dati-bikeshare-della-bay-area-con-solr-search-e-spark-notebook.md
  65. 2 2
      docs/gethue/content/posts/2015-07-08-analyse-des-donnees-des-velib-de-san-francisco-avec-solr-search-et-un-spark-notebook.md
  66. 8 8
      docs/gethue/content/posts/2015-07-27-enhance-search-results.md
  67. 4 4
      docs/gethue/content/posts/2015-07-30-filter-sort-browse-hive-partitions-with-hues-metastore.md
  68. 2 2
      docs/gethue/content/posts/2015-08-07-configuring-hue-multiple-authentication-backends-and-ldap.md
  69. 7 7
      docs/gethue/content/posts/2015-08-20-dynamic-search-dashboard-improvements-3.md
  70. 2 2
      docs/gethue/content/posts/2015-08-28-mini-task-configure-hue-with-a-proxy.md
  71. 2 2
      docs/gethue/content/posts/2015-09-02-mini-how-to-disabling-some-apps-from-showing-up.md
  72. 4 4
      docs/gethue/content/posts/2015-09-09-storing-passwords-in-script-rather-than-hue-ini-files.md
  73. 6 6
      docs/gethue/content/posts/2015-09-10-ldap-or-pam-pass-through-authentication-with-hive-or-impala.md
  74. 31 31
      docs/gethue/content/posts/2015-09-24-how-to-use-the-livy-spark-rest-job-server-for-interactive-spark-2-2.md
  75. 11 11
      docs/gethue/content/posts/2015-09-25-bay-area-bike-share-data-analysis-with-spark-notebook-part-2.md
  76. 26 26
      docs/gethue/content/posts/2015-10-13-how-to-use-the-livy-spark-rest-job-server-api-for-sharing-spark-rdds-and-contexts.md
  77. 27 27
      docs/gethue/content/posts/2015-10-21-how-to-use-the-livy-spark-rest-job-server-api-for-submitting-batch-jar-python-and-streaming-spark-jobs.md
  78. 2 2
      docs/gethue/content/posts/2015-10-22-use-the-shell-action-in-oozie.md
  79. 4 4
      docs/gethue/content/posts/2015-12-07-auditing-user-administration-operations-with-hue-and-cloudera-navigator-2.md
  80. 12 12
      docs/gethue/content/posts/2015-12-18-getting-started-with-hue-in-2-minutes-with-docker.md
  81. 20 20
      docs/gethue/content/posts/2016-03-03-custom-sql-query-editors.md
  82. 4 4
      docs/gethue/content/posts/2016-04-06-suggest-for-solr-search-dashboards.md
  83. 2 2
      docs/gethue/content/posts/2016-05-04-the-hue-team-development-process.md
  84. 2 2
      docs/gethue/content/posts/2016-06-13-introducing-the-new-login-modal-and-idle-session-timeout.md
  85. 4 4
      docs/gethue/content/posts/2016-07-19-change-your-maps-look-and-feel.md
  86. 2 2
      docs/gethue/content/posts/2016-08-22-easy-indexing-of-data-into-solr.md
  87. 6 6
      docs/gethue/content/posts/2016-08-25-introducing-s3-support-in-hue.md
  88. 16 16
      docs/gethue/content/posts/2016-09-22-hue-security-improvements.md
  89. 4 4
      docs/gethue/content/posts/2016-12-19-security-improvements-http-only-flag-sasl-qop-and-more.md
  90. 2 2
      docs/gethue/content/posts/2016-12-22-extract-archives-as-oozie-job.md
  91. 2 2
      docs/gethue/content/posts/2016-12-22-sql-improvements-with-row-counts-sample-popup-and-more.md
  92. 8 8
      docs/gethue/content/posts/2017-02-06-hue-3-12-the-improved-editor-for-sql-developers-and-analysts-is-out.md
  93. 4 4
      docs/gethue/content/posts/2017-04-03-hue-with-a-custom-logo.md
  94. 2 2
      docs/gethue/content/posts/2017-07-20-the-hue-4-user-interface-in-detail.md
  95. 8 8
      docs/gethue/content/posts/2017-11-20-browsing-adls-data-querying-it-with-sql-and-exporting-the-results-back-in-hue-4-2.md
  96. 4 4
      docs/gethue/content/posts/2017-12-08-browsing-impala-query-execution-within-the-sql-editor.md
  97. 22 22
      docs/gethue/content/posts/2017-12-13-using-hue-to-interact-with-apache-kylin.md
  98. 2 2
      docs/gethue/content/posts/2018-01-16-intuitively-discovering-and-exploring-a-wine-dataset-with-the-dynamic-dashboards.md
  99. 8 8
      docs/gethue/content/posts/2018-04-05-sql-editor-variables.md
  100. 10 10
      docs/gethue/content/posts/2018-08-16-live-analytics-of-live-apache-log-files.md

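The diffs below all apply the same mechanical substitution: every Hugo `{{< highlight lang >}} … {{< /highlight >}}` shortcode becomes a plain `<pre><code class="lang"> … </code></pre>` block, which highlight.js can then style client-side, typically reading the language from the class name. The inclusion of the highlight.js library itself is not part of the hunks shown here; these hunks only cover the shortcode-to-markup port of the post sources. The commit does not show how the edit was performed, so the following is only a sketch of a one-off script that would produce an equivalent transformation across the posts:

<pre><code class="bash"># Hypothetical sketch, not the actual tooling used in this commit.
# Opening tag: {{< highlight bash >}}  ->  <pre><code class="bash">
# Closing tag: {{< /highlight >}}      ->  </code></pre>
find docs/gethue/content/posts -name '*.md' -print0 |
  xargs -0 sed -i -E \
    -e 's|[{][{]< highlight ([a-z0-9]+) >[}][}]|<pre><code class="\1">|g' \
    -e 's|[{][{]< /highlight >[}][}]|</code></pre>|g'
</code></pre>

One side effect visible in the hunks is that the shortcodes often share a line with the code they wrap, so the replacement keeps that layout (for example `<pre><code class="bash">[desktop]` rather than putting the opening tag on its own line).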
+ 2 - 2
docs/gethue/content/posts/2012-12-15-how-to-manage-permissions-in-hue.md

@@ -96,11 +96,11 @@ By explicitly setting the app level permissions, the apps that these users will
 
 You can also blacklist the apps at the code level, e.g. in the hue.ini:
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
   
 app_blacklist=search,security,oozie,jobbrowser,pig,beeswax,search,zookeeper,impala,rdbms,spark,metastore,hbase,sqoop,jobsub
   
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 3 - 3
docs/gethue/content/posts/2013-03-11-tutorial-analyzing-data-with-hue-and-hive.md

@@ -107,7 +107,7 @@ convert.py notes.txt READ_FIRST-Phoenix_Academic_Dataset_Agreement-3-11-13.pdf y
 
     **Top 25: business with most of the reviews**
 
-    {{< highlight sql >}}
+    <pre><code class="sql">
     SELECT name, review_count
     FROM business
     ORDER BY review_count DESC
@@ -116,7 +116,7 @@ convert.py notes.txt READ_FIRST-Phoenix_Academic_Dataset_Agreement-3-11-13.pdf y
 
     **Top 25: coolest restaurants**
 
-    {{< highlight sql >}}SELECT r.review_id, name, SUM(cool) AS coolness
+    <pre><code class="sql">SELECT r.review_id, name, SUM(cool) AS coolness
 
     FROM review r JOIN business b
 
@@ -130,7 +130,7 @@ convert.py notes.txt READ_FIRST-Phoenix_Academic_Dataset_Agreement-3-11-13.pdf y
 
     LIMIT 25
 
-    {{< /highlight >}}
+    </code></pre>
 
     [<img title="hue4" src="http://www.cloudera.com/wp-content/uploads/2013/04/hue4.png"/>][11]
 

+ 5 - 5
docs/gethue/content/posts/2013-08-19-hadoop-tutorial-hive-udf-in-1-minute.md

@@ -83,23 +83,23 @@ Then open up Beeswax in the [Hadoop UI Hue][5], click on the 'Settings' tab.
 
 In File Resources, upload _<span class="code">myudfs.jar</span>_, pick the jar file and point to it, e.g.:
 
-{{< highlight bash >}}/user/hue/myudf.jar{{< /highlight >}}
+<pre><code class="bash">/user/hue/myudf.jar</code></pre>
 
 Make the UDF available by registering a UDF (User Defined Function ):
 
 Name
 
-{{< highlight bash >}}myUpper{{< /highlight >}}
+<pre><code class="bash">myUpper</code></pre>
 
 Class
 
-{{< highlight bash >}}org.hue.udf.MyUpper{{< /highlight >}}
+<pre><code class="bash">org.hue.udf.MyUpper</code></pre>
 
 &nbsp;
 
 **That’s it**! Just test it on one of the Hue example tables:
 
-{{< highlight sql >}}select myUpper(description) FROM sample_07 limit 10{{< /highlight >}}
+<pre><code class="sql">select myUpper(description) FROM sample_07 limit 10</code></pre>
 
 # Summary
 
@@ -115,7 +115,7 @@ Have any questions? Feel free to contact us on [hue-user][7] or [@gethue][8]!
 
 If you did not register the UDF as explained above, you will get this error:
 
-{{< highlight bash >}}error while compiling statement: failed: parseexception line 1:0 cannot recognize input near 'myupper' " "{{< /highlight >}}
+<pre><code class="bash">error while compiling statement: failed: parseexception line 1:0 cannot recognize input near 'myupper' " "</code></pre>
 
  [1]: https://github.com/romainr/hadoop-tutorials-examples
  [2]: https://github.com/romainr/hadoop-tutorials-examples/raw/master/hive-udf/myudfs.jar

+ 2 - 2
docs/gethue/content/posts/2013-08-23-the-web-ui-for-hbase-hbase-browser.md

@@ -60,7 +60,7 @@ Prerequisites before starting Hue:
 
 3. Configure your list of HBase Clusters in <a href="https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini#L467" target="_blank" rel="noopener noreferrer">hue.ini</a> to point to your Thrift IP/Port
 
-{{< highlight bash >}}[hbase]
+<pre><code class="bash">[hbase]
 
 \# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
 
@@ -70,7 +70,7 @@ hbase_clusters=(Cluster|my-host1:9090),(Cluster2|localhost:9090)
 
 \## truncate_limit = 500
 
-{{< /highlight >}}
+</code></pre>
 
 In this video, we’re walking through two main features of this app.  Let’s talk about HBase Browser!
 

+ 9 - 9
docs/gethue/content/posts/2013-09-11-hadoop-tutorials-ii-2-execute-hive-queries-and.md

@@ -73,7 +73,7 @@ Goal: we want to get the 10 coolest restaurants for a day.
 
 Let’s open Beeswax Hive Editor and explore the range of dates that we have:
 
-{{< highlight sql >}}SELECT DISTINCT \`date\` FROM review ORDER BY \`date\` DESC;{{< /highlight >}}
+<pre><code class="sql">SELECT DISTINCT \`date\` FROM review ORDER BY \`date\` DESC;</code></pre>
 
 Notice that you need to use backticks in order to use date as a column name in Hive.
 
@@ -83,7 +83,7 @@ The data is a bit old, so let’s pick 2012-12-01 as our target date. We can joi
 
 &nbsp;
 
-{{< highlight sql >}}SELECT r.business_id, name, AVG(cool) AS coolness
+<pre><code class="sql">SELECT r.business_id, name, AVG(cool) AS coolness
 
 FROM review r JOIN business b
 
@@ -99,11 +99,11 @@ ORDER BY coolness DESC
 
 LIMIT 10
 
-{{< /highlight >}}
+</code></pre>
 
 We have a good Hive query. Let’s [create][2] a result table ‘top_cool’ that will contain the top 10:
 
-{{< highlight sql >}}CREATE TABLE top_cool AS
+<pre><code class="sql">CREATE TABLE top_cool AS
 
 SELECT r.business_id, name, SUM(cool) AS coolness, '$date' as \`date\`
 
@@ -121,11 +121,11 @@ ORDER BY coolness DESC
 
 LIMIT 10
 
-{{< /highlight >}}
+</code></pre>
 
 And later replace ‘CREATE TABLE top_cool AS’ by ‘INSERT INTO TABLE top_cool’ in the Hive script as we want to create the table only the first time:
 
-{{< highlight sql >}}INSERT INTO TABLE top_cool
+<pre><code class="sql">INSERT INTO TABLE top_cool
 
 SELECT r.business_id, name, SUM(cool) AS coolness, '${date}' as \`date\`
 
@@ -143,7 +143,7 @@ ORDER BY coolness DESC
 
 LIMIT 10
 
-{{< /highlight >}}
+</code></pre>
 
 # Hive action in Apache Oozie
 
@@ -169,11 +169,11 @@ Note: when using a real cluster, as the workflow is going to run somewhere in th
 
 Lets specify that we are using a ‘date’ parameter in the Hive script. In our case we add the parameter in the Hive action:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 date=${date}
 
-{{< /highlight >}}
+</code></pre>
 
 The we save the workflow, fill up the date when prompted and look at the dynamic progress of the workflow! The output of the query will appear when you click on the ‘View the logs’ button on the action graph. In practice, INSERT, LOAD DATA would be used instead of SELECT in order to persist the calculation.
 

+ 2 - 2
docs/gethue/content/posts/2013-09-27-fast-sql-with-the-impala-query-editor.md

@@ -69,7 +69,7 @@ categories:
 
 <span>Then we are back to our Yelp data. Let’s take the query from </span>[<span>episode one</span>][1] <span>and execute it in both apps:</span>
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 SELECT r.business_id, name, SUM(cool) AS coolness
 
@@ -87,7 +87,7 @@ ORDER BY coolness DESC
 
 LIMIT 10
 
-{{< /highlight >}}
+</code></pre>
 
 <span>Again, you can see the benefits of Impala’s </span>[<span>architecture and optimization</span>][3]<span>.</span>
 

+ 2 - 2
docs/gethue/content/posts/2013-10-04-move-data-in-out-your-hadoop-cluster-with-the-sqoop.md

@@ -61,7 +61,7 @@ The following is the canonical import job example sourced from <http://sqoop.apa
 
 First, make sure that Sqoop2 is up and running and the Hue points to it in its hue.ini:
 
-{{< highlight bash >}}###########################################################################
+<pre><code class="bash">###########################################################################
 
 \# Settings to configure Sqoop
 
@@ -73,7 +73,7 @@ First, make sure that Sqoop2 is up and running and the Hue points to it in its h
 
 server_url=http://sqoop2.com:12000/sqoop
 
-{{< /highlight >}}
+</code></pre>
 
 ### Troubleshooting
 

+ 13 - 13
docs/gethue/content/posts/2013-10-10-password-management-in-hue.md

@@ -61,27 +61,27 @@ When a Hue administrator loses their password, a more programmatic approach is r
 
 If using CM, export this variable in order to point to the correct database:
 
-{{< highlight bash >}}HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
+<pre><code class="bash">HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
 
 echo $HUE_CONF_DIR
 
-export HUE_CONF_DIR{{< /highlight >}}
+export HUE_CONF_DIR</code></pre>
 
 Where <id> is the most recent ID in that process directory for hue-HUE_SERVER.
 
 A quick way to get the correct directory is to use this script:
 
-{{< highlight bash >}}export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"{{< /highlight >}}
+<pre><code class="bash">export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"</code></pre>
 
 Then:
 
-{{< highlight bash >}}cd /usr/lib/hue (or /opt/cloudera/parcels/CDH-XXXXX/share/hue if using parcels and CM)
+<pre><code class="bash">cd /usr/lib/hue (or /opt/cloudera/parcels/CDH-XXXXX/share/hue if using parcels and CM)
 
-build/env/bin/hue shell{{< /highlight >}}
+build/env/bin/hue shell</code></pre>
 
 The following is a small script, that can be executed within the Hue shell, to change the password for a user named “example”:
 
-{{< highlight python >}}from django.contrib.auth.models import User
+<pre><code class="python">from django.contrib.auth.models import User
 
 user = User.objects.get(username='example')
 
@@ -89,19 +89,19 @@ user.set_password('some password')
 
 user.save()
 
-{{< /highlight >}}
+</code></pre>
 
 The script can also be invoked in the shell by using input redirection (assuming the script is in a file named script.py):
 
-{{< highlight bash >}}build/env/bin/hue shell < script.py{{< /highlight >}}
+<pre><code class="bash">build/env/bin/hue shell < script.py</code></pre>
 
 # How to make a certain user a Hue admin
 
-{{< highlight bash >}}build/env/bin/hue shell{{< /highlight >}}
+<pre><code class="bash">build/env/bin/hue shell</code></pre>
 
 Then set these properties to true:
 
-{{< highlight python >}}from django.contrib.auth.models import User
+<pre><code class="python">from django.contrib.auth.models import User
 
 a = User.objects.get(username='hdfs')
 
@@ -113,7 +113,7 @@ a.set_password('my_secret')
 
 a.save()
 
-{{< /highlight >}}
+</code></pre>
 
 # How to change or reset a forgotten password?
 
@@ -121,11 +121,11 @@ Go on the Hue machine, then in the Hue home directory and either type:
 
 To change the password of the currently logged in Unix user:
 
-{{< highlight bash >}}build/env/bin/hue changepassword{{< /highlight >}}
+<pre><code class="bash">build/env/bin/hue changepassword</code></pre>
 
 If you don’t remember the admin username, create a new Hue admin (you will then also be able to login and could change the password of another user in Hue):
 
-{{< highlight bash >}}build/env/bin/hue createsuperuser{{< /highlight >}}
+<pre><code class="bash">build/env/bin/hue createsuperuser</code></pre>
 
 &nbsp;
 

+ 7 - 7
docs/gethue/content/posts/2013-10-23-tutorial-better-file-formats-for-impala-and-quick-sql.md

@@ -58,7 +58,7 @@ categories:
 
 &nbsp;
 
-{{< highlight sql >}}REGISTER piggybank.jar
+<pre><code class="sql">REGISTER piggybank.jar
 
 data = load '/user/hive/warehouse/review/yelp_academic_dataset_review_clean.json'
 
@@ -102,7 +102,7 @@ USING org.apache.pig.piggybank.storage.avro.AvroStorage(
 
 ]}
 
-}');{{< /highlight >}}
+}');</code></pre>
 
 &nbsp;
 
@@ -110,7 +110,7 @@ USING org.apache.pig.piggybank.storage.avro.AvroStorage(
 
 &nbsp;
 
-{{< highlight sql >}}CREATE TABLE review_avro
+<pre><code class="sql">CREATE TABLE review_avro
 
 ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
 
@@ -150,7 +150,7 @@ tblproperties ('avro.schema.literal'='{
 
 {"name":"user_id", "type":"string"}]}'
 
-);{{< /highlight >}}
+);</code></pre>
 
 &nbsp;
 
@@ -158,7 +158,7 @@ tblproperties ('avro.schema.literal'='{
 
 &nbsp;
 
-{{< highlight sql >}}REFRESH avro_table{{< /highlight >}}
+<pre><code class="sql">REFRESH avro_table</code></pre>
 
 &nbsp;
 
@@ -168,7 +168,7 @@ tblproperties ('avro.schema.literal'='{
 
 &nbsp;
 
-{{< highlight sql >}}CREATE TABLE review_parquet LIKE review STORED AS PARQUETFILE;{{< /highlight >}}
+<pre><code class="sql">CREATE TABLE review_parquet LIKE review STORED AS PARQUETFILE;</code></pre>
 
 &nbsp;
 
@@ -176,7 +176,7 @@ tblproperties ('avro.schema.literal'='{
 
 &nbsp;
 
-{{< highlight sql >}}INSERT OVERWRITE review_parquet SELECT * FROM review;{{< /highlight >}}
+<pre><code class="sql">INSERT OVERWRITE review_parquet SELECT * FROM review;</code></pre>
 
 &nbsp;
 

+ 4 - 4
docs/gethue/content/posts/2013-11-08-hadoop-tutorials-series-ii-8-how-to-transfer-data.md

@@ -63,7 +63,7 @@ We are going to save our data analysis into this format with a [Pig script][3] w
 
 We previously created a MySql table ‘stats’ with this [SQL script][5]. This table is going to store the exported data. Here are the properties of our job. They are explained in more depth in the previous Sqoop2 App blog post.
 
-{{< highlight bash >}}Table name: yelp_cool_test
+<pre><code class="bash">Table name: yelp_cool_test
 
 Input directory: /user/hdfs/test_sqoop
 
@@ -73,13 +73,13 @@ JDBC Driver Class : com.mysql.jdbc.Driver
 
 JDBC Connection String: jdbc:mysql://hue.com/test
 
-{{< /highlight >}}
+</code></pre>
 
 Then click ‘Save & Execute’, and here we go, the data is now available in MySql!
 
 &nbsp;
 
-{{< highlight bash >}}mysql> select * from yelp_cool_test limit 2;
+<pre><code class="bash">mysql> select * from yelp_cool_test limit 2;
 
 +--+--+--+--+
 
@@ -95,7 +95,7 @@ Then click ‘Save & Execute’, and here we go, the data is now available in My
 
 2 rows in set (0.00 sec)
 
-{{< /highlight >}}
+</code></pre>
 
 Data stored in Hive or HBase can not be sqooped natively yet by Sqoop2. A current (less efficient) workaround would be to dump it to a HDFS directory with [Hive or Pig][6] and then do a similar Sqoop export.
 

+ 2 - 2
docs/gethue/content/posts/2013-11-11-dbquery-app-mysql-postgresql-oracle-and-sqlite-query.md

@@ -50,7 +50,7 @@ Inspired from the Beeswax application, it allows you to query a relational datab
 
 Example of configuration in hue.ini:
 
-{{< highlight bash >}}[librdbms]
+<pre><code class="bash">[librdbms]
 
 \# The RDBMS app can have any number of databases configured in the databases
 
@@ -126,7 +126,7 @@ user=root
 
 password=root
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**: you can look at the [Hue database guide][2] for installing the DB connectors
 

+ 5 - 5
docs/gethue/content/posts/2013-12-16-use-the-impala-app-with-sentry-for-real-security.md

@@ -48,11 +48,11 @@ categories:
 
 First enable impersonation in the [hue.ini][1] that way permissions will be checked against the current user and not ‘hue’ which acts as a proxy:
 
-{{< highlight bash >}}[impala]
+<pre><code class="bash">[impala]
 
 impersonation_enabled=True
 
-{{< /highlight >}}
+</code></pre>
 
 Then you might hit this error:
 
@@ -60,7 +60,7 @@ Then you might hit this error:
 
 This is because Hue is not authorized to be a proxy. To fix it, startup Impala with this flag:
 
-{{< highlight bash >}}-authorized_proxy_user_config=hue=*{{< /highlight >}}
+<pre><code class="bash">-authorized_proxy_user_config=hue=*</code></pre>
 
 Note: if you use Cloudera Manager, add it to the ‘Impalad Command Line Argument Safety Valve’
 
@@ -72,7 +72,7 @@ And that’s it! You can now benefit from real security similar to [Hive][2]! As
 
 Note: if you are on CDH4/Hue 2.x, make sure that Hue is configured to talk to Impala with the HiveServer2 API:
 
-{{< highlight bash >}}[impala]
+<pre><code class="bash">[impala]
 
 \# Host of the Impala Server (one of the Impalad)
 
@@ -102,7 +102,7 @@ server_port=21050
 
 impersonation_enabled=True
 
-{{< /highlight >}}
+</code></pre>
 
 Note: to give a concrete idea, here is video demo that shows the end user interaction in the UI (it is using the <a href="https://gethue.com/hadoop-tutorial-hive-query-editor-with-hiveserver2-and/" target="_blank" rel="noopener noreferrer">Hive App</a> but you will get the exact same result with the Impala app)
 

+ 4 - 4
docs/gethue/content/posts/2013-12-30-jobtracker-high-availability-ha-in-mr1.md

@@ -56,7 +56,7 @@ Note: in MR1 Hue is using a [plugin][3] to communicate with the Job Tracker. Thi
 
 We configure two Job Trackers in the [hue.ini][5]:
 
-{{< highlight bash >}}[hadoop]
+<pre><code class="bash">[hadoop]
 
 ...
 
@@ -82,7 +82,7 @@ jobtracker_host=host-2
 
 submit_to=True
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -92,7 +92,7 @@ And that’s it! Hue will communicate with the available Job Tracker automatical
 
 Notice that in the case of Oozie jobs, Oozie will try to re-submit the job but will need a logical name ([HUE-1631][6]). To enable this in Hue, specify it in each MapReduce cluster, e.g.:
 
-{{< highlight bash >}}[hadoop]
+<pre><code class="bash">[hadoop]
 
 [[mapred_clusters]]
 
@@ -102,7 +102,7 @@ Notice that in the case of Oozie jobs, Oozie will try to re-submit the job but w
 
 \## logical_name=MY_NAME
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 11 - 11
docs/gethue/content/posts/2014-01-02-a-new-spark-web-ui-spark-app.md

@@ -62,13 +62,13 @@ Currently only Scala jobs are supported and programs need to implement this trai
 
 If you are using Cloudera Manager, enable the Spark App by removing it from the blacklist by adding this in the Hue Safety Valve:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
 app_blacklist=
 
-{{< /highlight >}}
+</code></pre>
 
 ## Requirements
 
@@ -78,41 +78,41 @@ We assume you have Spark 0.9.0, Scala 2.10. installed on your system. Make sur
 
 Currently on github on this branch:
 
-{{< highlight bash >}}git clone https://github.com/ooyala/spark-jobserver.git
+<pre><code class="bash">git clone https://github.com/ooyala/spark-jobserver.git
 
 cd spark-jobserver
 
-{{< /highlight >}}
+</code></pre>
 
 Then type:
 
-{{< highlight bash >}}sbt
+<pre><code class="bash">sbt
 
-re-start{{< /highlight >}}
+re-start</code></pre>
 
 ## Get Hue
 
 <span style="line-height: 1.5em;">If Hue and Spark Job Server are not on the same machine update the </span><a style="line-height: 1.5em;" href="https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini">hue.ini</a> <span style="line-height: 1.5em;">property in desktop/conf/pseudo-distributed.ini:</span>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [spark]
 
 \# URL of the Spark Job Server.
 
-server_url=http://localhost:8090/{{< /highlight >}}
+server_url=http://localhost:8090/</code></pre>
 
 To point to your Spark Cluster
 
-{{< highlight bash >}}vim ./job-server/src/main/resources/application.conf{{< /highlight >}}
+<pre><code class="bash">vim ./job-server/src/main/resources/application.conf</code></pre>
 
 Replace:
 
-{{< highlight bash >}}master = "local[4]"{{< /highlight >}}
+<pre><code class="bash">master = "local[4]"</code></pre>
 
 With the Spark Master URL (you can get it from the Spark Master UI: http://SPARK-HOST:18080/):
 
-{{< highlight bash >}}master = "spark://localhost:7077"{{< /highlight >}}
+<pre><code class="bash">master = "spark://localhost:7077"</code></pre>
 
 ## Get a Spark example to run
 

+ 2 - 2
docs/gethue/content/posts/2014-01-13-using-hadoop-mr2-and-yarn-with-an-alternative-job.md

@@ -54,7 +54,7 @@ First, it is a bit simpler to configure Hue with MR2 than in MR1 as Hue does not
 
 Here is how to configure the clusters in [hue.ini][3]. Mainly, if you are using a pseudo distributed cluster it will work by default. If not, you will just need to update all the localhost to the hostnames of the Resource Manager and History Server:
 
-{{< highlight bash >}}[hadoop]
+<pre><code class="bash">[hadoop]
 
 ...
 
@@ -102,7 +102,7 @@ history_server_api_url=http://localhost:19888
 
 submit_to=False
 
-{{< /highlight >}}
+</code></pre>
 
 <span>And that’s it! You can now look at jobs in Job Browser, get logs and submit jobs to Yarn!</span>
 

+ 18 - 18
docs/gethue/content/posts/2014-02-03-how-to-manage-the-hue-database-with-the-shell.md

@@ -49,31 +49,31 @@ _Last update on March 9 2016_
 
 First, **<span style="color: #ff0000;">backup</span>** the database. By default this is this SqlLite file:
 
-{{< highlight bash >}}cp /var/lib/hue/desktop.db ~/{{< /highlight >}}
+<pre><code class="bash">cp /var/lib/hue/desktop.db ~/</code></pre>
 
 Then if using CM, export this variable in order to point to the correct database:
 
-{{< highlight bash >}}HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
+<pre><code class="bash">HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
 
 echo $HUE_CONF_DIR
 
-export HUE_CONF_DIR{{< /highlight >}}
+export HUE_CONF_DIR</code></pre>
 
 Where <id> is the most recent ID in that process directory for hue-HUE_SERVER.
 
 A quick way to get the correct directory is to use this script:
 
-{{< highlight bash >}}export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"{{< /highlight >}}
+<pre><code class="bash">export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"</code></pre>
 
 Then go in the Database. From the Hue root (/use/lib/hue by default):
 
-{{< highlight bash >}}root@hue:hue# build/env/bin/hue dbshell{{< /highlight >}}
+<pre><code class="bash">root@hue:hue# build/env/bin/hue dbshell</code></pre>
 
 Note:
 
 You might hit some permissions error about the logs:
 
-{{< highlight bash >}}build/env/bin/hue dbshell
+<pre><code class="bash">build/env/bin/hue dbshell
 
 Traceback (most recent call last):
 
@@ -119,15 +119,15 @@ stream = open(self.baseFilename, self.mode)
 
 IOError: [Errno 13] Permission denied: '/tmp/logs/dbshell.log'
 
-{{< /highlight >}}
+</code></pre>
 
 A "workaround" is to run the command as root:
 
-{{< highlight bash >}}sudo HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/9679-hue-HUE_SERVER /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hue/build/env/bin/hue dbshell{{< /highlight >}}
+<pre><code class="bash">sudo HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/9679-hue-HUE_SERVER /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hue/build/env/bin/hue dbshell</code></pre>
 
 And you can start typing SQL queries:
 
-{{< highlight bash >}}sqlite> .tables
+<pre><code class="bash">sqlite> .tables
 
 auth_group oozie_dataset
 
@@ -203,13 +203,13 @@ oozie_coordinator useradmin_ldapgroup
 
 oozie_datainput useradmin_userprofile
 
-oozie_dataoutput{{< /highlight >}}
+oozie_dataoutput</code></pre>
 
 Or migrating the database manually:
 
-{{< highlight bash >}}build/env/bin/hue syncdb
+<pre><code class="bash">build/env/bin/hue syncdb
 
-build/env/bin/hue migrate{{< /highlight >}}
+build/env/bin/hue migrate</code></pre>
 
 If you want to switch to another database (we recommend MySql), this [guide][1] details the migration process.
 
@@ -225,7 +225,7 @@ Transfer Oozie workflows belonging to the user Bob to Joe.
 
 **until** Hue 3.8
 
-{{< highlight bash >}}# First move the objects
+<pre><code class="bash"># First move the objects
 
 from oozie.models import Job
 
@@ -253,11 +253,11 @@ Job.objects.filter(owner=u2)
 
 wfs = Job.objects.filter(owner=u2)
 
-{{< /highlight >}}
+</code></pre>
 
 **For** Hue 3.9+
 
-{{< highlight bash >}}# First move the objects
+<pre><code class="bash"># First move the objects
 
 from desktop.models import Document2
 
@@ -285,11 +285,11 @@ Document2.objects.filter(owner=u2, type='oozie-workflow2')
 
 wfs = Document2.objects.filter(owner=u2, type='oozie-workflow2')
 
-{{< /highlight >}}
+</code></pre>
 
 **For** both
 
-{{< highlight bash >}}# The list of ALL the workflows (will also list the already known ones) of the second user
+<pre><code class="bash"># The list of ALL the workflows (will also list the already known ones) of the second user
 
 \# Then move the documents
 
@@ -307,7 +307,7 @@ Document.objects.filter(object_id__in=wfs).update(owner=u2)
 
 > [<Document: workflow MyWf joe>]
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**: it will change again in Hue 3.10 and be easier.
 

+ 24 - 24
docs/gethue/content/posts/2014-02-03-making-hadoop-accessible-to-your-employees-with-ldap.md

@@ -102,23 +102,23 @@ When authenticating via LDAP, Hue validates login credentials against a director
 
 &nbsp;
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
    
 [[auth]]
    
 backend=desktop.auth.backend.LdapBackend
   
-{{< /highlight >}}
+</code></pre>
 
 The LDAP authentication backend will automatically create users that don’t exist in Hue by default. Hue needs to import users in order to properly perform the authentication. The password is never imported when importing users. The following configuration can be used to disable automatic import:
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
     
 [[ldap]]
     
 create_users_on_login=false
   
-{{< /highlight >}}
+</code></pre>
 
 The purpose of disabling the automatic import is to only allow to login a predefined list of manually imported users.
 
@@ -156,25 +156,25 @@ If ‘nt_domain’ is provided, then Hue will use a UPN to bind to the LDAP serv
 
 &nbsp;
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
     
 [[ldap]]
     
 nt_domain=example.com
   
-{{< /highlight >}}
+</code></pre>
 
 Otherwise, the ‘ldap_username_pattern’ configuration is used (the <username> parameter will be replaced with the username provided at login):
 
 &nbsp;
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
       
 [[ldap]]
       
 ldap_username_pattern="uid=<username>,ou=People,DC=hue-search,DC=ent,DC=cloudera,DC=com"
   
-{{< /highlight >}}
+</code></pre>
 
 Typical attributes to search for include:
 
@@ -189,13 +189,13 @@ To enable direct bind authentication, the ‘search_bind_authentication’ confi
 
 &nbsp;
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
       
 [[ldap]]
       
 search_bind_authentication=false
   
-{{< /highlight >}}
+</code></pre>
 
 # 2.    Importing users {#t4}
 
@@ -234,7 +234,7 @@ Users and groups can be synchronized with the directory service via the Useradmi
 
 The groups of a user can be synced when he logs in (to keep its permission in sync):
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
     
 [[ldap]]
     
@@ -242,7 +242,7 @@ The groups of a user can be synced when he logs in (to keep its permission in sy
     
 \## sync_groups_on_login=false
   
-{{< /highlight >}}
+</code></pre>
 
 ## 4.1.    Attributes synchronized {#t7}
 
@@ -269,7 +269,7 @@ There are two configurations for restricting the search process:
 
 Here is an example configuration:
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
       
 [[ldap]]
       
@@ -283,7 +283,7 @@ user_name_attr=uid
       
 \## follow_referrals=false
   
-{{< /highlight >}}
+</code></pre>
 
 With the above configuration, the LDAP search filter will take on the form:
 
@@ -295,7 +295,7 @@ Hue can be configured to ignore the case of usernames as well as force usernames
 
 [desktop]
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
       
 [[ldap]]
       
@@ -303,29 +303,29 @@ ignore_username_case=true
       
 force_username_lowercase=true
   
-{{< /highlight >}}
+</code></pre>
 
 # 7.    LDAPS/StartTLS support {#t12}
 
 Secure communication with LDAP is provided via the SSL/TLS and StartTLS protocols. It allows Hue to validate the directory service it’s going to converse with. Practically speaking, if a Certificate Authority Certificate file is provided, Hue will communicate via LDAPS:
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
       
 [[ldap]]
       
 ldap_cert=/etc/hue/ca.crt
   
-{{< /highlight >}}
+</code></pre>
 
 The StartTLS protocol can be used as well (step up to SSL/TLS):
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
       
 [[ldap]]
       
 use_start_tls=true
   
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -333,7 +333,7 @@ use_start_tls=true
 
 Get more information when querying LDAP and use the ldapsearch tool:
 
-{{< highlight bash >}}'desktop]
+<pre><code class="bash">'desktop]
       
 [[ldap]]
       
@@ -349,15 +349,15 @@ debug=true
       
 trace_level=0
   
-{{< /highlight >}}
+</code></pre>
 
 **Note**
 
 Make sure to add to the Hue server environment:
 
-{{< highlight bash >}}DESKTOP_DEBUG=true
+<pre><code class="bash">DESKTOP_DEBUG=true
   
-DEBUG=true{{< /highlight >}}
+DEBUG=true</code></pre>
 
 &nbsp;
 

+ 2 - 2
docs/gethue/content/posts/2014-02-03-solving-the-hue-2-x-hanging-problem.md

@@ -45,11 +45,11 @@ categories:
 ---
 In the Hue versions before [3][1], Hue is sometimes getting slow and “stuck”. To fix this problem, it is recommended to switch Hue to use the CherryPy server instead of Spawning. In the [hue.ini][2] or the Hue Safety Valve in CM, enter:
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 use_cherrypy_server = true
 
-{{< /highlight >}}
+</code></pre>
 
 **Cause**:
 

+ 3 - 3
docs/gethue/content/posts/2014-03-14-how-to-fix-the-multipleobjectsreturned-error-in-hue.md

@@ -47,7 +47,7 @@ categories:
 ---
 When going on the Home page (/home) in Hue 3.0, this error could appear:
 
-{{< highlight bash >}}MultipleObjectsReturned: get() returned more than one DocumentPermission - it returned 2! Lookup parameters were {'perms': 'read', 'doc': <Document: saved query Sample: Job loss sample>}{{< /highlight >}}
+<pre><code class="bash">MultipleObjectsReturned: get() returned more than one DocumentPermission - it returned 2! Lookup parameters were {'perms': 'read', 'doc': <Document: saved query Sample: Job loss sample>}</code></pre>
 
 This is fixed in Hue 3.6 and here is a way to repair it:
 
@@ -55,7 +55,7 @@ This is fixed in Hue 3.6 and here is a way to repair it:
 
 2. Run the cleanup script
 
-{{< highlight python >}}
+<pre><code class="python">
 
 from desktop.models import DocumentPermission, Document
 
@@ -79,4 +79,4 @@ print 'Deleting duplicate %s' % dup
 
 dup.delete()
 
-{{< /highlight >}}
+</code></pre>

+ 6 - 6
docs/gethue/content/posts/2014-03-23-tutorial-live-demo-of-search-on-hadoop.md

@@ -76,29 +76,29 @@ The next step is to create the indexed into Solr. First, make sure that Solr has
 
 In order to query a live dataset, you need to index some data. Go on the Hue machine:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 cd $HUE_HOME
 
 cd apps/search/examples/bin
 
-{{< /highlight >}}
+</code></pre>
 
 Then create the Solr collections:
 
-{{< highlight bash >}}./create_collections.sh{{< /highlight >}}
+<pre><code class="bash">./create_collections.sh</code></pre>
 
 In case Solr is not on the same machine, add this parameter in the script:
 
-{{< highlight bash >}}-solr http://localhost:8983/solr{{< /highlight >}}
+<pre><code class="bash">-solr http://localhost:8983/solr</code></pre>
 
 Then index some example data with:
 
-{{< highlight bash >}}./post.sh{{< /highlight >}}
+<pre><code class="bash">./post.sh</code></pre>
 
 Same, if Solr is on a different machine, update the url:
 
-{{< highlight bash >}}URL=http://localhost:8983/solr{{< /highlight >}}
+<pre><code class="bash">URL=http://localhost:8983/solr</code></pre>
 
 And that’s it! The above warning message will disappear and you will be able to query Solr indexes in live!
 

+ 12 - 12
docs/gethue/content/posts/2014-04-02-hadoop-tutorial-oozie-workflow-credentials-with-a-hive-action-with-kerberos.md

@@ -40,11 +40,11 @@ categories:
 ---
 When using Hadoop security and scheduling jobs using [Hive][1] (or Pig, [HBase][2]) you might have received this error:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
 
-{{< /highlight >}}
+</code></pre>
 
 Indeed, in order to use an Oozie Hive action with the Hive metastore server when Kerberos is enabled, you need to use HCatalog credentials in your workflow.
 
@@ -69,7 +69,7 @@ Hive should not access directly the metastore database via JDBC, or it will bypa
 
 Include a <span style="color: #ff0000;">hive-config.xml</span> in the Job XML property of the Hive action with this type of configuration:
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -103,11 +103,11 @@ Include a <span style="color: #ff0000;">hive-config.xml</span> in the Job XML pr
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 Use this one:
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -133,21 +133,21 @@ Use this one:
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**:
 
 When the job will try to connect to MySql, you might hit this missing jar problem:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 Caused by: org.datanucleus.store.rdbms.datasource.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
 
-<pre>{{< /highlight >}}
+<pre></code></pre>
 
 To solve it, simply download the MySql jar connector from http://dev.mysql.com/downloads/connector/j/, and have HiveServer2 points to it with:
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -157,13 +157,13 @@ To solve it, simply download the MySql jar connector from http://dev.mysql.com/d
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**:
 
 To activate the credentials in Oozie itself, update this property in oozie-site.xml
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -179,7 +179,7 @@ To activate the credentials in Oozie itself, update this property in oozie-site.
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
  [1]: https://gethue.com/hadoop-tutorial-how-to-access-hive-in-pig-with/
  [2]: https://gethue.com/hadoop-tutorial-use-pig-and-hive-with-hbase/

+ 8 - 8
docs/gethue/content/posts/2014-04-03-hadoop-tutorial-monitor-and-get-alerts-for-your-workflows-with-the-oozie-slas.md

@@ -96,27 +96,27 @@ SLAs can be setup in the Editor in the advanced tabs of:
   First make sure you are using Oozie 4. If you need to upgrade from Oozie 3, don’t forget to update the Oozie sharelib with:
 </p>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 sudo -u oozie /usr/lib/oozie/bin/oozie-setup.sh sharelib create -fs hdfs://localhost:8020 -locallib /usr/lib/oozie/oozie-sharelib-yarn.tar.gz
 
-{{< /highlight >}}
+</code></pre>
 
 <p dir="ltr">
   If for some reason you need to reset the Oozie DB, delete it and recreate it with:
 </p>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 sudo -u oozie /usr/lib/oozie/bin/ooziedb.sh create -sqlfile oozie.sql -run
 
-{{< /highlight >}}
+</code></pre>
 
 <p dir="ltr">
   <strong>Note</strong><br /> In order to avoid the exception below, you should not have the SLA properties in oozie-site.xml.
 </p>
 
-{{< highlight java >}}
+<pre><code class="java">
 
 Exception in thread "main" java.lang.NoClassDefFoundError: javax/mail/MessagingException
 
@@ -124,13 +124,13 @@ at java.lang.Class.forName0(Native Method)
 
 at java.lang.Class.forName(Class.java:270)
 
-{{< /highlight >}}
+</code></pre>
 
 <p dir="ltr">
   Then open oozie-site.xml and add these <a href="http://oozie.apache.org/docs/4.0.0/AG_Install.html#Notifications_Configuration">properties</a> and restart Oozie:
 </p>
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -160,4 +160,4 @@ org.apache.oozie.sla.listener.SLAEmailEventListener
 
 </property>
 
-{{< /highlight >}}
+</code></pre>

+ 5 - 5
docs/gethue/content/posts/2014-04-17-hadoop-tutorial-how-to-create-a-real-hadoop-cluster-in-10-minutes.md

@@ -216,7 +216,7 @@ categories:
 
 <!--email_off-->
 
-{{< highlight bash >}}ssh -i ~/demo.pem ubuntu@ec2-11-222-333-444.compute-1.amazonaws.com{{< /highlight >}}
+<pre><code class="bash">ssh -i ~/demo.pem ubuntu@ec2-11-222-333-444.compute-1.amazonaws.com</code></pre>
 
 &nbsp;
 
@@ -224,13 +224,13 @@ categories:
   Retrieve and start Cloudera Manager:
 </p>
 
-{{< highlight bash >}}wget http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
+<pre><code class="bash">wget http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
 
 chmod +x cloudera-manager-installer.bin
 
 sudo ./cloudera-manager-installer.bin
 
-{{< /highlight >}}
+</code></pre>
 
 <p dir="ltr">
   After, login with the default credentials admin/admin (note: you might need to wait 5 minutes before http://ec2-54-178-21-60.compute-1.amazonaws.com:7180/ becomes available).
@@ -256,9 +256,9 @@ sudo ./cloudera-manager-installer.bin
   If you are getting a "Bad Request (400)" error, you will need to enter in the hue.ini or CM safety valve:
 </p>
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
-allowed_hosts=*{{< /highlight >}}
+allowed_hosts=*</code></pre>
 
 <p dir="ltr">
   <strong>Note</strong>

+ 4 - 4
docs/gethue/content/posts/2014-05-19-hadoop-tutorial-how-to-distribute-impala-query-load.md

@@ -90,7 +90,7 @@ For more information on configuring Hue via Cloudera Manager, see [Managing Clus
   1. Download and unzip the [binary distribution][8] of [HA Proxy 1.4][3] on the node that doesn’t have Hue installed.
   2. Add the following [HA Proxy configuration][9] to /etc/impala/haproxy-impala.conf:
 
-{{< highlight bash >}}global
+<pre><code class="bash">global
 
 daemon
 
@@ -128,11 +128,11 @@ server impala2 server2.cloudera.com:21050 check
 
 server impala3 server3.cloudera.com:21050 check
 
-{{< /highlight >}}
+</code></pre>
 
   1. Start HA Proxy:
 
-{{< highlight bash >}}haproxy -f /etc/impala/haproxy-impala.conf{{< /highlight >}}
+<pre><code class="bash">haproxy -f /etc/impala/haproxy-impala.conf</code></pre>
 
 &nbsp;
 
@@ -140,7 +140,7 @@ The key configuration options are [**balance**][10] and [**server**][11] in the
 
 &nbsp;
 
-{{< highlight bash >}}server <name> <address>[:port] [settings ...]{{< /highlight >}}
+<pre><code class="bash">server <name> <address>[:port] [settings ...]</code></pre>
 
 &nbsp;
 

+ 4 - 4
docs/gethue/content/posts/2014-05-20-visualize-snappy-compressed-avro-files.md

@@ -52,11 +52,11 @@ You can now view Snappy compressed <a href="http://avro.apache.org/" target="_bl
   1. Make sure Hue is stopped before installing.
   2. Install the snappy system packages on your system. They can either be downloaded from <https://code.google.com/p/snappy/> or, preferably, installed via your package management system (e.g. `yum install snappy-devel`).
   3. Install the python-snappy package via ‘pip’ from the Hue home (cd /usr/lib/hue or /opt/cloudera/parcels/CDH/lib/hue):
-    {{< highlight bash >}}yum install gcc gcc-c++ python-devel snappy-devel
+    <pre><code class="bash">yum install gcc gcc-c++ python-devel snappy-devel
 
     build/env/bin/pip install -U setuptools
 
-    build/env/bin/pip install python-snappy{{< /highlight >}}
+    build/env/bin/pip install python-snappy</code></pre>
 
   4. Start Hue!
 
@@ -74,7 +74,7 @@ Note: In this demo, we are using Avro files found in this [github][2] (1).
 
 It turns out that `python-snappy` is not compatible with the python library called `snappy`. If you see this error, uninstall `snappy`:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [03/Sep/2015 06:56:34 -0700] views WARNING Could not read avro file at //user/cconner/test_snappy.avro
 
@@ -114,7 +114,7 @@ raise PopupException(_("Failed to read Avro file."))
 
 PopupException: Failed to read Avro file.
 
-{{< /highlight >}}
+</code></pre>
 
 #
 

+ 2 - 2
docs/gethue/content/posts/2014-05-29-hadoop-tutorial-make-hadoop-more-accessible-by-integrating-multiple-ldap-servers.md

@@ -56,7 +56,7 @@ As described in [How to Make Hadoop Accessible to your Employees with Hue][1], t
 
 You can have multiple LDAP servers configured in the hue.ini by providing multiple server declarations:
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
     
 [[ldap]]
       
@@ -106,7 +106,7 @@ group_name_attr="cn"
           
 group_member_attr="member"
   
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 4 - 4
docs/gethue/content/posts/2014-05-30-hadoop-tutorial-how-to-integrate-unix-users-and-groups.md

@@ -58,7 +58,7 @@ Here is a quick video demonstrating the above:
 
 From the Hue root (/use/lib/hue by default or /opt/cloudera/parcels/CDH/lib/hue/ with CM):
 
-{{< highlight bash >}}build/env/bin/hue useradmin_sync_with_unix{{< /highlight >}}
+<pre><code class="bash">build/env/bin/hue useradmin_sync_with_unix</code></pre>
 
 &nbsp;
 
@@ -68,7 +68,7 @@ Where <id> is the most recent ID in that process directory for hue-HUE_SERVER.
 
 A quick way to get the correct directory is to use this script:
 
-{{< highlight bash >}}export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"{{< /highlight >}}
+<pre><code class="bash">export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"</code></pre>
 
 &nbsp;
 
@@ -86,11 +86,11 @@ useradmin_sync_with_unix comes with a few useful command line arguments:
 
 To verify the hadoop group exists, you can use the ‘getent’ command:
 
-{{< highlight bash >}}getent group | grep hadoop{{< /highlight >}}
+<pre><code class="bash">getent group | grep hadoop</code></pre>
 
 To add the hadoop group, you can use the ‘groupadd’ command:
 
-{{< highlight bash >}}groupadd hadoop{{< /highlight >}}
+<pre><code class="bash">groupadd hadoop</code></pre>
 
 #
 

+ 6 - 6
docs/gethue/content/posts/2014-06-12-i-put-a-proxy-on-hue.md

@@ -49,7 +49,7 @@ Here's a sample configuration we use on our servers. We know the Apache web serv
 
 What you need to do is to install and enable the `mod_proxy` module (e.g. `a2enmod proxy_http`) and then configure the main virtual host (on Ubuntu it's `/etc/apache2/sites-available/000-default.conf`) to just proxy and reverse proxy any request to your Hue instance (and any HTTP 503 error to another path):
 
-{{< highlight xml >}}<VirtualHost *:80>
+<pre><code class="xml"><VirtualHost *:80>
 
 ProxyPreserveHost On
 
@@ -69,29 +69,29 @@ ServerName demo.gethue.com
 
 </VirtualHost>
 
-{{< /highlight >}}
+</code></pre>
 
 Change `demo.gethue.com` with your qualified server name available on your internal or external DNS.
 
 and then add an additional virtual host (running on a different port, 81 for instance, on `/etc/apache2/sites-available/001-error.conf`) to serve the error path you specified in the default vhost:
 
-{{< highlight xml >}}<VirtualHost *:81>
+<pre><code class="xml"><VirtualHost *:81>
 
 DocumentRoot /var/www/
 
 </VirtualHost>
 
-{{< /highlight >}}
+</code></pre>
 
 in `/var/www` you need to have a folder `error` with an `index.html` inside (need inspiration? look [here!][4]) that is going to be displayed when Hue is not reachable.
 
 The last thing we need to do is to tell Apache we are listening to the port 81 as well, so edit `/etc/apache2/ports.conf` and just add
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 Listen 81
 
-{{< /highlight >}}
+</code></pre>
 
 After everything, let's restart Apache with `sudo service apache2 restart` and... et voila! You are good to go!
 

+ 17 - 17
docs/gethue/content/posts/2014-06-16-get-started-with-spark-deploy-spark-server-and-compute-pi-from-your-web-browser.md

@@ -59,23 +59,23 @@ Most of the instructions are on the [github][4].
 
 We start by checking out the repository and building the project (note: if you are on Ubuntu and encrypted your disk, you will need to build from  /tmp). Then, from the Spark Job Server root directory:
 
-{{< highlight bash >}}mkdir bin/config
+<pre><code class="bash">mkdir bin/config
 
 cp config/local.sh.template bin/config/settings.sh
 
-{{< /highlight >}}
+</code></pre>
 
 And these two variables in settings.sh:
 
-{{< highlight bash >}}LOG_DIR=/var/log/job-server
+<pre><code class="bash">LOG_DIR=/var/log/job-server
 
 SPARK_HOME=/usr/lib/spark (or SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark)
 
-{{< /highlight >}}
+</code></pre>
 
 Then package everything:
 
-{{< highlight bash >}}bin/server_deploy.sh settings.sh
+<pre><code class="bash">bin/server_deploy.sh settings.sh
 
 [info] - should return error message if classPath does not match
 
@@ -103,7 +103,7 @@ spark-job-server.jar
 
 Created distribution at /tmp/job-server/job-server.tar.gz
 
-{{< /highlight >}}
+</code></pre>
 
 We have our main tarball `/tmp/job-server/job-server.tar.gz`, ready to be copied on a server.
 
@@ -117,15 +117,15 @@ We then extract `job-server.tar.gz` and copy our application.conf on the server.
 
 <!--email_off-->
 
-{{< highlight bash >}}scp /tmp/spark-jobserver/./job-server/src/main/resources/application.conf hue@server.com:
+<pre><code class="bash">scp /tmp/spark-jobserver/./job-server/src/main/resources/application.conf hue@server.com:
 
-{{< /highlight >}}
+</code></pre>
 
 <!--/email_off-->
 
 Edit application.conf to point to the master:
 
-{{< highlight bash >}}# Settings for safe local mode development
+<pre><code class="bash"># Settings for safe local mode development
 
 spark {
 
@@ -135,11 +135,11 @@ master = "spark://spark-host:7077"
 
 }
 
-{{< /highlight >}}
+</code></pre>
 
 Here is the content of our jobserver folder:
 
-{{< highlight bash >}}ls -l
+<pre><code class="bash">ls -l
 
 total 25208
 
@@ -155,7 +155,7 @@ total 25208
 
 -rw-rw-r- 1 ubuntu ubuntu 13673788 Jun  9 23:05 spark-job-server.jar
 
-{{< /highlight >}}
+</code></pre>
 
 Note:
 
@@ -165,23 +165,23 @@ Also make sure that you see at least one Spark work:  `"Workers: 1"`
 
 In the past, we had some problems (e.g. spark worker not starting) when trying to bind Spark to a localhost. We fixed it by hardcoding in the `spark-env.sh`:
 
-{{< highlight bash >}}sudo vim /etc/spark/conf/spark-env.sh
+<pre><code class="bash">sudo vim /etc/spark/conf/spark-env.sh
 
 export STANDALONE_SPARK_MASTER_HOST=spark-host
 
-{{< /highlight >}}
+</code></pre>
 
 Now just start the server and the process will run in the background:
 
-{{< highlight bash >}}./server_start.sh{{< /highlight >}}
+<pre><code class="bash">./server_start.sh</code></pre>
 
 You can check if it is alive by grepping it:
 
-{{< highlight bash >}}ps -ef | grep 9999
+<pre><code class="bash">ps -ef | grep 9999
 
 ubuntu   28755     1  2 01:41 pts/0    00:00:11 java -cp /home/ubuntu/spark-server:/home/ubuntu/spark-server/spark-job-server.jar::/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/spark/conf:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/spark/assembly/lib/\*:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/spark/examples/lib/\*:/etc/hadoop/conf:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hadoop/\*:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hadoop/../hadoop-hdfs/\*:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hadoop/../hadoop-yarn/\*:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hadoop/../hadoop-mapreduce/\*:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/spark/lib/scala-library.jar:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/spark/lib/scala-compiler.jar:/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/spark/lib/jline.jar -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCTimeStamps -Xloggc:/home/ubuntu/spark-server/gc.out -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled -Xmx5g -XX:MaxDirectMemorySize=512M -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.manage
 
-{{< /highlight >}}
+</code></pre>
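 
 Another quick sanity check is to query the Spark Job Server REST API directly (assuming the default port 8090):
 
 <pre><code class="bash">curl http://localhost:8090/jars</code></pre>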
 
 That’s it!
 

+ 2 - 2
docs/gethue/content/posts/2014-06-18-hadoop-tutorial-yarn-resource-manager-high-availability-ha-in-mr2.md

@@ -50,7 +50,7 @@ Hue will automatically pick up the active Resource Manager even if it failed ove
 
 Here is an example of configuration for the [[yarn_clusters]] section in hue.ini:
 
-{{< highlight bash >}}[hadoop]
+<pre><code class="bash">[hadoop]
 
 \# Configuration for YARN (MR2)
 
@@ -90,7 +90,7 @@ logical_name=ha-rm
 
 submit_to=True
 
-{{< /highlight >}}
+</code></pre>
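 
 To verify which Resource Manager is currently active, you can also ask YARN directly (assuming the logical ids rm1 and rm2 configured in yarn-site.xml):
 
 <pre><code class="bash">yarn rmadmin -getServiceState rm1
 yarn rmadmin -getServiceState rm2</code></pre>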
 
 We hope that the multi-Resource Manager support will make your life with Hadoop easier!
 

+ 15 - 15
docs/gethue/content/posts/2014-07-17-rbtools-example-how-do-easily-do-code-reviews-with-review-board.md

@@ -50,11 +50,11 @@ First, join the 'hue' group in your account <a href="https://review.cloudera.org
 
 Then install the Review Board tools:
 
-{{< highlight bash >}}sudo pip install -allow-all-external RBTools{{< /highlight >}}
+<pre><code class="bash">sudo pip install --allow-all-external RBTools</code></pre>
 
 Point it to your git repository:
 
-{{< highlight bash >}}romain@runreal:~/projects/hue$ rbt setup-repo
+<pre><code class="bash">romain@runreal:~/projects/hue$ rbt setup-repo
 
 Enter the Review Board server URL: https://review.cloudera.org
 
@@ -72,15 +72,15 @@ BRANCH = "master"
 
 Config written to /home/romain/projects/hue/.reviewboardrc
 
-{{< /highlight >}}
+</code></pre>
 
 # Post a review
 
 We have wrapped up the typical submission in a dedicated 'tools/scripts/hue-review' script prefilled with all the details of the commits:
 
-{{< highlight bash >}}vim tools/scripts/hue-review{{< /highlight >}}
+<pre><code class="bash">vim tools/scripts/hue-review</code></pre>
 
-{{< highlight bash >}}function hue-review {
+<pre><code class="bash">function hue-review {
 
 #!/usr/bin/env bash
 
@@ -116,11 +116,11 @@ exec $RBT post -o -description="$(git whatchanged $REVLIST)" -target-groups=hue
 
 }
 
-{{< /highlight >}}
+</code></pre>
 
 If you use a Mac:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 #!/usr/bin/env bash
 
@@ -168,23 +168,23 @@ $@ \
 
 $REVLIST
 
-{{< /highlight >}}
+</code></pre>
 
 Then:
 
-{{< highlight bash >}}source /home/romain/.bashrc{{< /highlight >}}
+<pre><code class="bash">source /home/romain/.bashrc</code></pre>
 
 or put it in your PATH.
 
 Now we post the review:
 
-{{< highlight bash >}}tools/scripts/hue-review HEAD~1..HEAD romain,enricoberti,erickt "HUE-2123 [beeswax] Handle cancel state properly" -bugs-closed=HUE-2123
+<pre><code class="bash">tools/scripts/hue-review HEAD~1..HEAD romain,enricoberti,erickt "HUE-2123 [beeswax] Handle cancel state properly" -bugs-closed=HUE-2123
 
 Review request #4501 posted.
 
 https://review.cloudera.org/r/4501/
 
-{{< /highlight >}}
+</code></pre>
 
 Et voila! Here is our review <a href="https://review.cloudera.org/r/4501/" target="_blank" rel="noopener noreferrer">https://review.cloudera.org/r/4501/</a>.
 
@@ -196,7 +196,7 @@ If you have more than one diff, update `HEAD~1..HEAD` accordingly (e.g. `HEAD~2.
 
 Modify the previous commit diff:
 
-{{< highlight bash >}}git commit -a -amend
+<pre><code class="bash">git commit -a --amend
 
 ... Update a file ...
 
@@ -204,13 +204,13 @@ Modify the previous commit diff:
 
 3 files changed, 10 insertions(+), 4 deletions(-)
 
-{{< /highlight >}}
+</code></pre>
 
 Update the review:
 
-{{< highlight bash >}}rbt post -u -r 6092 HEAD~1..HEAD
+<pre><code class="bash">rbt post -u -r 6092 HEAD~1..HEAD
 
-Review request #6092 posted. {{< /highlight >}}
+Review request #6092 posted. </code></pre>
 
 # Sum-up
 

+ 14 - 14
docs/gethue/content/posts/2014-07-24-tutorial-how-to-run-the-hue-integration-tests.md

@@ -51,7 +51,7 @@ First, clone the Hue repository and make sure that you have all the pre-requisi
 
 <!--email_off-->
 
-{{< highlight bash >}}git clone git@github.com:cloudera/hue.git{{< /highlight >}}
+<pre><code class="bash">git clone git@github.com:cloudera/hue.git</code></pre>
 
 <!--/email_off-->
 
@@ -59,13 +59,13 @@ First, clone the Hue repository and make sure that you have all the pre-requisi
 
 The regular unit tests do not require all this setup! Just run them directly:
 
-{{< highlight bash >}}build/env/bin/hue test specific beeswax.tests:test_split_statements &> a; vim a{{< /highlight >}}
+<pre><code class="bash">build/env/bin/hue test specific beeswax.tests:test_split_statements &> a; vim a</code></pre>
 
 **Note**
 
 This requires having done an <a href="https://github.com/cloudera/hue#getting-started" target="_blank" rel="noopener noreferrer">initial build</a> of Hue with:
 
-{{< highlight bash >}}make apps{{< /highlight >}}
+<pre><code class="bash">make apps</code></pre>
 
 # Integration Tests
 
@@ -73,7 +73,7 @@ This requires to have done an <a href="https://github.com/cloudera/hue#getting-s
 
 The tests will run against the cluster configured in your hue.ini if you specify:
 
-{{< highlight bash >}}export LIVE_CLUSTER=true{{< /highlight >}}
+<pre><code class="bash">export LIVE_CLUSTER=true</code></pre>
 
 ### Mini cluster
 
@@ -81,7 +81,7 @@ The test will run in a mini cluster (mini Hadoop, Oozie, Sqoop2 and Hive) create
 
 Here is how to get started:
 
-{{< highlight bash >}}./tools/jenkins/jenkins.sh slow{{< /highlight >}}
+<pre><code class="bash">./tools/jenkins/jenkins.sh slow</code></pre>
 
 **Note**
 
@@ -89,17 +89,17 @@ You might have lost all the changes in your local pseudo hue.ini because of the
 
 To avoid this, add this to your \`~/.bashrc\`:
 
-{{< highlight bash >}}export SKIP_CLEAN=true{{< /highlight >}}
+<pre><code class="bash">export SKIP_CLEAN=true</code></pre>
 
 **Note**
 
 To point to an Impalad and trigger the Impala tests:
 
-{{< highlight bash >}}export TEST_IMPALAD_HOST=impalad-01.gethue.com{{< /highlight >}}
+<pre><code class="bash">export TEST_IMPALAD_HOST=impalad-01.gethue.com</code></pre>
 
 or
 
-{{< highlight bash >}}./build/env/bin/hue test impalaimpalad-01.gethue.com{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue test impala impalad-01.gethue.com</code></pre>
 
 &nbsp;
 
@@ -107,7 +107,7 @@ It is then going to download the 4 latest Hadoop, Oozie, Sqoop2 and Hive and pre
 
 You can CTRL+C and kill the script when you see:
 
-{{< highlight bash >}}INFO: Oozie webconsole disabled, ExtJS library not specified
+<pre><code class="bash">INFO: Oozie webconsole disabled, ExtJS library not specified
 
 New Oozie WAR file with added 'JARs' at /home/romain/projects/hue-master/ext/oozie/oozie-4.0.0-cdh5.1.0/oozie-server/webapps/oozie.war
 
@@ -239,11 +239,11 @@ cd /home/romain/projects/hue-master/maven && mvn install
 
 [INFO] ------------------------
 
-{{< /highlight >}}
+</code></pre>
 
 And that's it! You can run all the tests or some parts with this <a href="https://github.com/cloudera/hue#getting-started" target="_blank" rel="noopener noreferrer">syntax</a>:
 
-{{< highlight bash >}}build/env/bin/hue test specific filebrowser.views_test:test_listdir_sort_and_filter &> a; vim a{{< /highlight >}}
+<pre><code class="bash">build/env/bin/hue test specific filebrowser.views_test:test_listdir_sort_and_filter &> a; vim a</code></pre>
 
 &nbsp;
 
@@ -251,15 +251,15 @@ And that's it! You can run all the tests or some parts with this <a href="https
 
 In some cases you might need to clear up your cache with something like:
 
-{{< highlight bash >}}rm /home/romain/.hue_cache/.*{{< /highlight >}}
+<pre><code class="bash">rm /home/romain/.hue_cache/.*</code></pre>
 
 then in
 
-{{< highlight bash >}}ext/{{< /highlight >}}
+<pre><code class="bash">ext/</code></pre>
 
 delete everything but
 
-{{< highlight bash >}}thirdparty/{{< /highlight >}}
+<pre><code class="bash">thirdparty/</code></pre>
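 
 As a sketch, the cleanup of `ext/` can be done in one go (assuming you really want to re-download everything except `thirdparty/`):
 
 <pre><code class="bash">cd ext/
 find . -maxdepth 1 -mindepth 1 ! -name thirdparty -exec rm -rf {} +</code></pre>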
 
 &nbsp;
 

+ 21 - 21
docs/gethue/content/posts/2014-09-11-how-to-build-hue-on-ubuntu-14-04-trusty.md

@@ -52,7 +52,7 @@ Due to a [package bug,][1] we got quite a few questions about how to build Hue c
 
 First, make sure that you are indeed on Ubuntu 14.04:
 
-{{< highlight bash >}}> lsb_release -a
+<pre><code class="bash">> lsb_release -a
 
 No LSB modules are available.
 
@@ -64,21 +64,21 @@ Release: 14.04
 
 Codename: trusty
 
-{{< /highlight >}}
+</code></pre>
 
 Then install git and fetch Hue [source code][2] from github:
 
-{{< highlight bash >}}sudo apt-get install git
+<pre><code class="bash">sudo apt-get install git
 
 git clone https://github.com/cloudera/hue.git
 
 cd hue
 
-{{< /highlight >}}
+</code></pre>
 
 Then some [development packages][3] need to be installed:
 
-{{< highlight bash >}}apt-get install python2.7-dev \
+<pre><code class="bash">apt-get install python2.7-dev \
 
 make \
 
@@ -98,11 +98,11 @@ libldap2-dev \
 
 python-pip
 
-{{< /highlight >}}
+</code></pre>
 
 You can also try this one-liner:
 
-{{< highlight bash >}}sudo apt-get install ant gcc g++ libkrb5-dev libffi-dev libmysqlclient-dev libssl-dev libsasl2-dev libsasl2-modules-gssapi-mit libsqlite3-dev libtidy-0.99-0 libxml2-dev libxslt-dev make libldap2-dev maven python-dev python-setuptools libgmp3-dev{{< /highlight >}}
+<pre><code class="bash">sudo apt-get install ant gcc g++ libkrb5-dev libffi-dev libmysqlclient-dev libssl-dev libsasl2-dev libsasl2-modules-gssapi-mit libsqlite3-dev libtidy-0.99-0 libxml2-dev libxslt-dev make libldap2-dev maven python-dev python-setuptools libgmp3-dev</code></pre>
 
 You will also need the ‘maven’ package. You could install it with apt-get, but it is also recommended to install from a [maven3 tarball][4] in order to avoid pulling a lot of dependencies.
 
@@ -110,7 +110,7 @@ Then it is time to build Hue. Just issue the ‘make apps’ command.
 
 You will hit the Ubuntu package problem the first time if you are using a Hue version [older than 3.8][5]:
 
-{{< highlight bash >}}- Creating virtual environment at /root/hue/build/env
+<pre><code class="bash">- Creating virtual environment at /root/hue/build/env
 
 python2.7 /root/hue/tools/virtual-bootstrap/virtual-bootstrap.py \
 
@@ -146,19 +146,19 @@ OSError: Command /root/hue/build/env/bin/python2.7 -c "#!python
 
 \"\"\"Bootstrap setuptoo...
 
-{{< /highlight >}}
+</code></pre>
 
 We use one of the workarounds:
 
-{{< highlight bash >}}sudo ln -s /usr/lib/python2.7/plat-*/_sysconfigdata_nd.py /usr/lib/python2.7/
+<pre><code class="bash">sudo ln -s /usr/lib/python2.7/plat-*/_sysconfigdata_nd.py /usr/lib/python2.7/
 
-{{< /highlight >}}
+</code></pre>
 
 Links on <https://issues.cloudera.org/browse/HUE-2246> detail its cause.
 
 If you don’t have Oracle Java 7 installed, the build will then stop with:
 
-{{< highlight bash >}}[INFO] ------------------------
+<pre><code class="bash">[INFO] ------------------------
 
 [INFO] BUILD FAILURE
 
@@ -196,23 +196,23 @@ make[1]: Leaving directory \`/root/hue/desktop'
 
 make: \*** [desktop] Error 2
 
-{{< /highlight >}}
+</code></pre>
 
 To fix it, install these packages:
 
-{{< highlight bash >}}sudo add-apt-repository ppa:webupd8team/java
+<pre><code class="bash">sudo add-apt-repository ppa:webupd8team/java
 
 sudo apt-get install oracle-java7-installer
 
 sudo apt-get install oracle-java7-set-default
 
-{{< /highlight >}}
+</code></pre>
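 
 Before re-running `make apps`, you can confirm that the expected JDK is now the default:
 
 <pre><code class="bash">java -version
 mvn -version</code></pre>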
 
 **Note**
 
 ‘asciidoc’ is also required if you want to build a tarball release at some point with ‘make prod’. If not, you will get this error:
 
-{{< highlight bash >}}make[1]: Leaving directory \`/root/hue/apps'
+<pre><code class="bash">make[1]: Leaving directory \`/root/hue/apps'
 
 make[1]: Entering directory \`/root/hue/docs'
 
@@ -264,11 +264,11 @@ make[1]: Entering directory \`/root/hue/docs'
 
 mv: cannot stat ‘release-notes/*.html’: No such file or directory
 
-{{< /highlight >}}
+</code></pre>
 
 And that’s it! At the end of the build:
 
-{{< highlight bash >}}=== Installing app at oozie
+<pre><code class="bash">=== Installing app at oozie
 
 === oozie v.3.6.0 is already installed
 
@@ -482,13 +482,13 @@ Installed 0 object(s) from 0 fixture(s)
 
 make[1]: Leaving directory \`/home/romain/projects/hue/apps'
 
-{{< /highlight >}}
+</code></pre>
 
 Just start the development server:
 
-{{< highlight bash >}}./build/env/bin/hue runserver
+<pre><code class="bash">./build/env/bin/hue runserver
 
-{{< /highlight >}}
+</code></pre>
 
 and visit <http://127.0.0.1:8000/> !
 

+ 11 - 11
docs/gethue/content/posts/2014-09-17-hadoop-tutorial-hive-and-impala-queries-life-cycle.md

@@ -55,7 +55,7 @@ Hue tries to close the query when the user navigates away from the result page (
 
 
-{{< highlight bash >}}[impala]
+<pre><code class="bash">[impala]
 
 \# If > 0, the query will be timed out (i.e. cancelled) if Impala does not do any work
 
@@ -67,7 +67,7 @@ query_timeout_s=600
 
 \# (compute or send back results) for that session within QUERY_TIMEOUT_S seconds (default 1 hour).
 
-session_timeout_s=3600 {{< /highlight >}}
+session_timeout_s=3600 </code></pre>
 
 Until this version, the only alternative workaround to close all the queries is to restart Hue (or Impala).
 
@@ -77,7 +77,7 @@ Until this version, the only alternative workaround to close all the queries, is
 
 Hue never closes the Hive queries by default (as some queries can take hours of processing time). Also, if your query volume is low (e.g. fewer than a few hundred a day) and you restart HiveServer2 every week, you are probably not affected. To get the same behavior as Impala (and close the query when the user leaves the page), switch this on in the hue.ini:
 
-{{< highlight bash >}}[beeswax]
+<pre><code class="bash">[beeswax]
 
 \# Hue will try to close the Hive query when the user leaves the editor page.
 
@@ -85,19 +85,19 @@ Hue never closes the Hive queries by default (as some queries can take hours of
 
 close_queries=true
 
-{{< /highlight >}}
+</code></pre>
 
 Starting in CDH5 and CDH4.6 (with HiveServer2), some close_query and close_session commands were added to Hue.
 
-{{< highlight bash >}}build/env/bin/hue close_queries -help
+<pre><code class="bash">build/env/bin/hue close_queries -help
 
 Usage: build/env/bin/hue close_queries [options] <age_in_days> (default is 7)
 
-{{< /highlight >}}
+</code></pre>
 
 Closes the non-running queries older than 7 days. If <all> is specified, it closes queries of any type. To run these commands while using Cloudera Manager, be sure to export these two environment variables:
 
-{{< highlight bash >}}export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"
+<pre><code class="bash">export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"
 
 ./build/env/bin/hue close_queries 0
 
@@ -111,7 +111,7 @@ Closing (all=False) HiveServer2 sessions older than 0 days...
 
 1 sessions closed.
 
-{{< /highlight >}}
+</code></pre>
 
 You can then add these commands to a crontab and expire the queries older than N days.
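 
 For example, a nightly crontab entry could look like this (a sketch; adjust the Hue path and the retention to your setup):
 
 <pre><code class="bash">0 2 * * * /usr/lib/hue/build/env/bin/hue close_queries 7 >> /var/log/hue/close_queries.log 2>&1</code></pre>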
 
@@ -119,11 +119,11 @@ You can then add this commands into a crontab and expire the queries older than
 
 When using Kerberos you also need:
 
-{{< highlight bash >}}export HIVE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`/hive-conf"{{< /highlight >}}
+<pre><code class="bash">export HIVE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`/hive-conf"</code></pre>
 
 A cleaner solution comes with [HIVE-5799][5] (available in Hive 0.14 or CDH 5.2). Like Impala, HiveServer2 can now automatically expire queries. So tweak hive-site.xml with:
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
 <name>hive.server2.session.check.interval</name>
 
@@ -153,7 +153,7 @@ A cleaner solution comes with [HIVE-5799][5] (available in Hive 0.14 or C5.2). L
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**
 

+ 15 - 15
docs/gethue/content/posts/2014-09-17-hadoop-tutorial-kerberos-security-and-sentry-authorization-for-solr-search-app.md

@@ -55,7 +55,7 @@ First, make sure that you have a [kerberized Cluster][6] (and it particular [So
 
 Make sure you use the secure version of solrconfig.xml:
 
-{{< highlight bash >}}solrctl instancedir -generate foosecure
+<pre><code class="bash">solrctl instancedir -generate foosecure
 
 cp foosecure/conf/solrconfig.xml.secure solr_configs_twitter_demo/conf/solrconfig.xml
 
@@ -63,21 +63,21 @@ solrctl instancedir -update twitter_demo solr_configs_twitter_demo
 
 solrctl collection -reload twitter_demo
 
-{{< /highlight >}}
+</code></pre>
 
 Then, create the collection. The command should work as-is if you have the proper Solr environment variables.
 
-{{< highlight bash >}}cd $HUE_HOME/apps/search/examples/bin
+<pre><code class="bash">cd $HUE_HOME/apps/search/examples/bin
 
 ./create_collections.sh
 
-{{< /highlight >}}
+</code></pre>
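 
 If the script complains about missing variables, they usually boil down to pointing solrctl at the ZooKeeper ensemble used by Solr (an assumption; check /etc/solr/conf/solr-env.sh on your cluster), e.g.:
 
 <pre><code class="bash">export SOLR_ZK_ENSEMBLE=zookeeper-01.example.com:2181/solr</code></pre>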
 
 &nbsp;
 
 You should then see the collections:
 
-{{< highlight bash >}}solrctl instancedir -list
+<pre><code class="bash">solrctl instancedir -list
 
 jobs_demo
 
@@ -87,17 +87,17 @@ twitter_demo
 
 yelp_demo
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
 The next step is to create the Solr cores. To keep it simple, we will just use one collection, the twitter demo. When creating the core
 
-{{< highlight bash >}}sudo -u systest solrctl collection -create twitter_demo -s 1{{< /highlight >}}
+<pre><code class="bash">sudo -u systest solrctl collection -create twitter_demo -s 1</code></pre>
 
 if using Sentry, you will probably see this error the first time:
 
-{{< highlight bash >}}Error: A call to SolrCloud WEB APIs failed: HTTP/1.1 401 Unauthorized
+<pre><code class="bash">Error: A call to SolrCloud WEB APIs failed: HTTP/1.1 401 Unauthorized
 
 Server: Apache-Coyote/1.1
 
@@ -153,7 +153,7 @@ org.apache.sentry.binding.solr.authz.SentrySolrAuthorizationException: User syst
 
 </lst>
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -163,13 +163,13 @@ This is because by default our ‘systest’ user does not have permissions to c
 
 In order to do this, we need to update:
 
-{{< highlight bash >}}/user/solr/sentry/sentry-provider.ini{{< /highlight >}}
+<pre><code class="bash">/user/solr/sentry/sentry-provider.ini</code></pre>
 
 &nbsp;
 
 with something similar to this:
 
-{{< highlight bash >}}[groups]
+<pre><code class="bash">[groups]
 
 admin = admin_role
 
@@ -181,7 +181,7 @@ admin_role = collection=admin->action=\*, collection=twitter_demo->action=\*
 
 query_role = collection=twitter_demo->action=query
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -199,19 +199,19 @@ Then it is time to create the core and upload some data. Update the [post.sh][10
 
 Replace ‘curl’ with:
 
-{{< highlight bash >}}curl -negotiate -u: foo:bar{{< /highlight >}}
+<pre><code class="bash">curl --negotiate -u: foo:bar</code></pre>
 
 &nbsp;
 
 and make sure that you use the real hostname in the URL:
 
-{{< highlight bash >}}URL=http://hue-c5-sentry.ent.cloudera.com:8983/solr{{< /highlight >}}
+<pre><code class="bash">URL=http://hue-c5-sentry.ent.cloudera.com:8983/solr</code></pre>
 
 &nbsp;
 
 A quick way to test it is to run the indexing command:
 
-{{< highlight bash >}}sudo -u systest curl -negotiate -u: foo:bar http://hue-c5-sentry.ent.cloudera.com:8983/solr/twitter_demo/update -data-binary @../collections/solr_configs_twitter_demo/index_data.csv -H 'Content-type:text/csv'{{< /highlight >}}
+<pre><code class="bash">sudo -u systest curl --negotiate -u: foo:bar http://hue-c5-sentry.ent.cloudera.com:8983/solr/twitter_demo/update --data-binary @../collections/solr_configs_twitter_demo/index_data.csv -H 'Content-type:text/csv'</code></pre>
 
 &nbsp;
 

+ 14 - 14
docs/gethue/content/posts/2014-09-22-hadoop-tutorial-ssl-encryption-between-hue-and-hive.md

@@ -81,19 +81,19 @@ Let’s step through the procedure to create certificates and keys:
 
 1) Generate keystore.jks containing the private key (used by Hive to decrypt messages received from Hue over SSL) and the public certificate (used by Hue to encrypt messages over SSL)
 
-{{< highlight bash >}}keytool -genkeypair -alias certificatekey -keyalg RSA -validity 7 -keystore
+<pre><code class="bash">keytool -genkeypair -alias certificatekey -keyalg RSA -validity 7 -keystore
 
 keystore.jks
 
-{{< /highlight >}}
+</code></pre>
 
 2) Generate certificate from keystore
 
-{{< highlight bash >}}keytool -export -alias certificatekey -keystore keystore.jks -rfc -file
+<pre><code class="bash">keytool -export -alias certificatekey -keystore keystore.jks -rfc -file
 
 cert.pem
 
-{{< /highlight >}}
+</code></pre>
 
 3) Export the private key and certificate with OpenSSL for Hue's SSL library to ingest.
 
@@ -101,7 +101,7 @@ Exporting the private key from a jks file (Java keystore) needs an intermediate
 
 a. Import the keystore from JKS to PKCS12
 
-{{< highlight bash >}}keytool -importkeystore -srckeystore keystore.jks -destkeystore keystore.p12
+<pre><code class="bash">keytool -importkeystore -srckeystore keystore.jks -destkeystore keystore.p12
 
 -srcstoretype JKS -deststoretype PKCS12 -srcstorepass mysecret -deststorepass
 
@@ -109,27 +109,27 @@ mysecret -srcalias certificatekey -destalias certificatekey -srckeypass
 
 mykeypass -destkeypass mykeypass -noprompt
 
-{{< /highlight >}}
+</code></pre>
 
 b. Convert PKCS12 to PEM using OpenSSL
 
-{{< highlight bash >}}openssl pkcs12 -in keystore.p12 -out keystore.pem -passin pass:mysecret
+<pre><code class="bash">openssl pkcs12 -in keystore.p12 -out keystore.pem -passin pass:mysecret
 
 -passout pass:mysecret
 
-{{< /highlight >}}
+</code></pre>
 
 c. Strip the passphrase so Python doesn't prompt for a password while connecting to Hive
 
-{{< highlight bash >}}openssl rsa -in keystore.pem -out hue_private_keystore.pem
+<pre><code class="bash">openssl rsa -in keystore.pem -out hue_private_keystore.pem
 
-{{< /highlight >}}
+</code></pre>
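 
 You can verify that the resulting key really has no passphrase before handing it to Hue:
 
 <pre><code class="bash">openssl rsa -in hue_private_keystore.pem -check -noout</code></pre>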
 
 &nbsp;
 
 Then the following needs to be set up in Hue’s configuration file hue.ini under the [beeswax] section:
 
-{{< highlight bash >}} [[ssl]]
+<pre><code class="bash"> [[ssl]]
 
 \# SSL communication enabled for this server. (optional since Hue 3.8)
 
@@ -143,13 +143,13 @@ enabled=true
 
 validate=false
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
 Then make sure no custom authentication mechanism is turned on and configure your hive-site.xml with the following properties on Hive 0.13:
 
-{{< highlight xml >}} <property>
+<pre><code class="xml"> <property>
 
   <name>hive.server2.use.SSL</name>
 
@@ -173,7 +173,7 @@ Then make sure no custom authentication mechanism is turned on and configure you
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 26 - 26
docs/gethue/content/posts/2014-10-02-how-to-configure-hue-in-your-hadoop-cluster.md

@@ -66,25 +66,25 @@ Note:** To override a value in Cloudera Manager, you need to enter verbatim each
 
 At any time, you can see the path to the hue.ini and its values on the [/desktop/dump_config][11] page. Then, for each Hadoop service, Hue contains a section that needs to be updated with the correct hostnames and ports. Here is an example of the Hive section in the ini file:
 
-{{< highlight bash >}}[beeswax]
+<pre><code class="bash">[beeswax]
 
  # Host where HiveServer2 is running.
 
  hive_server_host=localhost
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
 To point to another server, just replace the host value with 'hiveserver.ent.com':
 
-{{< highlight bash >}}[beeswax]
+<pre><code class="bash">[beeswax]
 
  # Host where HiveServer2 is running.
 
  hive_server_host=hiveserver.ent.com
 
-{{< /highlight >}}
+</code></pre>
 
 **Note:** Any line starting with a # is considered a comment and is not used.
 
@@ -106,7 +106,7 @@ This is required for [listing or creating files][15]. Replace localhost by the r
 
 Enter this in `hdfs-site.xml` to enable WebHDFS in the NameNode and DataNodes:
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -116,11 +116,11 @@ This is required for [listing or creating files][15]. Replace localhost by the r
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 Configure Hue as a proxy user for all other users and groups, meaning it may submit a request on behalf of any other user. Add to `core-site.xml`:
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -138,11 +138,11 @@ Configure Hue as a proxy user for all other users and groups, meaning it may sub
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 Then, if the NameNode is on a different host than Hue, don't forget to update it in the hue.ini:
 
-{{< highlight bash >}}[hadoop]
+<pre><code class="bash">[hadoop]
 
  [[hdfs_clusters]]
 
@@ -158,13 +158,13 @@ Then, if the Namenode is on another host than Hue, don't forget to update in the
 
      webhdfs_url=http://localhost:50070/webhdfs/v1
 
-{{< /highlight >}}
+</code></pre>
 
 ## YARN
 
 The Resource Manager is often on http://localhost:8088 by default. The ProxyServer and Job History servers also need to be specified. Then the Job Browser will let you [list and kill running applications][16] and get their logs.
 
-{{< highlight bash >}}[hadoop]
+<pre><code class="bash">[hadoop]
 
  [[yarn_clusters]]
 
@@ -190,25 +190,25 @@ The Resource Manager is often on http://localhost:8088 by default. The ProxyServ
 
      history_server_api_url=http://localhost:19888
 
-{{< /highlight >}}
+</code></pre>
 
 ## Hive
 
 Here we need a running HiveServer2 in order to [send SQL queries][17].
 
-{{< highlight bash >}}[beeswax]
+<pre><code class="bash">[beeswax]
 
  # Host where HiveServer2 is running.
 
  hive_server_host=localhost
 
-{{< /highlight >}}
+</code></pre>
 
 Note:
 
 If HiveServer2 is on another machine and you are using security or a customized HiveServer2 configuration, you will need to copy the hive-site.xml to the Hue machine too:
 
-{{< highlight bash >}}[beeswax]
+<pre><code class="bash">[beeswax]
 
  # Host where HiveServer2 is running.
 
@@ -218,43 +218,43 @@ If HiveServer2 is on another machine and you are using security or customized Hi
 
 hive_conf_dir=/etc/hive/conf
 
-{{< /highlight >}}
+</code></pre>
 
 ## Impala
 
 We need to specify the address of one of the Impalad daemons for [interactive SQL][17] in the Impala app.
 
-{{< highlight bash >}}[impala]
+<pre><code class="bash">[impala]
 
  # Host of the Impala Server (one of the Impalad)
 
  server_host=localhost
 
-{{< /highlight >}}
+</code></pre>
 
 ## Solr Search
 
 We just need to specify the address of a SolrCloud (or non-Cloud Solr) instance, and then the [interactive dashboards][18] capabilities are unleashed!
 
-{{< highlight bash >}}[search]
+<pre><code class="bash">[search]
 
  # URL of the Solr Server
 
  solr_url=http://localhost:8983/solr/
 
-{{< /highlight >}}
+</code></pre>
 
 ## Oozie
 
 An Oozie server should be up and running before [submitting or monitoring workflows][19].
 
-{{< highlight bash >}}[liboozie]
+<pre><code class="bash">[liboozie]
 
  # The URL where the Oozie service runs on.
 
 oozie_url=http://localhost:11000/oozie
 
-{{< /highlight >}}
+</code></pre>
 
 ## Pig
 
@@ -264,25 +264,25 @@ The [Pig Editor][20] requires Oozie to be setup with its [sharelib][21].
 
 The HBase app works with an HBase Thrift Server version 1. It lets you [browse, query and edit HBase tables][22].
 
-{{< highlight bash >}}[hbase]
+<pre><code class="bash">[hbase]
 
  # Comma-separated list of HBase Thrift server 1 for clusters in the format of '(name|host:port)'.
 
 hbase_clusters=(Cluster|localhost:9090)
 
-{{< /highlight >}}
+</code></pre>
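 
 If the Thrift server is not running yet, it can usually be started with the command below (assuming a packaged install; with Cloudera Manager, add the "HBase Thrift Server" role instead):
 
 <pre><code class="bash">sudo hbase-daemon.sh start thrift</code></pre>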
 
 ## Sentry
 
 Hue just needs to point to the machine with the Sentry server running.
 
-{{< highlight bash >}}[libsentry]
+<pre><code class="bash">[libsentry]
 
  # Hostname or IP of server.
 
  hostname=localhost
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 14 - 14
docs/gethue/content/posts/2014-10-03-running-an-oozie-workflow-and-getting-split-class-org-apache-oozie-action-hadoop-oozielauncherinputformatemptysplit-not-found.md

@@ -43,7 +43,7 @@ categories:
 ---
 If after installing your cluster and submitting some Oozie jobs you are seeing this type of error:
 
-{{< highlight bash >}}2015-03-11 09:11:19,821 WARN ActionStartXCommand:544 - SERVER[local] USER[hue] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000000-150311091052117-oozie-hue-W] ACTION[0000000-150311091052117-oozie-hue-W@pig] Error starting action [pig]. ErrorType [FAILED], ErrorCode [It should never happen], Message [File /user/oozie/share/lib does not exist]
+<pre><code class="bash">2015-03-11 09:11:19,821 WARN ActionStartXCommand:544 - SERVER[local] USER[hue] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000000-150311091052117-oozie-hue-W] ACTION[0000000-150311091052117-oozie-hue-W@pig] Error starting action [pig]. ErrorType [FAILED], ErrorCode [It should never happen], Message [File /user/oozie/share/lib does not exist]
 
 org.apache.oozie.action.ActionExecutorException: File /user/oozie/share/lib does not exist
 
@@ -77,11 +77,11 @@ at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:61
 
 at java.lang.Thread.run(Thread.java:745)
 
-{{< /highlight >}}
+</code></pre>
 
 or
 
-{{< highlight bash >}} Error: java.io.IOException: Split class org.apache.oozie.action.hadoop.OozieLauncherInputFormat$EmptySplit not found
+<pre><code class="bash"> Error: java.io.IOException: Split class org.apache.oozie.action.hadoop.OozieLauncherInputFormat$EmptySplit not found
 
 at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:363)
 
@@ -107,11 +107,11 @@ at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:361)
 
 ... 7 more
 
-{{< /highlight >}}
+</code></pre>
 
 This is because the [Oozie Share Lib][1] is not installed. Here is one command to install the YARN one:
 
-{{< highlight bash >}}sudo -u oozie /usr/lib/oozie/bin/oozie-setup.sh sharelib create -fs hdfs://localhost:8020 -locallib /usr/lib/oozie/oozie-sharelib-yarn.tar.gz
+<pre><code class="bash">sudo -u oozie /usr/lib/oozie/bin/oozie-setup.sh sharelib create -fs hdfs://localhost:8020 -locallib /usr/lib/oozie/oozie-sharelib-yarn.tar.gz
 
 setting JAVA_LIBRARY_PATH="$JAVA_LIBRARY_PATH:/usr/lib/hadoop/lib/native"
 
@@ -183,15 +183,15 @@ SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 
 the destination path for sharelib is: /user/oozie/share/lib/lib_20141003111250
 
-{{< /highlight >}}
+</code></pre>
 
 On the latest versions of Oozie, just point to a folder instead:
 
-{{< highlight bash >}}sudo -u oozie /usr/lib/oozie/bin/oozie-setup.sh sharelib create -fs hdfs://localhost:8020 -locallib /usr/lib/oozie/oozie-sharelib-yarn{{< /highlight >}}
+<pre><code class="bash">sudo -u oozie /usr/lib/oozie/bin/oozie-setup.sh sharelib create -fs hdfs://localhost:8020 -locallib /usr/lib/oozie/oozie-sharelib-yarn</code></pre>
 
 And how to check it:
 
-{{< highlight bash >}}sudo -u oozie oozie admin -shareliblist -oozie http://localhost:11000/oozie
+<pre><code class="bash">sudo -u oozie oozie admin -shareliblist -oozie http://localhost:11000/oozie
 
 [Available ShareLib]
 
@@ -211,7 +211,7 @@ hive2
 
 pig
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -219,27 +219,27 @@ pig
 
 If you have upgraded your cluster, use 'upgrade' instead of 'create':
 
-{{< highlight bash >}}sudo -u oozie /usr/lib/oozie/bin/oozie-setup.sh sharelib upgrade -fs hdfs://localhost:8020 -locallib /usr/lib/oozie/oozie-sharelib-yarn.tar.gz{{< /highlight >}}
+<pre><code class="bash">sudo -u oozie /usr/lib/oozie/bin/oozie-setup.sh sharelib upgrade -fs hdfs://localhost:8020 -locallib /usr/lib/oozie/oozie-sharelib-yarn.tar.gz</code></pre>
 
 **Note**
 
 If you are seeing:
 
-{{< highlight bash >}}sharelib.system.libpath (unavailable){{< /highlight >}}
+<pre><code class="bash">sharelib.system.libpath (unavailable)</code></pre>
 
 You need something like this in your oozie-site.xml:
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
 <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
 
 <value>*=/etc/hadoop/conf</value>
 
-</property>{{< /highlight >}}
+</property></code></pre>
 
 And now restart Oozie:
 
-{{< highlight bash >}}sudo service oozie restart{{< /highlight >}}
+<pre><code class="bash">sudo service oozie restart</code></pre>
 
 That's it, you are now ready to submit [workflows][2]!
 

+ 18 - 18
docs/gethue/content/posts/2014-10-07-apache-sentry-made-easy-with-the-new-hue-security-app.md

@@ -58,7 +58,7 @@ Main features:
 
 To have Hue point to a Sentry service and another host, modify these [hue.ini][4] properties:
 
-{{< highlight bash >}}[libsentry]
+<pre><code class="bash">[libsentry]
 
  # Hostname or IP of server.
 
@@ -72,7 +72,7 @@ To have Hue point to a Sentry service and another host, modify these [hue.ini][4
 
  sentry_conf_dir=/etc/sentry/conf
 
-{{< /highlight >}}
+</code></pre>
 
 Hue will also automatically pick up the server name of HiveServer2 from the sentry-site.xml file of /etc/hive/conf.
 
@@ -88,7 +88,7 @@ As usual, feel free to continue to send us questions and feedback on the [hue-u
 
 To be able to edit roles and privileges in Hue, the logged-in Hue user needs to belong to a **group in Hue** that is also an **admin group in Sentry** (whatever UserGroupMapping Sentry is using, the corresponding groups must exist in Hue or need to be entered manually). For example, our 'hive' user belongs to a 'hive' group in Hue and also to a 'hive' group in Sentry:
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
   <name>sentry.service.admin.group</name>
 
@@ -96,7 +96,7 @@ To be able to edit roles and privileges in Hue, the logged-in Hue user needs to
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -123,21 +123,21 @@ Our users are:
 
 We [synced the Unix users/groups][9] into Hue with these commands:
 
-{{< highlight bash >}}export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"
+<pre><code class="bash">export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"
 
 build/env/bin/hue useradmin_sync_with_unix -min-uid=1000
 
-{{< /highlight >}}
+</code></pre>
 
 If using the package version and you have the CDH repository registered, install Sentry with:
 
-{{< highlight bash >}}sudo apt-get install sentry
+<pre><code class="bash">sudo apt-get install sentry
 
-{{< /highlight >}}
+</code></pre>
 
 If using Kerberos, make sure ‘hue’ is allowed to connect to Sentry in /etc/sentry/conf/sentry-site.xml:
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
   <name>sentry.service.allow.connect</name>
 
@@ -145,13 +145,13 @@ If using Kerberos, make sure ‘hue’ is allowed to connect to Sentry in /etc/s
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 Here is an example of sentry-site.xml
 
 
-{{< highlight xml >}}<?xml version="1.0" encoding="UTF-8"?>
+<pre><code class="xml"><?xml version="1.0" encoding="UTF-8"?>
 
 <configuration>
 
@@ -205,27 +205,27 @@ Here is an example of sentry-site.xml
 
 </configuration>
 
-{{< /highlight >}}
+</code></pre>
 
 For testing purposes, here is how to create the initial Sentry database:
 
-{{< highlight bash >}}romain@runreal:~/projects/hue$ sentry -command schema-tool -initSchema -conffile /etc/sentry/conf/sentry-site.xml -dbType derby
+<pre><code class="bash">romain@runreal:~/projects/hue$ sentry -command schema-tool -initSchema -conffile /etc/sentry/conf/sentry-site.xml -dbType derby
 
-{{< /highlight >}}
+</code></pre>
 
 And start the service:
 
-{{< highlight bash >}}sentry -command service  -conffile /etc/sentry/conf/sentry-site.xml
+<pre><code class="bash">sentry -command service  -conffile /etc/sentry/conf/sentry-site.xml
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**
 
 In Sentry 1.5, you will need to specify a ‘sentry.store.jdbc.password’ property in the sentry-site.xml; if not, you will get:
 
-{{< highlight bash >}}Caused by: org.apache.sentry.provider.db.service.thrift.SentryConfigurationException: Error reading sentry.store.jdbc.password
+<pre><code class="bash">Caused by: org.apache.sentry.provider.db.service.thrift.SentryConfigurationException: Error reading sentry.store.jdbc.password
 
-{{< /highlight >}}
+</code></pre>
 
  [1]: http://sentry.incubator.apache.org/
  [2]: https://gethue.com/hadoop-tutorial-new-impala-and-hive-editors/

+ 11 - 11
docs/gethue/content/posts/2014-10-09-bay-area-bike-share-analysis-with-the-hadoop-notebook-and-spark-sql.md

@@ -72,7 +72,7 @@ Now that we've imported the data into our cluster, we can create a new Notebook
 
 Let's find the top 10 most popular start stations based on the trip data:
 
-{{< highlight sql >}}SELECT startterminal, startstation, COUNT(1) AS count FROM bikeshare.trips GROUP BY startterminal, startstation ORDER BY count DESC LIMIT 10{{< /highlight >}}
+<pre><code class="sql">SELECT startterminal, startstation, COUNT(1) AS count FROM bikeshare.trips GROUP BY startterminal, startstation ORDER BY count DESC LIMIT 10</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/09/impala_query-1024x339.png"  />][9]
 
@@ -82,7 +82,7 @@ Once our results are returned, we can easily visualize this data; a bar graph wo
 
 It seems that the San Francisco Caltrain (Townsend at 4th) was by far the most common start station. Let's determine which end stations, for trips starting from the SF Caltrain Townsend station, were the most popular. We'll fetch the latitude and longitude coordinates so that we can visualize the results on a map.
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 SELECT
 
@@ -106,7 +106,7 @@ GROUP BY s.station_id, s.name, s.lat, s.long
 
 ORDER BY count DESC LIMIT 10
 
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/08/impala_map-e1443111522857-1024x223.png" />][11]
 
@@ -120,7 +120,7 @@ Let's say we wanted to dig further into the trip data for the SF Caltrain statio
 
 Since the trip data stores startdate as a STRING, we'll need to apply some string-manipulation to extract the hour within an inline SQL query. The outer query will aggregate the count of trips and the average duration.
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 SELECT
 
@@ -154,7 +154,7 @@ GROUP BY hour
 
 ORDER BY hour ASC;
 
-{{< /highlight >}}
+</code></pre>
 
 Since this query produces several numeric dimensions, we can visualize the results using a scatterplot graph, with the hour as the x-axis, the number of trips as the y-axis, and the average duration as the scatterplot size.
 
@@ -162,7 +162,7 @@ Since this data produces several numeric dimensions of data, we can visualize th
 
 Let's add another Hive snippet to analyze an hour-by-hour breakdown of availability at the SF Caltrain Station:
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 SELECT
 
@@ -202,7 +202,7 @@ GROUP BY hour
 
 ORDER BY hour ASC;
 
-{{< /highlight >}}
+</code></pre>
 
 We'll visualize the results as a line graph, which indicates that the bike availability tends to fall starting at 6 AM and is regained around 6 PM.
 
@@ -216,7 +216,7 @@ Hue's Spark notebooks allow users to mix exploratory SQL-analysis with custom Sc
 
 For example, we can open a pyspark snippet and load the trip data directly from the Hive warehouse and apply a sequence of filter, map, and reduceByKey operations to determine the average number of trips starting from the SF Caltrain Station:
 
-{{< highlight python >}}
+<pre><code class="python">
 
 trips = sc.textFile('/user/hive/warehouse/bikeshare.db/trips/201402_trip_data.csv')
 
@@ -248,7 +248,7 @@ avg_trips_sorted = sorted(avg_trips_by_hour.collect())
 
 %table avg_trips_sorted
 
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/09/Screenshot-2015-09-23-23.13.46-e1443110910319-1024x268.png" />][14]
 
@@ -268,7 +268,7 @@ Stay tuned for a number of exciting improvements to the notebook app, and as usu
 
 The BABS rebalancing data (named 201402_status_data.csv) uses quotes. In this case, it is easier to create the table in the Hive editor and use the OpenCSV row SerDe for Hive:
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 CREATE TABLE rebalancing(station_id int, bikes_available int, docks_available int, time string)
 
@@ -286,7 +286,7 @@ WITH SERDEPROPERTIES (
 
 STORED AS TEXTFILE;
 
-{{< /highlight >}}
+</code></pre>
 
 Then you can go back to the Metastore to import the CSV into the table; note that you may have to remove the header line manually.
 

+ 6 - 6
docs/gethue/content/posts/2014-12-03-hadoop-yarn-11-local-dirs-are-bad-varlibhadoop-yarncacheyarnnm-local-dir-11-log-dirs-are-bad-varloghadoop-yarncontainers.md

@@ -42,11 +42,11 @@ categories:
 ---
 If you are getting this error, free up some disk space!
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -54,7 +54,7 @@ If you are getting this error, make some disk space!
 
 ## Node Manager logs
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 yarn.server.nodemanager.DirectoryCollection: Directory /var/lib/hadoop-yarn/cache/yarn/nm-local-dir error, used space above threshold of 90.0%, removing from list of valid directories
   
@@ -62,17 +62,17 @@ yarn.server.nodemanager.DirectoryCollection: Directory /var/lib/hadoop-yarn/cach
   
 2014-11-17 17:45:00,713 INFO org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: Disk(s) failed: 1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir; 1/1 log-dirs are bad: /var/log/hadoop
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
 ## Resource Manager logs
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 2014-11-17 16:57:07,301 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Node localhost:34650 reported UNHEALTHY with details: 1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
 
-{{< /highlight >}}
+</code></pre>
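 
 A quick way to see which mount is over the 90% threshold and what is filling it up (a sketch; adjust the paths to your yarn.nodemanager.local-dirs and log-dirs):
 
 <pre><code class="bash">df -h /var/lib/hadoop-yarn /var/log/hadoop-yarn
 du -sh /var/log/hadoop-yarn/containers/*</code></pre>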
 
 &nbsp;
 

+ 12 - 12
docs/gethue/content/posts/2014-12-09-how-to-use-hcatalog-with-pig-in-a-secured-cluster.md

@@ -53,7 +53,7 @@ As usual, if you have questions or feedback, feel free to contact the Hue commun
 
 We are going to use this simple script that displays the first records of one of the sample Hive tables:
 
-{{< highlight bash >}}- Load table 'sample_07'
+<pre><code class="bash">-- Load table 'sample_07'
 
 sample_07 = LOAD 'sample_07' USING org.apache.hcatalog.pig.HCatLoader();
 
@@ -61,7 +61,7 @@ out = LIMIT sample_07 15;
 
 DUMP out;
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -69,13 +69,13 @@ DUMP out;
 
 As usual, if it is [missing][4], some jars won't be found and you will get:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 ERROR 1070: Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
 
 org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -87,13 +87,13 @@ org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during par
 
 In the workflow properties, make sure that these Oozie properties are set:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 oozie.use.system.libpath true
 
 oozie.action.sharelib.for.pig pig,hcatalog
 
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2014/12/pig-hcat-cred-1024x402.png" />][7]
 
@@ -117,7 +117,7 @@ And that's it!
 
 Example of the workflow XML:
 
-{{< highlight xml >}}<workflow-app name="pig-app-hue-script" xmlns="uri:oozie:workflow:0.4">
+<pre><code class="xml"><workflow-app name="pig-app-hue-script" xmlns="uri:oozie:workflow:0.4">
 
 <credentials>
 
@@ -175,11 +175,11 @@ Examples of XML workflow
 
 </workflow-app>
 
-{{< /highlight >}}
+</code></pre>
 
 Properties
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 Name Value
 
@@ -201,11 +201,11 @@ oozie.wf.application.path hdfs://hue-c5-sentry.ent.cloudera.com:8020/user/hue/oo
 
 user.name hive
 
-{{< /highlight >}}
+</code></pre>
 
 If you get the dreaded 'ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader' error, this could be because the hive-site.xml is not added or because you need [HUE-2152][9], which injects the HCat credential in the script.
 
-{{< highlight bash >}}ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
+<pre><code class="bash">ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
 
 org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
 
@@ -501,7 +501,7 @@ at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(Connect
 
 ... 109 more
 
-{{< /highlight >}}
+</code></pre>
 
  [1]: https://gethue.com/hadoop-tutorial-how-to-access-hive-in-pig-with/
  [2]: http://groups.google.com/a/cloudera.org/group/hue-user

+ 6 - 6
docs/gethue/content/posts/2014-12-11-how-to-run-hue-with-the-apache-server.md

@@ -50,7 +50,7 @@ It turns out it’s pretty simple to do. It only requires a small script, a Hue
 
 This script (which was just added in [`desktop/core/desktop/wsgi.py`][4]) enables any Web server that speaks WSGI to launch Hue and route requests to it:
 
-{{< highlight python >}}
+<pre><code class="python">
   
 import os
   
@@ -62,7 +62,7 @@ os.environ.setdefault("DJANGO_SETTINGS_MODULE", "desktop.settings")
   
 from django.core.wsgi import get_wsgi_application
   
-application = get_wsgi_application(){{< /highlight >}}
+application = get_wsgi_application()</code></pre>
 
 The next step disables booting Hue from the `runcpserver` command. In Cloudera Manager, go to **Hue** > **Configuration** > **Service-Wide** > **Advanced**, and add the following to the hue safety valve:
 
@@ -70,7 +70,7 @@ The next step disables booting Hue from the `runcpserver` command. In Cloudera
 
 If you are [running Hue outside of Cloudera Manager][5], modify `desktop/conf/hue.ini` with:
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [desktop]
     
@@ -78,11 +78,11 @@ If you are [running Hue outside of Cloudera Manager][5], modify `desktop/conf/hu
     
 enable_server=no
   
-{{< /highlight >}}
+</code></pre>
 
 The final step is to configure Apache to launch Hue by adding the following to the `apache.conf`:
 
-{{< highlight bash >}}WSGIScriptAlias / $HUE_PATH/desktop/core/src/desktop/wsgi.py
+<pre><code class="bash">WSGIScriptAlias / $HUE_PATH/desktop/core/src/desktop/wsgi.py
   
 WSGIPythonPath $HUE_PATH/desktop/core/src/desktop:$HUE_PATH/build/env/lib/python2.7/site-packages
   
@@ -112,7 +112,7 @@ Require all granted
   
 </Directory>
   
-{{< /highlight >}}
+</code></pre>
 
 Where `$HOSTNAME` should be the hostname of the machine running Hue, and `$HUE_PATH` is where Hue is installed. If you’re using Cloudera Manager, by default it should be either `/usr/lib/hue` for a package install, or `/opt/cloudera/parcels/CDH/lib/hue` for a parcel install.
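 
 Note that Apache needs mod_wsgi for the `WSGIScriptAlias` directive to work. On a Debian/Ubuntu system this is typically (an assumption, package names vary by distribution):
 
 <pre><code class="bash">sudo apt-get install libapache2-mod-wsgi
 sudo a2enmod wsgi
 sudo service apache2 restart</code></pre>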
 

+ 8 - 8
docs/gethue/content/posts/2014-12-12-how-to-use-hue-with-hive-and-impala-configured-with-ldap-authentication-and-ssl.md

@@ -61,7 +61,7 @@ The same Hue behavior occurred after making the change, but now the HiveServer2
 
 So, we added the following to the Hue Safety Valve:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [beeswax]
 
@@ -75,11 +75,11 @@ cacerts=/etc/hue/cacerts.pem
 
 validate=false
 
-{{< /highlight >}}
+</code></pre>
 
 or
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [impala]
 
@@ -97,21 +97,21 @@ cacerts=/etc/hue/cacerts.pem
 
 validate=false
 
-{{< /highlight >}}
+</code></pre>
 
 3.
 
 Hue still showed the same behavior. HiveServer2 logs showed:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 <HUE_LDAP_USERNAME> is not allowed to impersonate bob
 
-{{< /highlight >}}
+</code></pre>
 
 We solved this by adding the following to the HDFS > Service-Wide > Advanced > Safety Valve for core-site.xml:
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -129,7 +129,7 @@ We solved this by adding the following to the HDFS > Service-Wide ->Advanced>Saf
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 4.
 

+ 6 - 6
docs/gethue/content/posts/2014-12-16-how-to-deploy-hue-on-hdp.md

@@ -90,7 +90,7 @@ Hue uses an SQLite database by default and you may find the following error when
     
     <div>
       <p>
-        {{< highlight bash >}}[desktop]<br /> app_blacklist=impala<br /> {{< /highlight >}}
+        <pre><code class="bash">[desktop]<br /> app_blacklist=impala<br /> </code></pre>
       </p>
     </div>
   </div>
@@ -138,7 +138,7 @@ Hue uses an SQLite database by default and you may find the following error when
                 </p>
                 
                 <p>
-                  {{< highlight xml >}}<property><br /> <name>hadoop.proxyuser.hue.hosts</name><br /> <value>*</value><br /> </property><br /> <property><br /> <name>hadoop.proxyuser.hue.groups</name><br /> <value>*</value><br /> </property><br /> {{< /highlight >}}
+                  <pre><code class="xml"><property><br /> <name>hadoop.proxyuser.hue.hosts</name><br /> <value>*</value><br /> </property><br /> <property><br /> <name>hadoop.proxyuser.hue.groups</name><br /> <value>*</value><br /> </property><br /> </code></pre>
                 </p>
                 
                 <p>
@@ -239,7 +239,7 @@ Hue uses an SQLite database by default and you may find the following error when
     
     <div>
       <p>
-        {{< highlight bash >}}"Server does not support GetLog()"{{< /highlight >}}
+        <pre><code class="bash">"Server does not support GetLog()"</code></pre>
       </p>
     </div>
     
@@ -254,7 +254,7 @@ Hue uses an SQLite database by default and you may find the following error when
       <div>
         <div>
           <p>
-            {{< highlight bash >}}[beeswax]<br /> # Choose whether Hue uses the GetLog() thrift call to retrieve Hive logs.<br /> # If false, Hue will use the FetchResults() thrift call instead.<br /> use_get_log_api=false<br /> {{< /highlight >}}
+            <pre><code class="bash">[beeswax]<br /> # Choose whether Hue uses the GetLog() thrift call to retrieve Hive logs.<br /> # If false, Hue will use the FetchResults() thrift call instead.<br /> use_get_log_api=false<br /> </code></pre>
           </p>
         </div>
       </div>
@@ -366,7 +366,7 @@ Hue uses an SQLite database by default and you may find the following error when
     <div>
       <div>
         <p>
-          {{< highlight bash >}}2014-12-15 23:32:17,626  INFO ActionStartXCommand:543 - SERVER[hdptest.construct.dev] USER[amo] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000001-141215230246520-<wbr />oozie-oozi-W] ACTION[0000001-<wbr />141215230246520-oozie-oozi-W@:<wbr />start:] Start action [0000001-141215230246520-<wbr />oozie-oozi-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
+          <pre><code class="bash">2014-12-15 23:32:17,626  INFO ActionStartXCommand:543 - SERVER[hdptest.construct.dev] USER[amo] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000001-141215230246520-<wbr />oozie-oozi-W] ACTION[0000001-<wbr />141215230246520-oozie-oozi-W@:<wbr />start:] Start action [0000001-141215230246520-<wbr />oozie-oozi-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
         </p>
         
         <p>
@@ -378,7 +378,7 @@ Hue uses an SQLite database by default and you may find the following error when
         </p>
         
         <p>
-          2014-12-15 23:32:17,873  INFO ActionStartXCommand:543 - SERVER[hdptest.construct.dev] USER[amo] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000001-141215230246520-<wbr />oozie-oozi-W] ACTION[0000001-<wbr />141215230246520-oozie-oozi-W@<wbr />pig] Start action [0000001-141215230246520-<wbr />oozie-oozi-W@pig] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]<br /> {{< /highlight >}}
+          2014-12-15 23:32:17,873  INFO ActionStartXCommand:543 - SERVER[hdptest.construct.dev] USER[amo] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000001-141215230246520-<wbr />oozie-oozi-W] ACTION[0000001-<wbr />141215230246520-oozie-oozi-W@<wbr />pig] Start action [0000001-141215230246520-<wbr />oozie-oozi-W@pig] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]<br /> </code></pre>
         </p>
       </div>
     </div>

+ 14 - 14
docs/gethue/content/posts/2015-01-16-configure-hue-with-https-ssl.md

@@ -50,11 +50,11 @@ To configure Hue to use HTTPS we need a self signed SSL certificate that does n
 
 Here is how to generate a private key and a self-signed certificate for the Hue server:
 
-{{< highlight bash >}}openssl genrsa 4096 > server.key
+<pre><code class="bash">openssl genrsa 4096 > server.key
 
 openssl req -new -x509 -nodes -sha1 -key server.key > server.cert
 
-{{< /highlight >}}
+</code></pre>
 
   
@@ -82,7 +82,7 @@ Make sure Hue is setting the [cookie as secure][3].
 
 Here is an example of creating a certificate for enabling SSL:
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [root@cehd1 hue]# pwd
   
@@ -92,21 +92,21 @@ Here is an example of creation of a certificate for enabling SSL:
   
 cacerts  cert  key
   
-{{< /highlight >}}
+</code></pre>
 
 Generate a private key for the server:
 
-{{< highlight bash >}}[root@cehd1 hue]# openssl genrsa -out key/server.key 4096{{< /highlight >}}
+<pre><code class="bash">[root@cehd1 hue]# openssl genrsa -out key/server.key 4096</code></pre>
 
 Generate a "certificate request" for the server:
 
-{{< highlight bash >}}[root@cehd1 hue] openssl req -new -key key/server.key -out request/server.csr{{< /highlight >}}
+<pre><code class="bash">[root@cehd1 hue] openssl req -new -key key/server.key -out request/server.csr</code></pre>
 
 You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN.
   
 There are quite a few fields but you can leave some blank. For some fields there will be a default value, if you enter '.', the field will be left blank.
 
-{{< highlight bash >}}Country Name (2 letter code) [XX]:US
+<pre><code class="bash">Country Name (2 letter code) [XX]:US
    
 State or Province Name (full name) []:Colorado
    
@@ -126,11 +126,11 @@ A challenge password []:  ## note this was left
   
 An optional company name []:
 
-{{< /highlight >}}
+</code></pre>
 
 Self-sign the request, creating a certificate for the server:
 
-{{< highlight bash >}}[root@cehd1 hue] openssl x509 -req -days 365 -in request/server.csr -signkey key/server.key -out cert/server.crt
+<pre><code class="bash">[root@cehd1 hue] openssl x509 -req -days 365 -in request/server.csr -signkey key/server.key -out cert/server.crt
    
 Signature ok
    
@@ -138,9 +138,9 @@ subject=/C=US/ST=Colorado/L=<wbr />Denver/O=Cloudera/OU=COE/CN=test.lab
    
 Getting Private key
 
-{{< /highlight >}}
+</code></pre>
 
-{{< highlight bash >}}[root@cehd1 hue]# ls -lR
+<pre><code class="bash">[root@cehd1 hue]# ls -lR
   
 .
    
@@ -178,7 +178,7 @@ total 4
    
 -rw-r--r-- 1 root root 1704 Jul 31 10:00 server.csr
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -197,7 +197,7 @@ Also, the Hue truststore has to be in PEM file format. At Cloudera we are using
 <div class="preformatted panel">
   <div class="preformattedContent panelContent">
     <p>
-      {{< highlight bash >}}keytool -exportcert -keystore hadoop-server.keystore -alias foo-1.cloudera.com \<br /> -storepass cloudera -file foo-1.cert<br /> openssl x509 -inform der -in foo-1.cert > foo-1.pem<br /> {{< /highlight >}}
+      <pre><code class="bash">keytool -exportcert -keystore hadoop-server.keystore -alias foo-1.cloudera.com \<br /> -storepass cloudera -file foo-1.cert<br /> openssl x509 -inform der -in foo-1.cert > foo-1.pem<br /> </code></pre>
     </p>
     
     <p>
@@ -207,7 +207,7 @@ Also, the Hue truststore has to be in PEM file format. At Cloudera we are using
     <div class="preformatted panel">
       <div class="preformattedContent panelContent">
         <p>
-          {{< highlight bash >}}cat foo-1.pem foo-2.pem ... > huetrust.pem{{< /highlight >}}
+          <pre><code class="bash">cat foo-1.pem foo-2.pem ... > huetrust.pem</code></pre>
         </p>
       </div>
     </div>

+ 12 - 12
docs/gethue/content/posts/2015-01-21-automatic-high-availability-with-hue-and-cloudera-manager.md

@@ -70,43 +70,43 @@ Once the database has been set up, the following instructions describe setting u
 
 On a Redhat/Fedora-based system:
 
-{{< highlight bash >}}% sudo yum install git nginx haproxy python python-pip
+<pre><code class="bash">% sudo yum install git nginx haproxy python python-pip
 
 % pip install virtualenv
 
-{{< /highlight >}}
+</code></pre>
 
 On a Debian/Ubuntu-based system:
 
-{{< highlight bash >}}% sudo apt-get install git nginx haproxy python python-pip
+<pre><code class="bash">% sudo apt-get install git nginx haproxy python python-pip
 
 % pip install virtualenv
 
-{{< /highlight >}}
+</code></pre>
 
 ## Running the load balancers
 
 First we want to start the load balancer:
 
-{{< highlight bash >}}% cd $HUE_HOME_DIR/tools/load-balancer
+<pre><code class="bash">% cd $HUE_HOME_DIR/tools/load-balancer
 
-{{< /highlight >}}
+</code></pre>
 
 Next we install the load-balancer-specific dependencies in a Python virtual environment to keep them from affecting other projects on the system.
 
-{{< highlight bash >}}% virtualenv build
+<pre><code class="bash">% virtualenv build
 
 % source build/bin/activate
 
 % pip install -r requirements.txt
 
-{{< /highlight >}}
+</code></pre>
 
 Finally, modify `etc/hue-lb.toml` to point at your instance of Cloudera Manager (as in "cloudera-manager.example.com" without the port or "http://"), and provide a username and password for an account that has read access to the Hue state.
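
 A minimal sketch of what that file could contain is below; the key names are hypothetical, so check the comments in `etc/hue-lb.toml` for the exact ones:

 <pre><code class="bash"># hypothetical hue-lb.toml sketch - adjust key names to the shipped template
 cm_host = "cloudera-manager.example.com"   # no port, no "http://"
 cm_username = "hue_reader"                 # account with read access to the Hue state
 cm_password = "secret"
 </code></pre>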
 
 Now we are ready to start the load balancers. Run:
 
-{{< highlight bash >}}% ./bin/supervisord
+<pre><code class="bash">% ./bin/supervisord
 
 % ./bin/supervisorctl status
 
@@ -116,7 +116,7 @@ monitor-hue-lb RUNNING pid 36919, uptime 0:00:01
 
 nginx RUNNING pid 36921, uptime 0:00:01
 
-{{< /highlight >}}
+</code></pre>
 
 You should be able to access Hue from either `http://HUE-LB-HOSTNAME:8000` for NGINX, or `http://HUE-LB-HOSTNAME:8001` for HAProxy. To demonstrate that it’s load balancing:
 
@@ -132,9 +132,9 @@ You should be able to access Hue from either `http://HUE-LB-HOSTNAME:8000` for N
 
 Finally, if you want to shut down the load balancers, run:
 
-{{< highlight bash >}}% ./bin/supervisorctl shutdown
+<pre><code class="bash">% ./bin/supervisorctl shutdown
 
-{{< /highlight >}}
+</code></pre>
 
 ## Automatic Updates from Cloudera Manager
 

+ 8 - 8
docs/gethue/content/posts/2015-02-06-export-and-import-your-search-dashboards.md

@@ -49,25 +49,25 @@ categories:
 
 20000013 is the id you can see in the URL of the dashboard. If you don't specify --pks it will export all your dashboards.
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 ./build/env/bin/hue dumpdata search.Collection --indent 2 --pks=20000013 --natural > data.json
 
-{{< /highlight >}}
+</code></pre>
 
 **Using Hue 3.7 or less**
 
-{{< highlight bash >}}./build/env/bin/hue dumpdata search --indent 2 > data.json
+<pre><code class="bash">./build/env/bin/hue dumpdata search --indent 2 > data.json
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
 then
 
-{{< highlight bash >}}./build/env/bin/hue loaddata data.json
+<pre><code class="bash">./build/env/bin/hue loaddata data.json
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -81,11 +81,11 @@ And that's it, the dashboards with the same IDs will be refreshed with the impo
 
 If using CM, export this variable in order to point to the correct database:
 
-{{< highlight bash >}}HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
+<pre><code class="bash">HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
 
 echo $HUE_CONF_DIR
 
-export HUE_CONF_DIR{{< /highlight >}}
+export HUE_CONF_DIR</code></pre>
 
 Where <id> is the most recent ID in that process directory for hue-HUE_SERVER.
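
 For example, you could locate the most recent one with something like:

 <pre><code class="bash"># pick the latest hue-HUE_SERVER process directory
 ls -1 /var/run/cloudera-scm-agent/process | grep HUE | sort -n | tail -1
 </code></pre>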
 

+ 24 - 24
docs/gethue/content/posts/2015-02-08-hue-api-execute-some-builtin-commands.md

@@ -53,25 +53,25 @@ Hue comes with a set of commands for simplifying the management of the service.
 
 If using CM, export this variable in order to point to the correct Hue:
 
-{{< highlight bash >}}cd /opt/cloudera/parcels/CDH/lib/{{< /highlight >}}
+<pre><code class="bash">cd /opt/cloudera/parcels/CDH/lib/</code></pre>
 
-{{< highlight bash >}}HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
+<pre><code class="bash">HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
 
 echo $HUE_CONF_DIR
 
-export HUE_CONF_DIR{{< /highlight >}}
+export HUE_CONF_DIR</code></pre>
 
 Where <id> is the most recent ID in that process directory for hue-HUE_SERVER.
 
 If not using CM, just go to the root of the Hue home, normally:
 
-{{< highlight bash >}}/usr/lib/hue{{< /highlight >}}
+<pre><code class="bash">/usr/lib/hue</code></pre>
 
 Note:
 
 You might need to have access to a local directory for the logs of the command, e.g.:
 
-{{< highlight bash >}}cd /tmp{{< /highlight >}}
+<pre><code class="bash">cd /tmp</code></pre>
 
 &nbsp;
 
@@ -104,7 +104,7 @@ You might need to have access to a local directory for the logs of the command,
 
 <span style="font-weight: 400;">Example running </span><span style="font-weight: 400;">changepassword</span><span style="font-weight: 400;">:</span>
 
-{{< highlight bash >}}[root@nightly55-1 ~]# export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -1 /var/run/cloudera-scm-agent/process | grep HUE | sort -n | tail -1 \`"
+<pre><code class="bash">[root@nightly55-1 ~]# export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -1 /var/run/cloudera-scm-agent/process | grep HUE | sort -n | tail -1 \`"
 
 [root@nightly55-1 ~]# HUE_IGNORE_PASSWORD_SCRIPT_ERRORS=1 HUE_DATABASE_PASSWORD=password /opt/cloudera/parcels/CDH/lib/hue/build/env/bin/hue changepassword admin
 
@@ -116,11 +116,11 @@ Password (again):
 
 Password changed successfully for user 'admin'
 
-{{< /highlight >}}
+</code></pre>
 
 <span style="font-weight: 400;">If you are performing command line actions that require other passwords, such as </span><span style="font-weight: 400;">bind_password</span> <span style="font-weight: 400;">for syncing LDAP users and groups, you need to include environment variables to set those as well. Here is a list:</span>
 
-{{< highlight bash >}}HUE_AUTH_PASSWORD = password used to authenticate to HS2/Impala.
+<pre><code class="bash">HUE_AUTH_PASSWORD = password used to authenticate to HS2/Impala.
 
 HUE_LDAP_PASSWORD = password used to authenticate to HS2/Impala.
 
@@ -128,7 +128,7 @@ HUE_SSL_PASSWORD = password used for private key file.
 
 HUE_SMTP_PASSWORD = password used for SMTP.
 
-HUE_LDAP_BIND_PASSWORD = password used for Ldap Bind.{{< /highlight >}}
+HUE_LDAP_BIND_PASSWORD = password used for Ldap Bind.</code></pre>
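
 For example, a hypothetical LDAP sync passing one of these variables on the command line could look like:

 <pre><code class="bash"># example only - the bind password value is a placeholder
 HUE_LDAP_BIND_PASSWORD=my_bind_password ./build/env/bin/hue sync_ldap_users_and_groups
 </code></pre>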
 
 ##
 
@@ -149,7 +149,7 @@ HUE_LDAP_BIND_PASSWORD = password used for Ldap Bind.{{< /highlight >}}
 
 <span style="font-weight: 400;">Example running </span><span style="font-weight: 400;">changepassword</span><span style="font-weight: 400;">.</span>
 
-{{< highlight bash >}}[root@cdhnok54-1 tmp]# export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -1 /var/run/cloudera-scm-agent/process | grep HUE | sort -n | tail -1 \`"
+<pre><code class="bash">[root@cdhnok54-1 tmp]# export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -1 /var/run/cloudera-scm-agent/process | grep HUE | sort -n | tail -1 \`"
 
 [root@cdhnok54-1 tmp]# /opt/cloudera/parcels/CDH/lib/hue/build/env/bin/hue changepassword admin
 
@@ -159,7 +159,7 @@ Password:
 
 Password (again):
 
-Password changed successfully for user 'admin'{{< /highlight >}}
+Password changed successfully for user 'admin'</code></pre>
 
 &nbsp;
 
@@ -167,7 +167,7 @@ Password changed successfully for user 'admin'{{< /highlight >}}
 
 Executing the hue command with no argument will list them all:
 
-{{< highlight bash >}}./build/env/bin/hue
+<pre><code class="bash">./build/env/bin/hue
 
 ...
 
@@ -385,17 +385,17 @@ sync_ldap_users_and_groups
 
 useradmin_sync_with_unix
 
-{{< /highlight >}}
+</code></pre>
 
 ## Starting the server
 
 For starting the test server, defaulting to port 8000:
 
-{{< highlight bash >}}./build/env/bin/hue runserver{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue runserver</code></pre>
 
 For starting the production server, defaulting to port 8888:
 
-{{< highlight bash >}}./build/env/bin/hue runcpserver{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue runcpserver</code></pre>
 
 These commands are more detailed on the [How to get started page][1].
 
@@ -403,11 +403,11 @@ These commands are more detailed on the [How to get started page][1].
 
 All the commands ending with '_setup' will install the examples of the particular app.
 
-{{< highlight bash >}}./build/env/bin/hue search_setup{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue search_setup</code></pre>
 
 In the case of Hive, in order to install the sample_07 and sample_08 tables and SQL queries, type:
 
-{{< highlight bash >}}./build/env/bin/hue beeswax_install_examples{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue beeswax_install_examples</code></pre>
 
 **Note**:
 
@@ -419,35 +419,35 @@ These commands are also accessible directly from the [Web UI][2].
 
 This command is explained in more detail in the [How to change or reset a forgotten password][4] post:
 
-{{< highlight bash >}}./build/env/bin/hue changepassword{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue changepassword</code></pre>
 
 ## Closing Hive queries
 
 This command is explained in more detail in the [Hive and Impala queries life cycle][5] post:
 
-{{< highlight bash >}}./build/env/bin/hue close_queries{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue close_queries</code></pre>
 
-{{< highlight bash >}}./build/env/bin/hue close_sessions{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue close_sessions</code></pre>
 
 ## Running the tests
 
 This command is explained in more detail in the [How to run the tests][6] post:
 
-{{< highlight bash >}}./build/env/bin/hue test{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue test</code></pre>
 
 ## Connect to the Database
 
 This command is explained in more detail in the [How to manage the database with the shell][7] post:
 
-{{< highlight bash >}}./build/env/bin/hue dbshell{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue dbshell</code></pre>
 
 ## Connect to the Python shell
 
 In order to type any Django or Python commands:
 
-{{< highlight bash >}}./build/env/bin/hue shell{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue shell</code></pre>
 
-{{< highlight bash >}}./build/env/bin/hue shell < script.py{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue shell < script.py</code></pre>
 
 &nbsp;
 

+ 9 - 9
docs/gethue/content/posts/2015-02-12-hadoop-hue-3-on-hdp-installation-tutorial.md

@@ -98,17 +98,17 @@ HUE will be deployed as a “Gateway” access node to our Hadoop cluster; this
 
 <span style="color: #ff0000;">Note about Hive and HDP 2.5+: <span style="color: #000000;">Since at least HDP 2.5, the default Hive shipped won't work with Hue unless you change the property:</span></span>
 
-{{< highlight bash >}}hive.server2.parallel.ops.in.session=true{{< /highlight >}}
+<pre><code class="bash">hive.server2.parallel.ops.in.session=true</code></pre>
 
 Note about <span style="color: #ff0000;">Tez</span>:
 
-{{< highlight bash >}}[beeswax]
+<pre><code class="bash">[beeswax]
    
 \# Hue will use at most this many HiveServer2 sessions per user at a time.
    
 \# For Tez, increase the number if you need more than one query at a time, e.g. 2 or 3 (Tez has a maximum of 1 query per session).
    
-max_number_of_sessions=1{{< /highlight >}}
+max_number_of_sessions=1</code></pre>
 
 &nbsp;
 
@@ -132,7 +132,7 @@ Ubuntu uses ‘apt-get’ for package management. In our example, we’re using
 
 Prepare dependencies:
 
-{{< highlight bash >}}sudo apt-get install -y ant
+<pre><code class="bash">sudo apt-get install -y ant
   
 sudo apt-get install -y gcc g++
   
@@ -150,7 +150,7 @@ sudo apt-get install -y libldap2-dev
   
 sudo apt-get install -y python-dev python-simplejson python-setuptools
 
-{{< /highlight >}}
+</code></pre>
 
 Download the Hue 3.8.1 release tarball (or, if needed, the older version: [3.7.1 link][5]):
 
@@ -160,17 +160,17 @@ Make sure you have Java installed and configured correctly!
   
 I’m using Open JDK 1.7 in this example:
 
-{{< highlight bash >}}sudo apt-get install -y openjdk-7-jre openjdk-7-jdk
+<pre><code class="bash">sudo apt-get install -y openjdk-7-jre openjdk-7-jdk
   
 sudo echo "JAVA_HOME=\"/usr/lib/jvm/java-7-openjdk-amd64/jre\"" &gt;&gt; /etc/environment
 
-{{< /highlight >}}
+</code></pre>
 
 Unpackage the HUE release tarball and change to the directory.
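
 For example (assuming the 3.8.1 tarball name, adjust to the file you downloaded):

 <pre><code class="bash">tar -xvzf hue-3.8.1.tgz
 cd hue-3.8.1
 </code></pre>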
 
 Install HUE:
 
-{{< highlight bash >}}sudo make install{{< /highlight >}}
+<pre><code class="bash">sudo make install</code></pre>
 
 By default, HUE installs to ‘/usr/local/hue’ in your Gateway node’s local filesystem.
 
@@ -178,7 +178,7 @@ As installed, the HUE installation folders and file ownership will be set to the
 
 Let’s fix that so HUE can run correctly without root user permissions:
 
-{{< highlight bash >}}sudo chown -R ubuntu:ubuntu /usr/local/hue{{< /highlight >}}
+<pre><code class="bash">sudo chown -R ubuntu:ubuntu /usr/local/hue</code></pre>
 
 ## Configuring Hadoop and HUE
 

+ 6 - 6
docs/gethue/content/posts/2015-03-10-fixing-the-yarn-invalid-resource-request-requested-memory-0-or-requested-memory-max-configured.md

@@ -40,7 +40,7 @@ categories:
 ---
 Are you seeing this error when submitting a job to YARN? Are you launching an Oozie workflow with a Spark action? You might be hitting this issue!
 
-{{< highlight bash >}}Error starting action [spark-e27e]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=1024
+<pre><code class="bash">Error starting action [spark-e27e]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=1024
 
 at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:203)
 
@@ -72,7 +72,7 @@ at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.jav
 
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
 
-]{{< /highlight >}}
+]</code></pre>
 
 [
 
@@ -82,17 +82,17 @@ at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
 
 Your job is asking for more memory than YARN is authorizing it to use. One way to fix it is to increase these parameters to something like 2000:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 yarn.scheduler.maximum-allocation-mb
 
-{{< /highlight >}}
+</code></pre>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 yarn.nodemanager.resource.memory-mb
 
-{{< /highlight >}}
+</code></pre>
 
 Have any questions? Feel free to contact us on [hue-user][2] or [@gethue][3]!
 

+ 15 - 15
docs/gethue/content/posts/2015-03-11-export-and-import-your-oozie-workflows.md

@@ -50,11 +50,11 @@ The previous methods were very error prone as they required to insert data in mu
 
 **Export all workflows**
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 ./build/env/bin/hue dumpdata desktop.Document2 --indent 2 --natural > data.json
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -62,15 +62,15 @@ The previous methods were very error prone as they required to insert data in mu
 
 20000013 is the id you can see in the URL of the dashboard.
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 ./build/env/bin/hue dumpdata desktop.Document2 --indent 2 --pks=20000013 --natural > data.json
 
-{{< /highlight >}}
+</code></pre>
 
 You can specify more than one id:
 
-{{< highlight bash >}}--pks=20000013,20000014,20000015{{< /highlight >}}
+<pre><code class="bash">--pks=20000013,20000014,20000015</code></pre>
 
 &nbsp;
 
@@ -78,9 +78,9 @@ You can specify more than one id:
 
 Then
 
-{{< highlight bash >}}./build/env/bin/hue loaddata data.json
+<pre><code class="bash">./build/env/bin/hue loaddata data.json
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -88,7 +88,7 @@ Then
 
 Until we hit Hue 4, this step is required in order to make the imported documents appear:
 
-{{< highlight bash >}}./build/env/bin/hue sync_documents{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue sync_documents</code></pre>
 
 &nbsp;
 
@@ -102,15 +102,15 @@ And that's it, the dashboards with the same IDs will be refreshed with the impo
 
 If the document with the same id already exists in the database, just set its id to null in data.json and it will be inserted as a new document.
 
-{{< highlight bash >}}vim data.json{{< /highlight >}}
+<pre><code class="bash">vim data.json</code></pre>
 
 then change
 
-{{< highlight bash >}}"pk": 16,{{< /highlight >}}
+<pre><code class="bash">"pk": 16,</code></pre>
 
 to
 
-{{< highlight bash >}}"pk": null,{{< /highlight >}}
+<pre><code class="bash">"pk": null,</code></pre>
 
 &nbsp;
 
@@ -118,21 +118,21 @@ to
 
 If using CM, export this variable in order to point to the correct database:
 
-{{< highlight bash >}}HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
+<pre><code class="bash">HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-id
 
 echo $HUE_CONF_DIR
 
-export HUE_CONF_DIR{{< /highlight >}}
+export HUE_CONF_DIR</code></pre>
 
 Where <id> is the most recent ID in that process directory for hue-HUE_SERVER.
 
 or even quicker
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'\`"
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 5 - 5
docs/gethue/content/posts/2015-03-12-solr-search-ui-only.md

@@ -59,23 +59,23 @@ The [Solr Search App][1] is having a great success and users often wonder if th
 
 In the hue.ini (See '[Where is my hue.ini][4]'?), blacklist all the other apps:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
 app_blacklist=beeswax,impala,security,filebrowser,jobbrowser,rdbms,jobsub,pig,hbase,sqoop,zookeeper,metastore,spark,oozie
 
-{{< /highlight >}}
+</code></pre>
 
 At the same time, double check Hue is pointing to your correct Solr:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [search]
 
 solr_url=http://localhost:8983/solr/
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -93,7 +93,7 @@ Have any questions? Feel free to contact us on [hue-user][6] or [@gethue][7]!
 
 If you want to install the examples you could enable the [indexer][8]
 
-{{< highlight bash >}}indexer{{< /highlight >}}
+<pre><code class="bash">indexer</code></pre>
 
 **
 

+ 15 - 15
docs/gethue/content/posts/2015-03-23-start-developing-hue-on-a-mac-in-a-few-minutes.md

@@ -50,7 +50,7 @@ In the meanwhile, let’s set up your Mac!
 
 To clone the Hue Github repository you need git installed on your system. Git (plus a ton of other tools) is included in the Xcode command line tools. To install it open Terminal and type
 
-{{< highlight bash >}}xcode-select --install{{< /highlight >}}
+<pre><code class="bash">xcode-select --install</code></pre>
 
 In the dialog choose "Install". If on Terminal you have the message "xcode-select: error: command line tools are already installed, use "Software Update" to install updates" it means you are almost good to go already.
 
@@ -58,7 +58,7 @@ From Terminal, navigate to a directory where you keep all your project and run
 
 <!--email_off-->
 
-{{< highlight bash >}}git clone https://github.com/cloudera/hue.git{{< /highlight >}}
+<pre><code class="bash">git clone https://github.com/cloudera/hue.git</code></pre>
 
 <!--/email_off-->
 
@@ -68,11 +68,11 @@ You now have the Hue source code in your Mac.
 
 The build process uses Java to run. A quick way to get to the right download URL from Oracle is to run from Terminal
 
-{{< highlight bash >}}java -version{{< /highlight >}}
+<pre><code class="bash">java -version</code></pre>
 
 and then click on the "More info" button on the dialog that appears. On Oracle's website, accept the license and choose the Mac OS X JDK link. After the DMG has been downloaded, open it and double click on the installation package. Now, if we return to the Terminal and type again
 
-{{< highlight bash >}}java -version{{< /highlight >}}
+<pre><code class="bash">java -version</code></pre>
 
 we will have the version of the freshly installed JDK. At the time of writing, 1.8.0_40.
 
@@ -80,39 +80,39 @@ we will have the version of the freshly installed JDK. At the time of writing, 1
 
 Hue uses several libraries that are not included in the Xcode command line tools, so we will need to install those too. To do that we will use <a href="http://brew.sh" target="_blank" rel="noopener noreferrer">Homebrew</a>, the fantastic open source package manager for Mac OS X. Install it from Terminal with
 
-{{< highlight bash >}}ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"{{< /highlight >}}
+<pre><code class="bash">ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"</code></pre>
 
 You will need to enter your password to continue. Then, as suggested by the installation script, run
 
-{{< highlight bash >}}brew doctor{{< /highlight >}}
+<pre><code class="bash">brew doctor</code></pre>
 
 If you already have Homebrew installed, just update it running
 
-{{< highlight bash >}}brew update{{< /highlight >}}
+<pre><code class="bash">brew update</code></pre>
 
 First, we need to install Maven 3
 
-{{< highlight bash >}}brew install maven{{< /highlight >}}
+<pre><code class="bash">brew install maven</code></pre>
 
 And then MySQL, to get its development libraries
 
-{{< highlight bash >}}brew install mysql{{< /highlight >}}
+<pre><code class="bash">brew install mysql</code></pre>
 
 This will also install lib-openssl. Let's go on and install GMP
 
-{{< highlight bash >}}brew install gmp{{< /highlight >}}
+<pre><code class="bash">brew install gmp</code></pre>
 
 **Step 3b (just for El Capitan and Sierra): export ENV variables for openssl**
 
 If you have OS X El Capitan or macOS Sierra, you need an extra mini step to be able to make Hue:
 
-{{< highlight bash >}}export LDFLAGS=-L/usr/local/opt/openssl/lib && export CPPFLAGS=-I/usr/local/opt/openssl/include{{< /highlight >}}
+<pre><code class="bash">export LDFLAGS=-L/usr/local/opt/openssl/lib && export CPPFLAGS=-I/usr/local/opt/openssl/include</code></pre>
 
 **Step 4: Compile and configure Hue**
 
 Now that we are all set with the requirements we can compile Hue by running
 
-{{< highlight bash >}}make apps{{< /highlight >}}
+<pre><code class="bash">make apps</code></pre>
 
 from the Hue folder that was created by the git clone in step 1. After a while, if everything goes as planned, you should see, as the last build message, something like "N static files copied to ...".
 
@@ -130,11 +130,11 @@ The last thing we should do is to start the Quickstart VM and get its IP address
 
 (you can launch the terminal inside the VM and run 'ifconfig' for that; in my case it's 172.16.156.130). Then, on your Mac, edit the hosts file with
 
-{{< highlight bash >}}sudo vi /etc/hosts{{< /highlight >}}
+<pre><code class="bash">sudo vi /etc/hosts</code></pre>
 
 and add the line
 
-{{< highlight bash >}}172.16.156.130 quickstart.cloudera{{< /highlight >}}
+<pre><code class="bash">172.16.156.130 quickstart.cloudera</code></pre>
 
 with the IP you got from the VM. Save and you are good to go!
 
@@ -142,7 +142,7 @@ with the IP you got from the VM. Save and you are good to go!
 
 What you have to do on Terminal from the Hue folder is just
 
-{{< highlight bash >}}./build/env/bin/hue runserver{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue runserver</code></pre>
 
 And point your browser to <http://localhost:8000>! Go and write a [new app][6] now! 🙂
 

+ 22 - 22
docs/gethue/content/posts/2015-03-25-hbase-browsing-with-doas-impersonation-and-kerberos.md

@@ -63,7 +63,7 @@ HBase can now be configured to [offer impersonation][3] (with or without Kerbero
 
 First, make sure you have this in your hbase-site.xml:
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
 <name>hbase.thrift.support.proxyuser</name>
 
@@ -79,7 +79,7 @@ First, make sure you have this in your hbase-site.xml:
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -91,7 +91,7 @@ If using Cloudera Manager, this is done by typing ‘thrift’ in the configurat
 
 Then check in core-site.xml that HBase is authorized to impersonate someone:
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
 <name>hadoop.proxyuser.hbase.hosts</name>
 
@@ -107,17 +107,17 @@ Then check in core-site.xml that HBase is authorized to impersonates someone:
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
 And finally check that Hue points to a local config directory of HBase specified in its hue.ini:
 
-{{< highlight bash >}}[hbase]
+<pre><code class="bash">[hbase]
 
 hbase_conf_dir=/etc/hbase/conf
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -125,11 +125,11 @@ hbase_conf_dir=/etc/hbase/conf
 
 If you are using Cloudera Manager, you might want to select the HBase Thrift server in the Hue configuration and enter something like this in the Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini.
 
-{{< highlight bash >}}[hbase]
+<pre><code class="bash">[hbase]
 
 hbase_conf_dir={{HBASE_CONF_DIR}}
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -145,7 +145,7 @@ Make sure that HBase is configured with [Kerberos][4] and that you have this in
 
 <!--email_off-->
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
 <name>hbase.security.authentication</name>
 
@@ -161,7 +161,7 @@ Make sure that HBase is configured with [Kerberos][4] and that you have this in
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 <!--/email_off-->
 
@@ -181,19 +181,19 @@ If using Cloudera Manager, go to "Hbase service > Configuration > Service-Wide /
 
 And similarly to above, make sure that the hue.ini points to a valid directory with hbase-site.xml:
 
-{{< highlight bash >}}[hbase]
+<pre><code class="bash">[hbase]
 
 hbase_conf_dir=/etc/hbase/conf
 
-{{< /highlight >}}
+</code></pre>
 
 or
 
-{{< highlight bash >}}[hbase]
+<pre><code class="bash">[hbase]
 
 hbase_conf_dir={{HBASE_CONF_DIR}}
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -231,19 +231,19 @@ As usual feel free to comment on the [hue-user][7] list or [@gethue][8]!
 
 This error means that the above ‘hadoop.proxyuser.hbase.hosts’ / ‘hadoop.proxyuser.hbase.groups’ properties are not correct:
 
-{{< highlight bash >}}Api Error: Error 500 User: hbase is not allowed to impersonate romain HTTP ERROR 500 Problem accessing /.
+<pre><code class="bash">Api Error: Error 500 User: hbase is not allowed to impersonate romain HTTP ERROR 500 Problem accessing /.
 
 Reason: User: hbase is not allowed to impersonate bob Caused by:javax.servlet.ServletException:
 
 User: hbase is not allowed to impersonate bob at org.apache.hadoop.hbase.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:117) at
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**
 
 You might now see permission errors like below.
 
-{{< highlight bash >}}Api Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=admin, scope=default, action=CREATE)...{{< /highlight >}}
+<pre><code class="bash">Api Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=admin, scope=default, action=CREATE)...</code></pre>
 
 This is because either:
 
@@ -254,11 +254,11 @@ This is because either:
 
 A quick way to fix this is to just give all the permissions. Obviously this is not recommended for a real setup; instead, read more about [HBase Access Control][9]!
 
-{{< highlight bash >}}sudo -u hbase hbase shell
+<pre><code class="bash">sudo -u hbase hbase shell
 
 hbase(main):004:0> grant 'bob', 'RWC'
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -268,11 +268,11 @@ If you are getting a “Api Error: TSocket read 0 bytes”, this is because Hue
 
 A temporary hack would be to insert this in the hue.ini:
 
-{{< highlight bash >}}[hbase]
+<pre><code class="bash">[hbase]
 
 use_doas=true
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -294,7 +294,7 @@ buffered transport mode was not tested when using impersonation but might work.
 
 If you are getting this error:
 
-{{< highlight bash >}}Caused by: org.apache.hadoop.hbase.thrift.HttpAuthenticationException: Authorization header received from the client is empty.{{< /highlight >}}
+<pre><code class="bash">Caused by: org.apache.hadoop.hbase.thrift.HttpAuthenticationException: Authorization header received from the client is empty.</code></pre>
 
 You are very probably hitting <https://issues.apache.org/jira/browse/HBASE-13069>. Also make sure the HTTP/_HOST principal is in the keytab of the HBase Thrift Server. Beware that as a follow-up you might get <https://issues.apache.org/jira/browse/HBASE-14471>.
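
 One way to double check the keytab (the path below is only an assumption, use the keytab configured for your HBase Thrift Server) is:

 <pre><code class="bash"># list the principals in the keytab and look for HTTP/<host>
 klist -kt /etc/hbase/conf/hbase.keytab | grep HTTP
 </code></pre>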
 

+ 6 - 6
docs/gethue/content/posts/2015-03-26-add-a-top-banner-to-hue.md

@@ -39,7 +39,7 @@ categories:
 ---
 We have already seen <a href="https://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/" target="_blank" rel="noopener noreferrer">in this post</a> how you can configure Hue in your cluster. But did you know that there’s a property that can make a top banner appear in your Hue installation? [<img src="https://cdn.gethue.com/uploads/2015/03/Screenshot-2015-03-23-16.33.12-1024x610.png"  />][1] This is quite useful if you want for instance to show a disclaimer to your users, or to clearly mark a testing or production environment, or if you want to display some dynamic information there. Depending on if you are using <a href="https://gethue.com/hadoop-tutorial-how-to-create-a-real-hadoop-cluster-in-10-minutes/" target="_blank" rel="noopener noreferrer">Cloudera Manager</a> or not, you should either add a safety valve or edit a .ini file to use this feature. For details on how to change the configuration, <a href="https://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/" target="_blank" rel="noopener noreferrer">read here</a>. In the desktop/custom section of the ini file you can find the banner_top_html property:
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 [[custom]]
 
@@ -47,13 +47,13 @@ We have already seen <a href="https://gethue.com/how-to-configure-hue-in-your-ha
 
 banner_top_html=
 
-{{< /highlight >}}
+</code></pre>
 
 Then it’s just a matter of writing some HTML/CSS and even Javascript code to customize it as you prefer. Keep in mind that you have a limited height to do that (30px). For instance, to write the same message you see on <a href="demo.gethue.com" target="_blank" rel="noopener noreferrer">demo.gethue.com</a>, you can write this:
 
 <!--email_off-->
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 [[custom]]
 
@@ -61,11 +61,11 @@ Then it’s just a matter of writing some HTML/CSS and even Javascript code to c
 
 banner_top_html='<div style="padding: 4px; text-align: center; background-color: #EEE; height: 40px"><i class="fa fa-flash muted"></i> This is Hue 3.11 read-only demo - <a href="https://gethue.com/hue-3-11-with-its-new-s3-browser-and-sql-autocomplete-is-out/" target="_blank">Read more about it</a> or <a href="/notebook/editor?editor=11">open a sample query</a>! <i class="fa fa-flash muted"></i></div>'
 
-{{< /highlight >}}
+</code></pre>
 
 <!--/email_off-->Or we could even use a very old HTML tag to display a running ticker!
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 [[custom]]
 
@@ -73,7 +73,7 @@ banner_top_html='<div style="padding: 4px; text-align: center; background-color:
 
 banner_top_html='<marquee behavior="scroll" direction="left" scrollamount="2" style="font-size: 15px;padding: 5px;color:#338BB8;font-weight:bold">Welcome to the test environment.</marquee>'
 
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/03/Screenshot-2015-03-23-18.56.32-1024x610.png"  />][2].
 

+ 6 - 6
docs/gethue/content/posts/2015-03-26-using-nginx-to-speed-up-hue-3-8-0.md

@@ -57,23 +57,23 @@ In comparison, in Hue 3.8 behind NGINX, rendering that same page performs 5 requ
 
 The simplest option is to just follow the instructions described in [Automatic High Availability with Hue and Cloudera Manager][5], which we’ve updated to support this optimization. Or if you want to just set up a simple NGINX configuration, you can install NGINX on Redhat systems with:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 % yum install nginx
 
-{{< /highlight >}}
+</code></pre>
 
 Or on a Debian/Ubuntu system with:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 % apt-get install nginx
 
-{{< /highlight >}}
+</code></pre>
 
 Next, add a `/etc/nginx/conf.d/hue.conf` file with the following contents. Make sure to tweak `server_name` to this machine's hostname (or just localhost), the `alias` to point at Hue's static files, and the `server` to point at the Hue instance. Note that if you're running multiple Hue instances, be sure to use a database like MySQL, PostgreSQL, or Oracle which allows for remote access:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 server {
 
@@ -137,7 +137,7 @@ server HUE_HOST2:8888 max_fails=3;
 
 }
 
-{{< /highlight >}}
+</code></pre>
 
 Finally, start NGINX with `sudo service nginx start` and navigate to http://NGINX_HOSTNAME:8001.
 

+ 24 - 24
docs/gethue/content/posts/2015-04-08-developer-guide-on-upgrading-apps-for-django-1-6.md

@@ -58,33 +58,33 @@ Hue was upgraded from Django 1.4.5 to Django 1.6.10. While the Django release no
 
 We backported Django 1.7’s [JsonResponse][6] to simplify responding with Json records. So views that used to be written as:
 
-{{< highlight python >}}def view(request):
+<pre><code class="python">def view(request):
 
 value = { “x”: “y” }
 
 HttpResponse(json.dumps(value))
 
-{{< /highlight >}}
+</code></pre>
 
 Can now be written as:
 
-{{< highlight python >}}def view(request):
+<pre><code class="python">def view(request):
 
 value = { “x”: “y” }
 
 return JsonResponse(value)
 
-{{< /highlight >}}
+</code></pre>
 
 One thing to note though is that Django now by default will raise an error if a non-dictionary is serialized. This is to protect against [attacks against older browsers][7]. Here is how to disable this error:
 
-{{< highlight python >}}def view(request):
+<pre><code class="python">def view(request):
 
 value = [“x”, “y”]
 
 return JsonResponse(value, safe=False)
 
-{{< /highlight >}}
+</code></pre>
 
 We recommend that developers migrate over to returning objects. Hue itself should be completely transitioned by 3.8.0.
 
@@ -92,21 +92,21 @@ We recommend that developers migrate over to returning objects. Hue itself shoul
 
 Django’s `django.core.urlresolvers.reverse` (and therefore the `url` function in mako scripts) now automatically escapes arguments. So uses of these functions should be changed from:
 
-{{< highlight python >}}<a href="${ url('useradmin.views.edit_user', username=urllib.quote(user.username)) }">...</a>
+<pre><code class="python"><a href="${ url('useradmin.views.edit_user', username=urllib.quote(user.username)) }">...</a>
 
-{{< /highlight >}}
+</code></pre>
 
 To:
 
-{{< highlight python >}}<a href="${ url('useradmin.views.edit_user', username=user.username) }">...</a>
+<pre><code class="python"><a href="${ url('useradmin.views.edit_user', username=user.username) }">...</a>
 
-{{< /highlight >}}
+</code></pre>
 
 ### StreamingHttpResponse
 
 In order to return a generator from a view, it is now required to use `StreamingHttpResponse`. When testing, change code from
 
-{{< highlight python >}} csv_response = self.c.post(reverse('search:download'), {
+<pre><code class="python"> csv_response = self.c.post(reverse('search:download'), {
 
 'csv': True,
 
@@ -118,11 +118,11 @@ In order to return a generator from a view, it is now required to use `Streaming
 
 csv_response_content = csv_response.content
 
-{{< /highlight >}}
+</code></pre>
 
 To:
 
-{{< highlight python >}}csv_response = self.c.post(reverse('search:download'), {
+<pre><code class="python">csv_response = self.c.post(reverse('search:download'), {
 
 'csv': True,
 
@@ -134,7 +134,7 @@ To:
 
 csv_response_content = ''.join(csv_response.streaming_content)
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -160,35 +160,35 @@ In order to make the transition, do:
 
   * Move static files from <code>/apps/$name/static</code> to <code>/apps/$name/src/$name/static</code>
   * Update <code>.mako</code> files to change from:
-    {{< highlight python >}}<link rel=”stylesheet” href=”/metastore/static/css/metastore.css”>{{< /highlight >}}
+    <pre><code class="python"><link rel=”stylesheet” href=”/metastore/static/css/metastore.css”></code></pre>
 
     To:
 
-    {{< highlight python >}}<link rel=”stylesheet” href=”${ static(‘metastore/css/metastore.css’) }”>{{< /highlight >}}
+    <pre><code class="python"><link rel=”stylesheet” href=”${ static(‘metastore/css/metastore.css’) }”></code></pre>
 
   * Update the “ICON” in apps/$name/src/help/settings.py from:
-    {{< highlight python >}}ICON = “/help/static/art/icon_help_24.png”
+    <pre><code class="python">ICON = “/help/static/art/icon_help_24.png”
 
-    {{< /highlight >}}
+    </code></pre>
 
     To:
 
-    {{< highlight python >}}ICON = “help/art/icon_help_24.png”
+    <pre><code class="python">ICON = “help/art/icon_help_24.png”
 
-    {{< /highlight >}}
+    </code></pre>
 
   * Update any Python templates from:
-    {{< highlight python >}}def view(request):
+    <pre><code class="python">def view(request):
 
     data = {‘image’: “/help/static/art/icon_help_24.png”}
 
     return render(“template.mako”, request, data)
 
-    {{< /highlight >}}
+    </code></pre>
 
     To:
 
-    {{< highlight python >}}from django.contrib.staticfiles.storage import staticfiles_storage
+    <pre><code class="python">from django.contrib.staticfiles.storage import staticfiles_storage
 
 
@@ -198,7 +198,7 @@ In order to make the transition, do:
 
     return render(“template.mako”, request, data)
 
-    {{< /highlight >}}
+    </code></pre>
 
 Finally, in order to run Hue with `debug=False`, it is now required to first run either `make apps` or `./build/env/bin/hue collectstatic` to gather all the files into the `build/static` directory. This is not necessary for `debug=True`, where Hue will serve the static files directly from the `/apps/$name/src/$name/static` directory.
 

+ 2 - 2
docs/gethue/content/posts/2015-04-10-hive-1-1-and-impala-2-2-support.md

@@ -60,7 +60,7 @@ One more feature landing in Hue 3.8 that could interest some users is the Thrif
 
 By configuring HiveServer2 in [HTTP mode][6]:
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
 <name>hive.server2.transport.mode</name>
 
@@ -68,7 +68,7 @@ By configure HiveServer2 in [HTTP mode][6]:
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 8 - 8
docs/gethue/content/posts/2015-04-23-new-notebook-application-for-spark-sql.md

@@ -75,11 +75,11 @@ Supports:
 
 If the Spark app is not visible in the ‘Editor’ menu, you will need to unblacklist it from the [hue.ini][7]:
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 app_blacklist=
 
-{{< /highlight >}}
+</code></pre>
 
 **Note:** To override a value in Cloudera Manager, you need to enter verbatim each mini section from below into the Hue [Safety Valve][8]: Hue Service → Configuration → Service-Wide → Advanced → Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini
 
@@ -89,7 +89,7 @@ On the same machine as Hue, go in the Hue home:
 
 If using the installed package:
 
-{{< highlight bash >}}cd /usr/lib/hue{{< /highlight >}}
+<pre><code class="bash">cd /usr/lib/hue</code></pre>
 
 &nbsp;
 
@@ -101,7 +101,7 @@ Use Livy Spark Job Server from the Hue master repository instead of CDH (it is c
 
 If not, use Cloudera Manager:
 
-{{< highlight bash >}}cd /opt/cloudera/parcels/CDH/lib/
+<pre><code class="bash">cd /opt/cloudera/parcels/CDH/lib/
 
 HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-#
 
@@ -109,7 +109,7 @@ echo $HUE_CONF_DIR
 
 export HUE_CONF_DIR
 
-{{< /highlight >}}
+</code></pre>
 
 <div>
   Where # is substituted by the last number, e.g. hue-HUE_SERVER-65
@@ -117,13 +117,13 @@ export HUE_CONF_DIR
 
 Then cd to the Hue directory and start the [Spark Job Server][4] from the Hue home:
 
-{{< highlight bash >}}./build/env/bin/hue livy_server{{< /highlight >}}
+<pre><code class="bash">./build/env/bin/hue livy_server</code></pre>
 
 &nbsp;
 
 You can customize the setup by modifying these properties in the hue.ini:
 
-{{< highlight bash >}}[spark]
+<pre><code class="bash">[spark]
 
 \# URL of the REST Spark Job Server.
 
@@ -137,7 +137,7 @@ languages='[{"name": "Scala", "type": "scala"},{"name": "Python", "type": "pytho
 
 \## livy_server_session_kind=yarn
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 4 - 4
docs/gethue/content/posts/2015-05-21-build-a-real-time-analytic-dashboard-with-solr-search-and-spark-streaming.md

@@ -63,23 +63,23 @@ You can see the tweets rolling in! Compared to the previous version:
 
 Download a [nightly Solr 5.x][6], uncompress it and start it:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 bin/solr start -cloud
 
 bin/solr create -c tweets
 
-{{< /highlight >}}
+</code></pre>
 
 Then compile the [Spark Solr app][7].
 
 Enable the analytic widgets in hue.ini:
 
-{{< highlight bash >}}[search]
+<pre><code class="bash">[search]
 
 latest=true
 
-{{< /highlight >}}
+</code></pre>
 
 **Sum-up**
 

File diff suppressed because it is too large
+ 6 - 6
docs/gethue/content/posts/2015-06-15-install-hue-3-on-pivotal-hd-3-0.md


+ 2 - 2
docs/gethue/content/posts/2015-07-07-bay-area-bikeshare-data-analysis-with-search-and-spark-notebook.md

@@ -67,7 +67,7 @@ As usual feel free to comment on the [hue-user][11] list or [@gethue][12]!
 
 A quick way to index the data with Solr:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 bin/solr create_collection  -c  bikes
 
@@ -77,7 +77,7 @@ u="$URL/bikes/update?commitWithin=5000"
 
 curl $u --data-binary @/home/test/index_data.csv -H 'Content-type:text/csv'
 
-{{< /highlight >}}
+</code></pre>
 
  [1]: http://www.bayareabikeshare.com
  [2]: https://www.dropbox.com/s/jw44si1gy26tdhj/bikedataclean.csv?dl=0

+ 2 - 2
docs/gethue/content/posts/2015-07-08-analizziamo-i-dati-bikeshare-della-bay-area-con-solr-search-e-spark-notebook.md

@@ -70,7 +70,7 @@ Come al solito commentate pure sulla lista [hue-user][10] o su Twitter [@gethue
 
 Un modo veloce per indicizzare i dati con Solr:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 bin/solr create_collection  -c  bikes
 
@@ -80,7 +80,7 @@ u="$URL/bikes/update?commitWithin=5000"
 
 curl $u -data-binary @/home/test/index_data.csv -H 'Content-type:text/csv'
 
-{{< /highlight >}}
+</code></pre>
 
  [1]: http://www.bayareabikeshare.com
  [2]: https://www.dropbox.com/s/jw44si1gy26tdhj/bikedataclean.csv?dl=0

+ 2 - 2
docs/gethue/content/posts/2015-07-08-analyse-des-donnees-des-velib-de-san-francisco-avec-solr-search-et-un-spark-notebook.md

@@ -66,11 +66,11 @@ Un moyen rapide pour indexer les données avec Solr:
 
 <div>
   <p>
-    {{< highlight bash >}}<br /> bin/solr create_collection  -c  bikes
+    <pre><code class="bash"><br /> bin/solr create_collection  -c  bikes
   </p>
 
   <p>
-    URL=http://localhost:8983/solr<br /> u="$URL/bikes/update?commitWithin=5000"<br /> curl $u -data-binary @/home/test/index_data.csv -H 'Content-type:text/csv'<br /> {{< /highlight >}}
+    URL=http://localhost:8983/solr<br /> u="$URL/bikes/update?commitWithin=5000"<br /> curl $u -data-binary @/home/test/index_data.csv -H 'Content-type:text/csv'<br /> </code></pre>
   </p>
 </div>
 

+ 8 - 8
docs/gethue/content/posts/2015-07-27-enhance-search-results.md

@@ -53,7 +53,7 @@ We want to create a couple of functions to make our results prettier: a graphica
 
 On the CSS/JS tab we can define the new Mustache functions 'hue_fn_renderStars' and 'hue_fn_renderMap':
 
-{{< highlight html >}}
+<pre><code class="html">
 
 <script>
 
@@ -101,13 +101,13 @@ return '<img src="https://maps.googleapis.com/maps/api/staticmap?center=' + coor
 
 </script>
 
-{{< /highlight >}}
+</code></pre>
 
 It's very important to prefix the names of the additional Mustache functions with 'hue_fn_' so Hue can pick them up and process them.
 
 On the HTML tab we write this:
 
-{{< highlight html >}}
+<pre><code class="html">
 
 <div class="row-fluid">
 
@@ -129,7 +129,7 @@ On the HTML tab we write this:
 
 </div>
 
-{{< /highlight >}}
+</code></pre>
 
 As you can see, the newly added functions can be called with {{#renderStars}}{{/renderStars}} and {{#renderMap}}{{/renderMap}}
 
@@ -141,7 +141,7 @@ To access the string that is in between the function declaration in the HTML tem
 
 For instance, if you want to do a conditional function like 'if' and test for a variable inside it, you can do something like
 
-{{< highlight html >}}
+<pre><code class="html">
 
 <script>
 
@@ -169,15 +169,15 @@ return isTrue ? "The condition is true!" : "No, it's false";
 
 </script>
 
-{{< /highlight >}}
+</code></pre>
 
 and use it in the HTML tab with
 
-{{< highlight html >}}
+<pre><code class="html">
 
 {{#if}}{{field_to_test}}{{/if}}
 
-{{< /highlight >}}
+</code></pre>
 
 With the HTML result widget the sky is the limit! 🙂
 

+ 4 - 4
docs/gethue/content/posts/2015-07-30-filter-sort-browse-hive-partitions-with-hues-metastore.md

@@ -45,15 +45,15 @@ However, partitioning is also useful for external tables where the data may alre
 
 Take for example an external table called "blog" created with the following partition scheme:
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 CREATE TABLE blog (title STRING, body STRING, pubdate DATE) PARTITIONED BY (dy STRING, dm STRING, dd STRING, dh STRING);
 
-{{< /highlight >}}
+</code></pre>
 
 We can continue to alter the table as needed to add data at specific partition locations:
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 ALTER TABLE blog ADD PARTITION (dy='2015', dm='2015-01', dd='2015-01-01', dh='2015-01-01 00') LOCATION '/user/jennykim/2015/01/01/00';
 
@@ -71,7 +71,7 @@ ALTER TABLE blog ADD PARTITION (dy='2015', dm='2015-01', dd='2015-01-04', dh='20
 
 ALTER TABLE blog ADD PARTITION (dy='2015', dm='2015-01', dd='2015-01-04', dh='2015-01-04 12') LOCATION '/user/jennykim/2015/01/04/12';
 
-{{< /highlight >}}
+</code></pre>
 
 Regardless of a table's partition locations, Hue's metastore now enables you to browse all the partitions in the table, by clicking the "Show Partitions" link from the table view. By default, the partitions view will sort the partitions in reverse order by name (or newest first, if partitioned by date) and display the first 250 partitions.
 

+ 2 - 2
docs/gethue/content/posts/2015-08-07-configuring-hue-multiple-authentication-backends-and-ldap.md

@@ -46,7 +46,7 @@ Hue already allows you to authenticate with several authentication services incl
 
 For example, to enable Hue to first attempt LDAP directory lookup before falling back to the database-backed user model, we can update the hue.ini configuration file or [Hue safety valve][4] in Cloudera Manager with a list containing first the `LdapBackend` followed by either the `ModelBackend` or custom `AllowFirstUserDjangoBackend` (permits first login and relies on user model for all subsequent authentication):
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
@@ -54,7 +54,7 @@ For example, to enable Hue to first attempt LDAP directory lookup before falling
 
 backend=desktop.auth.backend.LdapBackend,desktop.auth.backend.AllowFirstUserDjangoBackend
 
-{{< /highlight >}}
+</code></pre>
 
 This tells Hue to first check against the [configured LDAP directory service][5], and if the username is not found in the directory, then attempt to authenticate the user with the Django user manager.
 

+ 7 - 7
docs/gethue/content/posts/2015-08-20-dynamic-search-dashboard-improvements-3.md

@@ -66,25 +66,25 @@ Links to the original documents can also be inserted. Add to the record a field
 
 Any link
 
-{{< highlight javascript >}}{'type': 'link', 'link': 'gethue.com'}{{< /highlight >}}
+<pre><code class="javascript">{'type': 'link', 'link': 'gethue.com'}</code></pre>
 
 HBase Browser
 
-{{< highlight javascript >}}{'type': 'hbase', 'table': 'document_demo', 'row_key': '20150527'}
+<pre><code class="javascript">{'type': 'hbase', 'table': 'document_demo', 'row_key': '20150527'}
 
 {'type': 'hbase', 'table': 'document_demo', 'row_key': '20150527', 'fam': 'f1'}
 
 {'type': 'hbase', 'table': 'document_demo', 'row_key': '20150527', 'fam': 'f1', 'col': 'c1'}
 
-{{< /highlight >}}
+</code></pre>
 
 File Browser
 
-{{< highlight javascript >}}{'type': 'hdfs', 'path': '/data/hue/file.txt'}{{< /highlight >}}
+<pre><code class="javascript">{'type': 'hdfs', 'path': '/data/hue/file.txt'}</code></pre>
 
 Metastore
 
-{{< highlight javascript >}}{'type': 'hive', 'database': 'default', 'table': 'sample_07'}{{< /highlight >}}
+<pre><code class="javascript">{'type': 'hive', 'database': 'default', 'table': 'sample_07'}</code></pre>
 
 <img src="https://cdn.gethue.com/uploads/2015/08/search-link-1024x630.png" />
 
@@ -128,11 +128,11 @@ The dashboard experience is even more real with this new browser full screen mod
 
 Solr 5.1 is seeing new [Analytics Facets][8]. Beta support for them has been added and can be enabled in the hue.ini with:
 
-{{< highlight bash >}}[search]
+<pre><code class="bash">[search]
 
 latest=true
 
-{{< /highlight >}}
+</code></pre>
 
 A more comprehensive demo is available on the [BikeShare data visualization][2] post.
 

+ 2 - 2
docs/gethue/content/posts/2015-08-28-mini-task-configure-hue-with-a-proxy.md

@@ -40,7 +40,7 @@ categories:
 ---
 We explained how to run Hue with [NGINX][1] serving the static files or under [Apache][2]. If you use another proxy, you might need to set these options:
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
   
 \# Enable X-Forwarded-Host header if the load balancer requires it.
   
@@ -50,7 +50,7 @@ use_x_forwarded_host=false
   
 secure_proxy_ssl_header=false
   
-{{< /highlight >}}
+</code></pre>
 
  [1]: https://gethue.com/using-nginx-to-speed-up-hue-3-8-0/
  [2]: https://gethue.com/how-to-run-hue-with-the-apache-server/

+ 2 - 2
docs/gethue/content/posts/2015-09-02-mini-how-to-disabling-some-apps-from-showing-up.md

@@ -42,13 +42,13 @@ In the Hue ini [configuration file][1], in the `[desktop]` section, you can ente
 
 &nbsp;
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
   
 \# Comma separated list of apps to not load at server startup.
   
 app_blacklist=beeswax,impala,security,filebrowser,jobbrowser,rdbms,jobsub,pig,hbase,sqoop,zookeeper,metastore,spark,oozie,indexer
   
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 4 - 4
docs/gethue/content/posts/2015-09-09-storing-passwords-in-script-rather-than-hue-ini-files.md

@@ -54,7 +54,7 @@ Any parameter that defines a password in the hue.ini can be replaced with the sa
 
 On startup, Hue runs the startup script and grabs the password from stdout. This is an example configuration:
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
   
 ldap_username=hueservice
   
@@ -70,11 +70,11 @@ bind_password_script="/var/lib/hue/hue_passwords.sh bind_password"
   
 password_script="/var/lib/hue/hue_passwords.sh database"
   
-{{< /highlight >}}
+</code></pre>
 
 The script should go in a location where it can be read and executed by only the hue user. You can have a script per password or a single script that takes parameters. Here is an example single script that takes parameters that matches the above config:
 
-{{< highlight bash >}}#!/bin/bash
+<pre><code class="bash">#!/bin/bash
 
 SERVICE=$1
 
@@ -110,7 +110,7 @@ echo "password"
   
 fi
 
-{{< /highlight >}}
+</code></pre>
 
 Starting in Cloudera Manager 5.5, passwords are not stored in configuration files in clear text anymore. As a result, on Cloudera Manager 5.5 and higher you will need to know the password for Hue's DB connection to be able to run the Hue command line.
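
 For example, assuming a Cloudera Manager deployment (the password value below is a placeholder):

 <pre><code class="bash">export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/\`ls -1 /var/run/cloudera-scm-agent/process | grep HUE | sort -n | tail -1 \`"
 HUE_DATABASE_PASSWORD=password /opt/cloudera/parcels/CDH/lib/hue/build/env/bin/hue dbshell
 </code></pre>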
 

+ 6 - 6
docs/gethue/content/posts/2015-09-10-ldap-or-pam-pass-through-authentication-with-hive-or-impala.md

@@ -55,7 +55,7 @@ In order to provide better security, it is also now possible to provide a path t
 
 For example, here is how to configure a 'hue' user and password in a file for all the apps
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
@@ -65,11 +65,11 @@ auth_username=hue
 
 auth_password_script=/path/to/ldap_password
 
-{{< /highlight >}}
+</code></pre>
 
 If Hue needs to authenticate to HiveServer2 with some different username and password:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [beeswax]
 
@@ -79,11 +79,11 @@ auth_password=hue_hive_pwd
 
 \# auth_password_script=
 
-{{< /highlight >}}
+</code></pre>
 
 If Impala is not using LDAP authentication but Hive does, we disable it in [desktop] and do not specify anything in [impala]:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
@@ -107,7 +107,7 @@ auth_password=hue_hive_pwd
 
 \# auth_password_script=/
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**
 

+ 31 - 31
docs/gethue/content/posts/2015-09-24-how-to-use-the-livy-spark-rest-job-server-for-interactive-spark-2-2.md

@@ -63,29 +63,29 @@ categories:
 
 <span style="font-weight: 400;">Based on the <a href="https://github.com/cloudera/hue/tree/master/apps/spark/java#building-livy">README</a>, we check out Livy's code. It is currently living in the Hue repository for simplicity but hopefully will eventually graduate into its own top-level project.</span>
 
-{{< highlight bash >}}git clone git@github.com:cloudera/hue.git{{< /highlight >}}
+<pre><code class="bash">git clone git@github.com:cloudera/hue.git</code></pre>
 
 <span style="font-weight: 400;">Then we compile Livy with</span>
 
-{{< highlight bash >}}cd hue/apps/spark/java
+<pre><code class="bash">cd hue/apps/spark/java
 
 mvn -DskipTests clean package
 
-{{< /highlight >}}
+</code></pre>
 
 <span style="font-weight: 400;">Export these variables</span>
 
-{{< highlight bash >}}export SPARK_HOME=/usr/lib/spark
+<pre><code class="bash">export SPARK_HOME=/usr/lib/spark
 
-export HADOOP_CONF_DIR=/etc/hadoop/conf{{< /highlight >}}
+export HADOOP_CONF_DIR=/etc/hadoop/conf</code></pre>
 
 <span style="font-weight: 400;">And start it</span>
 
-{{< highlight bash >}}./bin/livy-server{{< /highlight >}}
+<pre><code class="bash">./bin/livy-server</code></pre>
 
 **Note**: Livy defaults to Spark local mode, to use the YARN mode copy the configuration template file [apps/spark/java/conf/livy-defaults.conf.tmpl][4] into livy-defaults.conf and set the property:
 
-{{< highlight bash >}}livy.server.session.factory = yarn{{< /highlight >}}
+<pre><code class="bash">livy.server.session.factory = yarn</code></pre>
 
 &nbsp;
 
@@ -95,31 +95,31 @@ As the REST server is running, we can communicate with it. We are on the same ma
 
 <span style="font-weight: 400;">Let's list our open sessions</span>
 
-{{< highlight bash >}}curl localhost:8998/sessions
+<pre><code class="bash">curl localhost:8998/sessions
 
 {"from":0,"total":0,"sessions":[]}
 
-{{< /highlight >}}
+</code></pre>
 
 Note
 
 You can use
 
-{{< highlight bash >}} | python -m json.tool{{< /highlight >}}
+<pre><code class="bash"> | python -m json.tool</code></pre>
 
 at the end of the command to prettify the output, e.g.:
 
-{{< highlight bash >}}curl localhost:8998/sessions/0 | python -m json.tool{{< /highlight >}}
+<pre><code class="bash">curl localhost:8998/sessions/0 | python -m json.tool</code></pre>
 
 &nbsp;
 
 <span style="font-weight: 400;">There is zero session. We create an interactive PySpark session</span>
 
-{{< highlight bash >}}curl -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" localhost:8998/sessions
+<pre><code class="bash">curl -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" localhost:8998/sessions
 
 {"id":0,"state":"starting","kind":"pyspark","log":[]}
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -173,7 +173,7 @@ Livy supports the three languages of Spark:
 
 We check the status of the session until its state becomes `idle`: it means it is ready to execute snippets of PySpark:
 
-{{< highlight bash >}}curl localhost:8998/sessions/0 | python -m json.tool
+<pre><code class="bash">curl localhost:8998/sessions/0 | python -m json.tool
 
 % Total % Received % Xferd Average Speed Time Time Time Current
 
@@ -213,7 +213,7 @@ Dload Upload Total Spent Left Speed
 
 "state": "idle"
 
-}{{< /highlight >}}
+}</code></pre>
 
 &nbsp;
 
@@ -227,19 +227,19 @@ Dload Upload Total Spent Left Speed
 
 <span style="font-weight: 400;">When the session state is <code>idle</code>, it means it is ready to accept statements! Lets compute <code>1 + 1</code></span>
 
-{{< highlight bash >}}curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"1 + 1"}'
+<pre><code class="bash">curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"1 + 1"}'
 
 {"id":0,"state":"running","output":null}
 
-{{< /highlight >}}
+</code></pre>
 
 We check the result of statement 0 when its state is `available`
 
-{{< highlight bash >}}curl localhost:8998/sessions/0/statements/0
+<pre><code class="bash">curl localhost:8998/sessions/0/statements/0
 
 {"id":0,"state":"available","output":{"status":"ok","execution_count":0,"data":{"text/plain":"2"}}}
 
-{{< /highlight >}}
+</code></pre>
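
If you prefer not to re-run the GET by hand, a small shell loop can wait until the statement is `available` before fetching the result (a minimal sketch, assuming the default Livy port and the session/statement ids used above):

<pre><code class="bash"># Poll statement 0 of session 0 until Livy reports it as available, then pretty-print it
until curl -s localhost:8998/sessions/0/statements/0 | grep -q '"state":"available"'; do
  sleep 1
done
curl -s localhost:8998/sessions/0/statements/0 | python -m json.tool
</code></pre>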
 
 Note
 
@@ -247,29 +247,29 @@ If the statement is taking less than a few milliseconds, Livy returns the respo
 
 Statement ids are incremented and all statements share the same context, so we can run a sequence
 
-{{< highlight bash >}}curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"a = 10"}'
+<pre><code class="bash">curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"a = 10"}'
 
 {"id":1,"state":"available","output":{"status":"ok","execution_count":1,"data":{"text/plain":""}}}
 
-{{< /highlight >}}
+</code></pre>
 
 Spanning multiple statements
 
-{{< highlight bash >}}curl localhost:8998/sessions/5/statements -X POST -H 'Content-Type: application/json' -d '{"code":"a + 1"}'
+<pre><code class="bash">curl localhost:8998/sessions/5/statements -X POST -H 'Content-Type: application/json' -d '{"code":"a + 1"}'
 
 {"id":2,"state":"available","output":{"status":"ok","execution_count":2,"data":{"text/plain":"11"}}}
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
 <span style="font-weight: 400;">Let's close the session to free up the cluster. Note that Livy will automatically inactive idle sessions after 1 hour (<a href="https://github.com/cloudera/hue/blob/master/apps/spark/java/conf/livy-defaults.conf.tmpl#L17">configurable</a>).</span>
 
-{{< highlight bash >}}curl localhost:8998/sessions/0 -X DELETE
+<pre><code class="bash">curl localhost:8998/sessions/0 -X DELETE
 
 {"msg":"deleted"}
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -277,15 +277,15 @@ Spanning multiple statements
 
 Let's say we want to create a shell running as the user `bob`; this is particularly useful when multiple users are sharing a Notebook server
 
-{{< highlight bash >}}curl -X POST --data '{"kind": "pyspark", "proxyUser": "bob"}' -H "Content-Type: application/json" localhost:8998/sessions
+<pre><code class="bash">curl -X POST --data '{"kind": "pyspark", "proxyUser": "bob"}' -H "Content-Type: application/json" localhost:8998/sessions
 
 {"id":0,"state":"starting","kind":"pyspark","proxyUser":"bob","log":[]}
 
-{{< /highlight >}}
+</code></pre>
 
 Do not forget to add the user running Hue (your current login in dev or `hue` in production) in the Hadoop proxy user list (`/etc/hadoop/conf/core-site.xml`):
 
-{{< highlight xml >}}<property>
+<pre><code class="xml"><property>
 
 <name>hadoop.proxyuser.hue.hosts</name>
 
@@ -301,15 +301,15 @@ Do not forget to add the user running Hue (your current login in dev or `hue` in
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 ## <span style="font-weight: 400;">Additional properties</span>
 
 <span style="font-weight: 400;">All the properties supported by spark shells like the <a href="https://github.com/cloudera/hue/tree/master/apps/spark/java#request-body">number of executors, the memory</a>, etc can be changed at session creation. Their format is the same as when typing <code>spark-shell -h</code></span>
 
-{{< highlight bash >}}curl -X POST --data '{"kind": "pyspark", "numExecutors": "3", "executorMemory": "2G"}' -H "Content-Type: application/json" localhost:8998/sessions
+<pre><code class="bash">curl -X POST --data '{"kind": "pyspark", "numExecutors": "3", "executorMemory": "2G"}' -H "Content-Type: application/json" localhost:8998/sessions
 
-{"id":0,"state":"starting","kind":"pyspark","numExecutors":"3","executorMemory":"2G","log":[]} {{< /highlight >}}
+{"id":0,"state":"starting","kind":"pyspark","numExecutors":"3","executorMemory":"2G","log":[]} </code></pre>
 
 &nbsp;
 

+ 11 - 11
docs/gethue/content/posts/2015-09-25-bay-area-bike-share-data-analysis-with-spark-notebook-part-2.md

@@ -69,7 +69,7 @@ Now that we've imported the data into our cluster, we can create a new Notebook
 
 Let's find the top 10 most popular start stations based on the trip data:
 
-{{< highlight sql >}}SELECT startterminal, startstation, COUNT(1) AS count FROM bikeshare.trips GROUP BY startterminal, startstation ORDER BY count DESC LIMIT 10{{< /highlight >}}
+<pre><code class="sql">SELECT startterminal, startstation, COUNT(1) AS count FROM bikeshare.trips GROUP BY startterminal, startstation ORDER BY count DESC LIMIT 10</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/09/impala_query-1024x339.png"  />][9]
 
@@ -79,7 +79,7 @@ Once our results are returned, we can easily visualize this data; a bar graph wo
 
 It seems that the San Francisco Caltrain (Townsend at 4th) was by far the most common start station. Let's determine which end stations, for trips starting from the SF Caltrain Townsend station, were the most popular. We'll fetch the latitude and longitude coordinates so that we can visualize the results on a map.
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 SELECT
 
@@ -103,7 +103,7 @@ GROUP BY s.station_id, s.name, s.lat, s.long
 
 ORDER BY count DESC LIMIT 10
 
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/08/impala_map-e1443111522857-1024x223.png" />][11]
 
@@ -117,7 +117,7 @@ Let's say we wanted to dig further into the trip data for the SF Caltrain statio
 
 Since the trip data stores startdate as a STRING, we'll need to apply some string-manipulation to extract the hour within an inline SQL query. The outer query will aggregate the count of trips and the average duration.
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 SELECT
 
@@ -151,7 +151,7 @@ GROUP BY hour
 
 ORDER BY hour ASC;
 
-{{< /highlight >}}
+</code></pre>
 
 Since this query produces several numeric dimensions, we can visualize the results using a scatterplot graph, with the hour as the x-axis, the number of trips as the y-axis, and the average duration as the scatterplot size.
 
@@ -159,7 +159,7 @@ Since this data produces several numeric dimensions of data, we can visualize th
 
 Let's add another Hive snippet to analyze an hour-by-hour breakdown of availability at the SF Caltrain Station:
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 SELECT
 
@@ -199,7 +199,7 @@ GROUP BY hour
 
 ORDER BY hour ASC;
 
-{{< /highlight >}}
+</code></pre>
 
 We'll visualize the results as a line graph, which indicates that the bike availability tends to fall starting at 6 AM and is regained around 6 PM.
 
@@ -213,7 +213,7 @@ Hue's Spark notebooks allow users to mix exploratory SQL-analysis with custom Sc
 
 For example, we can open a pyspark snippet and load the trip data directly from the Hive warehouse and apply a sequence of filter, map, and reduceByKey operations to determine the average number of trips starting from the SF Caltrain Station:
 
-{{< highlight python >}}
+<pre><code class="python">
 
 trips = sc.textFile('/user/hive/warehouse/bikeshare.db/trips/201402_trip_data.csv')
 
@@ -245,7 +245,7 @@ avg_trips_sorted = sorted(avg_trips_by_hour.collect())
 
 %table avg_trips_sorted
 
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/09/Screenshot-2015-09-23-23.13.46-e1443110910319-1024x268.png" />][14]
 
@@ -265,7 +265,7 @@ Stay tuned for a number of exciting improvements to the notebook app, and as usu
 
 The BABS rebalancing data (named 201402_status_data.csv) uses quotes.  In these cases, it is easier to create the table in Hive in the Beeswax editor and use the OpenCSV Row SERDE for Hive:
 
-{{< highlight sql >}}
+<pre><code class="sql">
 
 CREATE TABLE rebalancing(station_id int, bikes_available int, docks_available int, time string)
 
@@ -283,7 +283,7 @@ WITH SERDEPROPERTIES (
 
 STORED AS TEXTFILE;
 
-{{< /highlight >}}
+</code></pre>
 
 Then you can go back to the Metastore to import the CSV into the table; note that you may have to remove the header line manually.
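
For instance, the header line can be stripped from a shell before importing the file (a hypothetical example using the file name above):

<pre><code class="bash"># Keep everything but the first (header) line of the CSV
tail -n +2 201402_status_data.csv > 201402_status_data_noheader.csv
</code></pre>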
 

+ 26 - 26
docs/gethue/content/posts/2015-10-13-how-to-use-the-livy-spark-rest-job-server-api-for-sharing-spark-rdds-and-contexts.md

@@ -57,7 +57,7 @@ This is described in the [previous post section][2].
 
 <span style="font-weight: 400;">Livy offers remote Spark sessions to users. They usually have one each (or one by Notebook):</span>
 
-{{< highlight bash >}}# Client 1
+<pre><code class="bash"># Client 1
 
 curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"1 + 1"}'
 
@@ -69,7 +69,7 @@ curl localhost:8998/sessions/1/statements -X POST -H 'Content-Type: application/
 
 curl localhost:8998/sessions/2/statements -X POST -H 'Content-Type: application/json' -d '{"code":"..."}'
 
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/10/livy_shared_contexts2-1024x565.png"  />][3]
 
@@ -79,7 +79,7 @@ curl localhost:8998/sessions/2/statements -X POST -H 'Content-Type: application/
 
 If the users were pointing to the same session, they would interact with the same Spark context. This context itself manages several RDDs. Users simply need to use the same session id, e.g. 0, and issue commands there:
 
-{{< highlight bash >}}# Client 1
+<pre><code class="bash"># Client 1
 
 curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"1 + 1"}'
 
@@ -89,7 +89,7 @@ curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/
 
 \# Client 3
 
-curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"..."}' {{< /highlight >}}
+curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"..."}' </code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2015/10/livy_multi_rdds2-1024x557.png"  />][4]
 
@@ -103,7 +103,7 @@ Now we can even make it more sophisticated while keeping it simple. Imagine we w
 
 To make it prettier, we can wrap it in a few lines of Python and call it `ShareableRdd`. Then users can directly connect to the session and set or retrieve values.
 
-{{< highlight python >}}
+<pre><code class="python">
 
 class ShareableRdd():
 
@@ -121,11 +121,11 @@ new_key = sc.parallelize([[key, value]])
 
 self.data = self.data.union(new_key)
 
-{{< /highlight >}}
+</code></pre>
 
 `set()` adds a value to the shared RDD, while `get()` retrieves it.
 
-{{< highlight python >}}
+<pre><code class="python">
 
 srdd = ShareableRdd()
 
@@ -133,33 +133,33 @@ srdd.set('ak', 'Alaska')
 
 srdd.set('ca', 'California')
 
-{{< /highlight >}}
+</code></pre>
 
-{{< highlight python >}}
+<pre><code class="python">
 
 srdd.get('ak')
 
-{{< /highlight >}}
+</code></pre>
 
 If using the REST API directly, one can access it with just these commands:
 
-{{< highlight bash >}}curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"srdd.get(\"ak\")"}'
+<pre><code class="bash">curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"srdd.get(\"ak\")"}'
 
-{"id":3,"state":"running","output":null}{{< /highlight >}}
+{"id":3,"state":"running","output":null}</code></pre>
 
-{{< highlight bash >}}curl localhost:8998/sessions/0/statements/3
+<pre><code class="bash">curl localhost:8998/sessions/0/statements/3
 
-{"id":3,"state":"available","output":{"status":"ok","execution_count":3,"data":{"text/plain":"[['ak', 'Alaska']]"}}}{{< /highlight >}}
+{"id":3,"state":"available","output":{"status":"ok","execution_count":3,"data":{"text/plain":"[['ak', 'Alaska']]"}}}</code></pre>
 
 We can even get prettier data back, directly in json format by adding the `%json` magic keyword:
 
-{{< highlight bash >}}curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"data = srdd.get(\"ak\")\n%json data"}'
+<pre><code class="bash">curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"data = srdd.get(\"ak\")\n%json data"}'
 
-{"id":4,"state":"running","output":null}{{< /highlight >}}
+{"id":4,"state":"running","output":null}</code></pre>
 
-{{< highlight bash >}}curl localhost:8998/sessions/0/statements/4
+<pre><code class="bash">curl localhost:8998/sessions/0/statements/4
 
-{"id":4,"state":"available","output":{"status":"ok","execution_count":2,"data":{"application/json":[["ak","Alaska"]]}}}{{< /highlight >}}
+{"id":4,"state":"available","output":{"status":"ok","execution_count":2,"data":{"application/json":[["ak","Alaska"]]}}}</code></pre>
 
 Note
 
@@ -171,13 +171,13 @@ Support for `%json srdd.get("ak")` is on the way!
 
 <span style="font-weight: 400;">As Livy is providing a simple REST Api, we can quickly implement a little wrapper around it to offer the shared RDD functionality in any languages. Let's do it with regular Python:</span>
 
-{{< highlight python >}}pip install requests
+<pre><code class="python">pip install requests
 
-python{{< /highlight >}}
+python</code></pre>
 
 Then in the Python shell just declare the wrapper:
 
-{{< highlight python >}}
+<pre><code class="python">
 
 import requests
 
@@ -225,21 +225,21 @@ resp = r.json()
 
 return r.json()['data']
 
-{{< /highlight >}}
+</code></pre>
 
 Instantiate it and make it point to a live session that contains a `ShareableRdd`:
 
-{{< highlight python >}}states = SharedRdd('http://localhost:8998/sessions/0', 'states')
+<pre><code class="python">states = SharedRdd('http://localhost:8998/sessions/0', 'states')
 
-{{< /highlight >}}
+</code></pre>
 
 And just interact with the RDD transparently:
 
-{{< highlight python >}}states.get('ak')
+<pre><code class="python">states.get('ak')
 
 states.set('hi', 'Hawaii')
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 27 - 27
docs/gethue/content/posts/2015-10-21-how-to-use-the-livy-spark-rest-job-server-api-for-submitting-batch-jar-python-and-streaming-spark-jobs.md

@@ -61,7 +61,7 @@ We are using the `YARN` mode here, so all the paths needs to exist on HDFS. For
 
 <span style="font-weight: 400;">Livy offers a wrapper around <code>spark-submit</code> that work with jar and py files. The API is slightly different than the interactive. Let's start by listing the active running jobs:</span>
 
-{{< highlight bash >}}curl localhost:8998/sessions | python -m json.tool % Total % Received % Xferd Average Speed Time Time Time Current
+<pre><code class="bash">curl localhost:8998/sessions | python -m json.tool % Total % Received % Xferd Average Speed Time Time Time Current
 
 Dload Upload Total Spent Left Speed
 
@@ -77,19 +77,19 @@ Dload Upload Total Spent Left Speed
 
 }
 
-{{< /highlight >}}
+</code></pre>
 
 Then we upload the Spark example jar `/usr/lib/spark/lib/spark-examples.jar` on HDFS and point to it. If you are using Livy in local mode and not YARN mode, just keep the local path `/usr/lib/spark/lib/spark-examples.jar`.
 
-{{< highlight bash >}}curl -X POST --data '{"file": "/user/romain/spark-examples.jar", "className": "org.apache.spark.examples.SparkPi"}' -H "Content-Type: application/json" localhost:8998/batches
+<pre><code class="bash">curl -X POST --data '{"file": "/user/romain/spark-examples.jar", "className": "org.apache.spark.examples.SparkPi"}' -H "Content-Type: application/json" localhost:8998/batches
 
 {"id":0,"state":"running","log":[]}
 
-{{< /highlight >}}
+</code></pre>
 
 We get the submission id, in our case 0, and can check its progress. It should actually already be done:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 curl localhost:8998/batches/0 | python -m json.tool
 
@@ -131,11 +131,11 @@ Dload Upload Total Spent Left Speed
 
 }
 
-{{< /highlight >}}
+</code></pre>
 
 We can see the output logs:
 
-{{< highlight bash >}}curl localhost:8998/batches/0/log | python -m json.tool
+<pre><code class="bash">curl localhost:8998/batches/0/log | python -m json.tool
 
 % Total % Received % Xferd Average Speed Time Time Time Current
 
@@ -217,37 +217,37 @@ Dload Upload Total Spent Left Speed
 
 }
 
-{{< /highlight >}}
+</code></pre>
 
 We can add an argument to the command, for example 100 iterations; that way the result is more precise and the job runs longer:
 
-{{< highlight bash >}}curl -X POST --data '{"file": "/usr/lib/spark/lib/spark-examples.jar", "className": "org.apache.spark.examples.SparkPi", "args": ["100"]}' -H "Content-Type: application/json" localhost:8998/batches
+<pre><code class="bash">curl -X POST --data '{"file": "/usr/lib/spark/lib/spark-examples.jar", "className": "org.apache.spark.examples.SparkPi", "args": ["100"]}' -H "Content-Type: application/json" localhost:8998/batches
 
 {"id":1,"state":"running","log":[]}
 
-{{< /highlight >}}
+</code></pre>
 
 In case we want to stop the running job, we just issue:
 
-{{< highlight bash >}}curl -X DELETE localhost:8998/batches/1
+<pre><code class="bash">curl -X DELETE localhost:8998/batches/1
 
 {"msg":"deleted"}
 
-{{< /highlight >}}
+</code></pre>
 
 Issuing the same call again returns an error, as the job was already removed from Livy:
 
-{{< highlight bash >}}curl -X DELETE localhost:8998/batches/1
+<pre><code class="bash">curl -X DELETE localhost:8998/batches/1
 
 session not found
 
-{{< /highlight >}}
+</code></pre>
 
 ## **Submitting a Python job**
 
 <span style="font-weight: 400;">Submitting Python jobs is almost identical to jar jobs. We uncompress the spark examples and upload <code>pi.py</code> on HDFS:</span>
 
-{{< highlight bash >}}~/tmp$ tar -zxvf /usr/lib/spark/examples/lib/python.tar.gz
+<pre><code class="bash">~/tmp$ tar -zxvf /usr/lib/spark/examples/lib/python.tar.gz
 
 ./
 
@@ -291,17 +291,17 @@ session not found
 
 ./hbase_inputformat.py
 
-{{< /highlight >}}
+</code></pre>
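
Before submitting, `pi.py` needs to be copied to HDFS, for example (a small sketch, assuming the same home directory as in the POST request below):

<pre><code class="bash"># Upload the example script to HDFS
hdfs dfs -put pi.py /user/romain/pi.py
</code></pre>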
 
 Then start the job:
 
-{{< highlight bash >}}curl -X POST --data '{"file": "/user/romain/pi.py"}' -H "Content-Type: application/json" localhost:8998/batches
+<pre><code class="bash">curl -X POST --data '{"file": "/user/romain/pi.py"}' -H "Content-Type: application/json" localhost:8998/batches
 
-{"id":2,"state":"starting","log":[]}{{< /highlight >}}
+{"id":2,"state":"starting","log":[]}</code></pre>
 
 As always, we can check its status with a simple GET:
 
-{{< highlight bash >}}curl localhost:8998/batches/2 | python -m json.tool
+<pre><code class="bash">curl localhost:8998/batches/2 | python -m json.tool
 
 % Total % Received % Xferd Average Speed Time Time Time Current
 
@@ -341,11 +341,11 @@ Dload Upload Total Spent Left Speed
 
 }
 
-{{< /highlight >}}
+</code></pre>
 
 And the output by adding the `/log` suffix!
 
-{{< highlight bash >}}curl localhost:8998/batches/2/log | python -m json.tool{{< /highlight >}}
+<pre><code class="bash">curl localhost:8998/batches/2/log | python -m json.tool</code></pre>
 
 ## **Submitting a Streaming job**
 
@@ -353,15 +353,15 @@ And the output by adding the `/log` suffix!
 
 After compiling the jar, we upload it to HDFS, along with the twitter4j.properties file.
 
-{{< highlight bash >}}curl -X POST --data '{"file": "/user/romain/spark-solr-1.0-SNAPSHOT.jar", "className": "com.lucidworks.spark.SparkApp", "args": ["twitter-to-solr", "-zkHost", "localhost:9983", "-collection", "tweets"], "files": ["/user/romain/twitter4j.properties"]}' -H "Content-Type: application/json" localhost:8998/batches
+<pre><code class="bash">curl -X POST --data '{"file": "/user/romain/spark-solr-1.0-SNAPSHOT.jar", "className": "com.lucidworks.spark.SparkApp", "args": ["twitter-to-solr", "-zkHost", "localhost:9983", "-collection", "tweets"], "files": ["/user/romain/twitter4j.properties"]}' -H "Content-Type: application/json" localhost:8998/batches
 
 {"id":3,"state":"starting","log":[]}
 
-{{< /highlight >}}
+</code></pre>
 
 We check the status and see that it is running correctly:
 
-{{< highlight bash >}}curl localhost:8998/batches/3 | python -m json.tool
+<pre><code class="bash">curl localhost:8998/batches/3 | python -m json.tool
 
 % Total % Received % Xferd Average Speed Time Time Time Current
 
@@ -401,7 +401,7 @@ Dload Upload Total Spent Left Speed
 
 }
 
-{{< /highlight >}}
+</code></pre>
 
 If we open the Dashboard and configure it like in the blog post, we can see the tweets coming:
 
@@ -409,13 +409,13 @@ If we open the Dashboard and configure it like in the blog post, we can see the
 
 At the end, we can just stop the job with:
 
-{{< highlight bash >}}curl -X DELETE localhost:8998/batches/3{{< /highlight >}}
+<pre><code class="bash">curl -X DELETE localhost:8998/batches/3</code></pre>
 
 &nbsp;
 
 You can refer to the [Batch API documentation][6] for how to specify additional `spark-submit` properties. For example to add a custom name or queue:
 
-{{< highlight bash >}}curl -X POST --data '{"file": "/usr/lib/spark/lib/spark-examples.jar", "className": "org.apache.spark.examples.SparkPi", "queue": "my_queue", "name": "Livy Pi Example"}' -H "Content-Type: application/json" localhost:8998/batches{{< /highlight >}}
+<pre><code class="bash">curl -X POST --data '{"file": "/usr/lib/spark/lib/spark-examples.jar", "className": "org.apache.spark.examples.SparkPi", "queue": "my_queue", "name": "Livy Pi Example"}' -H "Content-Type: application/json" localhost:8998/batches</code></pre>
 
 Next time we will explore magic keywords and how to integrate better with IPython!
 

+ 2 - 2
docs/gethue/content/posts/2015-10-22-use-the-shell-action-in-oozie.md

@@ -56,11 +56,11 @@ If using Hue version less than 4.3 (it is automated from then):
 
 If the executable is a script instead of a standard UNIX command, it needs to be copied to HDFS and the path can be specified by using the File Chooser in the `Files+` field.
 
-{{< highlight bash >}}#!/usr/bin/env bash
+<pre><code class="bash">#!/usr/bin/env bash
 
 sleep
 
-{{< /highlight >}}
+</code></pre>
 
 [<img class="alignnone wp-image-3417 size-full" src="https://cdn.gethue.com/uploads/2015/10/5.png" />][6]
 

+ 4 - 4
docs/gethue/content/posts/2015-12-07-auditing-user-administration-operations-with-hue-and-cloudera-navigator-2.md

@@ -50,7 +50,7 @@ Hue admins can thus easily `monitor` superuser operations such as adding/editing
 
 To enable and configure the log file used for the audit log, there are 2 new configuration properties that have been added to the hue.ini file, and can be overridden in [Cloudera Manager's Service Access Audit Log Properties][4] controls.
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
   
 \# The directory in which to store the audit logs. Auditing is disabled if the value is empty.
   
@@ -62,11 +62,11 @@ audit_event_log_dir=/Users/jennykim/Dev/hue/logs/audit.log
   
 audit_log_max_file_size=100MB
   
-{{< /highlight >}}
+</code></pre>
 
 After configuring the audit log and restarting Hue, you can then start viewing the audited operations by tailing the log:
 
-{{< highlight bash >}}$ tail logs/audit.log
+<pre><code class="bash">$ tail logs/audit.log
 
 {"username": "admin", "impersonator": "hue", "eventTime": 1447271632364, "operationText": "Successful login for user: admin", "service": "accounts", "url": "/accounts/login/", "allowed": true, "operation": "USER_LOGIN", "ipAddress": "127.0.0.1"}
   
@@ -76,7 +76,7 @@ After configuring the audit log and restarting Hue, you can then start viewing t
   
 {"username": "admin", "impersonator": "hue", "eventTime": 1447271788277, "operationText": "Successfully edited permissions: useradmin/access", "service": "useradmin", "url": "/useradmin/permissions/edit/useradmin/access", "allowed": true, "operation": "EDIT_PERMISSION", "ipAddress": "127.0.0.1"}
   
-{{< /highlight >}}
+</code></pre>
 
 Each audited record contains fields for:
 

+ 12 - 12
docs/gethue/content/posts/2015-12-18-getting-started-with-hue-in-2-minutes-with-docker.md

@@ -70,45 +70,45 @@ They are two ways: just pull the latest from the Internet or build it yourself f
 
 ### [][9]{#user-content-pull-the-image-from-docker-hub.anchor}Pull the image from Docker Hub
 
-{{< highlight bash >}}sudo docker pull gethue/hue:latest
+<pre><code class="bash">sudo docker pull gethue/hue:latest
 
-{{< /highlight >}}
+</code></pre>
 
 ### [][10]{#user-content-build-the-image.anchor}Build the image
 
-{{< highlight bash >}}cd tools/docker/hue-base
+<pre><code class="bash">cd tools/docker/hue-base
 
 sudo docker build --rm -t gethue/hue:latest .
 
-{{< /highlight >}}
+</code></pre>
 
 ## [][11]{#user-content-running-the-image.anchor}Running the image
 
-{{< highlight bash >}}docker run -it -p 8888:8888 gethue/hue:latest bash
+<pre><code class="bash">docker run -it -p 8888:8888 gethue/hue:latest bash
 
-{{< /highlight >}}
+</code></pre>
 
 This opens a bash to the root of the project. From there you can run the development version of Hue with the command
 
-{{< highlight bash >}}./build/env/bin/hue runserver_plus 0.0.0.0:8888
+<pre><code class="bash">./build/env/bin/hue runserver_plus 0.0.0.0:8888
 
-{{< /highlight >}}
+</code></pre>
 
 Hue should then be up and running on your default Docker IP on port 8888, so usually [http://192.168.99.100:8888][12].
 
 **Note** If 192.168.99.100 does not work, get the IP of the docker container with:
 
-{{< highlight bash >}}sudo docker ps
+<pre><code class="bash">sudo docker ps
 
 CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
 
 b7950388c1db gethue/hue:latest "bash" 10 minutes ago Up 10 minutes 22/tcp, 0.0.0.0:8888->8888/tcp agitated_mccarthy
 
-{{< /highlight >}}
+</code></pre>
 
 Then get `inet addr`, so in our case [http://172.17.0.1:8888][13]:
 
-{{< highlight bash >}}sudo docker exec -it b7950388c1db /sbin/ifconfig eth0
+<pre><code class="bash">sudo docker exec -it b7950388c1db /sbin/ifconfig eth0
 
 eth0 Link encap:Ethernet HWaddr 02:42:ac:11:00:01
 
@@ -126,7 +126,7 @@ collisions:0 txqueuelen:0
 
 RX bytes:10626 (10.6 KB) TX bytes:648 (648.0 B)
 
-{{< /highlight >}}
+</code></pre>
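
Alternatively, the container IP can be fetched directly with `docker inspect` (using the container id from the listing above):

<pre><code class="bash">sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' b7950388c1db
</code></pre>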
 
 <a href="https://raw.githubusercontent.com/cloudera/hue/master/docs/images/login.png" target="_blank" rel="noopener noreferrer"><img title="Hue First Login" src="https://raw.githubusercontent.com/cloudera/hue/master/docs/images/login.png" alt="alt text" /></a>
 

+ 20 - 20
docs/gethue/content/posts/2016-03-03-custom-sql-query-editors.md

@@ -48,7 +48,7 @@ Hue's new query editor can easily be configured to work with any database backen
 
 First, in your `hue.ini` file, you will need to add the relevant database connection information under the `librdbms` section:
 
-{{< highlight bash >}}[librdbms]
+<pre><code class="bash">[librdbms]
 
 [[databases]]
 
@@ -68,11 +68,11 @@ password=hue
 
 options={}
 
-{{< /highlight >}}
+</code></pre>
 
 Secondly, we need to add a new interpreter to the notebook app. This will allow the new database type to be registered as a snippet-type in the Notebook app. For query editors that use a Django-compatible database, the name in the brackets should match the database configuration name in the `librdbms` section (e.g. - `postgresql`). The interface will be set to `rdbms`. This tells Hue to use the `librdbms` driver and corresponding connection information to connect to the database. For example, with the above postgresql connection configuration in the `librdbms` section, we can add a PostgreSQL interpreter with the following `notebook` configuration:
 
-{{< highlight bash >}}[notebook]
+<pre><code class="bash">[notebook]
 
 [[interpreters]]
 
@@ -82,7 +82,7 @@ name=PostgreSQL
 
 interface=rdbms
 
-{{< /highlight >}}
+</code></pre>
 
 After updating the configuration and restarting Hue, we can access the new PostgreSQL interpreter in the Notebook app:
 
@@ -111,16 +111,16 @@ Integrating an external JDBC database involves a 3-step process:
 
   1. Download the compatible client driver JAR file for your specific OS and database. Usually you can find the driver files from the official database vendor site; for example, the MySQL JDBC connector for Mac OSX can be found here: <https://dev.mysql.com/downloads/connector/j/>. (NOTE: In the case of MySQL, the JDBC driver is platform independent, but some drivers are specific to certain OSes and versions so be sure to verify compatibility.)
   2. Add the path to the driver JAR file to your Java CLASSPATH. Here, we set the CLASSPATH environment variable in our \`.bash_profile\` script.
-    {{< highlight bash >}}# MySQL
+    <pre><code class="bash"># MySQL
 
     export MYSQL_HOME=/Users/hue/Dev/mysql
 
     export CLASSPATH=$MYSQL_HOME/mysql-connector-java-5.1.38-bin.jar:$CLASSPATH
 
-    {{< /highlight >}}
+    </code></pre>
 
   3. Add a new interpreter to the notebook app and supply the "name", set "interface" to `jdbc`, and set "options" to a JSON object that contains the JDBC connection information. For example, we can connect a local MySQL database named "hue" running on \`localhost\` and port \`8080\` via JDBC with the following configuration:
-    {{< highlight bash >}}[notebook]
+    <pre><code class="bash">[notebook]
 
     [[interpreters]]
 
@@ -132,7 +132,7 @@ Integrating an external JDBC database involves a 3-step process:
 
     options='{"url": "jdbc:mysql://localhost:3306/hue", "driver": "com.mysql.jdbc.Driver", "user": "root", "password": ""}'
 
-    {{< /highlight >}}
+    </code></pre>
 
 #### TIP: Testing JDBC Configurations
 
@@ -178,7 +178,7 @@ Microsoft's SQL Server JDBC drivers can be downloaded from the official site: [
 
 ##### Sample Configuration
 
-{{< highlight bash >}}[[[sqlserver]]]
+<pre><code class="bash">[[[sqlserver]]]
 
 name=SQLServer JDBC
 
@@ -186,7 +186,7 @@ interface=jdbc
 
 options='{"url": "jdbc:microsoft:sqlserver://localhost:1433", "driver": "com.microsoft.jdbc.sqlserver.SQLServerDriver", "user": "admin": "password": "pass"}'
 
-{{< /highlight >}}
+</code></pre>
 
 ####
 
@@ -198,7 +198,7 @@ Vertica's JDBC client drivers can be downloaded here: [Vertica JDBC Client Driv
 
 ##### Sample Configuration
 
-{{< highlight bash >}}[[[vertica]]]
+<pre><code class="bash">[[[vertica]]]
 
 name=Vertica JDBC
 
@@ -206,7 +206,7 @@ interface=jdbc
 
 options='{"url": "jdbc:vertica://localhost:5433/example", "driver": "com.vertica.jdbc.Driver", "user": "admin", "password": "pass"}'
 
-{{< /highlight >}}
+</code></pre>
 
 ####
 
@@ -218,7 +218,7 @@ The Phoenix JDBC client driver is bundled with the Phoenix binary and source rel
 
 ##### Sample Configuration
 
-{{< highlight bash >}}[[[phoenix]]]
+<pre><code class="bash">[[[phoenix]]]
 
 name=Phoenix JDBC
 
@@ -226,7 +226,7 @@ interface=jdbc
 
 options='{"url": "jdbc:phoenix:localhost:2181/hbase", "driver": "org.apache.phoenix.jdbc.PhoenixDriver", "user": "", "password": ""}'
 
-{{< /highlight >}}
+</code></pre>
 
 **NOTE**: Currently, the Phoenix JDBC connector for Hue only supports read-only operations (SELECT and EXPLAIN statements).
 
@@ -240,7 +240,7 @@ The Presto JDBC client driver is maintained by the Presto Team and can be downlo
 
 ##### Sample Configuration
 
-{{< highlight bash >}}[[[presto]]]
+<pre><code class="bash">[[[presto]]]
 
 name=Presto JDBC
 
@@ -248,7 +248,7 @@ interface=jdbc
 
 options='{"url": "jdbc:presto://localhost:8080/", "driver": "com.facebook.presto.jdbc.PrestoDriver"}'
 
-{{< /highlight >}}
+</code></pre>
 
 ####
 
@@ -258,7 +258,7 @@ The [Drill JDBC driver][11] can be used.
 
 ##### Sample Configuration
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 <pre class="pre codeblock"><code>[[[drill]]]
 
@@ -274,7 +274,7 @@ interface=jdbc
 
 options='{"url": "<drill-jdbc-url>", "driver": "org.apache.drill.jdbc.Driver", "user": "admin", "password": "admin"}'</code>
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 
@@ -286,7 +286,7 @@ The Kylin JDBC client driver is maintained can be downloaded here: <http://kyl
 
 ##### Sample Configuration
 
-{{< highlight bash >}}[[[kylin]]]
+<pre><code class="bash">[[[kylin]]]
 
 name=kylin JDBC
 
@@ -294,7 +294,7 @@ interface=jdbc
 
 options='{"url": "jdbc:kylin://172.17.0.2:7070/learn_kylin","driver": "org.apache.kylin.jdbc.Driver", "user": "ADMIN", "password": "KYLIN"}'
 
-{{< /highlight >}}
+</code></pre>
 
 ### When HS2, RDBMS, and JDBC Are Not Enough
 

+ 4 - 4
docs/gethue/content/posts/2016-04-06-suggest-for-solr-search-dashboards.md

@@ -58,7 +58,7 @@ We hope that you like the interactivity, and feel free to send feedback on the
 
 First grab a [Solr 5][4], start it and make sure that it has a suggester configured:
 
-{{< highlight bash >}}romain@unreal:$ ./bin/solr -e techproducts
+<pre><code class="bash">romain@unreal:$ ./bin/solr -e techproducts
 
 Waiting to see Solr listening on port 8983 [/]
 
@@ -68,11 +68,11 @@ Checked core existence using Core API command:
 
 http://localhost:8983/solr/admin/cores?action=STATUS&core=techproducts
 
-{{< /highlight >}}
+</code></pre>
 
 Confirm that Solr has a `suggester` configured, here named `mySuggester`:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 http://127.0.0.1:8983/solr/#/techproducts/files?file=solrconfig.xml
 
@@ -98,7 +98,7 @@ http://127.0.0.1:8983/solr/#/techproducts/files?file=solrconfig.xml
 
 </searchComponent>
 
-{{< /highlight >}}
+</code></pre>
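
To sanity-check the suggester outside of Hue, you can hit the suggest handler directly (a hypothetical request; the handler path, dictionary name and query term depend on your solrconfig.xml):

<pre><code class="bash">curl "http://localhost:8983/solr/techproducts/suggest?suggest=true&suggest.dictionary=mySuggester&suggest.q=elec&wt=json"
</code></pre>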
 
 &nbsp;
 

+ 2 - 2
docs/gethue/content/posts/2016-05-04-the-hue-team-development-process.md

@@ -63,11 +63,11 @@ Ready? Go!
   </li>
 </ul>
 
-{{< highlight bash >}}#!/bin/bash
+<pre><code class="bash">#!/bin/bash
 
 SUMMARY=$(curl -s https://issues.cloudera.org/rest/api/2/issue/HUE-${1} | jq -r '.fields | .summary')
 
-git commit -m "HUE-${1} ${SUMMARY}"{{< /highlight >}}
+git commit -m "HUE-${1} ${SUMMARY}"</code></pre>
 
 </span>
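
For example, saved as `jira-commit.sh` (a hypothetical name) and made executable, committing with the JIRA summary becomes a one-liner:

<pre><code class="bash">chmod +x jira-commit.sh
./jira-commit.sh 1234  # commits as "HUE-1234 <summary fetched from the JIRA>"
</code></pre>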
 

+ 2 - 2
docs/gethue/content/posts/2016-06-13-introducing-the-new-login-modal-and-idle-session-timeout.md

@@ -40,7 +40,7 @@ With the latest release of Hue 3.10, we've added an additional security feature
 
 Hue now offers a new property, `idle_session_timeout`, that can be configured in the hue.ini file:
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [desktop]
   
@@ -48,7 +48,7 @@ Hue now offers a new property, `idle_session_timeout`, that can be configured i
   
 idle_session_timeout=600
   
-{{< /highlight >}}
+</code></pre>
 
 When `idle_session_timeout` is set, users will automatically be logged out after N (e.g. 600) seconds of inactivity and be prompted to log in again:
 

+ 4 - 4
docs/gethue/content/posts/2016-07-19-change-your-maps-look-and-feel.md

@@ -54,7 +54,7 @@ Let's display the Esri.WorldImagery in Hue!
 
 The properties we need to tweak are `leaflet_tile_layer` and `leaflet_tile_layer_attribution`, that can be configured in the hue.ini file:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
@@ -62,19 +62,19 @@ leaflet_tile_layer=https://server.arcgisonline.com/ArcGIS/rest/services/World_Im
 
 leaflet_tile_layer_attribution='Tiles &copy; Esri &mdash; Source: Esri, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, UPR-EGP, and the GIS User Community'
 
-{{< /highlight >}}
+</code></pre>
 
 The values are exactly the same as those from the Leaflet providers demo.
 
 With the recent security improvements in Hue, we also need to whitelist the tile domain `server.arcgisonline.com` and put it in place of `*.tile.osm.org`
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
 secure_content_security_policy="script-src 'self' 'unsafe-inline' 'unsafe-eval' \*.google-analytics.com \*.doubleclick.net \*.mathjax.org data:;img-src 'self' \*.google-analytics.com *.doubleclick.net server.arcgisonline.com data:;style-src 'self' 'unsafe-inline';connect-src 'self';child-src 'none';object-src 'none'"
 
-{{< /highlight >}}
+</code></pre>
 
 Et voila, when we restart Hue, we'll have the world imagery in every app that uses maps!
 

+ 2 - 2
docs/gethue/content/posts/2016-08-22-easy-indexing-of-data-into-solr.md

@@ -61,7 +61,7 @@ categories:
 
 <span style="font-weight: 400;">Next you'll need to install these required <a href="https://www.dropbox.com/s/unex80g7xbx1aq7/smart_indexer_lib-2016-08-22.zip?dl=0">libraries</a>. To do so place them in a directory somewhere on HDFS and set the path for </span>_<span style="font-weight: 400;">config_indexer_libs_path</span>_ <span style="font-weight: 400;">under indexer in the Hue ini to match by default, the </span>_<span style="font-weight: 400;">config_indexer_libs_path</span>_ <span style="font-weight: 400;">value is set to </span>_<span style="font-weight: 400;">/tmp/smart_indexer_lib</span>_<span style="font-weight: 400;">. Additionally under indexer in the Hue ini you’ll need to set </span>_<span style="font-weight: 400;">enable_new_indexer </span>_<span style="font-weight: 400;">to true</span><span style="font-weight: 400;">.</span>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [indexer]
 
@@ -73,7 +73,7 @@ enable_new_indexer=false
 
 \## config_indexer_libs_path=/tmp/smart_indexer_lib
 
-{{< /highlight >}}
+</code></pre>
 
 **Note**:
 

+ 6 - 6
docs/gethue/content/posts/2016-08-25-introducing-s3-support-in-hue.md

@@ -59,7 +59,7 @@ categories:
  These keys can be securely stored in a script that outputs the actual access key and secret key to stdout to be read by Hue (this is similar to how <a href="https://gethue.com/storing-passwords-in-script-rather-than-hue-ini-files/">Hue reads password scripts</a>). In order to use script files, add the following section to your <code>hue.ini</code> configuration file:
 </p>
 
-{{< highlight bash >}}[aws]
+<pre><code class="bash">[aws]
 
 [[aws_accounts]]
 
@@ -73,13 +73,13 @@ allow_environment_credentials=false
 
 region=us-east-1
 
-{{< /highlight >}}
+</code></pre>
 
 <p class="p1">
   Alternatively (but not recommended for production or secure environments), you can set the <code>access_key_id</code> and <code>secret_access_key</code> values to the plain-text values of your keys:
 </p>
 
-{{< highlight bash >}}[aws]
+<pre><code class="bash">[aws]
 
 [[aws_accounts]]
 
@@ -93,7 +93,7 @@ allow_environment_credentials=false
 
 region=us-east-1
 
-{{< /highlight >}}
+</code></pre>
 
 <p class="p1">
   The region should be set to the AWS region corresponding to the S3 account. By default, this region will be set to ‘us-east-1’.
@@ -105,7 +105,7 @@ region=us-east-1
   In addition to configuring Hue with your S3 credentials, Hadoop will also need to be configured with the S3 authentication credentials in order to read from and save to S3. This can be done by setting the following properties in your <code>core-site.xml</code> file:
 </p>
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -123,7 +123,7 @@ region=us-east-1
 
 </property>
 
-{{< /highlight >}}
+</code></pre>
 
 <p class="p4">
   <span class="s2">For more information see <a href="http://wiki.apache.org/hadoop/AmazonS3"><span class="s1">http://wiki.apache.org/hadoop/AmazonS3</span></a></span>

+ 16 - 16
docs/gethue/content/posts/2016-09-22-hue-security-improvements.md

@@ -50,7 +50,7 @@ This document describes some of the fixes and enables Hue administrators to enf
 
 The new Content-Security-Policy HTTP response header helps you reduce XSS risks on modern browsers by declaring what dynamic resources are allowed to load via an HTTP header. (Read more here: <https://content-security-policy.com/>)
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [desktop]
   
@@ -58,11 +58,11 @@ secure_content_security_policy="script-src 'self' 'unsafe-inline' 'unsafe-eval'
   
 #In HUE 3.11 and higher it is enabled by default.
   
-{{< /highlight >}}
+</code></pre>
 
 If you want to turn off the Content-Security-Policy header, use the following value. <span style="color: #ff0000;">(Beware: use it at your own risk)</span>
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [desktop]
   
@@ -70,11 +70,11 @@ secure_content_security_policy=""
   
 #(Beware use it on your own risk)
   
-{{< /highlight >}}
+</code></pre>
 
 If you want to disable declaring what dynamic resources are allowed to load via an HTTP header, you can use the following value. <span style="color: #ff0000;">(Use it at your own risk)</span>
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [desktop]
   
@@ -82,7 +82,7 @@ secure_content_security_policy="default-src 'self' 'unsafe-eval' 'unsafe-inline'
   
 #(Use it on your own risk)
   
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2016/09/block-content-1024x400.png" />
   
@@ -92,15 +92,15 @@ secure_content_security_policy="default-src 'self' 'unsafe-eval' 'unsafe-inline'
 
 Hue now minimizes disclosure of web server information, such as the web server type and version. No change is needed from the end user. It produces the following HTTP response header:
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 Server:apache
   
-{{< /highlight >}}
+</code></pre>
 
 ### These HTTP response headers are generated after above security fixes.
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 x-content-type-options:nosniff
   
@@ -114,13 +114,13 @@ Strict-Transport-Security:max-age=31536000; includeSubDomains
   
 Server:apache
   
-{{< /highlight >}}
+</code></pre>
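
A quick way to verify that these headers are being returned is to inspect a response from the command line (hue.example.com:8888 is a placeholder for your Hue host and port):

<pre><code class="bash">curl -s -I https://hue.example.com:8888/ | grep -iE 'content-security|x-content-type|x-xss|strict-transport|^server'
</code></pre>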
 
 ### X-Content-Type-Options: header
 
 Some browsers will try to guess the content types of the assets that they fetch, overriding the Content-Type header. To prevent the browser from guessing the content type, and force it to always use the type provided in the Content-Type header, you can pass the X-Content-Type-Options: nosniff header.
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [desktop]
   
@@ -128,13 +128,13 @@ secure_content_type_nosniff=true
   
 #In HUE 3.11 and higher it is enabled by default.
   
-{{< /highlight >}}
+</code></pre>
 
 ### X-XSS-Protection: header
 
 Some browsers have the ability to block content that appears to be an XSS attack. They work by looking for JavaScript content in the GET or POST parameters of a page. To enable the XSS filter in the browser and force it to always block suspected XSS attacks, you can pass the X-XSS-Protection: 1; mode=block header.
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [desktop]
   
@@ -142,7 +142,7 @@ secure_browser_xss_filter=true
   
 #In HUE 3.11 and higher it is enabled by default.
   
-{{< /highlight >}}
+</code></pre>
 
 [
   
@@ -187,7 +187,7 @@ Fixed Arbitrary host header acceptance in Hue. Now one can set host/domain names
   
 allowed_hosts="host.domain,host2.domain,host3.domain"
 
-{{< highlight bash >}}
+<pre><code class="bash">
   
 [desktop]
   
@@ -197,7 +197,7 @@ allowed_hosts="*.domain"
   
 \# or specific example: allowed_hosts="hue1.hadoop.cloudera.com,hue2.hadoop.cloudera.com"
   
-{{< /highlight >}}
+</code></pre>
 
 ### Fixed Denial-of-service possibility by filling session store
 

+ 4 - 4
docs/gethue/content/posts/2016-12-19-security-improvements-http-only-flag-sasl-qop-and-more.md

@@ -80,13 +80,13 @@ SASL QOP values are
 
 In the Thrift SASL library, the sasl_max_buffer support is already implemented. sasl_max_buffer in the hue.ini provides a bigger, configurable buffer size that allows support for hive.server2.thrift.sasl.qop="auth-conf".
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 \# This property specifies the maximum size of the receive buffer in bytes in thrift sasl communication (default 2 MB).
 
 sasl_max_buffer=2 \* 1024 \* 1024
 
-{{< /highlight >}}
+</code></pre>
 
 ### Fixed XML Injection for oozie
 
@@ -104,11 +104,11 @@ Turn off HSTS header in Hue Load Balancer and made sure Hue server is generating
 
 The Requests Session object allows certain parameters to persist across requests. It also persists cookies across all requests made from the Session instance, and uses urllib3's connection pooling. We make several requests to the same host:port; with this change the underlying TCP connection is reused, which can result in a significant performance increase. The current pool size is set to 40 connections and is configurable using the "CHERRYPY_SERVER_THREADS" parameter.
 
-{{< highlight python >}}CACHE_SESSION = requests.Session()
+<pre><code class="python">CACHE_SESSION = requests.Session()
 
 CACHE_SESSION.mount('http://', requests.adapters.HTTPAdapter(pool_connections=conf.CHERRYPY_SERVER_THREADS.get(), pool_maxsize=conf.CHERRYPY_SERVER_THREADS.get()))
 
-CACHE_SESSION.mount('https://', requests.adapters.HTTPAdapter(pool_connections=conf.CHERRYPY_SERVER_THREADS.get(), pool_maxsize=conf.CHERRYPY_SERVER_THREADS.get())){{< /highlight >}}
+CACHE_SESSION.mount('https://', requests.adapters.HTTPAdapter(pool_connections=conf.CHERRYPY_SERVER_THREADS.get(), pool_maxsize=conf.CHERRYPY_SERVER_THREADS.get()))</code></pre>
 
  [1]: https://gethue.com/hue-security-improvements/
  [2]: https://cdn.gethue.com/uploads/2016/12/Screen-Shot-2016-12-15-at-4.22.11-PM.png

+ 2 - 2
docs/gethue/content/posts/2016-12-22-extract-archives-as-oozie-job.md

@@ -57,13 +57,13 @@ Once the job finishes, you can find the extracted contents in the same HDFS fold
 
 Flag to enable this feature until Hue 4:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [filebrowser]
 
 \# enable_extract_uploaded_archive=true
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 2 - 2
docs/gethue/content/posts/2016-12-22-sql-improvements-with-row-counts-sample-popup-and-more.md

@@ -84,11 +84,11 @@ categories:
 
 <span style="font-weight: 400;">An </span>[<span style="font-weight: 400;">external contribution</span>][7] <span style="font-weight: 400;">provided support for sending multiple queries when using Tez (instead of a maximum of just one at the time). You can turn it on with this setting:</span>
 
-{{< highlight bash >}}[beeswax]
+<pre><code class="bash">[beeswax]
 
 max_number_of_sessions=10
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 8 - 8
docs/gethue/content/posts/2017-02-06-hue-3-12-the-improved-editor-for-sql-developers-and-analysts-is-out.md

@@ -133,7 +133,7 @@ Fixed Arbitrary host header acceptance in Hue. Now one can set host/domain names
 
 allowed_hosts="host.domain,host2.domain,host3.domain"
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
@@ -143,7 +143,7 @@ allowed_hosts="*.domain"
 
 \# or specific example: allowed_hosts="hue1.hadoop.cloudera.com,hue2.hadoop.cloudera.com"
 
-{{< /highlight >}}
+</code></pre>
 
 <span style="color: #ff0000;"><strong>Note</strong></span>: “Bad Request (400)” error: when [hosting Hue in an AWS cluster][14], you might need to set the value to '*' to allow external client of the network to access Hue.
 
@@ -157,31 +157,31 @@ allowed_hosts="*.domain"
 
 <span style="font-weight: 400;">In Thrift SASL library, the </span><span style="font-weight: 400;">sasl_max_buffer</span> <span style="font-weight: 400;">support is already implemented. </span><span style="font-weight: 400;">sasl_max_buffer</span> <span style="font-weight: 400;">in the </span><span style="font-weight: 400;">hue.ini</span> <span style="font-weight: 400;">provides a bigger and configurable buffer size that allow to provide support for </span><span style="font-weight: 400;"><code>hive.server2.thrift.sasl.qop="auth-conf"&lt;code></code></code></span><span style="font-weight: 400;">.</span>
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 \# This property specifies the maximum size of the receive buffer in bytes in thrift sasl communication (default 2 MB).
 
 sasl_max_buffer=2 \* 1024 \* 1024
 
-{{< /highlight >}}
+</code></pre>
 
 ## <span style="font-weight: 400;">Introducing Request HTTP Pool in Hue</span>
 
 <span style="font-weight: 400;">The Request Session object allows the persistence of certain parameters across requests. It also persists cookies across all requests made from the Session instance, and will use urllib3’s connection pooling. We are making several requests to the same host:port, with this change the underlying TCP connection will be reused, which can result in a significant performance increase.</span>
 
-{{< highlight python >}}CACHE_SESSION = requests.Session()
+<pre><code class="python">CACHE_SESSION = requests.Session()
 
 CACHE_SESSION.mount('http://', requests.adapters.HTTPAdapter(pool_connections=conf.CHERRYPY_SERVER_THREADS.get(), pool_maxsize=conf.CHERRYPY_SERVER_THREADS.get()))
 
 CACHE_SESSION.mount('https://', requests.adapters.HTTPAdapter(pool_connections=conf.CHERRYPY_SERVER_THREADS.get(), pool_maxsize=conf.CHERRYPY_SERVER_THREADS.get()))
 
-{{< /highlight >}}
+</code></pre>
 
 ## Content-Security-Policy: header
 
 The new Content-Security-Policy HTTP response header helps you reduce XSS risks on modern browsers by declaring what dynamic resources are allowed to load via a HTTP Header. (For more reading: <https://content-security-policy.com/>)
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [desktop]
 
@@ -189,7 +189,7 @@ secure_content_security_policy="script-src 'self' 'unsafe-inline' 'unsafe-eval'
 
 #In HUE 3.11 and higher it is enabled by default.
 
-{{< /highlight >}}
+</code></pre>
 
 &nbsp;
 

+ 4 - 4
docs/gethue/content/posts/2017-04-03-hue-with-a-custom-logo.md

@@ -38,7 +38,7 @@ We have seen in this [previous blog post][1] that there's a way to customize the
 
 That's a perfect setting to show your company logo up there. Depending on whether you are using <a href="https://gethue.com/hadoop-tutorial-how-to-create-a-real-hadoop-cluster-in-10-minutes/" target="_blank" rel="noopener noreferrer">Cloudera Manager</a> or not, you should either add a safety valve or edit a .ini file to use this feature. For details on how to change the configuration, <a href="https://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/" target="_blank" rel="noopener noreferrer">read here</a>. In the desktop/custom section of the ini file you can find the logo_svg property:
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 [[custom]]
 
@@ -48,17 +48,17 @@ That's a perfect setting to show your company logo up there. Depending on if yo
 
 \## logo_svg=
 
-{{< /highlight >}}
+</code></pre>
 
 You can go crazy and write any SVG code you want there. Please keep in mind that your SVG should be designed to fit in a 160x40 pixel space. To get the same 'hearts logo' you can see above, you can type this code
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 [[custom]]
 
 logo_svg='<g><path stroke="null" id="svg_1" d="m44.41215,11.43463c-4.05017,-10.71473 -17.19753,-5.90773 -18.41353,-0.5567c-1.672,-5.70253 -14.497,-9.95663 -18.411,0.5643c-4.35797,11.71793 16.891,22.23443 18.41163,23.95773c1.5181,-1.36927 22.7696,-12.43803 18.4129,-23.96533z" fill="#ffffff"/> <path stroke="null" id="svg_2" d="m98.41246,10.43463c-4.05016,-10.71473 -17.19753,-5.90773 -18.41353,-0.5567c-1.672,-5.70253 -14.497,-9.95663 -18.411,0.5643c-4.35796,11.71793 16.891,22.23443 18.41164,23.95773c1.5181,-1.36927 22.76959,-12.43803 18.41289,-23.96533z" fill="#FF5A79"/> <path stroke="null" id="svg_3" d="m154.41215,11.43463c-4.05016,-10.71473 -17.19753,-5.90773 -18.41353,-0.5567c-1.672,-5.70253 -14.497,-9.95663 -18.411,0.5643c-4.35796,11.71793 16.891,22.23443 18.41164,23.95773c1.5181,-1.36927 22.76959,-12.43803 18.41289,-23.96533z" fill="#ffffff"/> </g>'
 
-{{< /highlight >}}
+</code></pre>
 
 There are some online tools that can help you with designing/importing the logo. For instance, <a href="http://editor.method.ac/" target="_blank" rel="noopener noreferrer">http://editor.method.ac/</a> allows you to get the SVG code right away
 

+ 2 - 2
docs/gethue/content/posts/2017-07-20-the-hue-4-user-interface-in-detail.md

@@ -105,13 +105,13 @@ The older Hue 3 UI is still there and it's easily reachable just by clicking on
 
 Administrators can also decide to enable/disable the new UI at a global level on the <a href="https://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/" target="_blank" rel="noopener noreferrer">hue.ini or CM safety valve</a>
 
-{{< highlight bash >}}[desktop]
+<pre><code class="bash">[desktop]
 
 \# Choose whether to enable the new Hue 4 interface.
 
 is_hue_4=true
 
-{{< /highlight >}}
+</code></pre>
 
 If you look at your browser's address bar, you will notice that all the URLs with the <span class="emphasis"><em>/hue</em></span> prefix point to Hue 4. It is possible to just remove the prefix and land on the Hue 3 version of the page, e.g. /hue/editor (Hue 4) → /editor (Hue 3)
 

+ 8 - 8
docs/gethue/content/posts/2017-11-20-browsing-adls-data-querying-it-with-sql-and-exporting-the-results-back-in-hue-4-2.md

@@ -94,7 +94,7 @@ categories:
  In order to add an ADLS account to Hue, you’ll need to configure Hue with valid <a href="https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-rest-api">ADLS credentials</a>, including the client ID, client secret and tenant ID.<br /> These keys can be securely stored in a script that outputs the actual access key and secret key to stdout to be read by Hue (this is similar to how <a href="https://gethue.com/storing-passwords-in-script-rather-than-hue-ini-files/">Hue reads password scripts</a>). In order to use script files, add the following section to your hue.ini configuration file:
 </p>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [adls]
 
@@ -116,13 +116,13 @@ fs_defaultfs=adl://<account_name>.azuredatalakestore.net
 
 webhdfs_url=https://<account_name>.azuredatalakestore.net
 
-{{< /highlight >}}
+</code></pre>
 
 <p class="p1">
   Alternatively (but not recommended for production or secure environments), you can set the client_secret value in plain-text:
 </p>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [adls]
 
@@ -144,13 +144,13 @@ fs_defaultfs=adl://<account_name>.azuredatalakestore.net
 
 webhdfs_url=https://<account_name>.azuredatalakestore.net
 
-{{< /highlight >}}
+</code></pre>
 
 <p class="p1">
   Alternatively (but not recommended for production or secure environments), you can set the client_secret value in plain-text:
 </p>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [adls]
 
@@ -172,7 +172,7 @@ fs_defaultfs=adl://<account_name>.azuredatalakestore.net
 
 webhdfs_url=https://<account_name>.azuredatalakestore.net
 
-{{< /highlight >}}
+</code></pre>
 
 ## Integrating Hadoop with ADLS {.p3}
 
@@ -180,7 +180,7 @@ webhdfs_url=https://<account_name>.azuredatalakestore.net
   In addition to configuring Hue with your ADLS credentials, Hadoop will also need to be configured with the ADLS authentication credentials in order to read from and save to ADLS. This can be done by setting the following properties in your <a href="https://hadoop.apache.org/docs/current/hadoop-azure-datalake/index.html#Using_Client_Keys">core-site.xml</a> file:
 </p>
 
-{{< highlight xml >}}
+<pre><code class="xml">
 
 <property>
 
@@ -214,7 +214,7 @@ webhdfs_url=https://<account_name>.azuredatalakestore.net
 
</property>
 
-{{< /highlight >}}
+</code></pre>
 
 <p class="p1">
  With Hue and Hadoop configured, we can verify that Hue is able to successfully connect to ADLS by restarting Hue and checking the configuration page. You should not see any errors related to ADLS, and you should notice an additional option in the main navigation menu.

+ 4 - 4
docs/gethue/content/posts/2017-12-08-browsing-impala-query-execution-within-the-sql-editor.md

@@ -76,11 +76,11 @@ categories:
 
Display the [explain][2] plan, which outlines the logical execution steps. You can verify here that the execution will not proceed in an unexpected way (e.g. wrong join type, join order, or projection order). This can happen if the statistics for the table are out of date, as shown in the image below by the mention of "cardinality: unavailable". You can obtain statistics by running:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 COMPUTE STATS <TABLE_NAME>
 
-{{< /highlight >}}
+</code></pre>
 
 <img class="aligncenter wp-image-5077" src="https://cdn.gethue.com/uploads/2017/11/Explain.png"/>
 
@@ -116,7 +116,7 @@ Manually close an opened query.
  The enable_query_browser flag should be on by default. All you need to do to access the new browser is to make sure Impala is configured in Hue.
 </p>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 [impala]
 
@@ -128,7 +128,7 @@ server_port=<impala_port>
 
 enable_query_browser=true
 
-{{< /highlight >}}
+</code></pre>
 
 As always, if you have any questions, feel free to comment here or on the [hue-user list][8] or [@gethue][9]!
 

+ 22 - 22
docs/gethue/content/posts/2017-12-13-using-hue-to-interact-with-apache-kylin.md

@@ -57,49 +57,49 @@ In this post, we will demonstrate how you can connect Hue to Apache Kylin and ge
 
Use Docker to pull the latest Hue image.
 
-{{< highlight bash >}}docker pull gethue/hue:latest{{< /highlight >}}
+<pre><code class="bash">docker pull gethue/hue:latest</code></pre>
 
 #### <a class="md-header-anchor " name="header-n16"></a>Prepare kylin jdbc driver
 
Download the Apache Kylin installer package
 
-{{< highlight bash >}}wget -c http://mirror.bit.edu.cn/apache/kylin/apache-kylin-2.2.0/apache-kylin-2.2.0-bin-hbase1x.tar.gz{{< /highlight >}}
+<pre><code class="bash">wget -c http://mirror.bit.edu.cn/apache/kylin/apache-kylin-2.2.0/apache-kylin-2.2.0-bin-hbase1x.tar.gz</code></pre>
 
Unzip the package
 
-{{< highlight bash >}}tar -zxvf apache-kylin-2.2.0-bin-hbase1x.tar.gz{{< /highlight >}}
+<pre><code class="bash">tar -zxvf apache-kylin-2.2.0-bin-hbase1x.tar.gz</code></pre>
 
Copy the Kylin JDBC driver
 
-{{< highlight bash >}}cp apache-kylin-2.2.0-bin/lib/kylin-jdbc-2.2.0.jar .
+<pre><code class="bash">cp apache-kylin-2.2.0-bin/lib/kylin-jdbc-2.2.0.jar .
 
 hue$ ls
 
 apache-kylin-2.2.0-bin apache-kylin-2.2.0-bin-hbase1x.tar.gz kylin-jdbc-2.2.0.jar
 
-{{< /highlight >}}
+</code></pre>
 
 #### <a class="md-header-anchor " name="header-n27"></a>Copy hub config file to host machine
 
Copy the file from the Docker container
 
-{{< highlight bash >}}docker run -it -d -name hue_tmp gethue/hue /bin/bash
+<pre><code class="bash">docker run -it -d -name hue_tmp gethue/hue /bin/bash
 
docker cp hue_tmp:/hue/desktop/conf/pseudo-distributed.ini .
 
 docker stop hue_tmp; docker rm hue_tmp
 
-{{< /highlight >}}
+</code></pre>
 
 Now you should have the `pseudo-distributed.ini` in your current directory.
 
 #### <a class="md-header-anchor " name="header-n35"></a>Configure pseudo-distributed.ini with Kylin connection
 
-{{< highlight bash >}}vim pseudo-distributed.ini{{< /highlight >}}
+<pre><code class="bash">vim pseudo-distributed.ini</code></pre>
 
Copy the Kylin section below into the file
 
-{{< highlight bash >}}dbproxy_extra_classpath=/hue/kylin-jdbc-2.2.0.jar
+<pre><code class="bash">dbproxy_extra_classpath=/hue/kylin-jdbc-2.2.0.jar
 
 [[[kylin]]]
 
@@ -109,11 +109,11 @@ interface=jdbc
 
 options='{"url": "jdbc:kylin://<your_host>:<port>/<project_name>","driver": "org.apache.kylin.jdbc.Driver", "user": "<username>", "password": "<password>"}'
 
-{{< /highlight >}}
+</code></pre>
 
For example, add the configuration section below to the file
 
-{{< highlight bash >}}dbproxy_extra_classpath=/hue/kylin-jdbc-2.2.0.jar
+<pre><code class="bash">dbproxy_extra_classpath=/hue/kylin-jdbc-2.2.0.jar
 
# One entry for each type of snippet.
 
@@ -139,19 +139,19 @@ name=Hive
 
 interface=hiveserver2
 
-{{< /highlight >}}
+</code></pre>
 
 #### <a class="md-header-anchor " name="header-n43"></a>Edit Dockerfile
 
-{{< highlight bash >}}touch Dockerfile
+<pre><code class="bash">touch Dockerfile
 
 vim Dockerfile
 
-{{< /highlight >}}
+</code></pre>
 
Paste the script below into the Dockerfile
 
-{{< highlight bash >}}FROM gethue/hue:latest
+<pre><code class="bash">FROM gethue/hue:latest
 
 COPY ./kylin-jdbc-2.2.0.jar /hue/kylin-jdbc-2.2.0.jar
 
@@ -159,17 +159,17 @@ COPY ./pseudo-distributed.ini /hue/desktop/conf/pseudo-distributed.ini
 
 EXPOSE 8888
 
-{{< /highlight >}}
+</code></pre>
 
This configuration copies the Kylin JDBC jar and pseudo-distributed.ini into the Hue image and exposes port 8888 from the container.
 
 #### <a class="md-header-anchor " name="header-n50"></a>Build and start docker container
 
-{{< highlight bash >}}docker build -t hue-demo -f Dockerfile .
+<pre><code class="bash">docker build -t hue-demo -f Dockerfile .
 
docker run -itd -p 8888:8888 --name hue hue-demo
 
-{{< /highlight >}}
+</code></pre>
 
Hue is now up and running at localhost:8888.
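
As an optional, hedged sanity check (assuming the container was started with the name "hue" as in the run command above):

<pre><code class="bash"># Confirm the container is up and the web server answers on port 8888.
docker ps --filter "name=hue"
curl -I http://localhost:8888/
</code></pre>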
 
@@ -211,7 +211,7 @@ After you installed Apache Kylin on AWS EMR, you can now deploy Hue on AWS EMR w
   </li>
 </ol>
 
-{{< highlight bash >}}[
+<pre><code class="bash">[
 
 {
 
@@ -273,9 +273,9 @@ After you installed Apache Kylin on AWS EMR, you can now deploy Hue on AWS EMR w
 
 ]
 
-{{< /highlight >}}
+</code></pre>
 
-{{< highlight bash >}}aws emr create-cluster -name "HUE Cluster" -release-label emr-5.10.0 \
+<pre><code class="bash">aws emr create-cluster -name "HUE Cluster" -release-label emr-5.10.0 \
 
--ec2-attributes KeyName=<keypair_name>,InstanceProfile=EMR_EC2_DefaultRole,SubnetId=<subnet_id> \
 
@@ -291,7 +291,7 @@ After you installed Apache Kylin on AWS EMR, you can now deploy Hue on AWS EMR w
 
--bootstrap-action Path="s3://<your_bucket>/download.sh"
 
-{{< /highlight >}}
+</code></pre>
 
 <ol start="3">
   <li>

+ 2 - 2
docs/gethue/content/posts/2018-01-16-intuitively-discovering-and-exploring-a-wine-dataset-with-the-dynamic-dashboards.md

@@ -69,13 +69,13 @@ On top of this, the Solr 7 Analytic Facets are close to be fully supported in th
 
If you are not getting any suggestions, and opening the field information popup in the right assist shows the error below, it means the collection needs to have the [Solr Term Handler][7] configured (a quick way to check is shown after the error).
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 <h1>HTTP Status 404 - /solr/jira_search/terms</h1>
 
 There are no terms to be shown
 
-{{< /highlight >}}
+</code></pre>
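
A quick, hedged way to check whether the /terms handler answers for a collection before editing solrconfig.xml (the host, collection and field names below are illustrative):

<pre><code class="bash"># Should return terms as JSON when the handler is enabled, and a 404 otherwise.
curl "http://localhost:8983/solr/jira_search/terms?terms.fl=<field_name>&wt=json"
</code></pre>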
 
 ## Coming up Next!
 

+ 8 - 8
docs/gethue/content/posts/2018-04-05-sql-editor-variables.md

@@ -77,11 +77,11 @@ In Hue 4.1, we added the ability to share any query you've saved with other Hue
 
 [<img src="https://cdn.gethue.com/uploads/2018/04/variables_basic.png"class="alignnone size-medium wp-image-5319" />][3]
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 select * from web_logs where country_code = "${country_code}"
 
-{{< /highlight >}}
+</code></pre>
 
 <p class="p1">
   In Hue 4.1, we've added the ability to add default values to your variables. Default values can be of two types:
@@ -89,29 +89,29 @@ select * from web_logs where country_code = "${country_code}"
 
 **Single Valued**
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 select * from web_logs where country_code = "${country_code=US}"
 
-{{< /highlight >}}
+</code></pre>
 
 **Multi Valued**
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 select * from web_logs where country_code = "${country_code=CA, FR, US}"
 
-{{< /highlight >}}
+</code></pre>
 
 <p class="p1">
   In addition, the displayed text for multi valued variables can be changed.
 </p>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 select * from web_logs where country_code = "${country_code=CA(Canada), FR(France), US(United States)}"
 
-{{< /highlight >}}
+</code></pre>
 
 [<img src="https://cdn.gethue.com/uploads/2018/04/variables_multi.png"class="alignnone size-full wp-image-5321" />][4]
 

+ 10 - 10
docs/gethue/content/posts/2018-08-16-live-analytics-of-live-apache-log-files.md

@@ -61,7 +61,7 @@ Here we are leveraging Apache Flume and installed one agent on the Apache Server
 
 Then in Cloudera Manager, in the Flume service we enter this Flume configuration:
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 tier1.sources = source1
 
@@ -89,7 +89,7 @@ tier1.sinks.sink1.morphlineFile = /tmp/morphline.conf
 
 tier1.sinks.sink1.channel = channel1
 
-{{< /highlight >}}
+</code></pre>
 
Note: for more robust sourcing, use [TaildirSource][6] instead of the 'tail -F /var/log/hue/access.log' exec source. Additionally, a [KafkaChannel][7] would make sure that we don't drop events if the command crashes.
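
As a hedged sketch of what swapping in TaildirSource could look like (the property names follow the Flume TAILDIR source; the position-file path is illustrative):

<pre><code class="bash"># Replace the exec source with a TAILDIR source that survives agent restarts.
tier1.sources.source1.type = TAILDIR
tier1.sources.source1.positionFile = /var/lib/flume-ng/taildir_position.json
tier1.sources.source1.filegroups = f1
tier1.sources.source1.filegroups.f1 = /var/log/hue/access.log
tier1.sources.source1.channels = channel1
</code></pre>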
 
@@ -97,25 +97,25 @@ Note: when doing this, we need to make sure that the Flume Agent user runs as a
 
 Note: this is how to create the Kafka topic via the CLI (until the UI supports it):
 
-{{< highlight bash >}}kafka-topics -create -topic=hueAccessLogs -partitions=1 -replication-factor=1 -zookeeper=analytics-1.gce.cloudera.com:2181
+<pre><code class="bash">kafka-topics -create -topic=hueAccessLogs -partitions=1 -replication-factor=1 -zookeeper=analytics-1.gce.cloudera.com:2181
 
-{{< /highlight >}}
+</code></pre>
 
Note: as explained in a previous Cloudera blog post, the '/tmp/morphline.conf' file will grok and parse the logs and convert them into a table. Depending on your Apache web server, you might or might not have the first hostname field.
 
-{{< highlight bash >}}demo.gethue.com:80 92.58.20.110 - - [12/May/2018:14:07:39 +0000] "POST /jobbrowser/jobs/ HTTP/1.1" 200 392 "http://demo.gethue.com/hue/dashboard/new_search?engine=solr" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"
+<pre><code class="bash">demo.gethue.com:80 92.58.20.110 - - [12/May/2018:14:07:39 +0000] "POST /jobbrowser/jobs/ HTTP/1.1" 200 392 "http://demo.gethue.com/hue/dashboard/new_search?engine=solr" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0"
 
-{{< /highlight >}}
+</code></pre>
 
-{{< highlight bash >}}
+<pre><code class="bash">
 
 columns: [C0,client_ip,C1,C2,time,dummy1,request,code,bytes,referer,user_agent]
 
-{{< /highlight >}}
+</code></pre>
 
 We also used the UTC timezone conversion as Solr expects dates in UTC.
 
-{{< highlight bash >}}[/code]
+<pre><code class="bash">[/code]
 
 inputTimezone : UTC
 
@@ -125,7 +125,7 @@ After the refresh of the Flume configuration, the Metrics tab will show the busi
 
Note: if you want to delete all the documents in the log_analytics_demo collection to start fresh, you could delete and recreate it via the Hue UI or issue this command:
 
-{{< highlight bash >}}curl "http://demo.gethue.com:8983/solr/log_analytics_demo/update?commit=true" -H "Content-Type: text/xml" -data-binary '<delete><query>\*:\*</query></delete>'{{< /highlight >}}
+<pre><code class="bash">curl "http://demo.gethue.com:8983/solr/log_analytics_demo/update?commit=true" -H "Content-Type: text/xml" -data-binary '<delete><query>\*:\*</query></delete>'</code></pre>
 
 ## Live Querying
 

Some files were not shown because too many files changed in this diff