
[doc] Update README

* Tidy up configuration documentation for HDFS
bc Wong, 13 years ago
parent commit 6fd3e7cab4
3 changed files with 20 additions and 43 deletions
  1. README.rst (+7 -34)
  2. desktop/conf.dist/hue.ini (+12 -8)
  3. desktop/libs/hadoop/src/hadoop/conf.py (+1 -1)

README.rst (+7 -34)

@@ -66,7 +66,8 @@ Development Prerequisites
       * libsqlite3-dev
       * libxml2-dev
       * libxslt-dev
-      * maven2
+      * mvn (from ``maven2`` package or tarball)
+      * openldap-dev
       * python-dev
       * python-simplejson
 
@@ -78,9 +79,10 @@ Development Prerequisites
       * gcc-c++
       * libxml2-devel
       * libxslt-devel
-      * maven2
+      * mvn (from ``maven2`` package or tarball)
       * mysql
       * mysql-devel
+      * openldap-devel
       * python-devel
       * python-simplejson (for the crepo tool)
       * sqlite-devel
@@ -117,18 +119,6 @@ To start the helper daemons::
 Now Hue should be running on http://localhost:8000.
 
 
-Setting up Hadoop
-=================
-In order to start up a pseudo-distributed cluster with the plugins enabled,
-run::
-
-    $ ./tools/scripts/configure-hadoop.sh all
-
-After doing so, running ``jps`` should show all the daemons running (NN, JT,
-TT, DN) and you should be able to see the web UI on http://localhost:50030/ and
-http://localhost:50070/.
-
-
 FAQ
 ===
 1: What does "Exception: no app!" mean?
@@ -149,7 +139,7 @@ because there's ambiguity in the view, be sure to prefix the name
 with the application name.  The url name namespace is global.  So
 ``jobsub.list`` is fine, but ``list`` is not.
 
-Hue is using Django 1.1, which supports the notion of URL namespaces:
+Hue is using Django 1.2, which supports the notion of URL namespaces:
 http://docs.djangoproject.com/en/dev/topics/http/urls/#url-namespaces.
 We have yet to move over our URLs to this construct. Brownie points for the
 developer who takes this on.
@@ -158,31 +148,14 @@ developer who takes this on.
 Using and Installing Thrift
 ===========================
 Right now, we check in the generated thrift code.
-To generate the code, you'll need the thrift binary.
-Compile it like so::
-
-    $ git clone http://github.com/dreiss/thrift.git
-    $ cd thrift
-    $ ./bootstrap.sh
-    $ ./configure --with-py=no --with-java=no --with-perl=no --prefix=$HOME/pub
-
-We exclude python, java, and perl because they don't like
-to install in prefix.  If you look around at configure's --help,
-there are environment variables that determine where those
-runtime bindings are installed.
-::
-
-    $ make && make install
+To generate the code, you'll need the thrift binary version 0.7.0.
+Please download from http://thrift.apache.org/.

 When preparing ``.thrift`` files, you can use she-bangs to generate
 the python bindings like so::

     #!/usr/bin/env thrift -r --gen py:new_style -o ../../../
 
-.. note::
-    This file is in reStructuredText. You may run
-    ``rst2html README.rst > README.html`` to produce a HTML.
-
 
 
 Profiling Hue Apps
 Profiling Hue Apps
 ==================
 ==================
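
As a footnote to the URL-naming FAQ retained above (``jobsub.list`` rather than
``list``), here is a minimal Django 1.2-style ``urls.py`` sketch of that
convention. The regex and the ``list_designs`` view name are invented for
illustration; this is not Hue's actual ``jobsub`` URL configuration::

    # Hypothetical example: the URL *name* carries the app prefix
    # ("jobsub.list") because URL names live in one global namespace.
    from django.conf.urls.defaults import patterns, url

    urlpatterns = patterns('jobsub.views',
        url(r'^list/$', 'list_designs', name='jobsub.list'),
    )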

desktop/conf.dist/hue.ini (+12 -8)

@@ -22,7 +22,7 @@

   # Webserver listens on this address and port
   http_host=0.0.0.0
-  http_port=8088
+  http_port=8888

   # Time zone name
   time_zone=America/Los_Angeles
@@ -165,9 +165,10 @@
   # If you installed Hadoop in a different location, you need to set
   # hadoop_home, in which bin/hadoop, the Hadoop wrapper script, is found.
   #
-  # NOTE: Hue depends on Cloudera's Distribution of Hadoop version 3 (CDH3)
+  # NOTE: Hue depends on Cloudera's Distribution of Hadoop version 4 (CDH4)
   # or later.
   hadoop_home=/usr/lib/hadoop
+  hadoop_bin=/usr/bin/hadoop
   # hadoop_conf_dir=/etc/hadoop/conf

   # The URL where the Oozie service runs on. This is required in order for
@@ -180,12 +181,10 @@
   [[hdfs_clusters]]

     [[[default]]]
-      # Enter the host and port on which you are running the Hadoop NameNode
-      namenode_host=localhost
-      hdfs_port=8020
-      http_port=50070
-      # Thrift plugin port for the name node
-      ## thrift_port=10090
+      # Enter the filesystem uri
+      fs_defaultfs=hdfs://localhost:8020
+      # The NameNode http port
+      ## http_port=50070

       # Use WebHdfs/HttpFs as the communication mechanism. To fallback to
       # using the Thrift plugin (used in Hue 1.x), this must be uncommented
@@ -217,3 +216,8 @@
       resourcemanager_port=8032
       # Whether to submit jobs to this cluster
       ## submit_to=False
+
+[jobsub]
+  # The URL where the Oozie service runs on. This is required in order for
+  # users to submit jobs.
+  oozie_url=http://localhost:11000/oozie
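
A note on the ``[[[default]]]`` HDFS block above: the single ``fs_defaultfs``
URI replaces the old ``namenode_host``/``hdfs_port`` pair. As a rough,
self-contained Python sketch (not the parsing code Hue itself uses), such a
URI can be split back into its parts like so::

    from urllib.parse import urlparse  # Python 3; the module is urlparse on Python 2

    def split_fs_defaultfs(uri):
        """Split e.g. hdfs://localhost:8020 into (scheme, host, port)."""
        parsed = urlparse(uri)
        return parsed.scheme, parsed.hostname, parsed.port

    print(split_fs_defaultfs("hdfs://localhost:8020"))
    # ('hdfs', 'localhost', 8020)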

desktop/libs/hadoop/src/hadoop/conf.py (+1 -1)

@@ -86,7 +86,7 @@ HADOOP_EXAMPLES_JAR = Config(

 HADOOP_STREAMING_JAR = Config(
   key="hadoop_streaming_jar",
-  dynamic_default=find_file_recursive("hadoop-*streaming*.jar"),
+  dynamic_default=find_file_recursive("hadoop-*streaming*.jar", lambda: HADOOP_MR1_HOME.get()),
   help="Path to the hadoop-streaming.jar (used by jobdesigner)",
   type=str,
   private=True)
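
The conf.py change above passes the search root as a callable
(``lambda: HADOOP_MR1_HOME.get()``) so that the MR1 home is resolved when the
default is actually computed, not at import time. Below is a minimal sketch of
what such a helper could look like, assuming ``find_file_recursive`` returns a
zero-argument function suitable for ``dynamic_default``; Hue's real
implementation may differ::

    import fnmatch
    import os

    def find_file_recursive(pattern, directory_fn):
        """Return a callable that walks directory_fn() and returns the first
        path whose basename matches `pattern`, or None if nothing matches."""
        def default():
            root = directory_fn()  # resolved lazily, at evaluation time
            if not root or not os.path.isdir(root):
                return None
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    if fnmatch.fnmatch(name, pattern):
                        return os.path.join(dirpath, name)
            return None
        return default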