
Merge remote-tracking branch 'cauldron/upstream-master' into cauldron-cdh6.x

Bump

Change-Id: Ie9beacffb530b057bca2f71160ed2bd9dc67944e
Romain Rigaux 7 years ago
Parent
Commit
e60270c25f
100 changed files with 3662 additions and 752 deletions
  1. VERSION (+1 -1)
  2. apps/beeswax/src/beeswax/api.py (+18 -9)
  3. apps/beeswax/src/beeswax/create_table.py (+7 -1)
  4. apps/beeswax/src/beeswax/server/dbms.py (+62 -26)
  5. apps/beeswax/src/beeswax/server/hive_server2_lib.py (+59 -36)
  6. apps/beeswax/src/beeswax/templates/execute.mako (+2 -2)
  7. apps/beeswax/src/beeswax/templates/watch_results.mako (+1 -1)
  8. apps/beeswax/src/beeswax/tests.py (+4 -2)
  9. apps/beeswax/src/beeswax/views.py (+3 -1)
  10. apps/filebrowser/src/filebrowser/lib/xxd_test.py (+10 -2)
  11. apps/filebrowser/src/filebrowser/templates/display.mako (+13 -10)
  12. apps/filebrowser/src/filebrowser/templates/listdir.mako (+1 -1)
  13. apps/filebrowser/src/filebrowser/templates/listdir_components.mako (+2 -2)
  14. apps/filebrowser/src/filebrowser/views.py (+12 -16)
  15. apps/hbase/src/hbase/api.py (+2 -2)
  16. apps/hbase/src/hbase/conf.py (+11 -4)
  17. apps/hbase/src/hbase/hbase_site.py (+19 -2)
  18. apps/impala/src/impala/api.py (+75 -6)
  19. apps/impala/src/impala/conf.py (+27 -7)
  20. apps/impala/src/impala/dbms.py (+21 -12)
  21. apps/impala/src/impala/server.py (+61 -6)
  22. apps/impala/src/impala/tests.py (+29 -25)
  23. apps/impala/src/impala/urls.py (+2 -0)
  24. apps/jobbrowser/src/jobbrowser/apis/base_api.py (+8 -0)
  25. apps/jobbrowser/src/jobbrowser/apis/bundle_api.py (+1 -2)
  26. apps/jobbrowser/src/jobbrowser/apis/clusters.py (+131 -0)
  27. apps/jobbrowser/src/jobbrowser/apis/data_eng_api.py (+9 -6)
  28. apps/jobbrowser/src/jobbrowser/apis/data_warehouse.py (+121 -0)
  29. apps/jobbrowser/src/jobbrowser/apis/job_api.py (+5 -4)
  30. apps/jobbrowser/src/jobbrowser/apis/query_api.py (+78 -29)
  31. apps/jobbrowser/src/jobbrowser/apis/schedule_api.py (+1 -1)
  32. apps/jobbrowser/src/jobbrowser/apis/workflow_api.py (+5 -3)
  33. apps/jobbrowser/src/jobbrowser/static/jobbrowser/css/jobbrowser-embeddable.css (+0 -1)
  34. apps/jobbrowser/src/jobbrowser/static/jobbrowser/js/impala_dagre.js (+132 -20)
  35. apps/jobbrowser/src/jobbrowser/static/jobbrowser/less/jobbrowser-embeddable.less (+102 -10)
  36. apps/jobbrowser/src/jobbrowser/templates/job_browser.mako (+565 -228)
  37. apps/jobbrowser/src/jobbrowser/tests.py (+8 -7)
  38. apps/jobbrowser/src/jobbrowser/yarn_models.py (+4 -1)
  39. apps/metastore/src/metastore/static/metastore/js/metastore.ko.js (+31 -25)
  40. apps/metastore/src/metastore/static/metastore/js/metastore.model.js (+15 -5)
  41. apps/metastore/src/metastore/templates/metastore.mako (+64 -125)
  42. apps/metastore/src/metastore/tests.py (+4 -4)
  43. apps/metastore/src/metastore/views.py (+75 -22)
  44. apps/oozie/src/oozie/management/commands/oozie_setup.py (+2 -1)
  45. apps/oozie/src/oozie/models2.py (+74 -10)
  46. apps/oozie/src/oozie/templates/editor2/common_workflow.mako (+5 -3)
  47. apps/oozie/src/oozie/templates/editor2/gen/workflow-start.xml.mako (+39 -0)
  48. apps/oozie/src/oozie/templates/editor2/workflow_editor.mako (+5 -1)
  49. apps/oozie/src/oozie/views/editor2.py (+56 -6)
  50. apps/pig/src/pig/management/commands/pig_setup.py (+3 -3)
  51. apps/search/src/search/management/commands/search_setup.py (+4 -2)
  52. apps/security/src/security/static/security/js/hive.ko.js (+1 -1)
  53. apps/security/src/security/static/security/js/sentry.ko.js (+1 -1)
  54. apps/security/src/security/templates/hdfs.mako (+3 -1)
  55. apps/security/src/security/templates/hive.mako (+4 -2)
  56. apps/useradmin/src/useradmin/forms.py (+1 -1)
  57. apps/useradmin/src/useradmin/middleware.py (+5 -1)
  58. apps/useradmin/src/useradmin/old_migrations/0001_permissions_and_profiles.py (+1 -1)
  59. apps/useradmin/src/useradmin/tests.py (+1 -1)
  60. desktop/Makefile (+1 -0)
  61. desktop/conf.dist/hue.ini (+24 -11)
  62. desktop/conf/pseudo-distributed.ini.tmpl (+24 -11)
  63. desktop/core/ext-py/djangosaml2-0.16.11/djangosaml2/acs_failures.py (+1 -1)
  64. desktop/core/ext-py/djangosaml2-0.16.11/djangosaml2/views.py (+6 -6)
  65. desktop/core/ext-py/dnspython-1.15.0/ChangeLog (+1487 -0)
  66. desktop/core/ext-py/dnspython-1.15.0/LICENSE (+16 -0)
  67. desktop/core/ext-py/dnspython-1.15.0/MANIFEST.in (+3 -0)
  68. desktop/core/ext-py/dnspython-1.15.0/PKG-INFO (+35 -0)
  69. desktop/core/ext-py/dnspython-1.15.0/dns/__init__.py (+0 -0)
  70. desktop/core/ext-py/dnspython-1.15.0/dns/_compat.py (+0 -0)
  71. desktop/core/ext-py/dnspython-1.15.0/dns/dnssec.py (+0 -0)
  72. desktop/core/ext-py/dnspython-1.15.0/dns/e164.py (+0 -0)
  73. desktop/core/ext-py/dnspython-1.15.0/dns/edns.py (+0 -0)
  74. desktop/core/ext-py/dnspython-1.15.0/dns/entropy.py (+0 -0)
  75. desktop/core/ext-py/dnspython-1.15.0/dns/exception.py (+0 -0)
  76. desktop/core/ext-py/dnspython-1.15.0/dns/flags.py (+0 -0)
  77. desktop/core/ext-py/dnspython-1.15.0/dns/grange.py (+0 -0)
  78. desktop/core/ext-py/dnspython-1.15.0/dns/hash.py (+0 -0)
  79. desktop/core/ext-py/dnspython-1.15.0/dns/inet.py (+0 -0)
  80. desktop/core/ext-py/dnspython-1.15.0/dns/ipv4.py (+0 -0)
  81. desktop/core/ext-py/dnspython-1.15.0/dns/ipv6.py (+0 -0)
  82. desktop/core/ext-py/dnspython-1.15.0/dns/message.py (+0 -0)
  83. desktop/core/ext-py/dnspython-1.15.0/dns/name.py (+59 -21)
  84. desktop/core/ext-py/dnspython-1.15.0/dns/namedict.py (+0 -0)
  85. desktop/core/ext-py/dnspython-1.15.0/dns/node.py (+0 -0)
  86. desktop/core/ext-py/dnspython-1.15.0/dns/opcode.py (+0 -0)
  87. desktop/core/ext-py/dnspython-1.15.0/dns/query.py (+0 -0)
  88. desktop/core/ext-py/dnspython-1.15.0/dns/rcode.py (+0 -0)
  89. desktop/core/ext-py/dnspython-1.15.0/dns/rdata.py (+0 -0)
  90. desktop/core/ext-py/dnspython-1.15.0/dns/rdataclass.py (+0 -0)
  91. desktop/core/ext-py/dnspython-1.15.0/dns/rdataset.py (+0 -0)
  92. desktop/core/ext-py/dnspython-1.15.0/dns/rdatatype.py (+0 -0)
  93. desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/AFSDB.py (+0 -0)
  94. desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/AVC.py (+0 -0)
  95. desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CAA.py (+0 -0)
  96. desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CDNSKEY.py (+0 -0)
  97. desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CDS.py (+0 -0)
  98. desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CERT.py (+0 -0)
  99. desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CNAME.py (+0 -0)
  100. desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CSYNC.py (+0 -0)

+ 1 - 1
VERSION

@@ -17,4 +17,4 @@
 # This file should be the one source of truth for for versions within HUE.
 # It is at least included by each of the default hue app's setup.py.
 
-VERSION="4.2.0"
+VERSION="4.3.0"

+ 18 - 9
apps/beeswax/src/beeswax/api.py

@@ -148,7 +148,7 @@ def _autocomplete(db, database=None, table=None, column=None, nested=None, query
         response = parse_tree
         # If column or nested type is scalar/primitive, add sample of values
         if parser.is_scalar_type(parse_tree['type']):
-          sample = _get_sample_data(db, database, table, column)
+          sample = _get_sample_data(db, database, table, column, cluster=cluster)
           if 'rows' in sample:
             response['sample'] = sample['rows']
       else:
@@ -649,20 +649,22 @@ def clear_history(request):
 @error_handler
 def get_sample_data(request, database, table, column=None):
   app_name = get_app_name(request)
-  query_server = get_query_server_config(app_name)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  query_server = get_query_server_config(app_name, cluster=cluster)
   db = dbms.get(request.user, query_server)
 
-  response = _get_sample_data(db, database, table, column)
+  response = _get_sample_data(db, database, table, column, cluster=cluster)
   return JsonResponse(response)
 
 
-def _get_sample_data(db, database, table, column, async=False, cluster=None):
+def _get_sample_data(db, database, table, column, async=False, cluster=None, operation=None):
   table_obj = db.get_table(database, table)
   if table_obj.is_impala_only and db.client.query_server['server_name'] != 'impala':
     query_server = get_query_server_config('impala', cluster=cluster)
     db = dbms.get(db.client.user, query_server, cluster=cluster)
 
-  sample_data = db.get_sample(database, table_obj, column, generate_sql_only=async)
+  sample_data = db.get_sample(database, table_obj, column, generate_sql_only=async, operation=operation)
   response = {'status': -1}
 
   if sample_data:
@@ -674,7 +676,8 @@ def _get_sample_data(db, database, table, column, async=False, cluster=None):
           statement=sample_data,
           status='ready-execute',
           skip_historify=True,
-          is_task=False
+          is_task=False,
+          compute=cluster if cluster else None
       )
       response['result'] = notebook.execute(request=MockedDjangoRequest(user=db.client.user), batch=False)
       if table_obj.is_impala_only:
@@ -748,7 +751,9 @@ def get_functions(request):
 @error_handler
 def analyze_table(request, database, table, columns=None):
   app_name = get_app_name(request)
-  query_server = get_query_server_config(app_name)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  query_server = get_query_server_config(app_name, cluster=cluster)
   db = dbms.get(request.user, query_server)
 
   table_obj = db.get_table(database, table)
@@ -775,7 +780,9 @@ def analyze_table(request, database, table, columns=None):
 @error_handler
 def get_table_stats(request, database, table, column=None):
   app_name = get_app_name(request)
-  query_server = get_query_server_config(app_name)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  query_server = get_query_server_config(app_name, cluster=cluster)
   db = dbms.get(request.user, query_server)
 
   response = {'status': -1, 'message': '', 'redirect': ''}
@@ -796,7 +803,9 @@ def get_table_stats(request, database, table, column=None):
 @error_handler
 def get_top_terms(request, database, table, column, prefix=None):
   app_name = get_app_name(request)
-  query_server = get_query_server_config(app_name)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  query_server = get_query_server_config(app_name, cluster=cluster)
   db = dbms.get(request.user, query_server)
 
   response = {'status': -1, 'message': '', 'redirect': ''}
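
All of these endpoints share one pattern: an optional target cluster arrives as JSON in the POST body, is parsed with json.loads, and is handed to get_query_server_config(). A minimal sketch of a caller, assuming a logged-in Django test client; the URL, table and credentials are made up for illustration:

import json
from django.test import Client

client = Client()
client.login(username='test', password='test')  # hypothetical credentials

# An empty cluster payload ({}) falls back to the configured default cluster.
response = client.post('/beeswax/api/sample/default/web_logs', {  # URL assumed for illustration
  'cluster': json.dumps({'id': 'default', 'name': 'default'})
})
print(json.loads(response.content))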

+ 7 - 1
apps/beeswax/src/beeswax/create_table.py

@@ -32,6 +32,7 @@ from desktop.lib import django_mako, i18n
 from desktop.lib.django_util import render
 from desktop.lib.exceptions_renderable import PopupException
 from desktop.lib.django_forms import MultiForm
+from desktop.models import _get_apps
 from hadoop.fs import hadoopfs
 
 from beeswax.common import TERMINATORS
@@ -83,7 +84,9 @@ def create_table(request, database='default'):
   else:
     form.bind()
 
+  apps_list = _get_apps(request.user, '')
   return render("create_table_manually.mako", request, {
+    'apps': apps_list,
     'action': "#",
     'databases': databases,
     'table_form': form.table,
@@ -193,7 +196,9 @@ def import_wizard(request, database='default'):
                                                             (s2_delim_form.cleaned_data['delimiter'],))
 
       if do_s2_auto_delim or do_s2_user_delim or cancel_s3_column_def:
+        apps_list = _get_apps(request.user, '')
         return render('import_wizard_choose_delimiter.mako', request, {
+          'apps': apps_list,
           'action': reverse(app_name + ':import_wizard', kwargs={'database': database}),
           'delim_readable': DELIMITER_READABLE.get(s2_delim_form['delimiter'].data[0], s2_delim_form['delimiter'].data[1]),
           'initial': delim_is_auto,
@@ -222,8 +227,9 @@ def import_wizard(request, database='default'):
           fields_list_for_json = list(fields_list)
           if fields_list_for_json:
             fields_list_for_json[0] = map(lambda a: re.sub('[^\w]', '', a), fields_list_for_json[0]) # Cleaning headers
-
+          apps_list = _get_apps(request.user, '')
           return render('import_wizard_define_columns.mako', request, {
+            'apps': apps_list,
             'action': reverse(app_name + ':import_wizard', kwargs={'database': database}),
             'file_form': s1_file_form,
             'delim_form': s2_delim_form,

+ 62 - 26
apps/beeswax/src/beeswax/server/dbms.py

@@ -24,6 +24,7 @@ from django.urls import reverse
 from django.utils.encoding import force_unicode
 from django.utils.translation import ugettext as _
 
+from desktop.conf import CLUSTER_ID
 from desktop.lib.django_util import format_preserving_redirect
 from desktop.lib.exceptions_renderable import PopupException
 from desktop.lib.parameterization import substitute_variables
@@ -57,30 +58,29 @@ def get(user, query_server=None, cluster=None):
 
   DBMS_CACHE_LOCK.acquire()
   try:
-    DBMS_CACHE.setdefault(user.username, {})
+    DBMS_CACHE.setdefault(user.id, {})
 
-    if query_server['server_name'] not in DBMS_CACHE[user.username]:
+    if query_server['server_name'] not in DBMS_CACHE[user.id]:
       # Avoid circular dependency
       from beeswax.server.hive_server2_lib import HiveServerClientCompatible
 
-      if query_server['server_name'] == 'impala':
+      if query_server['server_name'].startswith('impala'):
         from impala.dbms import ImpalaDbms
         from impala.server import ImpalaServerClient
-        DBMS_CACHE[user.username][query_server['server_name']] = ImpalaDbms(HiveServerClientCompatible(ImpalaServerClient(query_server, user)), QueryHistory.SERVER_TYPE[1][0])
+        DBMS_CACHE[user.id][query_server['server_name']] = ImpalaDbms(HiveServerClientCompatible(ImpalaServerClient(query_server, user)), QueryHistory.SERVER_TYPE[1][0])
       else:
         from beeswax.server.hive_server2_lib import HiveServerClient
-        DBMS_CACHE[user.username][query_server['server_name']] = HiveServer2Dbms(HiveServerClientCompatible(HiveServerClient(query_server, user)), QueryHistory.SERVER_TYPE[1][0])
+        DBMS_CACHE[user.id][query_server['server_name']] = HiveServer2Dbms(HiveServerClientCompatible(HiveServerClient(query_server, user)), QueryHistory.SERVER_TYPE[1][0])
 
-    return DBMS_CACHE[user.username][query_server['server_name']]
+    return DBMS_CACHE[user.id][query_server['server_name']]
   finally:
     DBMS_CACHE_LOCK.release()
 
 
 def get_query_server_config(name='beeswax', server=None, cluster=None):
-  if cluster and cluster != CLUSTER_ID.get():
-    cluster_config = Cluster(user=None).get_config(cluster)
-  else:
-    cluster_config = None
+  LOG.debug("Query cluster %s: %s" % (name, cluster))
+
+  cluster_config = get_cluster_config(cluster)
 
   if name == 'impala':
     from impala.dbms import get_query_server_config as impala_query_server_config
@@ -120,6 +120,19 @@ def get_query_server_config(name='beeswax', server=None, cluster=None):
   return query_server
 
 
+def get_cluster_config(cluster=None):
+  if cluster and cluster.get('id') != CLUSTER_ID.get():
+    if 'altus:dataware:k8s' in cluster['id']:
+      compute_end_point = cluster['compute_end_point'][0] if type(cluster['compute_end_point']) == list else cluster['compute_end_point'] # TODO getting list from left assist
+      cluster_config = {'server_host': compute_end_point, 'name': cluster['name']} # TODO get port too
+    else:
+      cluster_config = Cluster(user=None).get_config(cluster['id']) # Direct cluster
+  else:
+    cluster_config = None
+
+  return cluster_config
+
+
 class QueryServerException(Exception):
   # Ideally the query handle will be stored here too.
 
@@ -313,7 +326,7 @@ class HiveServer2Dbms(object):
 
 
   def execute_statement(self, hql):
-    if self.server_name == 'impala':
+    if self.server_name.startswith('impala'):
       query = hql_query(hql, QUERY_TYPES[1])
     else:
       query = hql_query(hql, QUERY_TYPES[0])
@@ -354,28 +367,42 @@ class HiveServer2Dbms(object):
 
   def cancel_operation(self, query_handle):
     resp = self.client.cancel_operation(query_handle)
-    if self.client.query_server['server_name'] == 'impala':
+    if self.client.query_server['server_name'].startswith('impala'):
       resp = self.client.close_operation(query_handle)
     return resp
 
 
-  def get_sample(self, database, table, column=None, nested=None, limit=100, generate_sql_only=False):
+  def get_sample(self, database, table, column=None, nested=None, limit=100, generate_sql_only=False, operation=None):
     result = None
     hql = None
 
     # Filter on max # of partitions for partitioned tables
     column = '`%s`' % column if column else '*'
     if table.partition_keys:
-      hql = self._get_sample_partition_query(database, table, column, limit)
-    elif self.server_name == 'impala':
+      hql = self._get_sample_partition_query(database, table, column, limit, operation)
+    elif self.server_name.startswith('impala'):
       if column or nested:
         from impala.dbms import ImpalaDbms
         select_clause, from_clause = ImpalaDbms.get_nested_select(database, table.name, column, nested)
-        hql = 'SELECT %s FROM %s LIMIT %s;' % (select_clause, from_clause, limit)
+        if operation == 'distinct':
+          hql = 'SELECT DISTINCT %s FROM %s LIMIT %s;' % (select_clause, from_clause, limit)
+        elif operation == 'max':
+          hql = 'SELECT max(%s) FROM %s;' % (select_clause, from_clause)
+        elif operation == 'min':
+          hql = 'SELECT min(%s) FROM %s;' % (select_clause, from_clause)
+        else:
+          hql = 'SELECT %s FROM %s LIMIT %s;' % (select_clause, from_clause, limit)
       else:
         hql = "SELECT * FROM `%s`.`%s` LIMIT %s;" % (database, table.name, limit)
     else:
-      hql = "SELECT %s FROM `%s`.`%s` LIMIT %s;" % (column, database, table.name, limit)
+      if operation == 'distinct':
+        hql = "SELECT DISTINCT %s FROM `%s`.`%s` LIMIT %s;" % (column, database, table.name, limit)
+      elif operation == 'max':
+        hql = "SELECT max(%s) FROM `%s`.`%s`;" % (column, database, table.name)
+      elif operation == 'min':
+        hql = "SELECT min(%s) FROM `%s`.`%s`;" % (column, database, table.name)
+      else:
+        hql = "SELECT %s FROM `%s`.`%s` LIMIT %s;" % (column, database, table.name, limit)
       # TODO: Add nested select support for HS2
 
     if hql:
@@ -392,7 +419,7 @@ class HiveServer2Dbms(object):
     return result
 
 
-  def _get_sample_partition_query(self, database, table, column='*', limit=100):
+  def _get_sample_partition_query(self, database, table, column='*', limit=100, operation=None):
     max_parts = QUERY_PARTITIONS_LIMIT.get()
     partitions = self.get_partitions(database, table, partition_spec=None, max_parts=max_parts)
 
@@ -404,12 +431,21 @@ class HiveServer2Dbms(object):
     else:
       partition_clause = ''
 
-    return "SELECT %(column)s FROM `%(database)s`.`%(table)s` %(partition_clause)s LIMIT %(limit)s" % \
-      {'column': column, 'database': database, 'table': table.name, 'partition_clause': partition_clause, 'limit': limit}
+    if operation == 'distinct':
+      prefix = 'SELECT DISTINCT %s' % column
+    elif operation == 'max':
+      prefix = 'SELECT max(%s)' % column
+    elif operation == 'min':
+      prefix = 'SELECT min(%s)' % column
+    else:
+      prefix = 'SELECT %s' % column
+
+    return prefix + " FROM `%(database)s`.`%(table)s` %(partition_clause)s LIMIT %(limit)s" % \
+      {'database': database, 'table': table.name, 'partition_clause': partition_clause, 'limit': limit}
 
 
   def analyze_table(self, database, table):
-    if self.server_name == 'impala':
+    if self.server_name.startswith('impala'):
       hql = 'COMPUTE STATS `%(database)s`.`%(table)s`' % {'database': database, 'table': table}
     else:
       table_obj = self.get_table(database, table)
@@ -425,7 +461,7 @@ class HiveServer2Dbms(object):
 
 
   def analyze_table_columns(self, database, table):
-    if self.server_name == 'impala':
+    if self.server_name.startswith('impala'):
       hql = 'COMPUTE STATS `%(database)s`.`%(table)s`' % {'database': database, 'table': table}
     else:
       table_obj = self.get_table(database, table)
@@ -440,7 +476,7 @@ class HiveServer2Dbms(object):
   def get_table_stats(self, database, table):
     stats = []
 
-    if self.server_name == 'impala':
+    if self.server_name.startswith('impala'):
       hql = 'SHOW TABLE STATS `%(database)s`.`%(table)s`' % {'database': database, 'table': table}
 
       query = hql_query(hql)
@@ -458,7 +494,7 @@ class HiveServer2Dbms(object):
 
 
   def get_table_columns_stats(self, database, table, column):
-    if self.server_name == 'impala':
+    if self.server_name.startswith('impala'):
       hql = 'SHOW COLUMN STATS `%(database)s`.`%(table)s`' % {'database': database, 'table': table}
     else:
       hql = 'DESCRIBE FORMATTED `%(database)s`.`%(table)s` `%(column)s`' % {'database': database, 'table': table, 'column': column}
@@ -471,7 +507,7 @@ class HiveServer2Dbms(object):
       self.close(handle)
       data = list(result.rows())
 
-      if self.server_name == 'impala':
+      if self.server_name.startswith('impala'):
         if column == -1: # All the columns
           return [self._extract_impala_column(col) for col in data]
         else:
@@ -838,7 +874,7 @@ class HiveServer2Dbms(object):
 
 
   def get_partition(self, db_name, table_name, partition_spec, generate_ddl_only=False):
-    if partition_spec and self.server_name == 'impala': # partition_spec not supported
+    if partition_spec and self.server_name.startswith('impala'): # partition_spec not supported
       partition_query = " AND ".join(partition_spec.split(','))
     else:
       table = self.get_table(db_name, table_name)
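
A note on get_sample(): the new operation parameter selects one of four query shapes. A sketch of the SQL it now emits for a non-partitioned table, using illustrative names and mirroring the branches added above:

# Illustrative only: these strings mirror the branches added to get_sample().
database, table, column, limit = 'default', 'web_logs', '`code`', 100

samples = {
  None:       "SELECT %s FROM `%s`.`%s` LIMIT %s;" % (column, database, table, limit),
  'distinct': "SELECT DISTINCT %s FROM `%s`.`%s` LIMIT %s;" % (column, database, table, limit),
  'max':      "SELECT max(%s) FROM `%s`.`%s`;" % (column, database, table),
  'min':      "SELECT min(%s) FROM `%s`.`%s`;" % (column, database, table),
}
for op, hql in samples.items():
  print('%s -> %s' % (op, hql))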

+ 59 - 36
apps/beeswax/src/beeswax/server/hive_server2_lib.py

@@ -312,7 +312,7 @@ class HiveServerDataTable(DataTable):
     self.schema = schema and schema.schema
     self.row_set = HiveServerTRowSet(results.results, schema)
     self.operation_handle = operation_handle
-    if query_server['server_name'] == 'impala':
+    if query_server['server_name'].startswith('impala'):
       self.has_more = results.hasMoreRows
     else:
       self.has_more = not self.row_set.is_empty()    # Should be results.hasMoreRows but always True in HS2
@@ -486,15 +486,15 @@ class HiveServerClient:
 
     use_sasl, mechanism, kerberos_principal_short_name, impersonation_enabled, auth_username, auth_password = self.get_security()
     LOG.info(
-        '%s: use_sasl=%s, mechanism=%s, kerberos_principal_short_name=%s, impersonation_enabled=%s, auth_username=%s' % (
-        self.query_server['server_name'], use_sasl, mechanism, kerberos_principal_short_name, impersonation_enabled, auth_username)
+        '%s: server_host=%s, use_sasl=%s, mechanism=%s, kerberos_principal_short_name=%s, impersonation_enabled=%s, auth_username=%s' % (
+        self.query_server['server_name'], self.query_server['server_host'], use_sasl, mechanism, kerberos_principal_short_name, impersonation_enabled, auth_username)
     )
 
     self.use_sasl = use_sasl
     self.kerberos_principal_short_name = kerberos_principal_short_name
     self.impersonation_enabled = impersonation_enabled
 
-    if self.query_server['server_name'] == 'impala':
+    if self.query_server['server_name'].startswith('impala'):
       from impala import conf as impala_conf
 
       ssl_enabled = impala_conf.SSL.ENABLED.get()
@@ -519,10 +519,12 @@ class HiveServerClient:
       password = None
 
     thrift_class = TCLIService
-    if self.query_server['server_name'] == 'impala':
+    if self.query_server['server_name'].startswith('impala'):
       from ImpalaService import ImpalaHiveServer2Service
       thrift_class = ImpalaHiveServer2Service
 
+    LOG.debug('Using %s for host_name %s' % (thrift_class, query_server['server_host']))
+
     self._client = thrift_util.get_client(
         thrift_class.Client,
         query_server['server_host'],
@@ -556,7 +558,7 @@ class HiveServerClient:
     else:
       kerberos_principal_short_name = None
 
-    if self.query_server['server_name'] == 'impala':
+    if self.query_server['server_name'].startswith('impala'):
       if auth_password: # Force LDAP/PAM.. auth if auth_password is provided
         use_sasl = True
         mechanism = HiveServerClient.HS2_MECHANISMS['NONE']
@@ -588,7 +590,7 @@ class HiveServerClient:
     if self.impersonation_enabled:
       kwargs.update({'username': DEFAULT_USER})
 
-      if self.query_server['server_name'] == 'impala': # Only when Impala accepts it
+      if self.query_server['server_name'].startswith('impala'): # Only when Impala accepts it
         kwargs['configuration'].update({'impala.doas.user': user.username})
 
     if self.query_server['server_name'] == 'beeswax': # All the time
@@ -597,7 +599,7 @@ class HiveServerClient:
     if self.query_server['server_name'] == 'sparksql': # All the time
       kwargs['configuration'].update({'hive.server2.proxy.user': user.username})
 
-    if self.query_server['server_name'] == 'impala' and self.query_server['SESSION_TIMEOUT_S'] > 0:
+    if self.query_server['server_name'].startswith('impala') and self.query_server['SESSION_TIMEOUT_S'] > 0:
       kwargs['configuration'].update({'idle_session_timeout': str(self.query_server['SESSION_TIMEOUT_S'])})
 
     LOG.info('Opening %s thrift session for user %s' % (self.query_server['server_name'], user.username))
@@ -605,6 +607,8 @@ class HiveServerClient:
     req = TOpenSessionReq(**kwargs)
     res = self._client.OpenSession(req)
     self.coordinator_host = self._client.get_coordinator_host()
+    if self.coordinator_host:
+      res.configuration['coordinator_host'] = self.coordinator_host
 
     if res.status is not None and res.status.statusCode not in (TStatusCode.SUCCESS_STATUS,):
       if hasattr(res.status, 'errorMessage') and res.status.errorMessage:
@@ -619,13 +623,15 @@ class HiveServerClient:
     encoded_status, encoded_guid = HiveServerQueryHandle(secret=sessionId.secret, guid=sessionId.guid).get()
     properties = json.dumps(res.configuration)
 
-    session = Session.objects.create(owner=user,
-                                     application=self.query_server['server_name'],
-                                     status_code=res.status.statusCode,
-                                     secret=encoded_status,
-                                     guid=encoded_guid,
-                                     server_protocol_version=res.serverProtocolVersion,
-                                     properties=properties)
+    session = Session.objects.create(
+        owner=user,
+        application=self.query_server['server_name'],
+        status_code=res.status.statusCode,
+        secret=encoded_status,
+        guid=encoded_guid,
+        server_protocol_version=res.serverProtocolVersion,
+        properties=properties
+    )
 
     # HS2 does not return properties in TOpenSessionResp
     if not session.get_properties():
@@ -741,7 +747,7 @@ class HiveServerClient:
     req = TGetSchemasReq()
     if schemaName is not None:
       req.schemaName = schemaName
-    if self.query_server['server_name'] == 'impala':
+    if self.query_server['server_name'].startswith('impala'):
       req.schemaName = None
 
     res = self.call(self._client.GetSchemas, req)
@@ -759,13 +765,12 @@
     (desc_results, desc_schema), operation_handle = self.execute_statement(query, max_rows=5000, orientation=TFetchOrientation.FETCH_NEXT)
     self.close_operation(operation_handle)
 
-    cols = ('db_name', 'comment', 'location','owner_name', 'owner_type', 'parameters')
-
-    if len(HiveServerTRowSet(desc_results.results, desc_schema.schema).cols(cols)) != 1:
-      raise ValueError(_("%(query)s returned more than 1 row") % {'query': query})
-
-    return HiveServerTRowSet(desc_results.results, desc_schema.schema).cols(cols)[0]  # Should only contain one row
-
+    if self.query_server['server_name'].startswith('impala'):
+      cols = ('name', 'location', 'comment') # Skip owner as it is on a new line
+    else:
+      cols = ('db_name', 'comment', 'location', 'owner_name', 'owner_type', 'parameters')
+
+    return HiveServerTRowSet(desc_results.results, desc_schema.schema).cols(cols)[0]  # Should only contain one row
 
   def get_tables_meta(self, database, table_names, table_types=None):
     if not table_types:
@@ -873,7 +894,7 @@ class HiveServerClient:
 
     configuration = {}
 
-    if self.query_server['server_name'] == 'impala' and self.query_server['querycache_rows'] > 0:
+    if self.query_server['server_name'].startswith('impala') and self.query_server['querycache_rows'] > 0:
       configuration[IMPALA_RESULTSET_CACHE_SIZE] = str(self.query_server['querycache_rows'])
 
     # The query can override the default configuration
@@ -884,7 +905,7 @@ class HiveServerClient:
 
 
   def execute_statement(self, statement, max_rows=1000, configuration={}, orientation=TFetchOrientation.FETCH_NEXT):
-    if self.query_server['server_name'] == 'impala' and self.query_server['QUERY_TIMEOUT_S'] > 0:
+    if self.query_server['server_name'].startswith('impala') and self.query_server['QUERY_TIMEOUT_S'] > 0:
       configuration['QUERY_TIMEOUT_S'] = str(self.query_server['QUERY_TIMEOUT_S'])
 
     req = TExecuteStatementReq(statement=statement.encode('utf-8'), confOverlay=configuration)
@@ -894,18 +915,20 @@ class HiveServerClient:
 
 
   def execute_async_statement(self, statement, confOverlay, with_multiple_session=False):
-    if self.query_server['server_name'] == 'impala' and self.query_server['QUERY_TIMEOUT_S'] > 0:
+    if self.query_server['server_name'].startswith('impala') and self.query_server['QUERY_TIMEOUT_S'] > 0:
       confOverlay['QUERY_TIMEOUT_S'] = str(self.query_server['QUERY_TIMEOUT_S'])
 
     req = TExecuteStatementReq(statement=statement.encode('utf-8'), confOverlay=confOverlay, runAsync=True)
     (res, session) = self.call_return_result_and_session(self._client.ExecuteStatement, req, with_multiple_session=with_multiple_session)
 
-    return HiveServerQueryHandle(secret=res.operationHandle.operationId.secret,
-                                 guid=res.operationHandle.operationId.guid,
-                                 operation_type=res.operationHandle.operationType,
-                                 has_result_set=res.operationHandle.hasResultSet,
-                                 modified_row_count=res.operationHandle.modifiedRowCount,
-                                 session_guid=session.guid)
+    return HiveServerQueryHandle(
+        secret=res.operationHandle.operationId.secret,
+        guid=res.operationHandle.operationId.guid,
+        operation_type=res.operationHandle.operationType,
+        has_result_set=res.operationHandle.hasResultSet,
+        modified_row_count=res.operationHandle.modifiedRowCount,
+        session_guid=session.guid
+    )
 
 
   def fetch_data(self, operation_handle, orientation=TFetchOrientation.FETCH_NEXT, max_rows=1000):
@@ -999,7 +1022,7 @@ class HiveServerClient:
     # Need to fetch more like this until SHOW PARTITIONS offers a LIMIT and ORDER BY
     partition_table = self.execute_query_statement(query, max_rows=10000, orientation=TFetchOrientation.FETCH_NEXT, close_operation=True)
 
-    if self.query_server['server_name'] == 'impala':
+    if self.query_server['server_name'].startswith('impala'):
       try:
         # Fetch all partition key names, which are listed before the #Rows column
         cols = [col.name for col in partition_table.cols()]
@@ -1038,7 +1061,7 @@ class HiveServerClient:
   def get_configuration(self):
     configuration = {}
 
-    if self.query_server['server_name'] == 'impala':  # Return all configuration settings
+    if self.query_server['server_name'].startswith('impala'):  # Return all configuration settings
       query = 'SET'
       results = self.execute_query_statement(query, orientation=TFetchOrientation.FETCH_NEXT, close_operation=True)
       configuration = dict((row[0], row[1]) for row in results.rows())
@@ -1197,7 +1220,7 @@ class HiveServerClientCompatible(object):
     if max_rows is None:
       max_rows = 1000
 
-    if start_over and not (self.query_server['server_name'] == 'impala' and self.query_server['querycache_rows'] == 0): # Backward compatibility for impala
+    if start_over and not (self.query_server['server_name'].startswith('impala') and self.query_server['querycache_rows'] == 0): # Backward compatibility for impala
       orientation = TFetchOrientation.FETCH_FIRST
     else:
       orientation = TFetchOrientation.FETCH_NEXT
@@ -1233,7 +1256,7 @@ class HiveServerClientCompatible(object):
   def get_log(self, handle, start_over=True):
     operationHandle = handle.get_rpc_handle()
 
-    if beeswax_conf.USE_GET_LOG_API.get() or self.query_server['server_name'] == 'impala':
+    if beeswax_conf.USE_GET_LOG_API.get() or self.query_server['server_name'].startswith('impala'):
       return self._client.get_log(operationHandle)
     else:
       if start_over:
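
The recurring change in this file is mechanical: every server_name == 'impala' equality check becomes startswith('impala'), presumably so that per-compute-cluster server names still take the Impala code paths. A sketch of the idea, with hypothetical names:

def is_impala(server_name):
  # 'impala' (classic config) and compute-cluster variants such as
  # 'impala-compute1' (hypothetical name) should all route to the Impala branches.
  return server_name.startswith('impala')

assert is_impala('impala')
assert is_impala('impala-compute1')  # hypothetical compute cluster
assert not is_impala('beeswax')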

+ 2 - 2
apps/beeswax/src/beeswax/templates/execute.mako

@@ -1806,11 +1806,11 @@ $(document).one('fetched.design', editables);
 $(document).one('fetched.query', editables);
 
 function isNumericColumn(type) {
-  return $.inArray(type, ['TINYINT_TYPE', 'SMALLINT_TYPE', 'INT_TYPE', 'BIGINT_TYPE', 'FLOAT_TYPE', 'DOUBLE_TYPE', 'DECIMAL_TYPE', 'TIMESTAMP_TYPE', 'DATE_TYPE']) > -1;
+  return $.inArray(type, ['TINYINT_TYPE', 'SMALLINT_TYPE', 'INT_TYPE', 'BIGINT_TYPE', 'FLOAT_TYPE', 'DOUBLE_TYPE', 'DECIMAL_TYPE', 'TIMESTAMP_TYPE', 'DATE_TYPE', 'DATETIME_TYPE']) > -1;
 }
 
 function isDateTimeColumn(type) {
-  return $.inArray(type, ['TIMESTAMP_TYPE', 'DATE_TYPE']) > -1;
+  return $.inArray(type, ['TIMESTAMP_TYPE', 'DATE_TYPE', 'DATETIME_TYPE']) > -1;
 }
 
 function isStringColumn(type) {

+ 1 - 1
apps/beeswax/src/beeswax/templates/watch_results.mako

@@ -363,7 +363,7 @@ $(document).ready(function () {
           sType = "string"
           if col.type in ["TINYINT_TYPE", "SMALLINT_TYPE", "INT_TYPE", "BIGINT_TYPE", "FLOAT_TYPE", "DOUBLE_TYPE", "DECIMAL_TYPE"]:
             sType = "numeric"
-          elif col.type in ["TIMESTAMP_TYPE", "DATE_TYPE"]:
+          elif col.type in ["TIMESTAMP_TYPE", "DATE_TYPE", "DATETIME_TYPE"]:
             sType = "date"
           %>
         { "sSortDataType":"dom-text", "sType":"${ sType }"},

+ 4 - 2
apps/beeswax/src/beeswax/tests.py

@@ -1317,7 +1317,7 @@ for x in sys.stdin:
     }, follow=True)
 
     # Ensure we can see table.
-    response = self.client.get("/metastore/table/%s/my_table?format=json" % self.db_name)
+    response = self.client.post("/metastore/table/%s/my_table?format=json" % self.db_name, {'format': 'json'})
     data = json.loads(response.content)
     assert_true("my_col" in [col['name'] for col in data['cols']], data)
 
@@ -1887,7 +1887,7 @@ for x in sys.stdin:
       # Retrieve stats before analyze
       resp = self.client.get(reverse('beeswax:get_table_stats', kwargs={'database': self.db_name, 'table': 'test'}))
       stats = json.loads(resp.content)['stats']
-      assert_false([stat for stat in stats if stat['data_type'] == 'numRows'], resp.content)
+      assert_true(any([stat for stat in stats if stat['data_type'] == 'numRows' and stat['comment'] == '0']), resp.content)
 
       resp = self.client.get(reverse('beeswax:get_table_stats', kwargs={'database': self.db_name, 'table': 'test', 'column': 'foo'}))
       stats = json.loads(resp.content)['stats']
@@ -2118,6 +2118,8 @@ def test_index_page():
 
 
 def test_history_page():
+  raise SkipTest
+
   client = make_logged_in_client()
   test_user = User.objects.get(username='test')
 

+ 3 - 1
apps/beeswax/src/beeswax/views.py

@@ -40,7 +40,7 @@ from desktop.lib.django_util import JsonResponse
 from desktop.lib.django_util import copy_query_dict, format_preserving_redirect, render
 from desktop.lib.django_util import login_notrequired, get_desktop_uri_prefix
 from desktop.lib.exceptions_renderable import PopupException
-from desktop.models import Document
+from desktop.models import Document, _get_apps
 from desktop.lib.parameterization import find_variables
 from desktop.views import serve_403_error
 from notebook.models import escape_rows
@@ -444,9 +444,11 @@ def execute_query(request, design_id=None, query_history_id=None):
     design = safe_get_design(request, query_type, design_id)
     query_history = None
 
+  current_app, other_apps, apps_list = _get_apps(request.user, '')
   doc = design and design.id and design.doc.get()
   context = {
     'design': design,
+    'apps': apps_list,
     'query': query_history, # Backward
     'query_history': query_history,
     'autocomplete_base_url': reverse(get_app_name(request) + ':api_autocomplete_databases', kwargs={}),

+ 10 - 2
apps/filebrowser/src/filebrowser/lib/xxd_test.py

@@ -17,14 +17,17 @@
 
 import unittest
 import logging
-import StringIO
 import random
+import StringIO
+import subprocess
 
 import xxd
 
+from nose.plugins.skip import SkipTest
+
 from subprocess import Popen, PIPE
 
-logger = logging.getLogger(__name__)
+LOG = logging.getLogger(__name__)
 
 LENGTH = 1024*10 # 10KB
 
@@ -73,6 +76,11 @@ class XxdTest(unittest.TestCase):
     To be honest, this test was written after this was working.
     I tested using a temporary file and a side-by-side diff tool (vimdiff).
     """
+    try:
+      subprocess.check_output('type xxd', shell=True)
+    except subprocess.CalledProcessError as e:
+      LOG.warn('xxd not found')
+      raise SkipTest
     # /dev/random tends to hang on Linux, so we use python instead.
     # It's inefficient, but it's not terrible.
     random_text = "".join(chr(random.getrandbits(8)) for _ in range(LENGTH))
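
The guard added above generalizes to any test that shells out to an optional binary; a minimal sketch assuming nose, with require_binary as a hypothetical helper name:

import subprocess
from nose.plugins.skip import SkipTest

def require_binary(name):
  # Skip the calling test when `name` is not on PATH, instead of failing it.
  try:
    subprocess.check_output('type %s' % name, shell=True)
  except subprocess.CalledProcessError:
    raise SkipTest('%s not found' % name)

Calling require_binary('xxd') at the top of the test reproduces the behavior added here.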

+ 13 - 10
apps/filebrowser/src/filebrowser/templates/display.mako

@@ -44,6 +44,14 @@ ${ fb_components.menubar() }
         <!-- ko if: $root.file -->
         <ul class="nav nav-list">
           <!-- ko if: $root.isViewing -->
+            <li><a href="${url('filebrowser.views.view', path=dirname_enc)}"><i class="fa fa-reply"></i> ${_('Back')}</a></li>
+
+            <!-- ko if: $root.file().view.compression() && $root.file().view.compression() === "none" && $root.file().editable -->
+              <li><a class="pointer" data-bind="click: $root.editFile"><i class="fa fa-pencil"></i> ${_('Edit file')}</a></li>
+            <!-- /ko -->
+
+            <li><a class="pointer" data-bind="click: changePage"><i class="fa fa-refresh"></i> ${_('Refresh')}</a></li>
+
             <!-- ko if: $root.file().view.mode() === 'binary' -->
             <li><a class="pointer" data-bind="click: function(){ switchMode('text'); }"><i class="fa fa-font"></i> ${_('View as text')}</a></li>
             <!-- /ko -->
@@ -67,21 +75,16 @@ ${ fb_components.menubar() }
             <!-- ko if: $root.file().view.compression() && $root.file().view.compression() !== "none" -->
               <li><a class="pointer" data-bind="click: function(){ switchCompression('none'); }"><i class="fa fa-times-circle"></i> ${_('Stop preview')}</a></li>
             <!-- /ko -->
-
-            <!-- ko if: $root.file().view.compression() && $root.file().view.compression() === "none" && $root.file().editable -->
-              <li><a class="pointer" data-bind="click: $root.editFile"><i class="fa fa-pencil"></i> ${_('Edit file')}</a></li>
-            <!-- /ko -->
           <!-- /ko -->
+
           <!-- ko ifnot: $root.isViewing -->
-            <li><a class="pointer" data-bind="click: $root.viewFile"><i class="fa fa-eye"></i> ${_('View file')}</a></li>
+            <li><a class="pointer" data-bind="click: $root.viewFile"><i class="fa fa-reply"></i> ${_('View file')}</a></li>
           <!-- /ko -->
 
           <!-- ko if: $root.isViewing -->
-          <!-- ko if: $root.file().show_download_button -->
-           <li><a class="pointer" data-bind="click: $root.downloadFile"><i class="fa fa-download"></i> ${_('Download')}</a></li>
-          <!-- /ko -->
-           <li><a href="${url('filebrowser.views.view', path=dirname_enc)}"><i class="fa fa-file-text"></i> ${_('View file location')}</a></li>
-           <li><a class="pointer" data-bind="click: changePage"><i class="fa fa-refresh"></i> ${_('Refresh')}</a></li>
+            <!-- ko if: $root.file().show_download_button -->
+              <li><a class="pointer" data-bind="click: $root.downloadFile"><i class="fa fa-download"></i> ${_('Download')}</a></li>
+            <!-- /ko -->
           <!-- /ko -->
 
            <!-- ko if: $root.file().stats -->

+ 1 - 1
apps/filebrowser/src/filebrowser/templates/listdir.mako

@@ -114,7 +114,7 @@ ${ fb_components.menubar() }
           <button class="btn fileToolbarBtn" title="${_('Restore from trash')}" data-bind="visible: inRestorableTrash(), click: restoreTrashSelected, enable: selectedFiles().length > 0 && isCurrentDirSelected().length == 0"><i class="fa fa-cloud-upload"></i> ${_('Restore')}</button>
           <!-- ko ifnot: inTrash -->
           % if not is_trash_enabled:
-          <button class="btn fileToolbarBtn delete-link" title="${_('Delete forever')}" data-bind="enable: selectedFiles().length > 0, click: deleteSelected"><i class="fa fa-bolt"></i> ${_('Delete forever')}</button>
+          <button class="btn fileToolbarBtn delete-link" title="${_('Delete forever')}" data-bind="enable: selectedFiles().length > 0 && isCurrentDirSelected().length == 0, click: deleteSelected"><i class="fa fa-bolt"></i> ${_('Delete forever')}</button>
           % else:
           <div id="delete-dropdown" class="btn-group" style="vertical-align: middle">
             <button id="trash-btn" class="btn toolbarBtn" data-bind="enable: selectedFiles().length > 0 && isCurrentDirSelected().length == 0, click: trashSelected"><i class="fa fa-times"></i> ${_('Move to trash')}</button>

+ 2 - 2
apps/filebrowser/src/filebrowser/templates/listdir_components.mako

@@ -591,7 +591,7 @@ from filebrowser.conf import ENABLE_EXTRACT_UPLOADED_ARCHIVE
     <a href="javascript: void(0)" data-bind="click: ($root.selectedFiles().length > 0 && isCurrentDirSelected().length == 0) ? $root.trashSelected: void(0)">
     <i class="fa fa-fw fa-times"></i> ${_('Move to trash')}</a></li>
     % endif
-    <li><a href="javascript: void(0)" class="delete-link" title="${_('Delete forever')}" data-bind="enable: $root.selectedFiles().length > 0, click: $root.deleteSelected"><i class="fa fa-fw fa-bolt"></i> ${_('Delete forever')}</a></li>
+    <li><a href="javascript: void(0)" class="delete-link" title="${_('Delete forever')}" data-bind="enable: $root.selectedFiles().length > 0 && isCurrentDirSelected().length == 0, click: $root.deleteSelected"><i class="fa fa-fw fa-bolt"></i> ${_('Delete forever')}</a></li>
     <li class="divider" data-bind="visible: isSummaryEnabled()"></li>
     <li data-bind="css: {'disabled': selectedFiles().length > 1 }, visible: isSummaryEnabled()">
       <a class="pointer" data-bind="click: function(){ selectedFiles().length == 1 ? showSummary(): void(0)}"><i class="fa fa-fw fa-pie-chart"></i> ${_('Summary')}</a>
@@ -623,7 +623,7 @@ from filebrowser.conf import ENABLE_EXTRACT_UPLOADED_ARCHIVE
 
   <script id="fileTemplate" type="text/html">
     <tr class="row-animated" style="cursor: pointer" data-bind="drop: { enabled: name !== '.' && type !== 'file' && (!$root.isS3() || ($root.isS3() && !$root.isS3Root())), value: $data }, event: { mouseover: toggleHover, mouseout: toggleHover, contextmenu: showContextMenu }, click: $root.viewFile, css: { 'row-selected': selected(), 'row-highlighted': highlighted(), 'row-deleted': deleted() }">
-      <td class="center" data-bind="click: handleSelect" style="cursor: default" data-bind="enabled: name !== '..' ">
+      <td class="center" data-bind="click: name !== '..' ? handleSelect : void(0)" style="cursor: default">
         <div data-bind="multiCheck: '#fileBrowserTable', visible: name != '..', css: { 'hue-checkbox': name != '..', 'fa': name != '..', 'fa-check': selected }"></div>
       </td>
       <td class="left"><i data-bind="click: $root.viewFile, css: { 'fa': true,

+ 12 - 16
apps/filebrowser/src/filebrowser/views.py

@@ -58,6 +58,8 @@ from desktop.lib.i18n import smart_str
 from desktop.lib.tasks.compress_files.compress_utils import compress_files_in_hdfs
 from desktop.lib.tasks.extract_archive.extract_utils import extract_archive_in_hdfs
 from desktop.views import serve_403_error
+
+from hadoop.core_site import get_trash_interval
 from hadoop.fs.hadoopfs import Hdfs
 from hadoop.fs.exceptions import WebHdfsException
 from hadoop.fs.fsutils import do_overwrite_save
@@ -362,7 +364,7 @@ def listdir(request, path):
         'breadcrumbs': breadcrumbs,
         'current_dir_path': urllib.quote(path.encode('utf-8'), safe='~@#$&()*!+=:;,.?/\''),
         'current_request_path': urllib.quote(request.path.encode('utf-8'), safe='~@#$&()*!+=:;,.?/\''),
-        'home_directory': request.fs.isdir(home_dir_path) and home_dir_path or None,
+        'home_directory': home_dir_path if home_dir_path and request.fs.isdir(home_dir_path) else None,
         'cwd_set': True,
         'is_superuser': request.user.username == request.fs.superuser,
         'groups': request.user.username == request.fs.superuser and [str(x) for x in Group.objects.values_list('name', flat=True)] or [],
@@ -435,7 +437,10 @@ def listdir_paged(request, path):
     if hasattr(request, 'doas'):
       do_as = request.doas
 
-    home_dir_path = request.user.get_home_directory()
+    if request.fs._get_scheme(path) == 'hdfs':
+      home_dir_path = request.user.get_home_directory()
+    else:
+      home_dir_path = None
     breadcrumbs = parse_breadcrumbs(path)
 
     if do_as:
@@ -495,9 +500,7 @@ def listdir_paged(request, path):
     if page:
       page.object_list = [ _massage_stats(request, stat_absolute_path(path, s)) for s in shown_stats ]
 
-    is_trash_enabled = request.fs._get_scheme(path) == 'hdfs' and \
-                       (request.fs.isdir(_home_trash_path(request.fs, request.user, path)) or
-                        request.fs.isdir(request.fs.trash_path(path)))
+    is_trash_enabled = request.fs._get_scheme(path) == 'hdfs' and int(get_trash_interval()) > 0
 
     is_fs_superuser = _is_hdfs_superuser(request)
     data = {
@@ -508,7 +511,7 @@ def listdir_paged(request, path):
         'files': page.object_list if page else [],
         'page': _massage_page(page, paginator) if page else {},
         'pagesize': pagesize,
-        'home_directory': request.fs.isdir(home_dir_path) and home_dir_path or None,
+        'home_directory': home_dir_path if home_dir_path and request.fs.isdir(home_dir_path) else None,
         'descending': descending_param,
         # The following should probably be deprecated
         'cwd_set': True,
@@ -652,8 +655,7 @@ def display(request, path):
     if mode == 'binary':
         compression = 'none'
         # Read out based on meta.
-    compression, offset, length, contents =\
-    read_contents(compression, path, request.fs, offset, length)
+    compression, offset, length, contents = read_contents(compression, path, request.fs, offset, length)
 
     # Get contents as string for text mode, or at least try
     uni_contents = None
@@ -1065,16 +1067,10 @@ def generic_op(form_class, request, op, parameter_names, piggyback=None, templat
                 if is_admin(request.user) and not _is_hdfs_superuser(request):
                     msg += _(' Note: you are a Hue admin but not a HDFS superuser, "%(superuser)s" or part of HDFS supergroup, "%(supergroup)s".') \
                            % {'superuser': request.fs.superuser, 'supergroup': request.fs.supergroup}
-                if request.is_ajax():
-                    return HttpResponseForbidden(smart_str(e))
-                else:
-                    raise PopupException(msg, detail=e)
+                raise PopupException(msg, detail=e)
             except S3FileSystemException, e:
               msg = _("S3 filesystem exception.")
-              if request.is_ajax():
-                  return HttpResponseForbidden(smart_str(e))
-              else:
-                  raise PopupException(msg, detail=e)
+              raise PopupException(msg, detail=e)
             except NotImplementedError, e:
                 msg = _("Cannot perform operation.")
                 raise PopupException(msg, detail=e)
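
is_trash_enabled now keys off the cluster-wide fs.trash.interval instead of probing HDFS for a .Trash directory, saving filesystem calls on every listing. A sketch of the resulting check, assuming get_trash_interval() reads fs.trash.interval from core-site.xml ('0' meaning trash is disabled):

# Sketch; the real code calls hadoop.core_site.get_trash_interval().
def trash_enabled(scheme, trash_interval):
  return scheme == 'hdfs' and int(trash_interval) > 0

assert trash_enabled('hdfs', '1440')        # checkpointing every 24h
assert not trash_enabled('hdfs', '0')       # trash disabled cluster-wide
assert not trash_enabled('s3a', '1440')     # only HDFS has trash semantics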

+ 2 - 2
apps/hbase/src/hbase/api.py

@@ -27,7 +27,7 @@ from desktop.lib import thrift_util
 from desktop.lib.exceptions_renderable import PopupException
 
 from hbase import conf
-from hbase.hbase_site import get_server_principal, get_server_authentication, is_using_thrift_ssl, is_using_thrift_http
+from hbase.hbase_site import get_server_principal, get_server_authentication, is_using_thrift_ssl, is_using_thrift_http, get_thrift_transport
 from hbase.server.hbase_lib import get_thrift_type, get_client_type
 
 
@@ -99,7 +99,7 @@ class HbaseApi(object):
         kerberos_principal=_security['kerberos_principal_short_name'],
         use_sasl=_security['use_sasl'],
         timeout_seconds=30,
-        transport=conf.THRIFT_TRANSPORT.get(),
+        transport=get_thrift_transport(),
         transport_mode='http' if is_using_thrift_http() else 'socket',
         http_url=('https://' if is_using_thrift_ssl() else 'http://') + target['host'] + ':' + str(target['port']),
         validate=conf.SSL_CERT_CA_VERIFY.get()

+ 11 - 4
apps/hbase/src/hbase/conf.py

@@ -23,6 +23,7 @@ from django.utils.translation import ugettext_lazy as _t, ugettext as _
 
 from desktop.conf import default_ssl_validate
 from desktop.lib.conf import Config, validate_thrift_transport, coerce_bool
+from hbase.hbase_site import get_thrift_transport
 
 
 LOG = logging.getLogger(__name__)
@@ -45,9 +46,9 @@ TRUNCATE_LIMIT = Config(
 
 THRIFT_TRANSPORT = Config(
   key="thrift_transport",
-  default="framed",
-  help=_t("'framed' is used to chunk up responses, which is useful when used in conjunction with the nonblocking server in Thrift."
-       "'buffered' used to be the default of the HBase Thrift Server."),
+  default="buffered",
+  help=_t("Should come from hbase-site.xml, do not set. 'framed' is used to chunk up responses, used with the nonblocking server in Thrift but is not supported in Hue."
+       "'buffered' used to be the default of the HBase Thrift Server. Default is buffered when not set in hbase-site.xml."),
   type=str
 )
 
@@ -60,7 +61,7 @@ HBASE_CONF_DIR = Config(
 # Hidden, just for making patching of older version of Hue easier. To remove in Hue 4.
 USE_DOAS = Config(
   key='use_doas',
-  help=_t('Force Hue to use Http Thrift mode with doas impersonation, regarless of hbase-site.xml properties.'),
+  help=_t('Should come from hbase-site.xml, do not set. Force Hue to use Http Thrift mode with doas impersonation, regardless of hbase-site.xml properties.'),
   default=False,
   type=coerce_bool
 )
@@ -95,6 +96,12 @@ def config_validator(user):
     LOG.exception(msg)
     res.append((NICE_NAME, _(msg)))
 
+  if get_thrift_transport() == "framed":
+    msg = "Hbase config thrift_transport=framed is not supported"
+    LOG.exception(msg)
+    res.append((NICE_NAME, _(msg)))
+
+
 
   res.extend(validate_thrift_transport(THRIFT_TRANSPORT))
 

+ 19 - 2
apps/hbase/src/hbase/hbase_site.py

@@ -22,8 +22,6 @@ import os.path
 from hadoop import confparse
 from desktop.lib.security_util import get_components
 
-from hbase.conf import HBASE_CONF_DIR, USE_DOAS
-
 
 LOG = logging.getLogger(__name__)
 
@@ -33,6 +31,7 @@ SITE_DICT = None
 
 _CNF_HBASE_THRIFT_KERBEROS_PRINCIPAL = 'hbase.thrift.kerberos.principal'
 _CNF_HBASE_AUTHENTICATION = 'hbase.security.authentication'
+_CNF_HBASE_REGIONSERVER_THRIFT_FRAMED = 'hbase.regionserver.thrift.framed'
 
 _CNF_HBASE_IMPERSONATION_ENABLED = 'hbase.thrift.support.proxyuser'
 _CNF_HBASE_USE_THRIFT_HTTP = 'hbase.regionserver.thrift.http'
@@ -61,10 +60,26 @@ def get_server_principal():
 def get_server_authentication():
   return get_conf().get(_CNF_HBASE_AUTHENTICATION, 'NOSASL').upper()
 
+def get_thrift_transport():
+  use_framed = get_conf().get(_CNF_HBASE_REGIONSERVER_THRIFT_FRAMED)
+  if use_framed is not None:
+    if use_framed.upper() == "TRUE":
+      return "framed"
+    else:
+      return "buffered"
+  else:
+    #Avoid circular import
+    from hbase.conf import THRIFT_TRANSPORT
+    return THRIFT_TRANSPORT.get()
+
 def is_impersonation_enabled():
+  #Avoid circular import
+  from hbase.conf import USE_DOAS
   return get_conf().get(_CNF_HBASE_IMPERSONATION_ENABLED, 'FALSE').upper() == 'TRUE' or USE_DOAS.get()
 
 def is_using_thrift_http():
+  #Avoid circular import
+  from hbase.conf import USE_DOAS
   return get_conf().get(_CNF_HBASE_USE_THRIFT_HTTP, 'FALSE').upper() == 'TRUE' or USE_DOAS.get()
 
 def is_using_thrift_ssl():
@@ -75,6 +90,8 @@ def _parse_site():
   global SITE_DICT
   global SITE_PATH
 
+  #Avoid circular import
+  from hbase.conf import HBASE_CONF_DIR
   SITE_PATH = os.path.join(HBASE_CONF_DIR.get(), 'hbase-site.xml')
   try:
     data = file(SITE_PATH, 'r').read()
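
get_thrift_transport() therefore resolves in two steps: hbase.regionserver.thrift.framed from hbase-site.xml wins when present, and the hue.ini thrift_transport value is only a fallback. A sketch of that resolution order, with resolve_transport as a hypothetical stand-in:

# Sketch of the lookup order; values are illustrative.
def resolve_transport(site_conf, ini_default='buffered'):
  use_framed = site_conf.get('hbase.regionserver.thrift.framed')
  if use_framed is not None:
    return 'framed' if use_framed.upper() == 'TRUE' else 'buffered'
  return ini_default  # THRIFT_TRANSPORT from hue.ini

assert resolve_transport({'hbase.regionserver.thrift.framed': 'true'}) == 'framed'
assert resolve_transport({}) == 'buffered'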

+ 75 - 6
apps/impala/src/impala/api.py

@@ -17,36 +17,50 @@
 
 ## Main views are inherited from Beeswax.
 
+import base64
 import logging
+import json
+import struct
 
 from django.utils.translation import ugettext as _
 from django.views.decorators.http import require_POST
 
 from desktop.lib.django_util import JsonResponse
+from desktop.models import Document2
 
 from beeswax.api import error_handler
+from beeswax.server.dbms import get_cluster_config
 from beeswax.models import Session
 from beeswax.server import dbms as beeswax_dbms
 from beeswax.views import authorized_get_query_history
 
 from impala import dbms
+from impala.dbms import _get_server_name
+from impala.server import get_api as get_impalad_api, _get_impala_server_url
 
+from libanalyze import analyze as analyzer
+from libanalyze import rules
 
-LOG = logging.getLogger(__name__)
+from notebook.models import make_notebook
 
+LOG = logging.getLogger(__name__)
ANALYZER = rules.TopDownAnalysis() # Parses the rule definition files once, so keep a single module-level instance
 
 @require_POST
 @error_handler
 def invalidate(request):
-  query_server = dbms.get_query_server_config()
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+  database = request.POST.get('database', None)
+  table = request.POST.get('table', None)
+  flush_all = request.POST.get('flush_all', 'false').lower() == 'true'
+
+  cluster_config = get_cluster_config(cluster)
+  query_server = dbms.get_query_server_config(cluster_config=cluster_config)
   db = beeswax_dbms.get(request.user, query_server=query_server)
 
   response = {'status': 0, 'message': ''}
 
-  database = request.POST.get('database', None)
-  flush_all = request.POST.get('flush_all', 'false').lower() == 'true'
-
-  db.invalidate(database=database, flush_all=flush_all)
+  db.invalidate(database=database, table=table, flush_all=flush_all)
   response['message'] = _('Successfully invalidated metadata')
 
   return JsonResponse(response)
@@ -108,3 +122,58 @@ def get_runtime_profile(request, query_history_id):
     response['profile'] = profile
 
   return JsonResponse(response)
+
+@require_POST
+@error_handler
+def alanize(request):
+  response = {'status': -1}
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+  query_id = json.loads(request.POST.get('query_id'))
+
+  application = _get_server_name(cluster)
+  query_server = dbms.get_query_server_config()
+  session = Session.objects.get_session(request.user, query_server['server_name'])
+  server_url = _get_impala_server_url(session)
+
+  if query_id:
+    LOG.debug("Attempting to get Impala query profile at server_url %s for query ID: %s" % (server_url, query_id))
+    doc = Document2.objects.get(id=query_id)
+    snippets = doc.data_dict.get('snippets', [])
+    secret = snippets[0]['result']['handle']['secret']
+    api = get_impalad_api(user=request.user, url=server_url)
+    impala_query_id = "%x:%x" % struct.unpack(b"QQ", base64.decodestring(secret))
+    api.kill(impala_query_id) # There are many statistics that are not present when the query is open. Close it first.
+    query_profile = api.get_query_profile_encoded(impala_query_id)
+    profile = analyzer.analyze(analyzer.parse_data(query_profile))
+    result = ANALYZER.run(profile)
+
+    heatmap = {}
+    summary = analyzer.summary(profile)
+    heatmapMetrics = ['AverageThreadTokens', 'BloomFilterBytes', 'PeakMemoryUsage', 'PerHostPeakMemUsage', 'PrepareTime', 'RowsProduced', 'TotalCpuTime', 'TotalNetworkReceiveTime', 'TotalNetworkSendTime', 'TotalStorageWaitTime', 'TotalTime']
+    for key in heatmapMetrics:
+      metrics = analyzer.heatmap_by_host(profile, key)
+      if metrics['data']:
+        heatmap[key] = metrics
+    response['data'] = { 'query': { 'healthChecks' : result[0]['result'], 'summary': summary, 'heatmap': heatmap, 'heatmapMetrics': sorted(list(heatmap.iterkeys())) } }
+    response['status'] = 0
+  return JsonResponse(response)
+
+@require_POST
+@error_handler
+def alanize_fix(request):
+  response = {'status': -1}
+  fix = json.loads(request.POST.get('fix'))
+  start_time = json.loads(request.POST.get('start_time', '-1'))
+  if fix['id'] == 0:
+    notebook = make_notebook(
+      name=_('compute stats %(data)s') % fix,
+      editor_type='impala',
+      statement='compute stats %(data)s' % fix,
+      status='ready',
+      last_executed=start_time,
+      is_task=True
+    )
+    response['details'] = { 'task': notebook.execute(request, batch=True) }
+    response['status'] = 0
+
+  return JsonResponse(response)
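
The query-id conversion in alanize() relies on the HiveServer2 handle secret being 16 raw bytes (sent base64-encoded): decoding and unpacking them as two native-order unsigned 64-bit integers yields the hi:lo hex pair Impala uses as its query id. A standalone sketch of that conversion (the example secret is made up):

    import base64
    import struct

    def hs2_secret_to_impala_query_id(secret):
        raw = base64.b64decode(secret)          # 16 raw bytes
        hi, lo = struct.unpack(b'QQ', raw)      # two unsigned 64-bit ints, native order
        return '%x:%x' % (hi, lo)               # Impala's "hi:lo" rendering

    print(hs2_secret_to_impala_query_id(base64.b64encode(b'\x01' * 16)))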

+ 27 - 7
apps/impala/src/impala/conf.py

@@ -118,28 +118,24 @@ SSL = ConfigSection(
       type=coerce_bool,
       default=False
     ),
-
     CACERTS = Config(
       key="cacerts",
       help=_t("Path to Certificate Authority certificates."),
       type=str,
       dynamic_default=default_ssl_cacerts,
     ),
-
     KEY = Config(
       key="key",
       help=_t("Path to the private key file, e.g. /etc/hue/key.pem"),
       type=str,
       default=None
     ),
-
     CERT = Config(
       key="cert",
       help=_t("Path to the public certificate file, e.g. /etc/hue/cert.pem"),
       type=str,
       default=None
     ),
-
     VALIDATE = Config(
       key="validate",
       help=_t("Choose whether Hue should validate certificates received from the server."),
@@ -171,14 +167,38 @@ AUTH_PASSWORD = Config(
   key="auth_password",
   help=_t("LDAP/PAM/.. password of the hue user used for authentications."),
   private=True,
-  dynamic_default=get_auth_password)
+  dynamic_default=get_auth_password
+)
 
 AUTH_PASSWORD_SCRIPT = Config(
   key="auth_password_script",
   help=_t("Execute this script to produce the auth password. This will be used when `auth_password` is not set."),
   private=True,
   type=coerce_password_from_script,
-  default=None)
+  default=None
+)
+
+DAEMON_API_PASSWORD = Config(
+  key="daemon_api_password",
+  help=_t("Password for Impala Daemon when username/password authentication is enabled for the Impala Daemon UI."),
+  private=True,
+  default=None
+)
+
+DAEMON_API_PASSWORD_SCRIPT = Config(
+  key="daemon_api_password_script",
+  help=_t("Execute this script to produce the Impala Daemon Password. This will be used when `daemon_api_password` is not set."),
+  private=True,
+  type=coerce_password_from_script,
+  default=None
+)
+
+DAEMON_API_USERNAME = Config(
+  key="daemon_api_username",
+  help=_t("Username for Impala Daemon when username/password authentication is enabled for the Impala Daemon UI."),
+  private=True,
+  default=None
+)
 
 DAEMON_API_PASSWORD = Config(
   key="daemon_api_password",
@@ -226,7 +246,7 @@ def config_validator(user):
         LOG.exception(msg)
         res.append((NICE_NAME, _(msg)))
       else:
-       raise ex
+        raise ex
   except Exception, ex:
     msg = "No available Impalad to send queries to."
     LOG.exception(msg)
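
The new daemon_api_username / daemon_api_password / daemon_api_password_script settings above follow the same convention as auth_password: an explicit value wins, and the script is only consulted when the literal is unset. Roughly (Hue's real resolution goes through coerce_password_from_script; the helper below is only illustrative):

    import subprocess

    def resolve_password(literal, script_path):
        # Literal value takes precedence; the script is a fallback that is
        # expected to print the password on stdout.
        if literal is not None:
            return literal
        if script_path:
            return subprocess.check_output([script_path]).strip()
        return None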

+ 21 - 12
apps/impala/src/impala/dbms.py

@@ -33,16 +33,16 @@ LOG = logging.getLogger(__name__)
 
 def get_query_server_config(cluster_config=None):
   query_server = {
-        'server_name': 'impala' + ('-' + cluster_config.get('name') if cluster_config else ''),
-        'server_host': conf.SERVER_HOST.get() if not cluster_config else cluster_config.get('server_host'),
-        'server_port': conf.SERVER_PORT.get(),
-        'principal': conf.IMPALA_PRINCIPAL.get(),
-        'impersonation_enabled': conf.IMPERSONATION_ENABLED.get(),
-        'querycache_rows': conf.QUERYCACHE_ROWS.get(),
-        'QUERY_TIMEOUT_S': conf.QUERY_TIMEOUT_S.get(),
-        'SESSION_TIMEOUT_S': conf.SESSION_TIMEOUT_S.get(),
-        'auth_username': conf.AUTH_USERNAME.get(),
-        'auth_password': conf.AUTH_PASSWORD.get()
+      'server_name': _get_server_name(cluster_config),
+      'server_host': conf.SERVER_HOST.get() if not cluster_config else cluster_config.get('server_host'),
+      'server_port': conf.SERVER_PORT.get() if not cluster_config else 21050,
+      'principal': conf.IMPALA_PRINCIPAL.get(),
+      'impersonation_enabled': conf.IMPERSONATION_ENABLED.get(),
+      'querycache_rows': conf.QUERYCACHE_ROWS.get(),
+      'QUERY_TIMEOUT_S': conf.QUERY_TIMEOUT_S.get(),
+      'SESSION_TIMEOUT_S': conf.SESSION_TIMEOUT_S.get(),
+      'auth_username': conf.AUTH_USERNAME.get(),
+      'auth_password': conf.AUTH_PASSWORD.get()
   }
 
   debug_query_server = query_server.copy()
@@ -52,6 +52,10 @@ def get_query_server_config(cluster_config=None):
   return query_server
 
 
+def _get_server_name(cluster_config):
+  return 'impala' + ('-' + cluster_config.get('name') if cluster_config else '')
+
+
 class ImpalaDbms(HiveServer2Dbms):
 
   @classmethod
@@ -87,19 +91,24 @@ class ImpalaDbms(HiveServer2Dbms):
     return 'SELECT histogram(%s) FROM %s' % (select_clause, from_clause)
 
 
-  def invalidate(self, database=None, flush_all=False):
+  def invalidate(self, database=None, table=None, flush_all=False):
     handle = None
+
     try:
       if flush_all or database is None:
         hql = "INVALIDATE METADATA"
         query = hql_query(hql, query_type=QUERY_TYPES[1])
         handle = self.execute_and_wait(query, timeout_sec=10.0)
-      else:
+      elif table is None:
         diff_tables = self._get_different_tables(database)
         for table in diff_tables:
           hql = "INVALIDATE METADATA `%s`.`%s`" % (database, table)
           query = hql_query(hql, query_type=QUERY_TYPES[1])
           handle = self.execute_and_wait(query, timeout_sec=10.0)
+      else:
+        hql = "INVALIDATE METADATA `%s`.`%s`" % (database, table)
+        query = hql_query(hql, query_type=QUERY_TYPES[1])
+        handle = self.execute_and_wait(query, timeout_sec=10.0)
     except QueryServerTimeoutException, e:
       # Allow timeout exceptions to propagate
       raise e
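
invalidate() now distinguishes three granularities. A condensed, testable restatement of the branch above (the real middle branch narrows to _get_different_tables(), i.e. tables that differ between Hive and Impala; diff_tables is a placeholder for that list):

    def build_invalidate_hql(database=None, table=None, flush_all=False, diff_tables=()):
        if flush_all or database is None:
            return ['INVALIDATE METADATA']                      # whole catalog
        if table is None:
            return ['INVALIDATE METADATA `%s`.`%s`' % (database, t)
                    for t in diff_tables]                       # out-of-sync tables only
        return ['INVALIDATE METADATA `%s`.`%s`' % (database, table)]  # one table

    assert build_invalidate_hql('web', 'logs') == ['INVALIDATE METADATA `web`.`logs`']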

+ 61 - 6
apps/impala/src/impala/server.py

@@ -38,20 +38,23 @@ API_CACHE_LOCK = threading.Lock()
 
 def get_api(user, url):
   global  API_CACHE
-  if API_CACHE is None:
+  if API_CACHE is None or API_CACHE.get(url) is None:
     API_CACHE_LOCK.acquire()
     try:
       if API_CACHE is None:
-        API_CACHE = ImpalaDaemonApi(url)
+        API_CACHE = {}
+      if API_CACHE.get(url) is None:
+        API_CACHE[url] = ImpalaDaemonApi(url)
     finally:
       API_CACHE_LOCK.release()
-  API_CACHE.set_user(user)
-  return API_CACHE
+  api = API_CACHE[url]
+  api.set_user(user)
+  return api
 
 
 def _get_impala_server_url(session):
-  impala_settings = session.get_formatted_properties()
-  http_addr = next((setting['value'] for setting in impala_settings if setting['key'].lower() == 'http_addr'), None)
+  properties = session.get_properties()
+  http_addr = properties.get('coordinator_host', properties.get('http_addr'))
   # Remove scheme if found
   http_addr = http_addr.replace('http://', '').replace('https://', '')
   return ('https://' if get_webserver_certificate_file() else 'http://') + http_addr
@@ -251,3 +254,55 @@ class ImpalaDaemonApi(object):
         return resp
     except ValueError, e:
       raise ImpalaDaemonApiException('ImpalaDaemonApi kill did not return valid JSON: %s' % e)
+
+  def get_query_backends(self, query_id):
+    params = {
+      'query_id': query_id,
+      'json': 'true'
+    }
+
+    resp = self._root.get('query_backends', params=params)
+    try:
+      if isinstance(resp, basestring):
+        return json.loads(resp)
+      else:
+        return resp
+    except ValueError, e:
+      raise ImpalaDaemonApiException('ImpalaDaemonApi query_backends did not return valid JSON: %s' % e)
+
+  def get_query_finstances(self, query_id):
+    params = {
+      'query_id': query_id,
+      'json': 'true'
+    }
+
+    resp = self._root.get('query_finstances', params=params)
+    try:
+      if isinstance(resp, basestring):
+        return json.loads(resp)
+      else:
+        return resp
+    except ValueError, e:
+      raise ImpalaDaemonApiException('ImpalaDaemonApi query_finstances did not return valid JSON: %s' % e)
+
+  def get_query_summary(self, query_id):
+    params = {
+      'query_id': query_id,
+      'json': 'true'
+    }
+
+    resp = self._root.get('query_summary', params=params)
+    try:
+      if isinstance(resp, basestring):
+        return json.loads(resp)
+      else:
+        return resp
+    except ValueError, e:
+      raise ImpalaDaemonApiException('ImpalaDaemonApi query_summary did not return valid JSON: %s' % e)
+
+  def get_query_profile_encoded(self, query_id):
+    params = {
+      'query_id': query_id
+    }
+
+    return self._root.get('query_profile_encoded', params=params)
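
get_api() moves from a single cached client to a per-URL cache while keeping the double-checked locking, so concurrent first calls neither race on creating the dict nor build duplicate clients. The shape of the pattern, reduced to its essentials:

    import threading

    _CACHE = None
    _CACHE_LOCK = threading.Lock()

    def get_client(url):
        global _CACHE
        # Cheap unlocked check first; the re-check inside the lock covers
        # the window where two threads both saw a miss.
        if _CACHE is None or url not in _CACHE:
            with _CACHE_LOCK:
                if _CACHE is None:
                    _CACHE = {}
                if url not in _CACHE:
                    _CACHE[url] = object()  # stand-in for ImpalaDaemonApi(url)
        return _CACHE[url]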

+ 29 - 25
apps/impala/src/impala/tests.py

@@ -120,48 +120,52 @@ class TestImpalaIntegration:
     cls.db = dbms.get(cls.user, get_query_server_config(name='impala'))
     cls.DATABASE = get_db_prefix(name='impala')
 
-    hql = """
-      USE default;
+    queries = ["""
       DROP TABLE IF EXISTS %(db)s.tweets;
+    """ % {'db': cls.DATABASE}, """
       DROP DATABASE IF EXISTS %(db)s CASCADE;
+    """ % {'db': cls.DATABASE}, """
       CREATE DATABASE %(db)s;
+    """ % {'db': cls.DATABASE}]
 
-      USE %(db)s;
-    """ % {'db': cls.DATABASE}
-
-    resp = _make_query(cls.client, hql, database='default', local=False, server_name='impala')
-    resp = wait_for_query_to_finish(cls.client, resp, max=180.0)
-
-    content = json.loads(resp.content)
-    assert_true(content['status'] == 0, resp.content)
+    for query in queries:
+      resp = _make_query(cls.client, query, database='default', local=False, server_name='impala')
+      resp = wait_for_query_to_finish(cls.client, resp, max=180.0)
+      content = json.loads(resp.content)
+      assert_true(content['status'] == 0, resp.content)
 
-    hql = """
+    queries = ["""
       CREATE TABLE tweets (row_num INTEGER, id_str STRING, text STRING) STORED AS PARQUET;
-
+    """, """
       INSERT INTO TABLE tweets VALUES (1, "531091827395682000", "My dad looks younger than costa");
+    """, """
       INSERT INTO TABLE tweets VALUES (2, "531091827781550000", "There is a thin line between your partner being vengeful and you reaping the consequences of your bad actions towards your partner.");
+    """, """
       INSERT INTO TABLE tweets VALUES (3, "531091827768979000", "@Mustang_Sally83 and they need to get into you :))))");
+    """, """
       INSERT INTO TABLE tweets VALUES (4, "531091827114668000", "@RachelZJohnson thank you rach!xxx");
+    """, """
       INSERT INTO TABLE tweets VALUES (5, "531091827949309000", "i think @WWERollins was robbed of the IC title match this week on RAW also i wonder if he will get a rematch i hope so @WWE");
-    """
-
-    resp = _make_query(cls.client, hql, database=cls.DATABASE, local=False, server_name='impala')
-    resp = wait_for_query_to_finish(cls.client, resp, max=180.0)
+    """]
 
-    content = json.loads(resp.content)
-    assert_true(content['status'] == 0, resp.content)
+    for query in queries:
+      resp = _make_query(cls.client, query, database=cls.DATABASE, local=False, server_name='impala')
+      resp = wait_for_query_to_finish(cls.client, resp, max=180.0)
+      content = json.loads(resp.content)
+      assert_true(content['status'] == 0, resp.content)
 
 
   @classmethod
   def teardown_class(cls):
     # We need to drop tables before dropping the database
-    hql = """
-    USE default;
-    DROP TABLE IF EXISTS %(db)s.tweets;
-    DROP DATABASE %(db)s CASCADE;
-    """ % {'db': cls.DATABASE}
-    resp = _make_query(cls.client, hql, database='default', local=False, server_name='impala')
-    resp = wait_for_query_to_finish(cls.client, resp, max=180.0)
+    queries = ["""
+      DROP TABLE IF EXISTS %(db)s.tweets;
+    """ % {'db': cls.DATABASE}, """
+      DROP DATABASE %(db)s CASCADE;
+    """ % {'db': cls.DATABASE}]
+    for query in queries:
+      resp = _make_query(cls.client, query, database='default', local=False, server_name='impala')
+      resp = wait_for_query_to_finish(cls.client, resp, max=180.0)
 
     # Check the cleanup
     databases = cls.db.get_databases()
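
The rewrite splits each multi-statement HQL script into one statement per _make_query() round trip, since a single submission executes a single statement, and asserts status 0 after every one rather than once at the end. A naive splitter along those lines (it would mis-split on semicolons inside string literals, which the hand-split lists above avoid):

    def split_statements(script):
        return [s.strip() for s in script.split(';') if s.strip()]

    assert split_statements('USE d; DROP TABLE t;') == ['USE d', 'DROP TABLE t']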

+ 2 - 0
apps/impala/src/impala/urls.py

@@ -25,6 +25,8 @@ urlpatterns = [
   url(r'^api/refresh/(?P<database>\w+)/(?P<table>\w+)$', impala_api.refresh_table, name='refresh_table'),
   url(r'^api/query/(?P<query_history_id>\d+)/exec_summary$', impala_api.get_exec_summary, name='get_exec_summary'),
   url(r'^api/query/(?P<query_history_id>\d+)/runtime_profile', impala_api.get_runtime_profile, name='get_runtime_profile'),
+  url(r'^api/query/alanize$', impala_api.alanize, name='alanize'),
+  url(r'^api/query/alanize/fix$', impala_api.alanize_fix, name='alanize_fix'),
 ]
 
 urlpatterns += beeswax_urls
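
Both new routes are POST-only JSON endpoints. A hedged client-side example (host, port and the saved document id are placeholders, and authentication plus Hue's CSRF token are omitted):

    import requests

    base = 'http://hue.example.com:8888/impala/api/query'

    # Analyze the profile of a saved query document (fields are JSON-encoded):
    requests.post(base + '/alanize', data={'query_id': '1234', 'cluster': '{}'})

    # Apply fix 0 ("compute stats") to a table:
    requests.post(base + '/alanize/fix',
                  data={'fix': '{"id": 0, "data": "web.logs"}', 'start_time': '-1'})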

+ 8 - 0
apps/jobbrowser/src/jobbrowser/apis/base_api.py

@@ -31,6 +31,8 @@ LOG = logging.getLogger(__name__)
 def get_api(user, interface):
   from jobbrowser.apis.bundle_api import BundleApi
   from jobbrowser.apis.data_eng_api import DataEngClusterApi, DataEngJobApi
+  from jobbrowser.apis.clusters import ClusterApi
+  from jobbrowser.apis.data_warehouse import DataWarehouseClusterApi
   from jobbrowser.apis.livy_api import LivySessionsApi, LivyJobApi
   from jobbrowser.apis.job_api import JobApi
   from jobbrowser.apis.query_api import QueryApi
@@ -47,8 +49,14 @@ def get_api(user, interface):
     return ScheduleApi(user)
   elif interface == 'bundles':
     return BundleApi(user)
+  elif interface == 'engines':
+    return ClusterApi(user)
   elif interface == 'dataeng-clusters':
     return DataEngClusterApi(user)
+  elif interface == 'dataware-clusters':
+    return DataWarehouseClusterApi(user)
+  elif interface == 'dataware2-clusters':
+    return DataWarehouseClusterApi(user, version=2)
   elif interface == 'dataeng-jobs':
     return DataEngJobApi(user)
   elif interface == 'livy-sessions':
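
Dispatch is keyed purely on the interface string, so both Data Warehouse variants share one class that differs only in its version argument. Reduced to a runnable sketch:

    class DataWarehouseClusterApi(object):
        def __init__(self, user, version=1):
            self.user, self.version = user, version

    def get_api(user, interface):
        if interface == 'dataware-clusters':
            return DataWarehouseClusterApi(user)             # v1: AnalyticDbApi
        elif interface == 'dataware2-clusters':
            return DataWarehouseClusterApi(user, version=2)  # v2: DataWarehouse2Api
        raise ValueError('Unknown interface: %s' % interface)

    assert get_api('jo', 'dataware2-clusters').version == 2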

+ 1 - 2
apps/jobbrowser/src/jobbrowser/apis/bundle_api.py

@@ -25,7 +25,6 @@ from liboozie.oozie_api import get_oozie
 from jobbrowser.apis.base_api import Api, MockDjangoRequest
 from jobbrowser.apis.workflow_api import _manage_oozie_job, _filter_oozie_jobs
 from jobbrowser.apis.schedule_api import MockGet
-from oozie.views.dashboard import list_oozie_bundle
 
 
 LOG = logging.getLogger(__name__)
@@ -33,7 +32,7 @@ LOG = logging.getLogger(__name__)
 
 try:
   from oozie.conf import OOZIE_JOBS_COUNT
-  from oozie.views.dashboard import get_oozie_job_log, massaged_oozie_jobs_for_json
+  from oozie.views.dashboard import get_oozie_job_log, massaged_oozie_jobs_for_json, list_oozie_bundle
 except Exception, e:
   LOG.exception('Some application are not enabled: %s' % e)
 

+ 131 - 0
apps/jobbrowser/src/jobbrowser/apis/clusters.py

@@ -0,0 +1,131 @@
+#!/usr/bin/env python
+# Licensed to Cloudera, Inc. under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  Cloudera, Inc. licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+
+from datetime import datetime
+from dateutil import parser
+
+from django.utils import timezone
+from django.utils.translation import ugettext as _
+
+from notebook.connectors.altus import DataWarehouse2Api
+
+from jobbrowser.apis.base_api import Api
+
+
+
+LOG = logging.getLogger(__name__)
+
+
+RUNNING_STATES = ('QUEUED', 'RUNNING', 'SUBMITTING')
+
+
+class ClusterApi(Api):
+
+  def __init__(self, user, version=1):
+    super(ClusterApi, self).__init__(user)
+
+    self.version = version
+    self.api = DataWarehouse2Api(self.user) 
+
+
+  def apps(self, filters):
+    #jobs = self.api.list_clusters()
+
+    return {
+      u'status': 0,
+      u'total': 3,
+      u'apps': [
+        {u'status': u'ONLINE', u'name': u'Internal EDH', u'submitted': u'2018-10-04 08:34:39.128886', u'queue': u'group', u'user': u'jo0', u'canWrite': False, u'duration': 0, u'progress': u'100 / 100', u'type': u'GKE 100 nodes 100CPU 20TB', u'id': u'crn:altus:engine:k8s:12a0079b-1591-4ca0-b721-a446bda74e67:cluster:jo0/cbf7bbb1-f956-45e4-a269-d239efbc9996', u'apiStatus': u'RUNNING'},
+        {u'status': u'ONLINE', u'name': u'gke_gcp-eng-dsdw_us-west2-b_impala-demo', u'submitted': u'2018-10-04 08:34:39.128881', u'queue': u'group', u'user': u'r0', u'canWrite': False, u'duration': 0, u'progress': u'4 / 4', u'type': u'GKE 4 nodes 16CPU 64GB', u'id': u'crn:altus:engine:k8s:12a0079b-1591-4ca0-b721-a446bda74e67:cluster:r0/0da5e627-ee33-45c5-9179-cc6b95008d2e', u'apiStatus': u'RUNNING'},
+        {u'status': u'ONLINE', u'name': u'DW-fraud', u'submitted': u'2018-10-04 08:34:39.128881', u'queue': u'group', u'user': u'r0', u'canWrite': False, u'duration': 0, u'progress': u'50 / 50', u'type': u'OpenShift 50 nodes 30CPU 2TB', u'id': u'crn:altus:engine:k8s:12a0079b-1591-4ca0-b721-a446bda74e67:cluster:r0/0da5e627-ee33-45c5-9179-cc6b95008d2e', u'apiStatus': u'RUNNING'},
+      ]
+    }
+
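+    # The mocked payload above returns early, so everything below is
+    # unreachable scaffolding until list_clusters() is wired back in.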
+    return {
+      'apps': [{
+        'id': app['crn'],
+        'name': '%(clusterName)s' % app,
+        'status': app['status'],
+        'apiStatus': self._api_status(app['status']),
+        'type': 'Altus %(workersGroupSize)sX %(instanceType)s %(cdhVersion)s' % app,
+        'user': app['clusterName'].split('-', 1)[0],
+        'progress': app.get('progress', 100),
+        'queue': 'group',
+        'duration': ((datetime.now() - parser.parse(app['creationDate']).replace(tzinfo=None)).seconds * 1000) if app['creationDate'] else 0,
+        'submitted': app['creationDate'],
+        'canWrite': True
+      } for app in sorted(jobs['clusters'], key=lambda a: a['creationDate'], reverse=True)],
+      'total': len(jobs['clusters'])
+    }
+
+
+  def app(self, appid):
+    handle = self.api.describe_cluster(cluster_id=appid)
+
+    cluster = handle['cluster']
+
+    cluster['workerAutoResize'] = False
+
+    common = {
+        'id': cluster['crn'],
+        'name': cluster['clusterName'],
+        'status': cluster['status'],
+        'apiStatus': self._api_status(cluster['status']),
+        'progress': 50 if self._api_status(cluster['status']) == 'RUNNING' else 100,
+        'duration': 10 * 3600,
+        'submitted': cluster['creationDate'],
+        'type': 'dataware2-cluster' if self.version == 2 else 'dataware-cluster',
+        'canWrite': True
+    }
+
+    common['properties'] = {
+      'properties': cluster
+    }
+
+    return common
+
+  def action(self, appid, action):
+    message = {'message': '', 'status': 0}
+
+    if action.get('action') == 'kill':
+      for _id in appid:
+        result = self.api.delete_cluster(_id)
+        if result.get('error'):
+          message['message'] = result.get('error')
+          message['status'] = -1
+        elif result.get('contents') and message.get('status') != -1:
+          message['message'] = result.get('contents')
+
+    return message
+
+
+  def logs(self, appid, app_type, log_name=None, is_embeddable=False):
+    return {'logs': ''}
+
+
+  def profile(self, appid, app_type, app_property):
+    return {}
+
+  def _api_status(self, status):
+    if status in ['CREATING', 'CREATED', 'ONLINE', 'SCALING_UP', 'SCALING_DOWN', 'STARTING']: # ONLINE ... are from K8s
+      return 'RUNNING'
+    elif status in ['ARCHIVING', 'COMPLETED', 'TERMINATING', 'STOPPED']:
+      return 'SUCCEEDED'
+    else:
+      return 'FAILED' # KILLED and FAILED

+ 9 - 6
apps/jobbrowser/src/jobbrowser/apis/data_eng_api.py

@@ -29,6 +29,9 @@ from jobbrowser.apis.base_api import Api
 LOG = logging.getLogger(__name__)
 
 
+RUNNING_STATES = ('QUEUED', 'RUNNING', 'SUBMITTING')
+
+
 class DataEngClusterApi(Api):
 
   def apps(self, filters):
@@ -49,7 +52,7 @@ class DataEngClusterApi(Api):
         'duration': 1,
         'submitted': app['creationDate'],
         'canWrite': True
-      } for app in jobs['clusters']],
+      } for app in sorted(jobs['clusters'], key=lambda a: a['creationDate'], reverse=True)],
       'total': len(jobs)
     }
 
@@ -113,12 +116,12 @@ class DataEngJobApi(Api):
     return {
       'apps': [{
         'id': app['jobId'],
-        'name': app['creationDate'],
+        'name': app['jobName'],
         'status': app['status'],
         'apiStatus': self._api_status(app['status']),
         'type': 'Altus %(jobType)s' % app,
         'user': '',
-        'progress': 100,
+        'progress': 50 if self._api_status(app['status']) == 'RUNNING' else 100,
         'duration': 10 * 3600,
         'submitted': app['creationDate'],
         'canWrite': True
@@ -133,10 +136,10 @@ class DataEngJobApi(Api):
 
     common = {
         'id': job['jobId'],
-        'name': job['jobId'],
+        'name': job['jobName'],
         'status': job['status'],
         'apiStatus': self._api_status(job['status']),
-        'progress': 50,
+        'progress': 50 if self._api_status(job['status']) == 'RUNNING' else 100,
         'duration': 10 * 3600,
         'submitted': job['creationDate'],
         'type': 'dataeng-job-%s' % job['jobType'],
@@ -162,7 +165,7 @@ class DataEngJobApi(Api):
     return {}
 
   def _api_status(self, status):
-    if status in ['CREATING', 'CREATED', 'TERMINATING']:
+    if status in RUNNING_STATES:
       return 'RUNNING'
     elif status in ['COMPLETED']:
       return 'SUCCEEDED'

+ 121 - 0
apps/jobbrowser/src/jobbrowser/apis/data_warehouse.py

@@ -0,0 +1,121 @@
+#!/usr/bin/env python
+# Licensed to Cloudera, Inc. under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  Cloudera, Inc. licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+
+from datetime import datetime
+from dateutil import parser
+
+from django.utils import timezone
+from django.utils.translation import ugettext as _
+
+from notebook.connectors.altus import AnalyticDbApi, DataWarehouse2Api
+
+from jobbrowser.apis.base_api import Api
+
+
+
+LOG = logging.getLogger(__name__)
+
+
+RUNNING_STATES = ('QUEUED', 'RUNNING', 'SUBMITTING')
+
+
+class DataWarehouseClusterApi(Api):
+
+  def __init__(self, user, version=1):
+    super(DataWarehouseClusterApi, self).__init__(user)
+
+    self.version = version
+    self.api = DataWarehouse2Api(self.user) if version == 2 else AnalyticDbApi(self.user) 
+
+
+  def apps(self, filters):
+    jobs = self.api.list_clusters()
+
+    return {
+      'apps': [{
+        'id': app['crn'],
+        'name': '%(clusterName)s' % app,
+        'status': app['status'],
+        'apiStatus': self._api_status(app['status']),
+        'type': '%(instanceType)s' % app, #'Altus %(workersGroupSize)sX %(instanceType)s %(cdhVersion)s' % app,
+        'user': app['clusterName'].split('-', 1)[0],
+        'progress': app.get('progress', 100),
+        'queue': 'group',
+        'duration': ((datetime.now() - parser.parse(app['creationDate']).replace(tzinfo=None)).seconds * 1000) if app['creationDate'] else 0,
+        'submitted': app['creationDate'],
+        'canWrite': True
+      } for app in sorted(jobs['clusters'], key=lambda a: a['creationDate'], reverse=True)],
+      'total': len(jobs['clusters'])
+    }
+
+
+  def app(self, appid):
+    handle = self.api.describe_cluster(cluster_id=appid)
+
+    cluster = handle['cluster']
+
+    common = {
+        'id': cluster['crn'],
+        'name': cluster['clusterName'],
+        'status': cluster['status'],
+        'apiStatus': self._api_status(cluster['status']),
+        'progress': 50 if self._api_status(cluster['status']) == 'RUNNING' else 100,
+        'duration': 10 * 3600,
+        'submitted': cluster['creationDate'],
+        'type': 'dataware2-cluster' if self.version == 2 else 'dataware-cluster',
+        'canWrite': True
+    }
+
+    common['properties'] = {
+      'properties': cluster
+    }
+
+    return common
+
+  def action(self, appid, action):
+    message = {'message': '', 'status': 0}
+
+    if action.get('action') == 'kill':
+      for _id in appid:
+        result = self.api.delete_cluster(_id)
+        if result.get('error'):
+          message['message'] = result.get('error')
+          message['status'] = -1
+        elif result.get('contents') and message.get('status') != -1:
+          message['message'] = result.get('contents')
+
+    return message
+
+
+  def logs(self, appid, app_type, log_name=None, is_embeddable=False):
+    return {'logs': ''}
+
+
+  def profile(self, app_id, app_type, app_property, app_filters):
+    return {}
+
+  def _api_status(self, status):
+    if status in ['CREATING', 'CREATED', 'ONLINE', 'SCALING_UP', 'SCALING_DOWN', 'STARTING']:
+      return 'RUNNING'
+    elif status == 'STOPPED':
+      return 'PAUSED'
+    elif status in ['ARCHIVING', 'COMPLETED', 'TERMINATING', 'TERMINATED']:
+      return 'SUCCEEDED'
+    else:
+      return 'FAILED' # KILLED and FAILED
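
Both new cluster APIs normalize provider-specific states into the handful of statuses the Job Browser UI knows; this file additionally maps STOPPED to PAUSED, where clusters.py folds it into SUCCEEDED. The mapping, condensed into a testable form (state lists copied from the code above):

    RUNNING_SET = ('CREATING', 'CREATED', 'ONLINE', 'SCALING_UP', 'SCALING_DOWN', 'STARTING')
    SUCCEEDED_SET = ('ARCHIVING', 'COMPLETED', 'TERMINATING', 'TERMINATED')

    def api_status(status):
        if status in RUNNING_SET:
            return 'RUNNING'
        elif status == 'STOPPED':
            return 'PAUSED'
        elif status in SUCCEEDED_SET:
            return 'SUCCEEDED'
        return 'FAILED'  # KILLED, FAILED, anything unknown

    assert api_status('ONLINE') == 'RUNNING' and api_status('STOPPED') == 'PAUSED'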

+ 5 - 4
apps/jobbrowser/src/jobbrowser/apis/job_api.py

@@ -148,7 +148,7 @@ class YarnApi(Api):
         'name': app['name'],
         'type': app['applicationType'],
         'status': app['status'],
-        'apiStatus': self._api_status(app['status'], app['applicationType']),
+        'apiStatus': self._api_status(app['status']),
         'user': app['user'],
         'progress': app['progress'],
         'duration': app['durationMs'],
@@ -180,9 +180,10 @@ class YarnApi(Api):
       }
     elif app['applicationType'] == 'SPARK':
       app['logs'] = job.logs_url if hasattr(job, 'logs_url') else ''
+      app['trackingUrl'] = job.trackingUrl if hasattr(job, 'trackingUrl') else ''
       common['type'] = 'SPARK'
       common['properties'] = {
-        'metadata': [{'name': name, 'value': value} for name, value in app.iteritems()],
+        'metadata': [{'name': name, 'value': value} for name, value in app.iteritems() if name != "url" and name != "killUrl"],
         'executors': []
       }
       if hasattr(job, 'metrics'):
@@ -258,10 +259,10 @@ class YarnApi(Api):
         }
     return {}
 
-  def _api_status(self, status, app_type=None):
+  def _api_status(self, status):
     if status in ['NEW', 'NEW_SAVING', 'SUBMITTED', 'ACCEPTED', 'RUNNING']:
       return 'RUNNING'
-    elif status == 'SUCCEEDED' or (app_type == 'Oozie Launcher' and status == 'FINISHED'):
+    elif status == 'SUCCEEDED':
       return 'SUCCEEDED'
     else:
       return 'FAILED' # FAILED, KILLED

+ 78 - 29
apps/jobbrowser/src/jobbrowser/apis/query_api.py

@@ -66,6 +66,7 @@ class QueryApi(Api):
         'user': job['effective_user'],
         'queue': job.get('resource_pool'),
         'progress': job['progress'],
+        'isRunning': job['start_time'] > job['end_time'],
         'canWrite': job in jobs['in_flight_queries'],
         'duration': self._time_in_ms_groups(re.search(r"\s*(([\d.]*)([a-z]*))(([\d.]*)([a-z]*))?(([\d.]*)([a-z]*))?", job['duration'], re.MULTILINE).groups()),
         'submitted': job['start_time'],
@@ -114,28 +115,20 @@ class QueryApi(Api):
       }
     app = apps.get('apps')[0]
     progress_groups = re.search(r"([\d\.\,]+)%", app.get('progress'))
+    app.update({
+      'progress': float(progress_groups.group(1)) if progress_groups and progress_groups.group(1) else 100 if self._api_status(app.get('status')) in ['SUCCEEDED', 'FAILED'] else 1,
+      'type': 'queries',
+      'doc_url': "%s/query_plan?query_id=%s" % (self.api.url, appid),
+      'properties': {
+        'memory': '',
+        'profile': '',
+        'plan': '',
+        'backends': '',
+        'finstances': ''
+      }
+    })
 
-    common = {
-        'id': app.get('id'),
-        'name': app.get('name'),
-        'status': app.get('status'),
-        'apiStatus': app.get('apiStatus'),
-        'user': app.get('user'),
-        'progress': float(progress_groups.group(1)) if progress_groups and progress_groups.group(1) else 100,
-        'duration': app.get('duration'),
-        'submitted': app.get('submitted'),
-        'type': 'queries',
-        'doc_url': "%s/query_plan?query_id=%s" % (self.api.url, appid)
-    }
-
-    common['properties'] = {
-      'memory': '',
-      'profile': '',
-      'plan': ''
-    }
-
-    return common
-
+    return app
 
   def action(self, appid, action):
     message = {'message': '', 'status': 0}
@@ -161,9 +154,16 @@ class QueryApi(Api):
       return self._memory(appid, app_type, app_property, app_filters)
     elif app_property == 'profile':
       return self._query_profile(appid)
+    elif app_property == 'backends':
+      return self._query_backends(appid)
+    elif app_property == 'finstances':
+      return self._query_finstances(appid)
     else:
       return self._query(appid)
 
+  def profile_encoded(self, appid):
+    return self.api.get_query_profile_encoded(query_id=appid)
+
   def _memory(self, appid, app_type, app_property, app_filters):
     return self.api.get_query_memory(query_id=appid);
 
@@ -171,26 +171,75 @@ class QueryApi(Api):
     query = self.api.get_query(query_id=appid)
     query['summary'] = query.get('summary').strip() if query.get('summary') else ''
     query['plan'] = query.get('plan').strip() if query.get('plan') else ''
+    if query['plan_json']:
+      def get_exchange_icon(o):
+        if re.search(r'broadcast', o['label_detail'], re.IGNORECASE):
+          return { 'svg': 'hi-broadcast' }
+        elif re.search(r'hash', o['label_detail'], re.IGNORECASE):
+          return { 'font': 'fa-random' }
+        else:
+          return { 'font': 'fa-exchange' }
+      mapping = {
+        'TOP-N': { 'type': 'TOPN', 'icon': { 'svg': 'hi-filter' } },
+        'SORT': { 'type': 'SORT', 'icon': { 'svg': 'hi-sort' } },
+        'MERGING-EXCHANGE': {'type': 'EXCHANGE', 'icon': { 'fn': get_exchange_icon } },
+        'EXCHANGE': { 'type': 'EXCHANGE', 'icon': { 'fn': get_exchange_icon } },
+        'SCAN HDFS': { 'type': 'SCAN_HDFS', 'icon': { 'font': 'fa-files-o' } },
+        'SCAN KUDU': { 'type': 'SCAN_KUDU', 'icon': { 'font': 'fa-table' } },
+        'SCAN HBASE': { 'type': 'SCAN_HBASE', 'icon': { 'font': 'fa-th-large' } },
+        'HASH JOIN': { 'type': 'HASH_JOIN', 'icon': { 'svg': 'hi-join' } },
+        'AGGREGATE': { 'type': 'AGGREGATE', 'icon': { 'svg': 'hi-sigma' } },
+        'NESTED LOOP JOIN': { 'type': 'LOOP_JOIN', 'icon': { 'svg': 'hi-nested-loop' } },
+        'SUBPLAN': { 'type': 'SUBPLAN', 'icon': { 'svg': 'hi-map' } },
+        'UNNEST': { 'type': 'UNNEST', 'icon': { 'svg': 'hi-unnest' } },
+        'SINGULAR ROW SRC': { 'type': 'SINGULAR', 'icon': { 'svg': 'hi-vertical-align' } },
+        'ANALYTIC': { 'type': 'SINGULAR', 'icon': { 'svg': 'hi-timeline' } },
+        'UNION': { 'type': 'UNION', 'icon': { 'svg': 'hi-merge' } }
+      }
+      def process(node, mapping=mapping):
+        node['id'], node['name'] = node['label'].split(':')
+        details = mapping.get(node['name'])
+        if details:
+          icon = details['icon']
+          if icon and icon.get('fn'):
+            icon = icon['fn'](node)
+          node['icon'] = icon
+
+      for node in query['plan_json']['plan_nodes']:
+        self._for_each_node(node, process)
     return query
 
+  def _for_each_node(self, node, fn):
+    fn(node)
+    for child in node['children']:
+      self._for_each_node(child, fn)
+
   def _query_profile(self, appid):
     return self.api.get_query_profile(query_id=appid)
 
+  def _query_backends(self, appid):
+    return self.api.get_query_backends(query_id=appid)
+
+  def _query_finstances(self, appid):
+    return self.api.get_query_finstances(query_id=appid)
+
   def _api_status_filter(self, status):
-    if status in ['RUNNING', 'CREATED']:
-      return 'RUNNING'
-    elif status in ['FINISHED']:
+    if status == 'FINISHED':
       return 'COMPLETED'
-    else:
+    elif status == 'EXCEPTION':
       return 'FAILED'
+    elif status == 'RUNNING':
+      return 'RUNNING'
 
   def _api_status(self, status):
-    if status in ['RUNNING', 'CREATED']:
-      return 'RUNNING'
-    elif status in ['FINISHED']:
+    if status == 'FINISHED':
       return 'SUCCEEDED'
-    else:
+    elif status == 'EXCEPTION':
       return 'FAILED'
+    elif status == 'RUNNING':
+      return 'RUNNING'
+    else:
+      return 'PAUSED'
 
   def _get_filter_list(self, filters):
     filter_list = []
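
_for_each_node() above is a plain pre-order walk over Impala's plan_nodes tree; process() then derives id/name from each node's 'label' and attaches an icon from the mapping. A standalone version of the traversal (the real one indexes node['children'] directly; .get() here just keeps the sketch tolerant of missing keys):

    def for_each_node(node, fn):
        fn(node)                                  # visit the parent first
        for child in node.get('children', []):
            for_each_node(child, fn)              # then recurse

    plan = {'label': '0:EXCHANGE',
            'children': [{'label': '1:SCAN HDFS', 'children': []}]}
    names = []
    for_each_node(plan, lambda n: names.append(n['label'].split(':')[1]))
    assert names == ['EXCHANGE', 'SCAN HDFS']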

+ 1 - 1
apps/jobbrowser/src/jobbrowser/apis/schedule_api.py

@@ -34,7 +34,7 @@ try:
   from oozie.conf import OOZIE_JOBS_COUNT
   from oozie.views.dashboard import list_oozie_coordinator, get_oozie_job_log, massaged_oozie_jobs_for_json, has_job_edition_permission
 except Exception, e:
-  LOG.exception('Some application are not enabled: %s' % e)
+  LOG.warn('Some applications are not enabled: %s' % e)
 
 
 class ScheduleApi(Api):

+ 5 - 3
apps/jobbrowser/src/jobbrowser/apis/workflow_api.py

@@ -32,8 +32,10 @@ try:
   from oozie.conf import OOZIE_JOBS_COUNT, ENABLE_OOZIE_BACKEND_FILTERING
   from oozie.views.dashboard import get_oozie_job_log, list_oozie_workflow, manage_oozie_jobs, bulk_manage_oozie_jobs, has_dashboard_jobs_access, massaged_oozie_jobs_for_json, \
       has_job_edition_permission
+  has_oozie_installed = True
 except Exception, e:
-  LOG.exception('Some applications are not enabled for Job Browser v2: %s' % e)
+  LOG.warn('Some applications are not enabled for Job Browser v2: %s' % e)
+  has_oozie_installed = False
 
 
 class WorkflowApi(Api):
@@ -200,7 +202,7 @@ def _manage_oozie_job(user, action, app_ids):
 def _filter_oozie_jobs(user, filters, kwargs):
     text_filters = _extract_query_params(filters)
 
-    if not has_dashboard_jobs_access(user):
+    if has_oozie_installed and not has_dashboard_jobs_access(user):
       kwargs['filters'].append(('user', user.username))
     elif 'username' in text_filters:
       kwargs['filters'].append(('user', text_filters['username']))
@@ -208,7 +210,7 @@ def _filter_oozie_jobs(user, filters, kwargs):
     if 'time' in filters:
       kwargs['filters'].extend([('startcreatedtime', '-%s%s' % (filters['time']['time_value'], filters['time']['time_unit'][:1]))])
 
-    if hasattr(ENABLE_OOZIE_BACKEND_FILTERING, 'get') and ENABLE_OOZIE_BACKEND_FILTERING.get() and text_filters.get('text'):
+    if has_oozie_installed and ENABLE_OOZIE_BACKEND_FILTERING.get() and text_filters.get('text'):
       kwargs['filters'].extend([('text', text_filters.get('text'))])
 
     if filters['pagination']:
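
The has_oozie_installed flag makes the optional dependency explicit: instead of importing and hoping, the filter helpers skip Oozie-only branches when the app is disabled. Generic shape of the guard (the real code catches broad Exception rather than ImportError):

    try:
        from oozie.conf import ENABLE_OOZIE_BACKEND_FILTERING  # optional app
        has_oozie_installed = True
    except ImportError:
        has_oozie_installed = False

    def backend_filtering_enabled():
        # Every use of the optional import stays behind the flag, so a
        # missing app degrades to False instead of raising a NameError.
        return has_oozie_installed and ENABLE_OOZIE_BACKEND_FILTERING.get()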

File diff suppressed because it is too large
+ 0 - 1
apps/jobbrowser/src/jobbrowser/static/jobbrowser/css/jobbrowser-embeddable.css


+ 132 - 20
apps/jobbrowser/src/jobbrowser/static/jobbrowser/js/impala_dagre.js

@@ -13,21 +13,87 @@ function impalaDagre(id) {
   var svg = d3.select("#"+id + " svg");
   var inner = svg.select("g");
   var _impalaDagree = {
+    init: function (initialScale) {
+      _impalaDagree.scale = initialScale;
+      zoom.translate([((svg.attr("width") || $("#"+id).width()) - g.graph().width * initialScale) / 2, 20])
+      .scale(initialScale)
+      .event(svg);
+    },
     update: function(plan) {
       renderGraph(plan);
+      _impalaDagree._width = $(svg[0]).width();
     },
     height: function(value) {
       var scale = _impalaDagree.scale || 1;
       var height = value || 600;
-      svg.attr('height', Math.min(g.graph().height * scale + 40, height) || height);
-    }
+      _impalaDagree._height = height;
+      svg.attr('height', height);
+    },
+    action: function(type) {
+      if (type == 'plus') {
+        zoom.scale(zoom.scale() + 0.25)
+        .event(svg);
+      } else if (type == 'minus') {
+        zoom.scale(zoom.scale() - 0.25)
+        .event(svg);
+      } else if (type == 'reset') {
+        _impalaDagree.init(1);
+      }
+    },
+    moveTo: function(id) {
+      zoomToNode(id);
+    },
+    select: function(id) {
+      select(id);
+    },
   };
+  createActions();
+
+  function createActions () {
+    d3.select("#"+id)
+      .style('position', 'relative')
+    .append('div')
+      .style('position', 'absolute')
+      .style('right', '5px')
+      .style('bottom', '5px')
+      .classed('buttons', true)
+    .selectAll('button').data([{ type: 'reset', svg: 'hi-crop-free', divider: true }, { type: 'plus', icon: 'fa-plus', divider: true }, { type: 'minus', icon: 'fa-minus' }])
+    .enter()
+    .append(function (data) {
+      var text = "";
+      if (data.svg) {
+        text += "<div><svg class='hi'><use xlink:href='#"+ data.svg +"'></use></svg>";
+        if (data.divider) {
+          text += "<div class='divider'></div>";
+        }
+        text += "</div>";
+      } else if (data.icon) {
+        text += "<div><div class='fa fa-fw valign-middle " + data.icon + "'></div>";
+        if (data.divider) {
+          text += "<div class='divider'></div></div>";
+        }
+        text += "</div>";
+      }
+      var button = $(text)[0];
+      $(button).on('click', function () {
+        _impalaDagree.action(data.type);
+      });
+      return button;
+    });
+  }
 
   // Set up zoom support
   var zoom = d3.behavior.zoom().on("zoom", function() {
-    _impalaDagree.scale = d3.event.scale;
-    inner.attr("transform", "translate(" + d3.event.translate + ")" +
-               "scale(" + d3.event.scale + ")");
+    var e = d3.event,
+        scale = Math.min(Math.max(e.scale, Math.min(_impalaDagree._width / g.graph().width, _impalaDagree._height / g.graph().height)), 2),
+        tx = Math.min(40, Math.max(e.translate[0], _impalaDagree._width - 40 - g.graph().width * scale)),
+        ty = Math.min(40, Math.max(e.translate[1], _impalaDagree._height - 40 - g.graph().height * scale));
+    _impalaDagree.scale = scale;
+    zoom.translate([tx, ty]);
+    zoom.scale(scale);
+    inner.attr("transform", "translate(" + [tx, ty] + ")" +
+               "scale(" + scale + ")");
   });
   svg.call(zoom);
 
@@ -42,15 +108,20 @@ function impalaDagre(id) {
 
   // Recursively build a list of edges and states that comprise the plan graph
   function build(node, parent, edges, states, colour_idx, max_node_time) {
+    if (node["output_card"] === null || node["output_card"] === undefined) {
+      return;
+    }
     states.push({ "name": node["label"],
+                  "type": node["type"],
+                  "label": node["name"],
                   "detail": node["label_detail"],
                   "num_instances": node["num_instances"],
                   "num_active": node["num_active"],
                   "max_time": node["max_time"],
                   "avg_time": node["avg_time"],
+                  "icon": node["icon"],
                   "is_broadcast": node["is_broadcast"],
-                  "max_time_val": node["max_time_val"],
-                  "style": "fill: " + colours[colour_idx]});
+                  "max_time_val": node["max_time_val"]});
     if (parent) {
       var label_val = "" + node["output_card"].toLocaleString();
       edges.push({ start: node["label"], end: parent,
@@ -62,7 +133,7 @@ function impalaDagre(id) {
       edges.push({ "start": node["label"],
                    "end": node["data_stream_target"],
                    "style": { label: "" + node["output_card"].toLocaleString(),
-                              style: "stroke: #f66; stroke-dasharray: 5, 5;"}});
+                              style: "stroke-dasharray: 5, 5;"}});
     }
     max_node_time = Math.max(node["max_time_val"], max_node_time)
     for (var i = 0; i < node["children"].length; ++i) {
@@ -74,6 +145,46 @@ function impalaDagre(id) {
 
   var is_first = true;
 
+  function select(node) {
+    var key = getKey(node);
+    if (!key) {
+      return;
+    }
+    $("g.node").attr('class', 'node') // addClass doesn't work in svg on our version of jQuery
+    $("g.node:contains('" + key + "')").attr('class', 'node active');
+  }
+
+  function getKey(node) {
+    var nodes = g.nodes();
+    var key;
+    var nNode = parseInt(node, 10);
+    var keys = Object.keys(nodes);
+    for (var i = 0; i < keys.length; i++) {
+      if (parseInt(nodes[keys[i]].split(':')[0], 10) == nNode) {
+        key = nodes[keys[i]];
+        break;
+      }
+    }
+    return key;
+  }
+
+  function zoomToNode(node) {
+    var key = getKey(node);
+    if (!key) {
+      return;
+    }
+    var n = $("g.node:contains('" + key + "')")[0];
+    var t = d3.transform(d3.select(n).attr("transform")),
+        x = t.translate[0],
+        y = t.translate[1];
+
+    var scale = 1;
+
+    svg.transition().duration(1000)
+        .call(zoom.translate([((x * -scale) + (svg.property("clientWidth") / 2)), ((y * -scale) + svg.property("clientHeight") / 2)])
+            .scale(scale).event);
+  }
+
   function renderGraph(plan) {
     if (!plan || !plan.plan_nodes || !plan.plan_nodes.length) return;
     var states = [];
@@ -92,14 +203,18 @@ function impalaDagre(id) {
     var states_by_name = { };
     states.forEach(function(state) {
       // Build the label for the node from the name and the detail
-      var html = "<span>" + state.name + "</span><br/>";
-      html += "<span>" + state.detail + "</span><br/>";
-      html += "<span>" + state.num_instances + " instance";
-      if (state.num_instances > 1) {
-        html += "s";
+      var html = "";
+      if (state.icon && state.icon.svg) {
+        html += '<svg class="hi"><use xlink:href="#'+ state.icon.svg +'"></use></svg>'
+        //html += "<img src=\"" + icon.svg + "\"></img>";
+      } else if (state.icon && state.icon.font){
+        html += "<span class='fa fa-fw valign-middle " + state.icon.font + "'></span>";
       }
-      html += "</span><br/>";
-      html += "<span>Max: " + state.max_time + ", avg: " + state.avg_time + "</span>";
+      html += "<span class='name'>" + state.label + "</span><br/>";
+      html += "<span class='metric'>" + state.max_time + "</span>";
+      html += "<span class='detail'>" + state.detail + "</span><br/>";
+      html += "<span class='metric'>" + state.max_time + "</span>"
+      html += "<span class='id'>" + state.name + "</span>";;
 
       var style = state.style;
 
@@ -120,7 +235,7 @@ function impalaDagre(id) {
       // Impala marks 'broadcast' as a property of the receiver, not the sender. We use
       // '(BCAST)' to denote that a node is duplicating its output to all receivers.
       if (states_by_name[edge.end].is_broadcast) {
-        edge.style.label += " \n(BCAST * " + states_by_name[edge.end].num_instances + ")";
+        edge.style.label += " * " + states_by_name[edge.end].num_instances;
       }
       g.setEdge(edge.start, edge.end, edge.style);
     });
@@ -139,10 +254,7 @@ function impalaDagre(id) {
     // Center the graph, but only the first time through (so as to not lose user zooms).
     if (is_first) {
       var initialScale = 1;
-      _impalaDagree.scale = initialScale;
-      zoom.translate([((svg.attr("width") || $("#"+id).width()) - g.graph().width * initialScale) / 2, 20])
-        .scale(initialScale)
-        .event(svg);
+      _impalaDagree.init(initialScale);
       svg.attr('height', Math.min(g.graph().height * initialScale + 40, 600));
       is_first = false;
     }

+ 102 - 10
apps/jobbrowser/src/jobbrowser/static/jobbrowser/less/jobbrowser-embeddable.less

@@ -90,17 +90,109 @@
   }
 
   .query-plan {
-    border: 1px solid @cui-gray-300
-  }
-
-  .node rect {
-    stroke: @cui-gray-300;
-    fill: @cui-white;
+    border: 1px solid @cui-gray-300;
+    .label,
+    .badge {
+      color: @cui-gray-800;
+      text-shadow: none;
+    }
+    .metric {
+      position: absolute;
+      top: 0px;
+      right: 0px;
+      font-weight: normal;
+    }
+    .name {
+      padding-right: 80px;
+    }
+    .detail {
+      font-weight: normal;
+      overflow: hidden;
+      text-overflow: ellipsis;
+      width: calc(~"100% - 32px");
+      display: inline-block;
+    }
+    span.fa {
+      color: @cui-gray-600;
+      float: left;
+      font-size: 21px;
+      padding-top: 3px;
+      padding-right: 2px;
+    }
+    svg.hi {
+      color: @cui-gray-600 !important;
+      float: left;
+      width: 2.2em !important;
+      height: 2.2em !important;
+      padding-right: 5px;
+    }
+    .buttons {
+      .hi {
+        color: @cui-gray-600 !important;
+        width: 2.2em !important;
+        height: 2.2em !important;
+        padding-left: 2px;
+      }
+      .divider {
+        background-color: @cui-gray-300;
+        width: 10px;
+        height: 1px;
+        margin-left: 5px;
+      }
+    }
+    .id {
+      display: none;
+    }
+    .node.active {
+      rect {
+        filter: url(#dropshadow);
+        stroke: @hue-primary-color-dark;
+        fill: @hue-primary-color-light;
+      }
+    }
+    .output {
+      path,
+      rect {
+        stroke: @cui-gray-600;
+      }
+    }
+    .buttons {
+      background-color: white;
+      box-shadow: 0px 0px 2px 0px;
+      border: 1px solid @cui-gray-300;
+      color: @cui-gray-600;
+      div {
+        line-height: 32px !important;
+        display: block;
+        width: 32px;
+        height: 32px;
+      }
+    }
+    .button div:hover {
+      color: @hue-primary-color-dark
+    }
+    .node rect {
+      fill: @cui-white;
+      stroke-width: 1px
+    }
+    .edgePath > path {
+      fill: none;
+      stroke-width: 1.5px;
+    }
+    .edgePath marker path {
+      fill: @cui-gray-600;
+      stroke-width: 1.5px;
+    }
+    .edgeLabel text {
+      font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
+    }
+    foreignObject > div {
+      position: relative;
+    }
   }
 
-  .edgePath path {
-    stroke: @cui-gray-800;
-    fill: @cui-gray-800;
-    stroke-width: 1.5px;
+  div[data-jobType="queries"] pre {
+    overflow-x: auto;
+    white-space: pre;
   }
 }

File diff suppressed because it is too large
+ 565 - 228
apps/jobbrowser/src/jobbrowser/templates/job_browser.mako


+ 8 - 7
apps/jobbrowser/src/jobbrowser/tests.py

@@ -514,6 +514,7 @@ class TestResourceManagerHaNoHadoop:
 
   def tearDown(self):
     resource_manager_api.ResourceManagerApi = getattr(resource_manager_api, 'old_ResourceManagerApi')
+    resource_manager_api.API_CACHE = None
     mapreduce_api.get_mapreduce_api = getattr(mapreduce_api, 'old_get_mapreduce_api')
     history_server_api.get_history_server_api = getattr(history_server_api, 'old_get_history_server_api')
 
@@ -1222,30 +1223,30 @@ def test_make_log_links():
 
   # JobBrowser
   assert_equal(
-      """<a href="/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>""",
+      """<a href="/hue/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>""",
       LinkJobLogs._make_links('job_201306261521_0058')
   )
   assert_equal(
-      """Hadoop Job IDs executed by Pig: <a href="/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>""",
+      """Hadoop Job IDs executed by Pig: <a href="/hue/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>""",
       LinkJobLogs._make_links('Hadoop Job IDs executed by Pig: job_201306261521_0058')
   )
   assert_equal(
-      """MapReduceLauncher  - HadoopJobId: <a href="/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>""",
+      """MapReduceLauncher  - HadoopJobId: <a href="/hue/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>""",
       LinkJobLogs._make_links('MapReduceLauncher  - HadoopJobId: job_201306261521_0058')
   )
   assert_equal(
-      """- More information at: http://localhost:50030/jobdetails.jsp?jobid=<a href="/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>""",
+      """- More information at: http://localhost:50030/jobdetails.jsp?jobid=<a href="/hue/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>""",
       LinkJobLogs._make_links('- More information at: http://localhost:50030/jobdetails.jsp?jobid=job_201306261521_0058')
   )
   assert_equal(
-      """ Logging error messages to: <a href="/jobbrowser/jobs/job_201307091553_0028">job_201307091553_0028</a>/attempt_201307091553_002""",
+      """ Logging error messages to: <a href="/hue/jobbrowser/jobs/job_201307091553_0028">job_201307091553_0028</a>/attempt_201307091553_002""",
       LinkJobLogs._make_links(' Logging error messages to: job_201307091553_0028/attempt_201307091553_002')
   )
   assert_equal(
-      """ pig-<a href="/jobbrowser/jobs/job_201307091553_0028">job_201307091553_0028</a>.log""",
+      """ pig-<a href="/hue/jobbrowser/jobs/job_201307091553_0028">job_201307091553_0028</a>.log""",
       LinkJobLogs._make_links(' pig-job_201307091553_0028.log')
   )
   assert_equal(
-      """MapReduceLauncher  - HadoopJobId: <a href="/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>. Look at the UI""",
+      """MapReduceLauncher  - HadoopJobId: <a href="/hue/jobbrowser/jobs/job_201306261521_0058">job_201306261521_0058</a>. Look at the UI""",
       LinkJobLogs._make_links('MapReduceLauncher  - HadoopJobId: job_201306261521_0058. Look at the UI')
   )

+ 4 - 1
apps/jobbrowser/src/jobbrowser/yarn_models.py

@@ -351,7 +351,10 @@ class OozieYarnJob(Job):
   def _fixup(self):
     jobid = self.id
 
-    setattr(self, 'status', self.state)
+    if self.state in ('FINISHED', 'FAILED', 'KILLED'):
+      setattr(self, 'status', self.finalStatus)
+    else:
+      setattr(self, 'status', self.state)
     setattr(self, 'jobName', self.name)
     setattr(self, 'jobId', jobid)
     setattr(self, 'jobId_short', self.jobId.replace('job_', ''))
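
Background for the fix above: YARN exposes both a lifecycle 'state' (e.g. FINISHED) and an outcome 'finalStatus' (SUCCEEDED, FAILED, KILLED), and a FINISHED Oozie launcher may still have failed. Once the state is terminal, the outcome is the status worth showing:

    TERMINAL_STATES = ('FINISHED', 'FAILED', 'KILLED')

    def effective_status(state, final_status):
        # Before completion only the lifecycle state exists; afterwards the
        # outcome is what users actually care about.
        return final_status if state in TERMINAL_STATES else state

    assert effective_status('FINISHED', 'SUCCEEDED') == 'SUCCEEDED'
    assert effective_status('RUNNING', 'UNDEFINED') == 'RUNNING'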

+ 31 - 25
apps/metastore/src/metastore/static/metastore/js/metastore.ko.js

@@ -32,6 +32,7 @@ var MetastoreViewModel = (function () {
     self.apiHelper.withTotalStorage('assist', 'assist_panel_visible', self.isLeftPanelVisible, true);
     self.optimizerEnabled = ko.observable(options.optimizerEnabled || false);
     self.navigatorEnabled = ko.observable(options.navigatorEnabled || false);
+    self.appConfig = ko.observable();
 
     self.source = ko.observable();
     self.sources = ko.observableArray();
@@ -115,6 +116,7 @@ var MetastoreViewModel = (function () {
           return;
         }
       }
+    });
 
       if (self.source().namespace().id !== databaseDef.namespace.id) {
         var found = self.source().namespaces().some(function (namespace) {
@@ -272,10 +274,10 @@ var MetastoreViewModel = (function () {
 
     loadedDeferred.done(function () {
       var search = location.search;
+      var namespaceId;
+      var sourceType;
       if (search) {
         search = search.replace('?', '');
-        var namespaceId;
-        var sourceType;
         search.split('&').forEach(function (param) {
           if (param.indexOf('namespace=') === 0) {
             namespaceId = param.replace('namespace=', '');
@@ -284,39 +286,43 @@ var MetastoreViewModel = (function () {
             sourceType = param.replace('source=', '');
           }
         });
+      }
+
+      if (sourceType && sourceType !== self.source().type) {
+        var found = self.sources().some(function (source) {
+          if (source.type === sourceType) {
+            self.source(source);
+            return true;
+          }
+        });
+        if (!found) {
+          sourceAndNamespaceDeferred.reject();
+          return;
+        }
+      }
 
-        if (sourceType && sourceType !== self.source().type) {
-          var found = self.sources().some(function (source) {
-            if (source.type === sourceType) {
-              self.source(source);
+      if (!namespaceId && ApiHelper.getInstance().getFromTotalStorage('contextSelector', 'lastSelectedNamespace')) {
+        namespaceId = ApiHelper.getInstance().getFromTotalStorage('contextSelector', 'lastSelectedNamespace').id;
+      }
+
+      self.source().lastLoadNamespacesDeferred.done(function () {
+        if (namespaceId && namespaceId !== self.source().namespace().id) {
+          var found = self.source().namespaces().some(function (namespace) {
+            if (namespace.id === namespaceId) {
+              self.source().namespace(namespace);
               return true;
             }
           });
           if (!found) {
             sourceAndNamespaceDeferred.reject();
             return;
-          }
-        }
-
-        self.source().lastLoadNamespacesDeferred.done(function () {
-          if (namespaceId && namespaceId !== self.source().namespace().id) {
-            var found = self.source().namespaces().some(function (namespace) {
-              if (namespace.id === namespaceId) {
-                self.source().namespace(namespace);
-                return true;
-              }
-            });
-            if (!found) {
-              sourceAndNamespaceDeferred.reject();
-              return;
-            } else {
-              sourceAndNamespaceDeferred.resolve();
-            }
           } else {
             sourceAndNamespaceDeferred.resolve();
           }
-        });
-      }
+        } else {
+          sourceAndNamespaceDeferred.resolve();
+        }
+      });
     });
 
 

+ 15 - 5
apps/metastore/src/metastore/static/metastore/js/metastore.model.js

@@ -36,16 +36,24 @@ var MetastoreSource = (function () {
     });
 
     // When manually changed through dropdown
-    self.namespaceChanged = function () {
-      huePubSub.publish('metastore.url.change')
+    self.namespaceChanged = function (newNamespace, previousNamespace) {
+      if (previousNamespace.database() && !self.namespace().database()) {
+        // Try to set the same database by name, if not there it will revert to 'default'
+        self.namespace().setDatabaseByName(previousNamespace.database().catalogEntry.name, function () {
+          huePubSub.publish('metastore.url.change');
+        });
+      } else {
+        huePubSub.publish('metastore.url.change');
+      }
     };
 
     huePubSub.subscribe("assist.db.panel.ready", function () {
       self.lastLoadNamespacesDeferred.done(function () {
+        var lastSelectedDb = ApiHelper.getInstance().getFromTotalStorage('assist_' + self.sourceType + '_' + self.namespace.id, 'lastSelectedDb', 'default');
         huePubSub.publish('assist.set.database', {
           source: self.type,
           namespace: self.namespace().namespace,
-          name: null
+          name: lastSelectedDb
         });
       });
     });
@@ -151,7 +159,8 @@ var MetastoreSource = (function () {
     var self = this;
     self.loading(true);
     ContextCatalog.getNamespaces({ sourceType: self.type }).done(function (context) {
-      self.namespaces($.map(context.namespaces, function (namespace) {
+      var namespacesWithComputes = context.namespaces.filter(function (namespace) { return namespace.computes.length });
+      self.namespaces($.map(namespacesWithComputes, function (namespace) {
         return new MetastoreNamespace({
           metastoreViewModel: self.metastoreViewModel,
           sourceType: self.type,
@@ -940,7 +949,8 @@ var MetastoreTable = (function () {
       $.post('/tables/drop/' + self.database.catalogEntry.name, {
         table_selection: ko.mapping.toJSON([self.name]),
         skip_trash: 'off',
-        is_embeddable: true
+        is_embeddable: true,
+        cluster: JSON.stringify(self.database.catalogEntry.compute)
       }, function(resp) {
         if (resp.history_uuid) {
           huePubSub.publish('notebook.task.submitted', resp.history_uuid);

+ 64 - 125
apps/metastore/src/metastore/templates/metastore.mako

@@ -83,10 +83,10 @@ ${ components.menubar(is_embeddable) }
 
 <script type="text/html" id="metastore-breadcrumbs">
   <div style="font-size: 14px; margin: 0 12px; line-height: 27px;">
-    <div data-bind="component: { name: 'hue-drop-down', params: { value: source, entries: sources, onChanged: sourceChanged, labelAttribute: 'name', searchable: true, linkTitle: '${ _ko('Source') }' } }" style="display: inline-block"></div>
+    <div data-bind="component: { name: 'hue-drop-down', params: { value: source, entries: sources, onSelect: sourceChanged, labelAttribute: 'name', searchable: true, linkTitle: '${ _ko('Source') }' } }" style="display: inline-block"></div>
     <!-- ko with: source -->
     <!-- ko if: window.HAS_MULTI_CLUSTER -->
-    <div class="margin-left-10" data-bind="component: { name: 'hue-drop-down', params: { value: namespace, entries: namespaces, onChanged: namespaceChanged, labelAttribute: 'name', searchable: true, linkTitle: '${ _ko('Namespace') }' } }" style="display: inline-block"></div>
+    <div class="margin-left-10" data-bind="component: { name: 'hue-drop-down', params: { value: namespace, entries: namespaces, onSelect: namespaceChanged, labelAttribute: 'name', searchable: true, linkTitle: '${ _ko('Namespace') }' } }" style="display: inline-block"></div>
     <!-- /ko -->
     <!-- /ko -->
   </div>
@@ -181,7 +181,7 @@ ${ components.menubar(is_embeddable) }
         </td>
         <td title="${_('Query partition data')}">
           <!-- ko if: IS_HUE_4 -->
-            <a data-bind="click: function() { queryAndWatch(notebookUrl, $root.source().type); }, text: '[\'' + columns.join('\',\'') + '\']'" href="javascript:void(0)"></a>
+            <a data-bind="click: function() { queryAndWatchUrl(notebookUrl, $root.source().type); }, text: '[\'' + columns.join('\',\'') + '\']'" href="javascript:void(0)"></a>
           <!-- /ko -->
           <!-- ko if: ! IS_HUE_4 -->
             <a data-bind="attr: { 'href': readUrl }, text: '[\'' + columns.join('\',\'') + '\']'"></a>
@@ -356,6 +356,10 @@ ${ components.menubar(is_embeddable) }
             <input type="hidden" name="is_embeddable" value="true"/>
             <input type="hidden" name="start_time" value=""/>
             <input type="hidden" name="source_type" data-bind="value: $root.source().type"/>
+            <!-- ko with: catalogEntry -->
+            <input type="hidden" name="namespace" data-bind="value: namespace.id"/>
+            <input type="hidden" name="cluster" data-bind="value: JSON.stringify(compute)"/>
+            <!-- /ko -->
         % else:
           <form id="dropDatabaseForm" action="/metastore/databases/drop" method="POST">
         % endif
@@ -367,7 +371,7 @@ ${ components.menubar(is_embeddable) }
             <div class="modal-body">
               <ul data-bind="foreach: selectedDatabases">
                 <li>
-                  <span data-bind="text: catalogEntry.name"></span>
+                  <span data-bind="text: catalogEntry().name"></span>
                 </li>
               </ul>
             </div>
@@ -377,15 +381,15 @@ ${ components.menubar(is_embeddable) }
               <input type="submit" class="btn btn-danger" value="${_('Yes')}"/>
             </div>
             <!-- ko foreach: selectedDatabases -->
-            <input type="hidden" name="database_selection" data-bind="value: catalogEntry.name" />
+            <input type="hidden" name="database_selection" data-bind="value: catalogEntry().name" />
             <!-- /ko -->
           </form>
         </div>
 
         % if is_embeddable:
-          <button href="javascript: void(0);" class="btn btn-default" data-bind="publish: { 'open.link': '${ url('indexer:importer_prefill', source_type='manual', target_type='database') }' + '/?sourceType=' + catalogEntry.getSourceType() + '&namespace=' + catalogEntry.namespace.id + '&compute=' + catalogEntry.compute.id  }" title="${_('Create a new database')}"><i class="fa fa-plus"></i> ${_('New')}</button>
+          <button href="javascript: void(0);" class="btn btn-default" data-bind="publish: { 'open.link': '${ url('indexer:importer_prefill', source_type='manual', target_type='database') }' + '/?sourceType=' + catalogEntry().getSourceType() + '&namespace=' + catalogEntry().namespace.id + '&compute=' + catalogEntry().compute.id  }" title="${_('Create a new database')}"><i class="fa fa-plus"></i> ${_('New')}</button>
         % elif ENABLE_NEW_CREATE_TABLE.get():
-          <button class="btn btn-default" data-bind="attr: { 'href': '${ url('indexer:importer_prefill', source_type='manual', target_type='database') }' + '/?sourceType=' + catalogEntry.getSourceType() + '&namespace=' + catalogEntry.namespace.id + '&compute=' + catalogEntry.compute.id }" title="${_('Create a new database')}"><i class="fa fa-plus"></i> ${_('New')}</button>
+          <button class="btn btn-default" data-bind="attr: { 'href': '${ url('indexer:importer_prefill', source_type='manual', target_type='database') }' + '/?sourceType=' + catalogEntry().getSourceType() + '&namespace=' + catalogEntry().namespace.id + '&compute=' + catalogEntry().compute.id }" title="${_('Create a new database')}"><i class="fa fa-plus"></i> ${_('New')}</button>
         % else:
           <button href="${ url('beeswax:create_database') }" class="btn btn-default" title="${_('Create a new database')}"><i class="fa fa-plus"></i> ${_('New')}</button>
         % endif
@@ -515,8 +519,8 @@ ${ components.menubar(is_embeddable) }
     <div class="span12 tile entries-table-container">
       <h4 class="entries-table-header">${ _('Tables') }</h4>
       <div class="actionbar-actions" data-bind="visible: tables().length > 0">
-        <button class="btn toolbarBtn margin-left-20" title="${_('Browse the selected table')}" data-bind="click: function () { onTableClick(selectedTables()[0].catalogEntry); selectedTables([]); }, disable: selectedTables().length !== 1"><i class="fa fa-eye"></i> ${_('View')}</button>
-        <button class="btn toolbarBtn" title="${_('Query the selected table')}" data-bind="click: function () { IS_HUE_4 ? queryAndWatch('/notebook/browse/' + selectedTables()[0].catalogEntry.path.join('/') + '/', $root.source().type) : location.href = '/notebook/browse/' + selectedTables()[0].catalogEntry.path.join('/'); }, disable: selectedTables().length !== 1">
+        <button class="btn toolbarBtn margin-left-20" title="${_('Browse the selected table')}" data-bind="click: function () { onTableClick(selectedTables()[0].catalogEntry()); selectedTables([]); }, disable: selectedTables().length !== 1"><i class="fa fa-eye"></i> ${_('View')}</button>
+        <button class="btn toolbarBtn" title="${_('Query the selected table')}" data-bind="click: function () { queryAndWatch(selectedTables()[0].catalogEntry()) }, disable: selectedTables().length !== 1">
           <i class="fa fa-play fa-fw"></i> ${_('Query')}
         </button>
         % if has_write_access:
@@ -543,8 +547,10 @@ ${ components.menubar(is_embeddable) }
         <input type="hidden" name="is_embeddable" value="true"/>
         <input type="hidden" name="start_time" value=""/>
         <input type="hidden" name="source_type" data-bind="value: $root.source().type"/>
+        <input type="hidden" name="namespace" data-bind="value: catalogEntry.namespace.id"/>
+        <input type="hidden" name="cluster" data-bind="value: JSON.stringify(catalogEntry.compute)"/>
     % else:
-      <form data-bind="attr: { 'action': '/metastore/tables/drop/' +catalogEntry. name }" method="POST">
+      <form data-bind="attr: { 'action': '/metastore/tables/drop/' + catalogEntry.name }" method="POST">
     % endif
       ${ csrf_token(request) | n,unicode }
       <div class="modal-header">
@@ -555,7 +561,7 @@ ${ components.menubar(is_embeddable) }
         <ul data-bind="foreach: selectedTables">
           <!-- ko if: $index() <= 9 -->
           <li>
-            <span data-bind="text: catalogEntry.name"></span>
+            <span data-bind="text: catalogEntry().name"></span>
           </li>
           <!-- /ko -->
         </ul>
@@ -571,7 +577,7 @@ ${ components.menubar(is_embeddable) }
         <input type="submit" class="btn btn-danger" value="${_('Yes')}"/>
       </div>
       <!-- ko foreach: selectedTables -->
-      <input type="hidden" name="table_selection" data-bind="value: catalogEntry.name" />
+      <input type="hidden" name="table_selection" data-bind="value: catalogEntry().name" />
       <!-- /ko -->
     </form>
   </div>
@@ -696,102 +702,6 @@ ${ components.menubar(is_embeddable) }
   <!-- /ko -->
 </script>
 
-<script type="text/html" id="metastore-permissions-tab">
-  <div class="acl-panel-content" style="height: 988px;">
-    <div class="pull-right">
-      <input class="input-medium no-margin" type="text" placeholder="Search privileges..."> &nbsp;
-      <a class="btn pointer">
-        <i class="fa fa-plus-circle"></i> Add role
-      </a>
-    </div>
-    <h4 style="margin-top: 4px;">Privileges &nbsp;</h4>
-
-    <div class="acl-block-title">
-      <i class="fa fa-cube muted"></i> <a class="pointer"><span>customerFraud</span></a>
-    </div>
-    <div>
-      <div class="acl-block acl-block-airy">
-        <span class="muted" title="3 months ago">TABLE</span>
-        <span>
-          <a class="muted" style="margin-left: 4px" title="Open in Sentry" href="/security/hive"><i class="fa fa-external-link"></i></a>
-        </span>
-        <br>
-        server=<span>server1</span>
-        <span>
-          <i class="fa fa-long-arrow-right"></i> db=<a class="pointer" title="Browse db privileges"><span data-bind="text: $root.database().catalogEntry.name"></span></a>
-        </span>
-        <span>
-          <i class="fa fa-long-arrow-right"></i> table=<a class="pointer" title="Browse table privileges"><span data-bind="text: catalogEntry.name"></span></a>
-        </span>
-        <span style="display: none;">
-          <i class="fa fa-long-arrow-right"></i> column=<a class="pointer" title="Browse column privileges"><span></span></a>
-        </span>
-        <i class="fa fa-long-arrow-right"></i> action=INSERT
-      </div>
-
-      <div class="acl-block acl-block-airy">
-        <span class="muted" title="3 months ago">TABLE</span>
-        <span>
-          <a class="muted" style="margin-left: 4px" title="Open in Sentry" href="/security/hive"><i class="fa fa-external-link"></i></a>
-        </span>
-        <br>
-        server=server1
-        <span>
-          <i class="fa fa-long-arrow-right"></i> db=<a class="pointer" title="Browse db privileges"><span data-bind="text: $root.database().catalogEntry.name"></span></a>
-        </span>
-        <span>
-          <i class="fa fa-long-arrow-right"></i> table=<a class="pointer" title="Browse table privileges"><span data-bind="text: catalogEntry.name"></span></a>
-        </span>
-        <span style="display: none;">
-          <i class="fa fa-long-arrow-right"></i> column=<a class="pointer" title="Browse column privileges"><span></span></a>
-        </span>
-
-        <i class="fa fa-long-arrow-right"></i> action=<span>SELECT</span>
-      </div>
-    </div>
-
-    <div class="acl-block acl-actions">
-      <span class="pointer" title="Show 50 more..." style="display: none;"><i class="fa fa-ellipsis-h"></i></span>
-      <span class="pointer" title="Add privilege"><i class="fa fa-plus"></i></span>
-      <span class="pointer" title="Undo" style="display: none;"> &nbsp; <i class="fa fa-undo"></i></span>
-      <span class="pointer" title="Save" style="display: none;"> &nbsp; <i class="fa fa-save"></i></span>
-    </div>
-
-    <div class="acl-block-title">
-      <i class="fa fa-cube muted"></i> <a class="pointer"><span>customerAccess</span></a>
-    </div>
-    <div>
-      <div class="acl-block acl-block-airy">
-        <span class="muted" title="3 months ago">TABLE</span>
-
-        <span>
-          <a class="muted" style="margin-left: 4px" title="Open in Sentry" href="/security/hive"><i class="fa fa-external-link"></i></a>
-        </span>
-        <br>
-
-        server=server1
-
-          <span>
-            <i class="fa fa-long-arrow-right"></i> db=<a class="pointer" title="Browse db privileges"><span data-bind="text: $root.database().catalogEntry.name"></span></a>
-          </span>
-          <span>
-            <i class="fa fa-long-arrow-right"></i> table=<a class="pointer" title="Browse table privileges"><span data-bind="text: catalogEntry.name"></span></a>
-          </span>
-          <span style="display: none;">
-            <i class="fa fa-long-arrow-right"></i> column=<a class="pointer" title="Browse column privileges"><span></span></a>
-          </span>
-
-        <i class="fa fa-long-arrow-right"></i> action=<span>ALL</span>
-      </div>
-      <div class="acl-block acl-actions">
-        <span class="pointer" title="Show 50 more..." style="display: none;"><i class="fa fa-ellipsis-h"></i></span>
-        <span class="pointer" title="Add privilege"><i class="fa fa-plus"></i></span>
-        <span class="pointer" title="Undo" style="display: none;"> &nbsp; <i class="fa fa-undo"></i></span>
-        <span class="pointer" title="Save" style="display: none;"> &nbsp; <i class="fa fa-save"></i></span>
-      </div>
-    </div>
-  </div>
-</script>
 
 <script type="text/html" id="metastore-queries-tab">
   <br/>
@@ -827,6 +737,7 @@ ${ components.menubar(is_embeddable) }
   </div>
 </script>
 
+
 <script type="text/html" id="metastore-view-sql-tab">
   <div style="padding: 5px 15px">
     <!-- ko hueSpinner: { spin: loadingViewSql, inline: true } --><!-- /ko -->
@@ -836,6 +747,7 @@ ${ components.menubar(is_embeddable) }
   </div>
 </script>
 
+
 <script type="text/html" id="metastore-details-tab">
   <!-- ko with: tableDetails -->
   <table class="properties-table">
@@ -905,7 +817,6 @@ ${ components.menubar(is_embeddable) }
     <!-- ko if: $root.optimizerEnabled() -->
       <li data-bind="css: { 'active': $root.currentTab() === 'relationships' }"><a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('relationships'); }">${_('Relationships')} (<span data-bind="text: topJoins().length"></span>)</a></li>
 ##       <!-- ko if: $root.database().table().optimizerDetails() -->
-##       <li data-bind="css: { 'active': $root.currentTab() === 'permissions' }"><a href="javascript: void(0);" data-bind="click: function(){ $root.currentTab('permissions'); }">${_('Permissions')}</a></li>
 ##       <li data-bind="css: { 'active': $root.currentTab() === 'queries' }"><a href="javascript: void(0);" data-bind="click: function(){ $root.currentTab('queries'); }">${_('Queries')} (<span data-bind="text: $root.database().table().optimizerDetails().queryCount"></span>)</a></li>
 ##       <li data-bind="css: { 'active': $root.currentTab() === 'joins' }"><a href="javascript: void(0);" data-bind="click: function(){ $root.currentTab('joins'); }">${_('Joins')} (<span data-bind="text: $root.database().table().optimizerDetails().joinCount"></span>)</a></li>
 ##       <!-- /ko -->
@@ -913,14 +824,26 @@ ${ components.menubar(is_embeddable) }
 ##       <!-- /ko -->
     <!-- /ko -->
     <!-- ko if: tableDetails() && tableDetails().partition_keys.length -->
-      <li data-bind="css: { 'active': $root.currentTab() === 'partitions' }"><a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('partitions'); }">${_('Partitions')} (<span data-bind="text: partitionsCountLabel"></span>)</a></li>
+    <li data-bind="css: { 'active': $root.currentTab() === 'partitions' }">
+      <a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('partitions'); }">${_('Partitions')} (<span data-bind="text: partitionsCountLabel"></span>)</a>
+    </li>
     <!-- /ko -->
-    <li data-bind="css: { 'active': $root.currentTab() === 'sample' }"><a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('sample'); }">${_('Sample')} (<span data-bind="text: samples.rows().length"></span>)</a></li>
+    <li data-bind="css: { 'active': $root.currentTab() === 'sample' }">
+      <a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('sample'); }">${_('Sample')} (<span data-bind="text: samples.rows().length"></span>)</a>
+    </li>
     <!-- ko if: catalogEntry.isView() -->
-    <li data-bind="css: { 'active' : $root.currentTab() === 'viewSql' }"><a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('viewSql'); }">${ _('View SQL') }</a></li>
+    <li data-bind="css: { 'active' : $root.currentTab() === 'viewSql' }">
+      <a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('viewSql'); }">${ _('View SQL') }</a>
+    </li>
+    <!-- /ko -->
+    <li data-bind="css: { 'active' : $root.currentTab() === 'details' }">
+      <a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('details'); }">${ _('Details') }</a>
+    </li>
+    <!-- ko if: $root.appConfig() && $root.appConfig()['browser'] && $root.appConfig()['browser']['interpreter_names'].indexOf('security') !== -1 -->
+    <li data-bind="css: { 'active' : $root.currentTab() === 'privileges' }">
+      <a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('privileges'); }">${ _('Privileges') }</a>
+    </li>
     <!-- /ko -->
-    <li data-bind="css: { 'active' : $root.currentTab() === 'details' }"><a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('details'); }">${ _('Details') }</a></li>
-    <li data-bind="css: { 'active' : $root.currentTab() === 'privileges' }"><a href="javascript: void(0);" data-bind="click: function() { $root.currentTab('privileges'); }">${ _('Privileges') }</a></li>
   </ul>
 
   <div class="tab-content margin-top-10" style="border: none; overflow: hidden">
@@ -941,10 +864,6 @@ ${ components.menubar(is_embeddable) }
         <!-- ko template: 'metastore-sample-tab' --><!-- /ko -->
       <!-- /ko -->
 
-      <!-- ko if: $root.currentTab() === 'permissions' -->
-        <!-- ko template: 'metastore-permissions-tab' --><!-- /ko -->
-      <!-- /ko -->
-
       <!-- ko if: $root.optimizerEnabled() && $root.currentTab() === 'queries' -->
         <!-- ko template: { name: 'metastore-queries-tab', data: $root.database().table() } --><!-- /ko -->
       <!-- /ko -->
@@ -1007,7 +926,7 @@ ${ components.menubar(is_embeddable) }
               <!-- ko with: table -->
               % if USE_NEW_EDITOR.get():
                 <!-- ko if: IS_HUE_4 -->
-                <a href="javascript: void(0);" class="btn btn-default" data-bind="click: function() { queryAndWatch('/notebook/browse/' + catalogEntry.path.join('/') + '/', $root.source().type); }" title="${_('Query')}"><i class="fa fa-play fa-fw"></i> ${_('Query')}</a>
+                <a href="javascript: void(0);" class="btn btn-default" data-bind="click: function() { queryAndWatch(catalogEntry); }" title="${_('Query')}"><i class="fa fa-play fa-fw"></i> ${_('Query')}</a>
                 <!-- /ko -->
                 <!-- ko if: ! IS_HUE_4 -->
                 <a class="btn btn-default" data-bind="attr: { 'href': '/notebook/browse/' + catalogEntry.path.join('/') }" title="${_('Query')}"><i class="fa fa-play fa-fw"></i> ${_('Query')}</a>
@@ -1124,10 +1043,12 @@ ${ components.menubar(is_embeddable) }
     });
   }
 
-  function queryAndWatch(url, sourceType) {
+  function queryAndWatchUrl(url, sourceType, namespaceId, compute) {
     $.post(url, {
       format: "json",
-      sourceType: sourceType
+      sourceType: sourceType,
+      namespace: namespaceId,
+      cluster: compute
     },function(resp) {
       if (resp.history_uuid) {
         huePubSub.publish('open.editor.query', resp.history_uuid);
@@ -1139,6 +1060,15 @@ ${ components.menubar(is_embeddable) }
     });
   }
 
+  function queryAndWatch(catalogEntry) {
+    if (!IS_HUE_4) {
+      location.href = '/notebook/browse/' + catalogEntry.path.join('/');
+    } else {
+      queryAndWatchUrl('/notebook/browse/' + catalogEntry.path.join('/') + '/', catalogEntry.getSourceType(),
+              catalogEntry.namespace && catalogEntry.namespace.id, catalogEntry.compute);
+    }
+  }
+
   (function () {
     if (ko.options) {
       ko.options.deferUpdates = true;
@@ -1164,10 +1094,14 @@ ${ components.menubar(is_embeddable) }
       });
 
       huePubSub.subscribe('metastore.clear.selection', function () {
-        viewModel.selectedDatabases.removeAll();
-        if (viewModel.database()) {
-          viewModel.database().selectedTables.removeAll();
-        }
+        viewModel.sources().forEach(function (source) {
+          source.namespaces().forEach(function (namespace) {
+            namespace.selectedDatabases.removeAll();
+            namespace.databases().forEach(function (database) {
+              database.selectedTables.removeAll();
+            })
+          })
+        });
       }, 'metastore');
 
       viewModel.currentTab.subscribe(function(tab){
@@ -1259,6 +1193,11 @@ ${ components.menubar(is_embeddable) }
 
       ko.applyBindings(viewModel, $('#metastoreComponents')[0]);
 
+      huePubSub.subscribe('cluster.config.set.config', function (clusterConfig) {
+        viewModel.appConfig(clusterConfig && clusterConfig['app_config']);
+      });
+      huePubSub.publish('cluster.config.get.config');
+
       if (location.getParameter('refresh') === 'true') {
         DataCatalog.getEntry({ namespace: viewModel.source().namespace().namespace, compute: viewModel.source().namespace().compute, sourceType: viewModel.source().type, path: [], definition: { type: 'source' }}).done(function (entry) {
          entry.clearCache({ invalidate: viewModel.source().type === 'impala' ? 'invalidate' : 'cache', silenceErrors: true });

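For orientation, the new queryAndWatchUrl above boils down to a single POST against /notebook/browse/<db>/<table>/. A minimal standalone sketch of that request, assuming a local Hue at localhost:8888 with login and CSRF handled out of band (host, table and field values are placeholders):

    import requests

    # Hypothetical equivalent of the $.post in queryAndWatchUrl above;
    # a real Hue instance additionally needs session cookies and a CSRF token.
    resp = requests.post(
        'http://localhost:8888/notebook/browse/default/sample_07/',
        data={'format': 'json', 'sourceType': 'impala', 'namespace': 'default', 'cluster': '{}'},
    )
    history_uuid = resp.json().get('history_uuid')
    # On success the page publishes 'open.editor.query' with this uuid.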
+ 4 - 4
apps/metastore/src/metastore/tests.py

@@ -90,7 +90,7 @@ class TestMetastoreWithHadoop(BeeswaxSampleProvider):
     assert_equal(200, response.status_code)
 
     # And have detail
-    response = self.client.get("/metastore/table/%s/test?format=json" % self.db_name)
+    response = self.client.post("/metastore/table/%s/test" % self.db_name, {'format': 'json'})
     data = json.loads(response.content)
     assert_true("foo" in [col['name'] for col in data['cols']])
     assert_true("SerDe Library:" in [prop['col_name'] for prop in data['properties']], data)
@@ -154,18 +154,18 @@ class TestMetastoreWithHadoop(BeeswaxSampleProvider):
     assert_false('test_index' in data['tables'])
 
   def test_describe_view(self):
-    resp = self.client.get('/metastore/table/%s/myview?format=json' % self.db_name)
+    resp = self.client.post('/metastore/table/%s/myview' % self.db_name, data={'format': 'json'})
     assert_equal(200, resp.status_code, resp.content)
     data = json.loads(resp.content)
     assert_true(data['is_view'])
     assert_equal("myview", data['name'])
 
   def test_describe_partitions(self):
-    response = self.client.get("/metastore/table/%s/test_partitions?format=json" % self.db_name)
+    response = self.client.post("/metastore/table/%s/test_partitions" % self.db_name, data={'format': 'json'})
     data = json.loads(response.content)
     assert_equal(2, len(data['partition_keys']), data)
 
-    response = self.client.get("/metastore/table/%s/test_partitions/partitions?format=json" % self.db_name, follow=True)
+    response = self.client.post("/metastore/table/%s/test_partitions/partitions" % self.db_name, data={'format': 'json'}, follow=True)
     data = json.loads(response.content)
     partition_columns = [col for cols in data['partition_values_json'] for col in cols['columns']]
     assert_true("baz_one" in partition_columns)

+ 75 - 22
apps/metastore/src/metastore/views.py

@@ -29,7 +29,7 @@ from django.views.decorators.http import require_http_methods
 from desktop.context_processors import get_app_name
 from desktop.lib.django_util import JsonResponse, render
 from desktop.lib.exceptions_renderable import PopupException
-from desktop.models import Document2, get_cluster_config
+from desktop.models import Document2, get_cluster_config, _get_apps
 
 from beeswax.design import hql_query
 from beeswax.models import SavedQuery
@@ -73,11 +73,14 @@ Database Views
 
 def databases(request):
   search_filter = request.GET.get('filter', '')
+  cluster = json.loads(request.POST.get('cluster', '{}'))
 
-  db = _get_db(user=request.user)
+  db = _get_db(user=request.user, cluster=cluster)
   databases = db.get_databases(search_filter)
+  apps_list = _get_apps(request.user, '')
 
   return render("metastore.mako", request, {
+    'apps': apps_list,
     'breadcrumbs': [],
     'database': None,
     'databases': databases,
@@ -95,7 +98,9 @@ def databases(request):
 @check_has_write_access_permission
 def drop_database(request):
   source_type = request.POST.get('source_type', 'hive')
-  db = _get_db(user=request.user, source_type=source_type)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   if request.method == 'POST':
     databases = request.POST.getlist('database_selection')
@@ -104,6 +109,8 @@ def drop_database(request):
       if request.POST.get('is_embeddable'):
         design = SavedQuery.create_empty(app_name=source_type if source_type != 'hive' else 'beeswax', owner=request.user, data=hql_query('').dumps())
         last_executed = json.loads(request.POST.get('start_time', '-1'))
+        cluster = json.loads(request.POST.get('cluster', '{}'))
+        namespace = request.POST.get('namespace')
         sql = db.drop_databases(databases, design, generate_ddl_only=True)
         job = make_notebook(
             name=_('Drop database %s') % ', '.join(databases)[:100],
@@ -111,6 +118,8 @@ def drop_database(request):
             statement=sql.strip(),
             status='ready',
             database=None,
+            namespace=namespace,
+            compute=cluster,
             on_success_url='assist.db.refresh',
             is_task=True,
             last_executed=last_executed
@@ -136,7 +145,9 @@ def alter_database(request, database):
   response = {'status': -1, 'data': ''}
 
   source_type = request.POST.get('source_type', 'hive')
-  db = _get_db(user=request.user, source_type=source_type)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   try:
     properties = request.POST.get('properties')
@@ -160,12 +171,21 @@ def alter_database(request, database):
 
 def get_database_metadata(request, database, cluster=None):
   response = {'status': -1, 'data': ''}
+
   source_type = request.POST.get('source_type', 'hive')
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
   db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   try:
     db_metadata = db.get_database(database)
     response['status'] = 0
+    if not db_metadata.get('owner_name'):
+      db_metadata['owner_name'] = ''
+    if not db_metadata.get('owner_type'):
+      db_metadata['owner_type'] = ''
+    if not db_metadata.get('parameters'):
+      db_metadata['parameters'] = ''
     db_metadata['hdfs_link'] = location_to_url(db_metadata['location'])
     response['data'] = db_metadata
   except Exception, ex:
@@ -180,8 +200,10 @@ def table_queries(request, database, table):
 
   response = {'status': -1, 'queries': []}
   try:
-    queries = [{'doc': d.to_dict(), 'data': Notebook(document=d).get_data()}
-              for d in Document2.objects.filter(qfilter, owner=request.user, type='query', is_history=False)[:50]]
+    queries = [
+        {'doc': d.to_dict(), 'data': Notebook(document=d).get_data()}
+        for d in Document2.objects.filter(qfilter, owner=request.user, type='query', is_history=False)[:50]
+    ]
     response['status'] = 0
     response['queries'] = queries
   except Exception, ex:
@@ -195,7 +217,9 @@ def table_queries(request, database, table):
 Table Views
 """
 def show_tables(request, database=None):
-  db = _get_db(user=request.user)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, cluster=cluster)
 
   if database is None:
     database = 'default' # Assume always 'default'
@@ -229,7 +253,9 @@ def show_tables(request, database=None):
         'search_filter': search_filter
     })
   else:
+    apps_list = _get_apps(request.user, '')
     resp = render("metastore.mako", request, {
+    'apps': apps_list,
     'breadcrumbs': [],
     'database': None,
     'partitions': [],
@@ -246,7 +272,10 @@ def show_tables(request, database=None):
 
 
 def get_table_metadata(request, database, table):
-  db = _get_db(user=request.user)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+  source_type = request.POST.get('source_type')
+
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
   response = {'status': -1, 'data': ''}
   try:
     table_metadata = db.get_table(database, table)
@@ -267,9 +296,9 @@ def get_table_metadata(request, database, table):
 
 def describe_table(request, database, table):
   app_name = get_app_name(request)
-  cluster = request.GET.get('cluster')
-
-  db = _get_db(user=request.user, cluster=cluster)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+  source_type = request.POST.get('source_type')
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   try:
     table = db.get_table(database, table)
@@ -277,7 +306,7 @@ def describe_table(request, database, table):
     LOG.exception("Describe table error")
     raise PopupException(_("DB Error"), detail=e.message if hasattr(e, 'message') and e.message else e)
 
-  if request.GET.get("format", "html") == "json":
+  if request.POST.get("format", "html") == "json":
     return JsonResponse({
         'status': 0,
         'name': table.name,
@@ -293,6 +322,7 @@ def describe_table(request, database, table):
     })
   else:  # Render HTML
     renderable = "metastore.mako"
+    apps_list = _get_apps(request.user, '')
 
     partitions = None
     if app_name != 'impala' and table.partition_keys:
@@ -302,6 +332,7 @@ def describe_table(request, database, table):
         LOG.exception('Table partitions could not be retrieved')
 
     return render(renderable, request, {
+      'apps': apps_list,
       'breadcrumbs': [{
           'name': database,
           'url': reverse('metastore:show_tables', kwargs={'database': database})
@@ -329,7 +360,9 @@ def alter_table(request, database, table):
   response = {'status': -1, 'data': ''}
 
   source_type = request.POST.get('source_type', 'hive')
-  db = _get_db(user=request.user, source_type=source_type)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   try:
     new_table_name = request.POST.get('new_table_name', None)
@@ -362,7 +395,9 @@ def alter_column(request, database, table):
   response = {'status': -1, 'message': ''}
 
   source_type = request.POST.get('source_type', 'hive')
-  db = _get_db(user=request.user, source_type=source_type)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   try:
     column = request.POST.get('column', None)
@@ -397,13 +432,17 @@ def alter_column(request, database, table):
 @check_has_write_access_permission
 def drop_table(request, database):
   source_type = request.POST.get('source_type', 'hive')
-  db = _get_db(user=request.user, source_type=source_type)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   if request.method == 'POST':
     try:
       tables = request.POST.getlist('table_selection')
       tables_objects = [db.get_table(database, table) for table in tables]
       skip_trash = request.POST.get('skip_trash') == 'on'
+      cluster = json.loads(request.POST.get('cluster', '{}'))
+      namespace = request.POST.get('namespace')
 
       if request.POST.get('is_embeddable'):
         last_executed = json.loads(request.POST.get('start_time', '-1'))
@@ -414,6 +453,8 @@ def drop_table(request, database):
             statement=sql.strip(),
             status='ready',
             database=database,
+            namespace=namespace,
+            compute=cluster,
             on_success_url='assist.db.refresh',
             is_task=True,
             last_executed=last_executed
@@ -436,8 +477,9 @@ def drop_table(request, database):
 
 # Deprecated
 def read_table(request, database, table):
-  db = dbms.get(request.user)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
 
+  db = dbms.get(request.user, cluster=cluster)
   table = db.get_table(database, table)
 
   try:
@@ -452,7 +494,9 @@ def load_table(request, database, table):
   response = {'status': -1, 'data': 'None'}
 
   source_type = request.POST.get('source_type', 'hive')
-  db = _get_db(user=request.user, source_type=source_type)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   table = db.get_table(database, table)
 
@@ -510,8 +554,9 @@ def load_table(request, database, table):
 
 
 def describe_partitions(request, database, table):
-  db = _get_db(user=request.user)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
 
+  db = _get_db(user=request.user, cluster=cluster)
   table_obj = db.get_table(database, table)
 
   if not table_obj.partition_keys:
@@ -541,7 +586,9 @@ def describe_partitions(request, database, table):
       'partition_values_json': massaged_partitions,
     })
   else:
+    apps_list = _get_apps(request.user, '')
     return render("metastore.mako", request, {
+      'apps': apps_list,
       'breadcrumbs': [{
             'name': database,
             'url': reverse('metastore:show_tables', kwargs={'database': database})
@@ -592,7 +639,9 @@ def _massage_partition(database, table, partition):
 
 
 def browse_partition(request, database, table, partition_spec):
-  db = _get_db(user=request.user)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, cluster=cluster)
   try:
     decoded_spec = urllib.unquote(partition_spec)
     partition_table = db.describe_partition(database, table, decoded_spec)
@@ -607,7 +656,9 @@ def browse_partition(request, database, table, partition_spec):
 
 # Deprecated
 def read_partition(request, database, table, partition_spec):
-  db = dbms.get(request.user)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = dbms.get(request.user, cluster=cluster)
   try:
     decoded_spec = urllib.unquote(partition_spec)
     query = db.get_partition(database, table, decoded_spec)
@@ -621,7 +672,9 @@ def read_partition(request, database, table, partition_spec):
 @check_has_write_access_permission
 def drop_partition(request, database, table):
   source_type = request.POST.get('source_type', 'hive')
-  db = _get_db(user=request.user, source_type=source_type)
+  cluster = json.loads(request.POST.get('cluster', '{}'))
+
+  db = _get_db(user=request.user, source_type=source_type, cluster=cluster)
 
   if request.method == 'POST':
     partition_specs = request.POST.getlist('partition_selection')
@@ -676,4 +729,4 @@ def _get_db(user, source_type=None, cluster=None):
 
 
 def _get_servername(db):
-  return 'hive' if db.server_name == 'beeswax' else db.server_name
+  return 'hive' if db.server_name == 'beeswax' else db.server_name.rsplit('-', 1)[0]

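The same three-line preamble (read source_type, JSON-decode cluster, call _get_db) now opens most of the views above. A hypothetical helper, not part of the diff, that captures the pattern:

    import json

    def _get_db_from_request(request, default_source='hive'):
        # Sketch only: the compute cluster arrives as a JSON-encoded POST
        # field and defaults to an empty object, exactly as in the views above.
        source_type = request.POST.get('source_type', default_source)
        cluster = json.loads(request.POST.get('cluster', '{}'))
        return _get_db(user=request.user, source_type=source_type, cluster=cluster)

The _get_servername change at the end strips a trailing -<suffix> from compute-specific server names: 'impala-1'.rsplit('-', 1)[0] returns 'impala'.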
+ 2 - 1
apps/oozie/src/oozie/management/commands/oozie_setup.py

@@ -363,7 +363,8 @@ class Command(BaseCommand):
     LOG.info(_("Installing examples..."))
 
     if ENABLE_V2.get():
-      management.call_command('loaddata', 'initial_oozie_examples.json', verbosity=2)
+      with transaction.atomic():
+        management.call_command('loaddata', 'initial_oozie_examples.json', verbosity=2, commit=False)
 
     if IS_HUE_4.get():
       # Install editor oozie examples without doc1 link

+ 74 - 10
apps/oozie/src/oozie/models2.py

@@ -205,9 +205,11 @@ class WorkflowConfiguration(object):
     }
   ]
 
+
 class WorkflowDepthReached(Exception):
   pass
 
+
 class Workflow(Job):
   XML_FILE_NAME = 'workflow.xml'
   PROPERTY_APP_PATH = 'oozie.wf.application.path'
@@ -457,7 +459,8 @@ class Workflow(Job):
     node_mapping = dict([(node.id, node) for node in nodes])
     sub_wfs_ids = [node.data['properties']['workflow'] for node in nodes if node.data['type'] == 'subworkflow']
     workflow_mapping = dict(
-      [(workflow.uuid, Workflow(document=workflow, user=self.user)) for workflow in Document2.objects.filter(uuid__in=sub_wfs_ids)])
+        [(workflow.uuid, Workflow(document=workflow, user=self.user)) for workflow in Document2.objects.filter(uuid__in=sub_wfs_ids)]
+    )
 
     xml = re.sub(re.compile('>\s*\n+', re.MULTILINE), '>\n', django_mako.render_to_string(tmpl, {
       'wf': self,
@@ -554,6 +557,7 @@ def _to_lowercase(node_list):
       if hasattr(node[key], 'lower'):
         node[key] = node[key].lower()
 
+
 def _update_adj_list(adj_list):
   uuids = {}
   id = 1
@@ -589,6 +593,7 @@ def _update_adj_list(adj_list):
     id += 1
   return adj_list
 
+
 def _dig_nodes(nodes, adj_list, user, wf_nodes, nodes_uuid_set):
   for node in nodes:
     if type(node) != list:
@@ -659,6 +664,7 @@ def _dig_nodes(nodes, adj_list, user, wf_nodes, nodes_uuid_set):
     else:
       _dig_nodes(node, adj_list, user, wf_nodes, nodes_uuid_set)
 
+
 def _create_workflow_layout(nodes, adj_list, nodes_uuid_set, size=12):
   wf_rows = []
   for node in nodes:
@@ -666,11 +672,11 @@ def _create_workflow_layout(nodes, adj_list, nodes_uuid_set, size=12):
       node = node[0]
     if type(node) != list:
       _append_to_wf_rows(wf_rows, nodes_uuid_set, row_id=adj_list[node]['uuid'],
-        row={"widgets":[{"size":size, "name": adj_list[node]['node_type'], "id":  adj_list[node]['uuid'], "widgetType": _get_widget_type(adj_list[node]['node_type']), "properties":{}, "offset":0, "isLoading":False, "klass":"card card-widget span%s" % size, "columns":[]}]})
+        row = {"widgets":[{"size":size, "name": adj_list[node]['node_type'], "id":  adj_list[node]['uuid'], "widgetType": _get_widget_type(adj_list[node]['node_type']), "properties":{}, "offset":0, "isLoading":False, "klass":"card card-widget span%s" % size, "columns":[]}]})
     else:
       if adj_list[node[0]]['node_type'] in ('fork', 'decision'):
         _append_to_wf_rows(wf_rows, nodes_uuid_set, row_id=adj_list[node[0]]['uuid'],
-          row={"widgets":[{"size":size, "name": adj_list[node[0]]['name'], "id":  adj_list[node[0]]['uuid'], "widgetType": _get_widget_type(adj_list[node[0]]['node_type']), "properties":{}, "offset":0, "isLoading":False, "klass":"card card-widget span%s" % size, "columns":[]}]})
+          row = {"widgets":[{"size":size, "name": adj_list[node[0]]['name'], "id":  adj_list[node[0]]['uuid'], "widgetType": _get_widget_type(adj_list[node[0]]['node_type']), "properties":{}, "offset":0, "isLoading":False, "klass":"card card-widget span%s" % size, "columns":[]}]})
 
         wf_rows.append({
           "id": str(uuid.uuid4()),
@@ -703,12 +709,14 @@ def _get_widget_type(node_type):
   widget_name = "%s-widget" % node_type
   return widget_name if widget_name in NODES.keys() else 'generic-widget'
 
+
 # Prevent duplicate nodes in graph layout
 def _append_to_wf_rows(wf_rows, nodes_uuid_set, row_id, row):
   if row['widgets'][0]['id'] not in nodes_uuid_set:
     nodes_uuid_set.add(row['widgets'][0]['id'])
     wf_rows.append(row)
 
+
 def _get_hierarchy_from_adj_list(adj_list, curr_node, node_hierarchy):
 
   _get_hierarchy_from_adj_list_helper(adj_list, curr_node, node_hierarchy, WORKFLOW_DEPTH_LIMIT)
@@ -767,6 +775,7 @@ def _create_graph_adjaceny_list(nodes):
 
 
 class Node():
+
   def __init__(self, data, user=None):
     self.data = data
     self.user = user
@@ -784,7 +793,6 @@ class Node():
     if self.data['type'] in ('hive2', 'hive-document') and not self.data['properties']['jdbc_url']:
       self.data['properties']['jdbc_url'] = _get_hiveserver2_url()
 
-
     if self.data['type'] == 'fork':
       links = [link for link in self.data['children'] if link['to'] in node_mapping]
       if len(links) != len(self.data['children']):
@@ -792,7 +800,19 @@ class Node():
                  % (len(links), len(self.data['children']), links, self.data['children']))
         self.data['children'] = links
 
-    if self.data['type'] == JavaDocumentAction.TYPE:
+    if self.data['type'] == AltusAction.TYPE or \
+          (('altus' in mapping.get('cluster', '') and (self.data['type'] == SparkDocumentAction.TYPE or self.data['type'] == 'spark-document'))) or \
+          mapping.get('auto-cluster'):
+      shell_command_name = self.data['name'] + '.sh'
+      self.data['properties']['shell_command'] = shell_command_name
+      self.data['properties']['env_var'] = []
+      self.data['properties']['arguments'] = []
+      self.data['properties']['job_properties'] = []
+      self.data['properties']['capture_output'] = True
+      self.data['properties']['files'] = [{'value': shell_command_name}, {'value': 'altus.py'}]
+      self.data['properties']['archives'] = []
+
+    elif self.data['type'] == JavaDocumentAction.TYPE:
       notebook = Notebook(document=Document2.objects.get_by_uuid(user=self.user, uuid=self.data['properties']['uuid']))
       properties = notebook.get_data()['snippets'][0]['properties']
 
@@ -855,7 +875,7 @@ class Node():
       self.data['properties']['source_path'] = action['properties']['source_path']
       self.data['properties']['destination_path'] = action['properties']['destination_path']
 
-    elif self.data['type'] == ShellDocumentAction.TYPE:
+    elif self.data['type'] == ShellAction.TYPE or self.data['type'] == ShellDocumentAction.TYPE:
       if self.data['properties'].get('uuid'):
         notebook = Notebook(document=Document2.objects.get_by_uuid(user=self.user, uuid=self.data['properties']['uuid']))
         action = notebook.get_data()['snippets'][0]
@@ -866,9 +886,16 @@ class Node():
         self.data['properties']['capture_output'] = action['properties']['capture_output']
         self.data['properties']['arguments'] = [{'value': prop} for prop in action['properties']['arguments']]
 
-        self.data['properties']['files'] = ([{'value': action['properties']['command_path']}] if not action['properties'].get('command_path', '').startswith('/') else []) + [{'value': prop.get('path', prop)} for prop in action['properties']['files']]
+        self.data['properties']['files'] = [{'value': prop.get('path', prop)} for prop in action['properties']['files']]
         self.data['properties']['archives'] = [{'value': prop} for prop in action['properties']['archives']]
 
+      # Auto ship the script if it was forgotten
+      shell_command = self.data['properties']['shell_command']
+      if '/' in shell_command and not [f for f in self.data['properties']['files'] if shell_command in f['value']]:
+        self.data['properties']['files'].append({'value': shell_command})
+        self.data['properties']['shell_command'] = Hdfs.basename(shell_command)
+
+
     elif self.data['type'] == MapReduceDocumentAction.TYPE:
       notebook = Notebook(document=Document2.objects.get_by_uuid(user=self.user, uuid=self.data['properties']['uuid']))
       action = notebook.get_data()['snippets'][0]
@@ -919,6 +946,7 @@ class Node():
       self.data['properties']['archives'] = []
 
 
+
     data = {
       'node': self.data,
       'mapping': mapping,
@@ -926,7 +954,32 @@ class Node():
       'workflow_mapping': workflow_mapping
     }
 
-    if mapping.get('send_email'):
+    if mapping.get('auto-cluster'):
+      pass
+#       if self.data['type'] == StartNode.TYPE:
+#         self.data['altus_action'] = {
+#           'properties': {
+#             'credentials': {},
+#             'retry_max': {},
+#             'retry_interval': {},
+#             'prepares': {},
+#             'job_xml': {},
+#             'job_properties': {},
+#             'shell_command': '',
+#             'arguments': [],
+#             'env_var': [],
+#             'files': [],
+#             'archives': [],
+#             'capture_output': True
+#             #       <ok to="${ node_mapping[node['children'][0]['to']].name }"/>
+# 
+#             #  Node(dict(AltusAction().get_fields()))
+#           }
+#         }
+#         self.data['properties']['auto-cluster'] = mapping['auto-cluster']
+#       if self.data['type'] == EndNode.TYPE or self.data['type'] == KillAction.TYPE:
+#         self.data['properties']['auto-cluster'] = mapping['auto-cluster']
+    elif mapping.get('send_email'):
       if self.data['type'] == KillAction.TYPE and not self.data['properties'].get('enableMail'):
         self.data['properties']['enableMail'] = True
         self.data['properties']['to'] = self.user.email
@@ -942,7 +995,7 @@ class Node():
         if self.data['type'] == EndNode.TYPE:
           self.data['properties']['body'] = 'View result file at %(send_result_browse_url)s' % mapping
 
-    return django_mako.render_to_string(self.get_template_name(), data)
+    return django_mako.render_to_string(self.get_template_name(mapping), data)
 
   @property
   def id(self):
@@ -985,7 +1038,10 @@ class Node():
     # Backward compatibility
     _upgrade_older_node(self.data)
 
-  def get_template_name(self):
+  def get_template_name(self, mapping=None):
+    if mapping is None:
+      mapping = {}
+
     node_type = self.data['type']
     if self.data['type'] == JavaDocumentAction.TYPE:
       node_type = JavaAction.TYPE
@@ -993,6 +1049,10 @@ class Node():
       node_type = ShellAction.TYPE
     elif self.data['type'] == AltusAction.TYPE:
       node_type = ShellAction.TYPE
+    elif mapping.get('cluster') and 'document' in node_type: # Workflow
+      node_type = ShellAction.TYPE
+    elif mapping.get('auto-cluster') and 'document' in node_type: # Scheduled workflow
+      node_type = ShellAction.TYPE
 
     return 'editor2/gen/workflow-%s.xml.mako' % node_type
 
@@ -4064,6 +4124,9 @@ class WorkflowBuilder():
 
     node['properties']['uuid'] = document.uuid
 
+    notebook = Notebook(document=document)
+    node['properties']['capture_output'] = notebook.get_data()['snippets'][0]['properties']['capture_output']
+
     return node
 
   def get_shell_snippet_node(self, snippet):
@@ -4074,6 +4137,7 @@ class WorkflowBuilder():
     node['properties']['archives'] = snippet['properties'].get('archives')
     node['properties']['files'] = snippet['properties'].get('files')
     node['properties']['env_var'] = snippet['properties'].get('env_var')
+    node['properties']['capture_output'] = snippet['properties'].get('capture_output')
 
     return node
 

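The "auto ship the script" branch above adds the shell script to files only when the command looks like a path and is not already shipped, then rewrites the command to its basename. A self-contained sketch of that check, with posixpath.basename standing in for Hue's Hdfs.basename:

    import posixpath

    def auto_ship_script(properties):
        # Mirrors the Node logic above: ship a path-like shell_command with
        # the action and invoke it by basename so the executable travels along.
        shell_command = properties['shell_command']
        already_shipped = any(shell_command in f['value'] for f in properties['files'])
        if '/' in shell_command and not already_shipped:
            properties['files'].append({'value': shell_command})
            properties['shell_command'] = posixpath.basename(shell_command)

    props = {'shell_command': '/user/hue/scripts/run.sh', 'files': []}
    auto_ship_script(props)
    # props['shell_command'] -> 'run.sh'
    # props['files'] -> [{'value': '/user/hue/scripts/run.sh'}]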
+ 5 - 3
apps/oozie/src/oozie/templates/editor2/common_workflow.mako

@@ -643,7 +643,9 @@
         <i class="fa fa-spinner fa-spin muted"></i>
       <!-- /ko -->
       <!-- ko with: associatedDocument -->
-        <a data-bind="hueLink: absoluteUrl"><span data-bind='text: name'></span></a>
+        <a data-bind="documentContextPopover: { uuid: absoluteUrl.split('=')[1], orientation: 'right', offset: { top: 5 } }" href="javascript: void(0);" title="${ _('Preview document') }">
+          <span data-bind="text: name"></span> <i class="fa fa-info"></i>
+        </a>
         <br/>
         <span data-bind='text: description' class="muted"></span>
       <!-- /ko -->
@@ -655,7 +657,7 @@
           <select placeholder="${ _('Search your documents...') }" data-bind="documentChooser: { loading: associatedDocumentLoading, value: associatedDocumentUuid, document: associatedDocument, type: type }"></select>
         </div>
         <!-- ko if: associatedDocument -->
-          <a class="pointer" data-bind="hueLink: associatedDocument().absoluteUrl" title="${ _('Open') }">
+          <a data-bind="documentContextPopover: { uuid: associatedDocument().absoluteUrl.split('=')[1], orientation: 'right', offset: { top: 5 } }" href="javascript: void(0);" title="${ _('Preview document') }">
             <i class="fa fa-external-link-square"></i>
           </a>
           <div class="clearfix"></div>
@@ -1707,7 +1709,7 @@
   <div class="row-fluid" data-bind="with: $root.workflow.getNodeById(id())" style="padding: 10px">
     <div data-bind="visible: $root.isEditing">
       <div data-bind="visible: ! $parent.ooziePropertiesExpanded()" class="nowrap">
-        <input type="text" data-bind="value: properties.shell_command" validate="nonempty"/>
+        <input type="text" data-bind="value: properties.shell_command" validate="nospace"/>
         <span data-bind='template: { name: "common-fs-link", data: {path: properties.shell_command(), with_label: false} }'></span>
 
         <div class="row-fluid">

+ 39 - 0
apps/oozie/src/oozie/templates/editor2/gen/workflow-start.xml.mako

@@ -15,4 +15,43 @@
 ## See the License for the specific language governing permissions and
 ## limitations under the License.
 
+<%namespace name="common" file="workflow-common.xml.mako" />
+
+
+%if node['properties'].get('auto-cluster'):
+  <start to="${ node_mapping[node['children'][0]['to']].name }-start"/>
+
+  <action name="${ node['name'] }-start"${ common.credentials(node['altus_action']['properties']['credentials']) }${ common.retry_max(node['altus_action']['properties']['retry_max']) }${ common.retry_interval(node['altus_action']['properties']['retry_interval']) }>
+      <shell xmlns="uri:oozie:shell-action:0.1">
+          <job-tracker>${'${'}jobTracker}</job-tracker>
+          <name-node>${'${'}nameNode}</name-node>
+
+          ${ common.prepares(node['altus_action']['properties']['prepares']) }
+          % if node['altus_action']['properties']['job_xml']:
+            <job-xml>${ node['altus_action']['properties']['job_xml'] }</job-xml>
+          % endif
+          ${ common.configuration(node['altus_action']['properties']['job_properties']) }
+
+          <exec>${ node['altus_action']['properties']['shell_command'] }</exec>
+
+          % for param in node['altus_action']['properties']['arguments']:
+            <argument>${ param['value'] }</argument>
+          % endfor
+          
+          % for param in node['altus_action']['properties']['env_var']:
+            <env-var>${ param['value'] }</env-var>
+          % endfor            
+
+          ${ common.distributed_cache(node['altus_action']['properties']['files'], node['altus_action']['properties']['archives']) }
+
+          % if node['altus_action']['properties']['capture_output']:
+            <capture-output/>
+          % endif
+      </shell>
+      <ok to="${ node_mapping[node['children'][0]['to']].name }"/>
+      ##<error to="${ node_mapping[node['children'][1]['error']].name }"/>
+      ${ common.sla(node) }
+  </action>
+%else:
     <start to="${ node_mapping[node['children'][0]['to']].name }"/>
+%endif

+ 5 - 1
apps/oozie/src/oozie/templates/editor2/workflow_editor.mako

@@ -830,7 +830,11 @@ ${ utils.submit_popup_event() }
   function validateFields() {
     var _hasErrors = false;
     $("[validate]").each(function () {
-      if ($(this).attr("validate") == "nonempty" && $.trim($(this).val()) == "") {
+      if ($(this).attr("validate") == "nospace" && ($(this).val().indexOf(' ') >= 0 || $.trim($(this).val()) == "")) {
+        $(this).addClass("with-errors");
+        _hasErrors = true;
+      }
+      else if ($(this).attr("validate") == "nonempty" && $.trim($(this).val()) == "") {
         $(this).addClass("with-errors");
         _hasErrors = true;
       }

+ 56 - 6
apps/oozie/src/oozie/views/editor2.py

@@ -23,7 +23,7 @@ from django.forms.formsets import formset_factory
 from django.shortcuts import redirect
 from django.utils.translation import ugettext as _
 
-from desktop.conf import USE_NEW_EDITOR
+from desktop.conf import USE_NEW_EDITOR, IS_MULTICLUSTER_ONLY, has_multi_cluster
 from desktop.lib import django_mako
 from desktop.lib.django_util import JsonResponse, render
 from desktop.lib.exceptions_renderable import PopupException
@@ -35,6 +35,7 @@ from desktop.models import Document, Document2
 from liboozie.credentials import Credentials
 from liboozie.oozie_api import get_oozie
 from liboozie.submission2 import Submission
+from metadata.conf import DEFAULT_PUBLIC_KEY
 from notebook.connectors.base import Notebook
 
 from oozie.decorators import check_document_access_permission, check_document_modify_permission,\
@@ -405,11 +406,8 @@ def _submit_workflow_helper(request, workflow, submit_action):
       if '/submit_single_action/' in submit_action:
         mapping['submit_single_action'] = True
 
-      if cluster.get('type') == 'altus-de':
-        notebook = {}
-        snippet = {'statement': 'SELECT 1'}
-        handle = DataEngApi(user=request.user, request=request, cluster_name=cluster.get('name')).execute(notebook, snippet)
-        return JsonResponse({'status': 0, 'job_id': handle.get('id'), 'type': 'workflow'}, safe=False)
+      if 'altus' in cluster.get('type', ''):
+        mapping['cluster'] = cluster.get('id')
 
       try:
         job_id = _submit_workflow(request.user, request.fs, request.jt, workflow, mapping)
@@ -724,6 +722,58 @@ def submit_coordinator(request, doc_id):
 def _submit_coordinator(request, coordinator, mapping):
   try:
     wf = coordinator.workflow
+    if IS_MULTICLUSTER_ONLY.get() and has_multi_cluster():
+      mapping['auto-cluster'] = {
+        u'additionalClusterResourceTags': [],
+        u'automaticTerminationCondition': u'EMPTY_JOB_QUEUE', # or u'NONE'
+        u'cdhVersion': u'CDH514',
+        u'clouderaManagerPassword': u'guest',
+        u'clouderaManagerUsername': u'guest',
+        u'clusterName': u'analytics4', # Add time variable
+        u'computeWorkersConfiguration': {
+          u'bidUSDPerHr': 0,
+          u'groupSize': 0,
+          u'useSpot': False
+        },
+        u'environmentName': u'crn:altus:environments:us-west-1:12a0079b-1591-4ca0-b721-a446bda74e67:environment:analytics/236ebdda-18bd-428a-9d2b-cd6973d42946',
+        u'instanceBootstrapScript': u'',
+        u'instanceType': u'm4.xlarge',
+        u'jobSubmissionGroupName': u'',
+        u'jobs': [{
+            u'failureAction': u'INTERRUPT_JOB_QUEUE',
+            u'name': u'a87e20d7-5c0d-49ee-ab37-625fa2803d51',
+            u'sparkJob': {
+              u'applicationArguments': ['5'],
+              u'jars': [u's3a://datawarehouse-customer360/ETL/spark-examples.jar'],
+              u'mainClass': u'org.apache.spark.examples.SparkPi'
+            }
+          },
+  #         {
+  #           u'failureAction': u'INTERRUPT_JOB_QUEUE',
+  #           u'name': u'a87e20d7-5c0d-49ee-ab37-625fa2803d51',
+  #           u'sparkJob': {
+  #             u'applicationArguments': ['10'],
+  #             u'jars': [u's3a://datawarehouse-customer360/ETL/spark-examples.jar'],
+  #             u'mainClass': u'org.apache.spark.examples.SparkPi'
+  #           }
+  #         },
+  #         {
+  #           u'failureAction': u'INTERRUPT_JOB_QUEUE',
+  #           u'name': u'a87e20d7-5c0d-49ee-ab37-625fa2803d51',
+  #           u'sparkJob': {
+  #             u'applicationArguments': [u'filesystems3.conf'],
+  #             u'jars': [u's3a://datawarehouse-customer360/ETL/envelope-0.6.0-SNAPSHOT-c6.jar'],
+  #             u'mainClass': u'com.cloudera.labs.envelope.EnvelopeMain',
+  #             u'sparkArguments': u'--archives=s3a://datawarehouse-customer360/ETL/filesystems3.conf'
+  #           }
+  #         }
+        ],
+        u'namespaceName': u'crn:altus:sdx:us-west-1:12a0079b-1591-4ca0-b721-a446bda74e67:namespace:analytics/7ea35fe5-dbc9-4b17-92b1-97a1ab32e410',
+        u'publicKey': DEFAULT_PUBLIC_KEY.get(),
+        u'serviceType': u'SPARK',
+        u'workersConfiguration': {},
+        u'workersGroupSize': u'3'
+      }
     wf_dir = Submission(request.user, wf, request.fs, request.jt, mapping, local_tz=coordinator.data['properties']['timezone']).deploy()
 
     properties = {'wf_application_path': request.fs.get_hdfs_path(wf_dir)}

+ 3 - 3
apps/pig/src/pig/management/commands/pig_setup.py

@@ -129,9 +129,9 @@ STORE upper_case INTO '$output';
     else:
       # Install old pig script fixture
       LOG.info("Using Hue 3, will install pig script fixture.")
-
-      management.call_command('loaddata', 'initial_pig_examples.json', verbosity=2)
-      Document.objects.sync()
+      with transaction.atomic():
+        management.call_command('loaddata', 'initial_pig_examples.json', verbosity=2, commit=False)
+        Document.objects.sync()
 
     if USE_NEW_EDITOR.get():
       # Get or create sample user directories

+ 4 - 2
apps/search/src/search/management/commands/search_setup.py

@@ -19,6 +19,7 @@ import logging
 
 from django.core import management
 from django.core.management.base import BaseCommand
+from django.db import transaction
 
 from desktop.models import Directory, Document, Document2, Document2Permission, SAMPLE_USER_OWNERS
 from useradmin.models import get_default_user_group, install_sample_user
@@ -41,8 +42,9 @@ class Command(BaseCommand):
     )
 
     if not Document2.objects.filter(type='search-dashboard', owner__username__in=SAMPLE_USER_OWNERS).exists():
-      management.call_command('loaddata', 'initial_search_examples.json', verbosity=2)
-      Document.objects.sync()
+      with transaction.atomic():
+        management.call_command('loaddata', 'initial_search_examples.json', verbosity=2, commit=False)
+        Document.objects.sync()
 
       Document2.objects.filter(type='search-dashboard', owner__username__in=SAMPLE_USER_OWNERS).update(parent_directory=examples_dir)
     else:
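
The pig_setup.py and search_setup.py hunks above share the same pattern: 'loaddata' runs with commit=False inside transaction.atomic() so the fixture rows and the Document.objects.sync() bookkeeping commit or roll back together. A condensed sketch of the pattern (the fixture name is illustrative):

    from django.core import management
    from django.db import transaction

    from desktop.models import Document

    def install_examples(fixture='initial_examples.json'):
        # Both steps commit together; a failure in sync() rolls back the fixture.
        with transaction.atomic():
            management.call_command('loaddata', fixture, verbosity=2, commit=False)
            Document.objects.sync()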

+ 1 - 1
apps/security/src/security/static/security/js/hive.ko.js

@@ -829,7 +829,7 @@ var HiveViewModel = (function () {
     self.availableActions = function(scope) {
       var actions = ['SELECT', 'INSERT', 'ALL'];
       var databaseActions = ['CREATE'];
-      var tableActions = ['REFRESH', 'ALTER', 'DROP'];
+      var tableActions = ['REFRESH']; //, 'ALTER', 'DROP'];
       switch (scope) {
         case 'SERVER':
         case 'DATABASE':

+ 1 - 1
apps/security/src/security/static/security/js/sentry.ko.js

@@ -874,7 +874,7 @@ var SentryViewModel = (function () {
       self.availableActions = function (authorizables) {
         var actions = ['SELECT', 'INSERT', 'ALL'];
         var databaseActions = ['CREATE'];
-        var tableActions = ['REFRESH', 'ALTER', 'DROP'];
+        var tableActions = ['REFRESH']; // 'ALTER', 'DROP'
         if (authorizables.length < 2) { // server and database
           actions = actions.concat(databaseActions).concat(tableActions);
         }

+ 3 - 1
apps/security/src/security/templates/hdfs.mako

@@ -384,7 +384,9 @@ ${ tree.import_templates(itemClick='$root.assist.setPath', iconClick='$root.assi
 
       huePubSub.subscribe('app.gained.focus', function (app) {
         if (app === 'security_hdfs') {
-          window.location.hash = viewModel.lastHash;
+          window.setTimeout(function () {
+            window.location.hash = viewModel.lastHash;
+          }, 0);
         }
       }, 'security_hdfs');
 

+ 4 - 2
apps/security/src/security/templates/hive.mako

@@ -779,8 +779,10 @@ ${ tree.import_templates(itemClick='$root.assist.setPath', iconClick='$root.assi
 
       huePubSub.subscribe('app.gained.focus', function (app) {
         if (app === 'security_hive') {
-          window.location.hash = viewModel.lastHash;
-          showMainSection(viewModel.getSectionHash());
+          window.setTimeout(function () {
+            window.location.hash = viewModel.lastHash;
+            showMainSection(viewModel.getSectionHash());
+          }, 0);
         }
       }, 'security_hive');
     });

+ 1 - 1
apps/useradmin/src/useradmin/forms.py

@@ -152,7 +152,7 @@ class UserChangeForm(django.contrib.auth.forms.UserChangeForm):
       User._default_manager.get(username=username)
     except User.DoesNotExist:
       return username
-    raise forms.ValidationError(self.GENERIC_VALIDATION_ERROR, code='duplicate_username')
+    raise forms.ValidationError(_("Username already exists."), code='duplicate_username')
 
   def clean_password(self):
     return self.cleaned_data["password"]

+ 5 - 1
apps/useradmin/src/useradmin/middleware.py

@@ -91,7 +91,11 @@ class LastActivityMiddleware(object):
       logout = True
 
     # Save last activity for user except when polling
-    if not (request.path.strip('/') == 'jobbrowser/api/jobs') and not (request.path.strip('/') == 'jobbrowser/jobs' and request.POST.get('format') == 'json') and not (request.path == '/desktop/debug/is_idle'):
+    if not (request.path.strip('/') == 'notebook/api/check_status') \
+        and not (request.path.strip('/').startswith('jobbrowser/api/job')) \
+        and not (request.path.strip('/') == 'jobbrowser/jobs' and request.POST.get('format') == 'json') \
+        and not (request.path.strip('/') == 'desktop/debug/is_idle') \
+        and not (request.path.strip('/').startswith('oozie/list_oozie_')):
       try:
         profile.last_activity = datetime.now()
         profile.save()
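
The rewritten condition above exempts several polling endpoints from last-activity tracking; as the list grows, a small helper keeps the check readable. A hypothetical refactor (the names below are illustrative, not from this code):

    # Paths the UI polls in the background; they should not count as activity.
    POLLING_PATHS = ('notebook/api/check_status', 'desktop/debug/is_idle')
    POLLING_PREFIXES = ('jobbrowser/api/job', 'oozie/list_oozie_')

    def _is_polling_request(request):
        path = request.path.strip('/')
        if path in POLLING_PATHS or path.startswith(POLLING_PREFIXES):
            return True
        return path == 'jobbrowser/jobs' and request.POST.get('format') == 'json'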

+ 1 - 1
apps/useradmin/src/useradmin/old_migrations/0001_permissions_and_profiles.py

@@ -32,7 +32,7 @@ class Migration(DataMigration):
             # LDAP == 1
             # HUE == 0
             if up.creation_method == '1':
-              up.creation_method = UserProfile.CreationMethod.EXTERNAL
+              up.creation_method = UserProfile.CreationMethod.EXTERNAL.name
             elif up.creation_method == '0':
               up.creation_method = UserProfile.CreationMethod.HUE
             up.save()

+ 1 - 1
apps/useradmin/src/useradmin/tests.py

@@ -656,7 +656,7 @@ class TestUserAdmin(BaseUserAdminTests):
 
       # Create a new regular user (duplicate name)
       response = c.post('/useradmin/users/new', dict(username="test", password1="test", password2="test"))
-      assert_equal({ 'username': [UserChangeForm.GENERIC_VALIDATION_ERROR]}, response.context[0]["form"].errors)
+      assert_equal({ 'username': ['Username already exists.']}, response.context[0]["form"].errors)
 
       # Create a new regular user (for real)
       response = c.post('/useradmin/users/new', dict(username=FUNNY_NAME,

+ 1 - 0
desktop/Makefile

@@ -44,6 +44,7 @@ APPS := core \
 	libs/azure \
 	libs/hadoop \
 	libs/indexer \
+	libs/libanalyze \
 	libs/liboauth \
 	libs/liboozie \
 	libs/libopenid \

+ 24 - 11
desktop/conf.dist/hue.ini

@@ -109,7 +109,7 @@
   # Hue will try to get the actual host of the Service, even if it resides behind a load balancer.
   # This will enable an automatic configuration of the service without requiring custom configuration of the service load balancer.
  # This is currently available only for the Impala service. It is highly recommended to point only to a series of coordinator-only nodes.
-  # default=false
+  # enable_smart_thrift_pool=false
 
   # Filename of SSL Certificate
   ## ssl_certificate=
@@ -795,7 +795,7 @@
   # show_notebooks=true
 
  ## Flag to enable selecting queries from files or saved queries into the editor, or inserting them as a snippet.
-  # enable_external_statements=true
+  # enable_external_statements=false
 
   ## Flag to enable the bulk submission of queries as a background task through Oozie.
   # enable_batch_execute=true
@@ -1263,10 +1263,10 @@
   ## archive_upload_tempdir=/tmp
 
   # Show Download Button for HDFS file browser.
-  ## show_download_button=false
+  ## show_download_button=true
 
   # Show Upload Button for HDFS file browser.
-  ## show_upload_button=false
+  ## show_upload_button=true
 
  # Flag to enable the extraction of an uploaded archive in HDFS.
   ## enable_extract_uploaded_archive=true
@@ -1334,9 +1334,12 @@
   # Hard limit of rows or columns per row fetched before truncating.
   ## truncate_limit = 500
 
-  # 'framed' is used to chunk up responses, which is useful when used in conjunction with the nonblocking server in Thrift.
-  # 'buffered' used to be the default of the HBase Thrift Server.
-  ## thrift_transport=framed
+  # Should come from hbase-site.xml; do not set. 'framed' chunks up responses and is used with the nonblocking Thrift server, but it is not supported in Hue.
+  # 'buffered' used to be the default of the HBase Thrift Server. The default is 'buffered' when not set in hbase-site.xml.
+  ## thrift_transport=buffered
 
   # Choose whether Hue should validate certificates received from the server.
   ## ssl_cert_ca_verify=true
@@ -1377,12 +1380,18 @@
 
 [indexer]
 
-  # Flag to turn on the Morphline Solr indexer.
-  ## enable_scalable_indexer=true
-
-  # Oozie workspace template for indexing.
+  # Filesystem directory containing Solr Morphline indexing libs.
   ## config_indexer_libs_path=/tmp/smart_indexer_lib
 
+  # Filesystem directory containing JDBC libs.
+  ## config_jdbc_libs_path=/user/oozie/libext/jdbc_drivers
+
+  # Filesystem directory containing jar libs.
+  ## config_jars_libs_path=/user/oozie/libext/libs
+
+  # Flag to turn on the Solr Morphline indexer.
+  ## enable_scalable_indexer=true
+
   # Flag to turn on Sqoop ingest.
   ## enable_sqoop=true
 
@@ -1892,3 +1901,7 @@
 
     # If metadata search is enabled, also show the search box in the left assist.
     ## enable_file_search=false
+
+  [[prometheus]]
+    # Configuration options for Prometheus API.
+    ## api_url=http://localhost:9090/api

+ 24 - 11
desktop/conf/pseudo-distributed.ini.tmpl

@@ -113,7 +113,7 @@
   # Hue will try to get the actual host of the Service, even if it resides behind a load balancer.
   # This will enable an automatic configuration of the service without requiring custom configuration of the service load balancer.
  # This is currently available only for the Impala service. It is highly recommended to point only to a series of coordinator-only nodes.
-  # default=false
+  # enable_smart_thrift_pool=false
 
   # Filename of SSL Certificate
   ## ssl_certificate=
@@ -797,7 +797,7 @@
   # show_notebooks=true
 
  ## Flag to enable selecting queries from files or saved queries into the editor, or inserting them as a snippet.
-  # enable_external_statements=true
+  # enable_external_statements=false
 
   ## Flag to enable the bulk submission of queries as a background task through Oozie.
   # enable_batch_execute=false
@@ -1265,10 +1265,10 @@
   ## archive_upload_tempdir=/tmp
 
   # Show Download Button for HDFS file browser.
-  ## show_download_button=false
+  ## show_download_button=true
 
   # Show Upload Button for HDFS file browser.
-  ## show_upload_button=false
+  ## show_upload_button=true
 
  # Flag to enable the extraction of an uploaded archive in HDFS.
   ## enable_extract_uploaded_archive=true
@@ -1336,9 +1336,12 @@
   # Hard limit of rows or columns per row fetched before truncating.
   ## truncate_limit = 500
 
-  # 'framed' is used to chunk up responses, which is useful when used in conjunction with the nonblocking server in Thrift.
-  # 'buffered' used to be the default of the HBase Thrift Server.
-  ## thrift_transport=framed
+  # Should come from hbase-site.xml; do not set. 'framed' chunks up responses and is used with the nonblocking Thrift server, but it is not supported in Hue.
+  # 'buffered' used to be the default of the HBase Thrift Server. The default is 'buffered' when not set in hbase-site.xml.
+  ## thrift_transport=buffered
 
   # Choose whether Hue should validate certificates received from the server.
   ## ssl_cert_ca_verify=true
@@ -1379,12 +1382,18 @@
 
 [indexer]
 
-  # Flag to turn on the Morphline Solr indexer.
-  ## enable_scalable_indexer=true
-
-  # Oozie workspace template for indexing.
+  # Filesystem directory containing Solr Morphline indexing libs.
   ## config_indexer_libs_path=/tmp/smart_indexer_lib
 
+  # Filesystem directory containing JDBC libs.
+  ## config_jdbc_libs_path=/user/oozie/libext/jdbc_drivers
+
+  # Filesystem directory containing jar libs.
+  ## config_jars_libs_path=/user/oozie/libext/libs
+
+  # Flag to turn on the Solr Morphline indexer.
+  ## enable_scalable_indexer=true
+
   # Flag to turn on Sqoop ingest.
   ## enable_sqoop=true
 
@@ -1896,3 +1905,7 @@
 
     # If metadata search is enabled, also show the search box in the left assist.
     ## enable_file_search=false
+
+  [[prometheus]]
+    # Configuration options for Prometheus API.
+    ## api_url=http://localhost:9090/api

+ 1 - 1
desktop/core/ext-py/djangosaml2-0.16.11/djangosaml2/acs_failures.py

@@ -11,7 +11,7 @@ from django.shortcuts import render
 
 def template_failure(request, status=403, **kwargs):
     """ Renders a SAML-specific template with general authentication error description. """
-    return render(request, 'djangosaml2/login_error.html', status=status)
+    return render(request, 'djangosaml2/login_error.html', status=status, using='django')
 
 
 def exception_failure(request, exc_class=PermissionDenied, **kwargs):

+ 6 - 6
desktop/core/ext-py/djangosaml2-0.16.11/djangosaml2/views.py

@@ -133,7 +133,7 @@ def login(request,
             logger.debug('User is already logged in')
             return render(request, authorization_error_template, {
                     'came_from': came_from,
-                    })
+                    }, using='django')
 
     selected_idp = request.GET.get('idp', None)
     conf = get_config(config_loader_path, request)
@@ -145,7 +145,7 @@ def login(request,
         return render(request, wayf_template, {
                 'available_idps': idps.items(),
                 'came_from': came_from,
-                })
+                }, using='django')
 
     # choose a binding to try first
     sign_requests = getattr(conf, '_sp_authn_requests_signed', False)
@@ -213,7 +213,7 @@ def login(request,
                         'SAMLRequest': saml_request,
                         'RelayState': came_from,
                         },
-                    })
+                    }, using='django')
             except TemplateDoesNotExist:
                 pass
 
@@ -349,7 +349,7 @@ def echo_attributes(request,
     except AttributeError:
         return HttpResponse("No active SAML identity found. Are you sure you have logged in via SAML?")
 
-    return render(request, template, {'attributes': identity[0]})
+    return render(request, template, {'attributes': identity[0]}, using='django')
 
 
 @login_required
@@ -443,7 +443,7 @@ def do_logout_service(request, data, binding, config_loader_path=None, next_page
                 'The session does not contain the subject id for user %s. Performing local logout',
                 request.user)
             auth.logout(request)
-            return render(request, logout_error_template, status=403)
+            return render(request, logout_error_template, status=403, using='django')
         else:
             http_info = client.handle_logout_request(
                 data['SAMLRequest'],
@@ -467,7 +467,7 @@ def finish_logout(request, response, next_page=None):
         return django_logout(request, next_page=next_page)
     else:
         logger.error('Unknown error during the logout')
-        return render(request, "djangosaml2/logout_error.html", {})
+        return render(request, "djangosaml2/logout_error.html", {}, using='django')
 
 
 def metadata(request, config_loader_path=None, valid_for=None):
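
Each render() call in this file now passes using='django', which pins rendering to the Django template backend instead of letting Django try every configured engine; that matters in a project that also registers a non-Django backend (Hue renders most of its own pages with Mako). An illustrative settings sketch, not taken from this commit:

    # render(..., using='django') selects the backend whose NAME is 'django'.
    TEMPLATES = [
        {
            'NAME': 'django',
            'BACKEND': 'django.template.backends.django.DjangoTemplates',
            'DIRS': [],
            'APP_DIRS': True,
        },
        # A second, non-Django backend would be listed here.
    ]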

+ 1487 - 0
desktop/core/ext-py/dnspython-1.15.0/ChangeLog

@@ -0,0 +1,1487 @@
+2016-09-29  Bob Halley  <halley@dnspython.org>
+
+	* IDNA 2008 support is now available if the "idna" module has been
+	  installed and IDNA 2008 is requested.  The default IDNA behavior
+	  is still IDNA 2003.  The new IDNA codec mechanism is currently
+	  only useful for direct calls to dns.name.from_text() or
+	  dns.name.from_unicode(), but in future releases it will be
+	  deployed throughout dnspython, e.g. so that you can read a
+	  masterfile with an IDNA 2008 codec in force.
+
+	* By default, dns.name.to_unicode() is not strict about which
+	  version of IDNA the input complies with.  Strictness can be
+	  requested by using one of the strict IDNA codecs.
+
+	* Add AVC RR support.
+
+	* Some problems with newlines in various output modes have been
+	  addressed.
+
+	* dns.name.to_text() now returns text and not bytes on Python 3.x
+
+	* More miscellaneous fixes for the Python 2/3 codeline merge.
+
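A short usage sketch of the codec selection described above (assumes dnspython 1.15 with the optional third-party "idna" package installed; the name is an example):

    import dns.name

    # Default behavior is still IDNA 2003, which maps u'straße' to 'strasse'.
    n2003 = dns.name.from_unicode(u'straße.example')

    # Request IDNA 2008, which encodes the label as 'xn--strae-oqa' instead.
    n2008 = dns.name.from_unicode(u'straße.example',
                                  idna_codec=dns.name.IDNA_2008)
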
+2016-05-27  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.14.0 released)
+
+	* Add CSYNC RR support
+
+	* Fix bug in LOC which destroyed N/S and E/W distinctions within
+	  a degree of the equator or prime meridian respectively.
+
+	* Misc. fixes to deal with fallout from the Python 2 & 3 merge.
+	  [issue #156], [issue #157], [issue #158], [issue #159],
+	  [issue #160].
+
+	* Running with Python optimization enabled caused issues when
+	  stripped docstrings were referenced. [issue #154]
+
+	* dns.zone.from_text() erroneously required the zone to be provided.
+	  [issue #153]
+
+2016-05-13  Bob Halley  <halley@dnspython.org>
+
+	* dns/message.py (make_query): Setting any value which implies
+	  EDNS will turn on EDNS if 'use_edns' has not been specified.
+
+2016-05-12  Bob Halley  <halley@dnspython.org>
+
+	* TSIG signature algorithm setting was broken by the Python 2
+	  and Python 3 code line merge.  Fixed.
+
+2016-05-10  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.13.0 released)
+
+2016-05-10  Bob Halley  <halley@dnspython.org>
+
+	* Dropped support for Python 2.4 and 2.5.
+
+	* Zone origin can be specified as a string.
+
+	* Support string representation for all DNSExceptions.
+
+	* Use setuptools, not distutils.
+
+	* A number of Unicode name bug fixes.
+
+	* Added support for CAA, CDS, CDNSKEY, EUI48, EUI64, and URI RR
+	  types.
+
+	* Names now support the pickle protocol.
+
+	* NameDicts now keep the max-depth value correct, and update
+	  properly.
+
+	* resolv.conf processing rejects lines with too few tokens.
+
+	* Ports can be specified per-nameserver in the stub resolver.
+
+2016-05-03  Arthur Gautier
+
+	* Single source support for Python 2.6+ and 3.3+.
+
+2014-09-04  Bob Halley  <halley@dnspython.org>
+
+	* Comparing two rdata is now always done by comparing the binary
+	  data of the DNSSEC digestable forms.  This corrects a number of
+	  errors where dnspython's rdata comparison order was not the
+	  DNSSEC order.
+
+	* Add CAA implementation.  Thanks to Brian Wellington for the
+	  patch.
+
+2014-09-01  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.12.0 released)
+
+2014-08-31  Bob Halley  <halley@dnspython.org>
+
+	* The test system can now run the tests without requiring dnspython
+	  to be installed.
+
+2014-07-24  Bob Halley  <halley@dnspython.org>
+
+	* The 64-bit version of Python on Windows has sys.maxint set to
+	  2^31-1, yet passes 2^63-1 as the "unspecified bound" value in
+	  slices.  This is a bug in Python as the documentation says the
+	  unspecified bound value should be sys.maxint.  We now cope with
+	  this.  Thanks to Matthäus Wander for reporting the problem.
+
+2014-06-21  Bob Halley  <halley@dnspython.org>
+
+	* When reading from a masterfile, if the first content line
+	  started with leading whitespace, we raised an ugly exception
+	  instead of doing the right thing, namely using the zone origin as
+	  the name. [#73]  Thanks to Tassatux for reporting the issue.
+
+	* Added dns.zone.to_text() convenience method.  Thanks to Brandon
+	  Whaley <redkrieg@gmail.com> for the patch.
+
+	* The /etc/resolv.conf setting "options rotate" is now understood
+	  by the resolver.  If present, the resolver will shuffle the
+	  nameserver list each time dns.resolver.query() is called.  Thanks
+	  to underrun for the patch.  Note that you don't want to add
+	  "options rotate" to your /etc/resolv.conf if your system's
+	  resolver library does not understand it.  In this case, just set
+	  resolver.rotate = True by hand.
+
+2014-06-19  Bob Halley  <halley@dnspython.org>
+
+	* Escaping of Unicode has been corrected.  Previously we escaped
+	  and then converted to Unicode, but the right thing to do is
+	  convert to Unicode, then escape.  Also, characters > 0x7f should
+	  NOT be escaped in Unicode mode.  Thanks to Martin Basti for the
+	  patch.
+
+	* dns.rdtypes.ANY.DNSKEY now has helper functions to convert
+	  between the numeric form of the flags and a set of human-friendly
+	  strings.  Thanks to Petr Spacek for the patch.
+
+	* RRSIGs did not respect relativization settings in to_text().
+	  Thanks to Brian Smith for reporting the bug and submitting a
+	  (slightly different) patch.
+
+2014-06-18  Bob Halley  <halley@dnspython.org>
+
+	* dns/rdtypes/IN/APL.py: The APL from_wire() method did not accept an
+	  rdata length of 0 as valid.  Thanks to salzmdan for reporting the
+	  problem.
+
+2014-05-31  Bob Halley  <halley@dnspython.org>
+
+	* dns/ipv6.py: Add is_mapped()
+
+	* dns/reversename.py: Lookup IPv6 mapped IPv4 addresses in the v4
+	  reverse namespace.  Thanks to Devin Bayer.  Yes, I finally fixed
+	  this one :)
+
+2014-04-11  Bob Halley  <halley@dnspython.org>
+
+	* dns/zone.py: Do not put back an unescaped token.  This was
+	  causing escape processing for domain names to break.  Thanks to
+	  connormclaud for reporting the problem.
+
+2014-04-04  Bob Halley  <halley@dnspython.org>
+
+	* dns/message.py: Making a response didn't work correctly if the
+	  query was signed with TSIG and we knew the key.  Thanks to Jeffrey
+	  Stiles for reporting the problem.
+
+2013-12-11  Bob Halley  <halley@dnspython.org>
+
+	* dns/query.py: Fix problems with the IXFR state machine which caused
+	  long diffs to fail.  Thanks to James Raftery for the fix and the
+	  repeated prodding to get it applied :)
+
+2013-09-02  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.11.1 released)
+
+2013-09-01  Bob Halley  <halley@dnspython.org>
+
+	* dns/tsigkeyring.py (to_text): we want keyname.to_text(), not
+	  dns.name.to_text(keyname).  Thanks to wangwang for the fix.
+
+2013-08-26  Bob Halley  <halley@dnspython.org>
+
+	* dns/tsig.py (sign): multi-message TSIGs were broken for
+	  algorithms other than HMAC-MD5 because we weren't passing the
+	  right digest module to the HMAC code.  Thanks to salzmdan for
+	  reporting the bug.
+
+2013-08-09  Bob Halley  <halley@dnspython.org>
+
+	* dns/dnssec.py (_find_candidate_keys): we tried to extract the
+	  key from the wrong variable name.  Thanks to Andrei Fokau for the
+	  fix.
+
+2013-07-08  Bob Halley  <halley@dnspython.org>
+
+	* dns/resolver.py: we want 'self.retry_servfail' not just
+	  retry_servfail.  Reported by many, thanks!  Thanks to
+	  Jeffrey C. Ollie for the fix.
+
+2013-07-08  Bob Halley  <halley@dnspython.org>
+
+	* tests/grange.py: fix tests to use older-style print formatting
+	  for backwards compatibility with python 2.4.  Thanks to
+	  Jeffrey C. Ollie for the fix.
+
+2013-07-01  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.11.0 released)
+
+2013-04-28  Bob Halley  <halley@dnspython.org>
+
+	* dns/name.py (Name.to_wire): Do not add items with offsets >= 2^14
+	  to the compression table.  Thanks to Casey Deccio for discovering
+	  this bug.
+
+2013-04-26  Bob Halley  <halley@dnspython.org>
+
+	* dns/ipv6.py (inet_ntoa): We now comply with RFC 5952 section
+	  5.2.2, by *not* using the :: syntax to shorten just one 16-bit
+	  field.  Thanks to David Waitzman for reporting the bug and
+	  suggesting the fix.
+
+2013-03-31  Bob Halley  <halley@dnspython.org>
+
+	* lock caches in case they are shared
+
+	* raise YXDOMAIN if we see one
+
+	* do not print empty rdatasets
+
+	* Add contributed $GENERATE support (thanks uberj)
+
+	* Remove DNSKEY keytag uniqueness assumption (RFC 4034, section 8)
+	  (thanks James Dempsey)
+
+2012-09-25  Sean Leach
+
+	* added set_flags() method to dns.resolver.Resolver
+
+2012-09-25  Pieter Lexis
+
+	* added support for TLSA RR
+
+2012-08-28  Bob Halley  <halley@dnspython.org>
+
+	* dns/rdtypes/ANY/NSEC3.py (NSEC3.from_text): The NSEC3 from_text()
+	  method could erroneously emit empty bitmap windows (i.e. windows
+	  with a count of 0 bytes); such bitmaps are illegal.
+
+2012-04-08  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.10.0 released)
+
+2012-04-08  Bob Halley  <halley@dnspython.org>
+
+	* dns/message.py (make_query): All EDNS values may now be
+	  specified when calling make_query()
+
+	* dns/query.py: Specifying source_port had no effect if source was
+	  not specified.  We now use the appropriate wildcard source in
+	  that case.
+
+	* dns/resolver.py (Resolver.query): source_port may now be
+	  specified.
+
+	* dns/resolver.py (Resolver.query): Switch to TCP when a UDP
+	  response is truncated.  Handle nameservers that serve on UDP
+	  but not TCP.
+
+2012-04-07  Bob Halley  <halley@dnspython.org>
+
+	* dns/zone.py (from_xfr): dns.zone.from_xfr() now takes a
+	  'check_origin' parameter which defaults to True.  If set to
+	  False, then dnspython will not make origin checks on the zone.
+	  Thanks to Carlos Perez for the report.
+
+	* dns/rdtypes/ANY/SSHFP.py (SSHFP.from_text): Allow whitespace in
+	  the text string.  Thanks to Jan Andres for the report and the
+	  patch.
+
+	* dns/message.py (from_wire): dns.message.from_wire() now takes
+	  an 'ignore_trailing' parameter which defaults to False.  If set
+	  to True, then trailing junk will be ignored instead of causing
+	  TrailingJunk to be raised.  Thanks to Shane Huntley for
+	  contributing the patch.
+
+2011-08-22  Bob Halley  <halley@dnspython.org>
+
+	* dns/resolver.py: Added LRUCache.  In this cache implementation,
+	  the cache size is limited to a user-specified number of nodes, and
+	  when adding a new node to a full cache the least-recently used
+	  node is removed.
+
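A usage sketch of the LRUCache described above (dnspython 1.x API; the size is an example):

    import dns.resolver

    resolver = dns.resolver.get_default_resolver()
    # Cap the cache at 1000 nodes; least-recently-used entries are evicted.
    resolver.cache = dns.resolver.LRUCache(max_size=1000)
    answer = resolver.query('dnspython.org', 'A')
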
+2011-07-13  Bob Halley  <halley@dnspython.org>
+
+	* dns/resolver.py: dns.resolver.override_system_resolver()
+	  overrides the socket module's versions of getaddrinfo(),
+	  getnameinfo(), getfqdn(), gethostbyname(), gethostbyname_ex() and
+	  gethostbyaddr() with an implementation which uses a dnspython stub
+	  resolver instead of the system's stub resolver.  This can be
+	  useful in testing situations where you want to control the
+	  resolution behavior of python code without having to change the
+	  system's resolver settings (e.g. /etc/resolv.conf).
+	  dns.resolver.restore_system_resolver() undoes the change.
+
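A usage sketch of the override described above (dnspython 1.x API; the nameserver address is a placeholder):

    import socket

    import dns.resolver

    r = dns.resolver.Resolver(configure=False)
    r.nameservers = ['192.0.2.53']  # placeholder test nameserver
    dns.resolver.override_system_resolver(r)
    try:
        socket.gethostbyname('www.dnspython.org')  # now uses the stub resolver
    finally:
        dns.resolver.restore_system_resolver()
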
+2011-07-08  Bob Halley  <halley@dnspython.org>
+
+	* dns/ipv4.py: dnspython now provides its own, stricter, versions
+	  of IPv4 inet_ntoa() and inet_aton() instead of using the OS's
+	  versions.
+
+	* dns/ipv6.py: inet_aton() now bounds checks embedded IPv4 addresses
+	  more strictly.  Also, now only dns.exception.SyntaxError can be
+	  raised on bad input.
+
+2011-04-05  Bob Halley  <halley@dnspython.org>
+
+	* Old DNSSEC types (KEY, NXT, and SIG) have been removed.
+
+	* Bounds checking of slices in rdata wire processing is now more
+	  strict, and bounds errors (e.g. we got less data than was
+	  expected) now raise dns.exception.FormError rather than
+	  IndexError.
+
+2011-03-28  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.9.4 released)
+
+2011-03-24  Bob Halley  <halley@dnspython.org>
+
+	* dns/rdata.py (Rdata._wire_cmp): We need to specify no
+	  compression and an origin to _wire_cmp() in case names in the
+	  rdata are relative names.
+
+	* dns/rdtypes/ANY/SIG.py (SIG._cmp): Add missing 'import struct'.
+	  Thanks to Arfrever Frehtes Taifersar Arahesis for reporting the
+	  problem.
+
+2011-03-24  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.9.3 released)
+
+2011-03-22  Bob Halley  <halley@dnspython.org>
+
+	* dns/resolver.py: a boolean parameter, 'raise_on_no_answer', has
+	  been added to the query() methods.  In no-error, no-data
+	  situations, this parameter determines whether NoAnswer should be
+	  raised or not.  If True, NoAnswer is raised.  If False, then an
+	  Answer() object with a None rrset will be returned.
+
+	* dns/resolver.py: Answer() objects now have a canonical_name field.
+
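A usage sketch of the 'raise_on_no_answer' parameter described above (dnspython 1.x API; the name is an example):

    import dns.resolver

    answer = dns.resolver.query('www.dnspython.org', 'MX',
                                raise_on_no_answer=False)
    if answer.rrset is None:
        print('the name exists but has no MX records')
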
+2011-01-11  Bob Halley  <halley@dnspython.org>
+
+	* Dnspython was erroneously doing case-insensitive comparisons
+	  of the names in NSEC and RRSIG RRs.  Thanks to Casey Deccio for
+	  reporting this bug.
+
+2010-12-17  Bob Halley  <halley@dnspython.org>
+
+	* dns/message.py (_WireReader._get_section): use "is" and not "=="
+	  when testing what section an RR is in.  Thanks to James Raftery
+	  for reporting this bug.
+
+2010-12-10  Bob Halley  <halley@dnspython.org>
+
+	* dns/resolver.py (Resolver.query): disallow metaqueries.
+
+	* dns/rdata.py (Rdata.__hash__): Added a __hash__ method for rdata.
+
+2010-11-23  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.9.2 released)
+
+2010-11-23  Bob Halley  <halley@dnspython.org>
+
+	* dns/dnssec.py (_need_pycrypto): DSA and RSA are modules, not
+	  functions, and I didn't notice because the test suite masked
+	  the bug!  *sigh*
+
+2010-11-22  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.9.1 released)
+
+2010-11-22  Bob Halley  <halley@dnspython.org>
+
+	* dns/dnssec.py: the "from" style import used to get DSA from
+	  PyCrypto trashed a DSA constant.  Now a normal import is used
+	  to avoid namespace contamination.
+
+2010-11-20  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.9.0 released)
+
+2010-11-07  Bob Halley  <halley@dnspython.org>
+
+	* dns/dnssec.py: Added validate() to do basic DNSSEC validation
+	  (requires PyCrypto). Thanks to Brian Wellington for the patch.
+
+	* dns/hash.py: Hash compatibility handling is now its own module.
+
+2010-10-31  Bob Halley  <halley@dnspython.org>
+
+	* dns/resolver.py (zone_for_name): A query name resulting in a
+	  CNAME or DNAME response to a node which had an SOA was incorrectly
+	  treated as a zone origin.  In these cases, we should just look
+	  higher.  Thanks to Gert Berger for reporting this problem.
+
+	* Added zonediff.py to examples.  This program compares two zones
+	  and shows the differences either in diff-like plain text, or
+	  HTML.  Thanks to Dennis Kaarsemaker for contributing this
+	  useful program.
+
+2010-10-27  Bob Halley  <halley@dnspython.org>
+
+	* Incorporate a patch to use poll() instead of select() by
+	  default on platforms which support it.  Thanks to
+	  Peter Schüller and Spotify for the contribution.
+
+2010-10-17  Bob Halley  <halley@dnspython.org>
+
+	* Python prior to 2.5.2 doesn't compute the correct values for
+	  HMAC-SHA384 and HMAC-SHA512.  We now detect attempts to use
+	  them and raise NotImplemented if the Python version is too old.
+	  Thanks to Kevin Chen for reporting the problem.
+
+	* Various routines that took the string forms of rdata types and
+	  classes did not permit the strings to be Unicode strings.
+	  Thanks to Ryan Workman for reporting the issue.
+
+	* dns/tsig.py: Added symbolic constants for the algorithm strings.
+	  E.g. you can now say dns.tsig.HMAC_MD5 instead of
+	  "HMAC-MD5.SIG-ALG.REG.INT".  Thanks to Cillian Sharkey for
+	  suggesting this improvement.
+
+	* dns/tsig.py (get_algorithm): fix hashlib compatibility; thanks to
+	  Kevin Chen for the patch.
+
+	* dns/dnssec.py: Added key_id() and make_ds().
+
+	* dns/message.py: message.py needs to import dns.edns since it uses
+	  it.
+
+2010-05-04  Bob Halley  <halley@dnspython.org>
+
+	* dns/rrset.py (RRset.__init__): "covers" was not passed to the
+	  superclass __init__().  Thanks to Shanmuga Rajan for reporting
+	  the problem.
+
+2010-03-10  Bob Halley  <halley@dnspython.org>
+
+	* The TSIG algorithm value was passed to use_tsig() incorrectly
+	  in some cases.  Thanks to 'ducciovigolo' for reporting the problem.
+
+2010-01-26  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.8.0 released)
+
+2010-01-13  Bob Halley  <halley@dnspython.org>
+
+	* dns/dnssec.py: Added RSASHA256 and RSASHA512 codepoints; added
+	  other missing codepoints to _algorithm_by_text.
+
+2010-01-12  Bob Halley  <halley@dnspython.org>
+
+	* Escapes in masterfiles now work correctly.  Previously they were
+	  only working correctly when the text involved was part of a domain
+	  name.
+
+	* dns/tokenizer.py: The tokenizer's get() method now returns Token
+	  objects, not (type, text) tuples.
+
+2009-11-13  Bob Halley  <halley@dnspython.org>
+
+	* Support has been added for hmac-sha1, hmac-sha224, hmac-sha256,
+	  hmac-sha384 and hmac-sha512.  Thanks to Kevin Chen for a
+	  thoughtful, high quality patch.
+
+	* dns/update.py (Update::present): A zero TTL was not added if
+	  present() was called with a single rdata, causing _add() to be
+	  unhappy.  Thanks to Eugene Kim for reporting the problem and
+	  submitting a patch.
+
+	* dns/entropy.py: Use os.urandom() if present.  Don't seed until
+	  someone wants randomness.
+
+2009-09-16  Bob Halley  <halley@dnspython.org>
+
+	* dns/entropy.py: The entropy module needs locking in order to be
+	  used safely in a multithreaded environment.  Thanks to Beda Kosata
+	  for reporting the problem.
+
+2009-07-27  Bob Halley  <halley@dnspython.org>
+
+	* dns/query.py (xfr): The socket was not set to nonblocking mode.
+	  Thanks to Erik Romijn for reporting this problem.
+
+2009-07-23  Bob Halley  <halley@dnspython.org>
+
+	* dns/rdtypes/IN/SRV.py (SRV._cmp): SRV records were compared
+	  incorrectly due to a cut-and-paste error.  Thanks to Tommie
+	  Gannert for reporting this bug.
+
+	* dns/e164.py (query): The resolver parameter was not used.
+	  Thanks to Matías Bellone for reporting this bug.
+
+2009-06-23  Bob Halley  <halley@dnspython.org>
+
+	* dns/entropy.py (EntropyPool.__init__): open /dev/random unbuffered;
+	  there's no need to consume more randomness than we need.  Thanks
+	  to Brian Wellington for the patch.
+
+2009-06-19  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.7.1 released)
+
+2009-06-19  Bob Halley  <halley@dnspython.org>
+
+	* DLV.py was omitted from the kit
+
+	* Negative prerequisites were not handled correctly in _get_section().
+
+2009-06-19  Bob Halley  <halley@dnspython.org>
+
+	* (Version 1.7.0 released)
+
+2009-06-19  Bob Halley  <halley@dnspython.org>
+
+	* On Windows, the resolver set the domain incorrectly.  Thanks
+	  to Brandon Carpenter for reporting this bug.
+
+	* Added a to_digestable() method to rdata classes; it returns the
+	  digestable form (i.e. DNSSEC canonical form) of the rdata.  For
+	  most rdata types this is the same uncompressed wire form.  For
+	  certain older DNS RR types, however, domain names in the rdata
+	  are downcased.
+
+	* Added support for the HIP RR type.
+
+2009-06-18  Bob Halley  <halley@dnspython.org>
+
+       * Added support for the DLV RR type.
+
+       * Added various DNSSEC related constants (e.g. algorithm identifiers,
+         flag values).
+
+       * dns/tsig.py: Added support for BADTRUNC result code.
+
+       * dns/query.py (udp): When checking that addresses are the same,
+         use the binary form of the address in the comparison.  This
+         ensures that we don't treat addresses as different if they have
+         equivalent but differing textual representations.  E.g. "1:00::1"
+         and "1::1" represent the same address but are not textually equal.
+         Thanks to Kim Davies for reporting this bug.
+
+       * The resolver's query() method now has an optional 'source' parameter,
+         allowing the source IP address to be specified.  Thanks to
+         Alexander Lind for suggesting the change and sending a patch.
+
+       * Added NSEC3 and NSEC3PARAM support.
+
+2009-06-17  Bob Halley  <halley@dnspython.org>
+
+        * Fixed NSEC.to_text(), which was only printing the last window.
+          Thanks to Brian Wellington for finding the problem and fixing it.
+
+2009-03-30  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py (xfr): Allow UDP IXFRs.  Use "one_rr_per_rrset" mode when
+          doing IXFR.
+
+2009-03-30  Bob Halley  <halley@dnspython.org>
+
+        * Add "one_rr_per_rrset" mode switch to methods which parse
+          messages from wire format (e.g. dns.message.from_wire(),
+          dns.query.udp(), dns.query.tcp()).  If set, each RR read is
+          placed in its own RRset (instead of being coalesced).
+
+2009-03-30  Bob Halley  <halley@dnspython.org>
+
+        * Added EDNS option support.
+
+2008-10-16  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/ANY/DS.py: The from_text() parser for DS RRs did not
+          allow multiple Base64 chunks.  Thanks to Rakesh Banka for
+          finding this bug and submitting a patch.
+
+2008-10-08  Bob Halley  <halley@dnspython.org>
+
+        * Add entropy module.
+
+        * When validating TSIGs, we need to use the absolute name.
+
+2008-06-03  Bob Halley  <halley@dnspython.org>
+
+        * dns/message.py (Message.set_rcode): The mask used preserved the
+          extended rcode, instead of everything else in ednsflags.
+
+        * dns/message.py (Message.use_edns): ednsflags was not kept
+          coherent with the specified edns version.
+
+2008-02-06  Bob Halley  <halley@dnspython.org>
+
+        * dns/ipv6.py (inet_aton):  We could raise an exception other than
+          dns.exception.SyntaxError in some cases.
+
+        * dns/tsig.py: Raise an exception when the peer has set a non-zero
+          TSIG error.
+
+2007-11-25  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.6.0 released)
+
+2007-11-25  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py (_wait_for): if select() raises an exception due to
+          EINTR, we should just select() again.
+
+2007-06-13  Bob Halley  <halley@dnspython.org>
+
+        * dns/inet.py: Added is_multicast().
+
+        * dns/query.py (udp):  If the queried address is a multicast address, then
+          don't check that the address of the response is the same as the address
+          queried.
+
+2007-05-24  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/IN/NAPTR.py: NAPTR comparisons didn't compare the
+          preference field due to a typo.
+
+2007-02-07  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py: Integrate code submitted by Paul Marks to
+          determine whether a Windows NIC is enabled.  The way dnspython
+          used to do this does not work on Windows Vista.
+
+2006-12-10  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.5.0 released)
+
+2006-11-03  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/IN/DHCID.py: Added support for the DHCID RR type.
+
+2006-11-02  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py (udp): Messages from unexpected sources can now be
+          ignored by setting ignore_unexpected to True.
+
+2006-10-31  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py (udp): When raising UnexpectedSource, add more
+          detail about what went wrong to the exception.
+
+2006-09-22  Bob Halley  <halley@dnspython.org>
+
+        * dns/message.py (Message.use_edns): add reasonable defaults for
+          the ednsflags, payload, and request_payload parameters.
+
+        * dns/message.py (Message.want_dnssec): add a convenience method for
+          enabling/disabling the "DNSSEC desired" flag in requests.
+
+        * dns/message.py (make_query): add "use_edns" and "want_dnssec"
+          parameters.
+
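A usage sketch of the make_query() parameters described above (dnspython 1.x API; the name is an example):

    import dns.message
    import dns.rdatatype

    # Enable EDNS0 and set the DO ("DNSSEC desired") flag on the query.
    q = dns.message.make_query('www.dnspython.org', dns.rdatatype.A,
                               use_edns=0, want_dnssec=True)
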
+2006-08-17  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Resolver.read_resolv_conf): If /etc/resolv.conf
+          doesn't exist, just use the default resolver configuration (i.e.
+          the same thing we would have used if resolv.conf had existed and
+          been empty).
+
+2006-07-26  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Resolver._config_win32_fromkey): fix
+          cut-and-paste error where we passed the wrong variable to
+          self._config_win32_search().  Thanks to David Arnold for finding
+          the bug and submitting a patch.
+
+2006-07-20  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Answer): Add more support for the sequence
+          protocol, forwarding requests to the answer object's rrset.
+          E.g. "for a in answer" is equivalent to "for a in answer.rrset",
+          "answer[i]" is equivalent to "answer.rrset[i]", and
+          "answer[i:j]" is equivalent to "answer.rrset[i:j]".
+
+2006-07-19  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py (xfr): Add IXFR support.
+
+2006-06-22  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/IN/IPSECKEY.py: Added support for the IPSECKEY RR type.
+
+2006-06-21  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/ANY/SPF.py: Added support for the SPF RR type.
+
+2006-06-02  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.4.0 released)
+
+2006-04-25  Bob Halley  <halley@dnspython.org>
+
+        * dns/rrset.py (RRset.to_rdataset): Added a convenience method
+          to convert an rrset into an rdataset.
+
+2006-03-27  Bob Halley  <halley@dnspython.org>
+
+        * Added dns.e164.query().  This function can be used to look for
+          NAPTR RRs for a specified number in several domains, e.g.:
+
+                dns.e164.query('16505551212',
+                               ['e164.dnspython.org.', 'e164.arpa.'])
+
+2006-03-26  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Resolver.query): The resolver deleted from
+          a list while iterating it, which makes the iterator unhappy.
+
+2006-03-17  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Resolver.query): The resolver needlessly
+          delayed responses for successful queries.
+
+2006-01-18  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdata.py: added a validate() method to the rdata class.  If
+          you change an rdata by assigning to its fields, it is a good
+          idea to call validate() when you are done making changes.
+          For example, if 'r' is an MX record and then you execute:
+
+                r.preference = 100000   # invalid, because > 65535
+                r.validate()
+
+          The validation will fail and an exception will be raised.
+
+2006-01-11  Bob Halley  <halley@dnspython.org>
+
+        * dns/ttl.py: TTLs are now bounds checked to be within the closed
+          interval [0, 2^31 - 1].
+
+        * The BIND 8 TTL syntax is now accepted in the SOA refresh, retry,
+          expire, and minimum fields, and in the original_ttl field of
+          SIG and RRSIG records.
+
+2006-01-04  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py: The windows registry irritatingly changes the
+          list element delimiter in between ' ' and ',' (and vice-versa)
+          in various versions of windows.  We now cope by always looking
+          for either one (' ' first).
+
+2005-12-27  Bob Halley  <halley@dnspython.org>
+
+        * dns/e164.py: Added routines to convert between E.164 numbers and
+          their ENUM domain name equivalents.
+
+        * dns/reversename.py: Added routines to convert between IPv4 and
+          IPv6 addresses and their DNS reverse-map equivalents.
+
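A usage sketch of the reverse-map helpers described above (dnspython 1.x API; the address is an example):

    import dns.reversename

    rev = dns.reversename.from_address('192.0.2.1')
    print(rev)                              # 1.2.0.192.in-addr.arpa.
    print(dns.reversename.to_address(rev))  # 192.0.2.1
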
+2005-12-18  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/ANY/LOC.py (_tuple_to_float): The sign was lost when
+          converting a tuple into a float, which broke conversions of
+          south latitudes and west longitudes.
+
+2005-11-17  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py: The 'origin' parameter to from_text() and from_file()
+          is now optional.  If not specified, dnspython will use the
+          first $ORIGIN in the text as the zone's origin.
+
+        * dns/zone.py: Sanity checks of the zone's origin node can now
+          be disabled.
+
+2005-11-12  Bob Halley  <halley@dnspython.org>
+
+        * dns/name.py: Preliminary Unicode support has been added for
+          domain names.  Running dns.name.from_text() on a Unicode string
+          will now encode each label using the IDN ACE encoding.  The
+          to_unicode() method may be used to convert a dns.name.Name with
+          IDN ACE labels back into a Unicode string.  This functionality
+          requires Python 2.3 or greater.
+
+2005-10-31  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.3.5 released)
+
+2005-10-12  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py: Zone.iterate_rdatasets() and Zone.iterate_rdatas()
+          did not have a default rdtype of dns.rdatatype.ANY as their
+          docstrings said they did.  They do now.
+
+2005-10-06  Bob Halley  <halley@dnspython.org>
+
+        * dns/name.py: Added the parent() method, which returns the
+          parent of a name.
+
+2005-10-01  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py: Added zone_for_name() helper, which returns
+          the name of the zone which contains the specified name.
+
+        * dns/resolver.py: Added get_default_resolver(), which returns
+          the default resolver, initializing it if necessary.
+
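A usage sketch of zone_for_name() described above (dnspython 1.x API; the name is an example):

    import dns.resolver

    # Walks upward from the name until it finds the enclosing zone's origin.
    print(dns.resolver.zone_for_name('www.dnspython.org'))  # dnspython.org.
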
+2005-09-29  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Resolver._compute_timeout): If time goes
+          backwards a little bit, ignore it.
+
+2005-07-31  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.3.4 released)
+
+2005-07-31  Bob Halley  <halley@dnspython.org>
+
+        * dns/message.py (make_response): Trying to respond to a response
+          threw a NameError while trying to throw a FormErr since it used
+          the wrong name for the FormErr exception.
+
+        * dns/query.py (_connect): We needed to ignore EALREADY too.
+
+        * dns/query.py: Optional "source" and "source_port" parameters
+          have been added to udp(), tcp(), and xfr().  Thanks to Ralf
+          Weber for suggesting the change and providing a patch.
+
+2005-06-05  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py: The requirement that the "where" parameter be
+          an IPv4 or IPv6 address is now documented.
+
+2005-06-04  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py: The resolver now does exponential backoff
+          each time it runs through all of the nameservers.
+
+        * dns/resolver.py: rcodes which indicate a nameserver is likely
+          to be a "permanent failure" for a query cause the nameserver
+          to be removed from the mix for that query.
+
+2005-01-30  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.3.3 released)
+
+2004-10-25  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/ANY/TXT.py (TXT.from_text): The masterfile parser
+        incorrectly rejected TXT records where a value was not quoted.
+
+2004-10-11  Bob Halley  <halley@dnspython.org>
+
+        * dns/message.py: Added make_response(), which creates a skeletal
+        response for the specified query.  Added opcode() and set_opcode()
+        convenience methods to the Message class.  Added the request_payload
+        attribute to the Message class.
+
+2004-10-10  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py (from_xfr): dns.zone.from_xfr() in relativization
+        mode incorrectly set zone.origin to the empty name.
+
+2004-09-02  Bob Halley  <halley@dnspython.org>
+
+        * dns/name.py (Name.to_wire): The 'file' parameter to
+        Name.to_wire() is now optional; if omitted, the wire form will
+        be returned as the value of the function.
+
+2004-08-14  Bob Halley  <halley@dnspython.org>
+
+        * dns/message.py (Message.find_rrset): find_rrset() now uses an
+        index, vastly improving the from_wire() performance of large
+        messages such as zone transfers.
+
+2004-08-07  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.3.2 released)
+
+2004-08-04  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py: sending queries to a nameserver via IPv6 now
+        works.
+
+        * dns/inet.py (af_for_address): Add af_for_address(), which looks
+        at a textual-form address and attempts to determine which address
+        family it is.
+
+        * dns/query.py: the default for the 'af' parameter of the udp(),
+        tcp(), and xfr() functions has been changed from AF_INET to None,
+        which causes dns.inet.af_for_address() to be used to determine the
+        address family.  If dns.inet.af_for_address() can't figure it out,
+        we fall back to AF_INET and hope for the best.
+
+2004-07-31  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/ANY/NSEC.py (NSEC.from_text): The NSEC text format
+        does not allow specifying types by number, so we shouldn't either.
+
+        * dns/renderer.py: the renderer module didn't import random,
+        causing an exception to be raised if a query id wasn't provided
+        when a Renderer was created.
+
+        * dns/resolver.py (Resolver.query): the resolver wasn't catching
+        dns.exception.Timeout, so a timeout erroneously caused the whole
+        resolution to fail instead of just going on to the next server.
+
+2004-06-16  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/ANY/LOC.py (LOC.from_text): LOC milliseconds values
+        were converted incorrectly if the length of the milliseconds
+        string was less than 3.
+
+2004-06-06  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.3.1 released)
+
+2004-05-22  Bob Halley  <halley@dnspython.org>
+
+        * dns/update.py (Update.delete): We erroneously specified a
+        "deleting" value of dns.rdatatype.NONE instead of
+        dns.rdataclass.NONE when the thing being deleted was either an
+        Rdataset instance or an Rdata instance.
+
+        * dns/rdtypes/ANY/SSHFP.py: Added support for the proposed SSHFP
+        RR type.
+
+2004-05-14  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdata.py (from_text): The masterfile reader did not
+        accept the unknown RR syntax when used with a known RR type.
+
+2004-05-08  Bob Halley  <halley@dnspython.org>
+
+        * dns/name.py (from_text): dns.name.from_text() did not raise
+        an exception if a backslash escape ended prematurely.
+
+2004-04-09  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py (_MasterReader._rr_line): The masterfile reader
+        erroneously treated lines starting with leading whitespace but
+        not having any RR definition as an error.  It now treats
+        them like a blank line (which is not an error).
+
+2004-04-01  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.3.0 released)
+
+2004-03-19  Bob Halley  <halley@dnspython.org>
+
+        * Added support for new DNSSEC types RRSIG, NSEC, and DNSKEY.
+
+2004-01-16  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py (_connect): Windows returns EWOULDBLOCK instead
+        of EINPROGRESS when trying to connect a nonblocking socket.
+
+2003-11-13  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdtypes/ANY/LOC.py (LOC.to_wire): We encoded and decoded LOC
+        incorrectly, since we were interpreting the values of altitude,
+        size, hprec, and vprec in meters instead of centimeters.
+
+        * dns/rdtypes/IN/WKS.py (WKS.from_wire): The WKS protocol value is
+        encoded with just one octet, not two!
+
+2003-11-09  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Cache.maybe_clean): The cleaner deleted items
+        from the dictionary while iterating it, causing a RuntimeError
+        to be raised.  Thanks to Mark R. Levinson for the bug report,
+        regression test, and fix.
+
+2003-11-07  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.2.0 released)
+
+2003-11-03  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py (_MasterReader.read): The saved_state now includes
+        the default TTL.
+
+2003-11-01  Bob Halley  <halley@dnspython.org>
+
+        * dns/tokenizer.py (Tokenizer.get): The tokenizer didn't
+        handle escaped delimiters.
+
+2003-10-27  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Resolver.read_resolv_conf): If no nameservers
+        are configured in /etc/resolv.conf, the default nameserver
+        list should be ['127.0.0.1'].
+
+2003-09-08  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Resolver._config_win32_fromkey): We didn't
+        catch WindowsError, which can happen if a key is not defined
+        in the registry.
+
+2003-09-06  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.2.0b1 released)
+
+2003-09-05  Bob Halley  <halley@dnspython.org>
+
+        * dns/query.py: Timeout support has been overhauled to provide
+        timeouts under Python 2.2 as well as 2.3, and to provide more
+        accurate expiration.
+
+2003-08-30  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py: dns.exception.SyntaxError is raised for unknown
+        master file directives.
+
+2003-08-28  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py: $INCLUDE processing is now enabled/disabled using
+        the allow_include parameter.  The default is to process $INCLUDE
+        for from_file(), and to disallow $INCLUDE for from_text().  The
+        master reader now calls zone.check_origin_node() by default after
+        the zone has been read.  find_rdataset() called get_node() instead
+        of find_node(), which resulted in an incorrect exception.  The
+        relativization state of a zone is now remembered and applied
+        consistently when looking up names.  from_xfr() now supports
+        relativization like the _MasterReader.
+
+2003-08-22  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py: The _MasterReader now understands $INCLUDE.
+
+2003-08-12  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py: The _MasterReader now specifies the file and line
+        number when a syntax error occurs.  The BIND 8 TTL format is now
+        understood when loading a zone, though it will never be emitted.
+        The from_file() function didn't pass the zone_factory parameter
+        to from_text().
+
+2003-08-10  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.1.0 released)
+
+2003-08-07  Bob Halley  <halley@dnspython.org>
+
+        * dns/update.py (Update._add): A typo meant that _add would
+        fail if the thing being added was an Rdata object (as
+        opposed to an Rdataset or the textual form of an Rdata).
+
+2003-08-05  Bob Halley  <halley@dnspython.org>
+
+        * dns/set.py: the simple Set class has been moved to its
+        own module, and augmented to support more set operations.
+
+2003-08-04  Bob Halley  <halley@dnspython.org>
+
+        * Node and all rdata types have been "slotted".  This speeds
+        things up a little and reduces memory usage noticeably.
+
+2003-08-02  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.1.0c1 released)
+
+2003-08-02  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdataset.py: SimpleSets now support more set options.
+
+        * dns/message.py: Added the get_rrset() method.  from_file() now
+        allows Unicode filenames and turns on universal newline support if
+        it opens the file itself.
+
+        * dns/node.py: Added the delete_rdataset() and replace_rdataset()
+        methods.
+
+        * dns/zone.py: Added the delete_node(), delete_rdataset(), and
+        replace_rdataset() methods.  from_file() now allows Unicode
+        filenames and turns on universal newline support if it opens the
+        file itself.  Added a to_file() method.
+
+2003-08-01  Bob Halley  <halley@dnspython.org>
+
+        * dns/opcode.py: Opcode from/to text converters now understand
+        numeric opcodes.  The to_text() method will return a numeric opcode
+        string if it doesn't know a text name for the opcode.
+
+        * dns/message.py: Added set_rcode().  Fixed code where ednsflags
+        wasn't treated as a long.
+
+        * dns/rcode.py: ednsflags wasn't treated as a long.  Rcode from/to
+        text converters now understand numeric rcodes.  The to_text()
+        method will return a numeric rcode string if it doesn't know
+        a text name for the rcode.
+
+        * examples/reverse.py: Added a new example program that builds a
+        reverse (address-to-name) mapping table from the name-to-address
+        mapping specified by A RRs in zone files.
+
+        * dns/node.py: Added get_rdataset() method.
+
+        * dns/zone.py: Added get_rdataset() and get_rrset() methods.  Added
+        iterate_rdatas().
+
+2003-07-31  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py: Added the iterate_rdatasets() method which returns
+        a generator which yields (name, rdataset) tuples for all the
+        rdatasets in the zone matching the specified rdatatype.
+
+2003-07-30  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.1.0b2 released)
+
+2003-07-30  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py: Added find_rrset() and find_rdataset() convenience
+        methods.  They let you retrieve rdata with the specified name
+        and type in one call.
+
+        * dns/node.py: Nodes no longer have names; owner names are
+        associated with nodes in the Zone object's nodes dictionary.
+
+        * dns/zone.py: Zone objects now implement more of the standard
+        mapping interface.  __iter__ has been changed to iterate the keys
+        rather than values to match the standard mapping interface's
+        behavior.
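+
+        A short sketch of both additions (names are hypothetical;
+        find_rdataset() raises KeyError when nothing matches):
+
+                import dns.zone
+
+                zone = dns.zone.from_file('example.zone', origin='example.')
+                rdataset = zone.find_rdataset('www', 'A')
+                for name in zone:        # iterates keys, i.e. owner names
+                    node = zone[name]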
+
+2003-07-20  Bob Halley  <halley@dnspython.org>
+
+        * dns/ipv6.py (inet_ntoa): Handle embedded IPv4 addresses.
+
+2003-07-19  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.1.0b1 released)
+
+2003-07-18  Bob Halley  <halley@dnspython.org>
+
+        * dns/tsig.py: TSIG validation of TCP streams in which not
+        every message is signed now works correctly.
+
+        * dns/zone.py: Zones can now be compared for equality and
+        inequality.  If the other object in the comparison is also
+        a zone, then "the right thing" happens; i.e. the zones are
+        equal iff they have the same rdclass, origin, and nodes.
+
+2003-07-17  Bob Halley  <halley@dnspython.org>
+
+        * dns/message.py (Message.use_tsig): The method now allows for
+        greater control over the various fields in the generated signature
+        (e.g. fudge).
+        (_WireReader._get_section): UnknownTSIGKey is now raised if an
+        unknown key is encountered, or if a signed message has no keyring.
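+
+        A hedged sketch (key name and secret are made up; the secret
+        must be valid base64):
+
+                import dns.message
+                import dns.tsigkeyring
+
+                keyring = dns.tsigkeyring.from_text(
+                    {'keyname.': 'MTIzNDU2Nzg5MGFiY2RlZg=='})
+                query = dns.message.make_query('example.', 'SOA')
+                query.use_tsig(keyring, keyname='keyname.', fudge=60)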
+
+2003-07-16  Bob Halley  <halley@dnspython.org>
+
+        * dns/tokenizer.py (Tokenizer._get_char): get_char and unget_char
+        have been renamed to _get_char and _unget_char since they are not
+        useful to clients of the tokenizer.
+
+2003-07-15  Bob Halley  <halley@dnspython.org>
+
+        * dns/zone.py (_MasterReader._rr_line): owner names were being
+        unconditionally relativized; it makes much more sense for them
+        to be relativized according to the relativization setting of
+        the reader.
+
+2003-07-12  Bob Halley  <halley@dnspython.org>
+
+        * dns/resolver.py (Resolver.read_resolv_conf): The resolv.conf
+        parser did not allow blank / whitespace-only lines, nor did it
+        allow comments.  Both are now supported.
+
+2003-07-11  Bob Halley  <halley@dnspython.org>
+
+        * dns/name.py (Name.to_digestable): to_digestable() now
+        requires an origin to be specified if the name is relative.
+        It will raise NeedAbsoluteNameOrOrigin if the name is
+        relative and there is either no origin or the origin is
+        itself relative.
+        (Name.split): returned the wrong answer if depth was 0 or depth
+        was the length of the name.  split() now does bounds checking
+        on depth, and raises ValueError if depth < 0 or depth > the length
+        of the name.
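+
+        For example:
+
+                import dns.name
+
+                name = dns.name.from_text('www.example.com.')
+                prefix, suffix = name.split(3)  # suffix keeps 3 labels:
+                                                # www / example.com.
+                name.split(9)                   # raises ValueError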
+
+2003-07-10  Bob Halley  <halley@dnspython.org>
+
+        * dns/ipv6.py (inet_ntoa): The routine now minimizes its output
+        strings.  E.g. the IPv6 address
+        "0000:0000:0000:0000:0000:0000:0000:0001" is minimized to "::1".
+        We do not, however, make any effort to display embedded IPv4
+        addresses in the dot-quad notation.
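+
+        For example:
+
+                import dns.ipv6
+
+                packed = dns.ipv6.inet_aton(
+                    '0000:0000:0000:0000:0000:0000:0000:0001')
+                print(dns.ipv6.inet_ntoa(packed))   # ::1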
+
+2003-07-09  Bob Halley  <halley@dnspython.org>
+
+        * dns/inet.py: We now supply our own AF_INET and AF_INET6
+        constants since AF_INET6 may not always be available.  If the
+        socket module has AF_INET6, we will use it.  If not, we will
+        use our own value for the constant.
+
+        * dns/query.py: the functions now take an optional af argument
+        specifying the address family to use when creating the socket.
+
+        * dns/rdatatype.py (is_metatype): a typo caused the function
+        to return true only for type OPT.
+
+        * dns/message.py: message section list elements are now RRsets
+        instead of Nodes.  This API change makes processing messages
+        easier for many applications.
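+
+        Sketch tying the two changes together (the server address
+        192.0.2.1 is a placeholder):
+
+                import dns.message
+                import dns.query
+
+                query = dns.message.make_query('dnspython.org', 'MX')
+                response = dns.query.udp(query, '192.0.2.1', timeout=5)
+                for rrset in response.answer:   # RRsets, not Nodes
+                    print(rrset.name, rrset)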
+
+2003-07-07  Bob Halley  <halley@dnspython.org>
+
+        * dns/rrset.py: added.  An RRset is a named rdataset.
+
+        * dns/rdataset.py (Rdataset.__eq__): rdatasets may now be compared
+        for equality and inequality with other objects.  Rdataset instance
+        variables are now slotted.
+
+        * dns/message.py: The wire format and text format readers are now
+        classes.  Variables related to reader state have been moved out
+        of the message class.
+
+2003-07-06  Bob Halley  <halley@dnspython.org>
+
+        * dns/name.py (from_text): '@' was not interpreted as the empty
+        name.
+
+        * dns/zone.py: the master file reader derelativized names in rdata
+        relative to the zone's origin, not relative to the current origin.
+        The reader now deals with relativization in two steps.  The rdata
+        is read and derelativized using the current origin.  The rdata's
+        relativity is then chosen using the zone origin and the relativize
+        boolean.  Here's an example.
+
+                $ORIGIN foo.example.
+                $TTL 300
+                bar MX 0 blaz
+
+        If the zone origin is example., and relativization is on, then
+        this fragment will become:
+
+                bar.foo.example. 300 IN MX 0 blaz.foo.example.
+
+        after the first step (derelativization to current origin), and
+
+                bar.foo 300 IN MX 0 blaz.foo
+
+        after the second step (relativization to zone origin).
+
+        * dns/namedict.py: added.
+
+        * dns/zone.py: The master file reader has been made into its
+        own class.  Reader-related instance variables have been moved
+        from the zone class into the reader class.
+
+        * dns/zone.py: Added a node_factory class attribute.  An application
+        can now subclass Zone and Node and have a zone whose nodes are of
+        the subclassed Node type.  The from_text(), from_file(), and
+        from_xfr() algorithms now take an optional zone_factory argument.
+        This allows the algorithms to be used to create zones whose class
+        is a subclass of Zone.
+
+2003-07-04  Bob Halley  <halley@dnspython.org>
+
+        * dns/renderer.py: added new wire format rendering module and
+        converted message.py to use it.  Applications which want
+        fine-grained control over the conversion to wire format may call
+        the renderer directly, instead of having it called on their behalf
+        by the message code.
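+
+        A hedged sketch of calling the renderer directly (the id is
+        arbitrary; flag 0x0100 is the RD bit):
+
+                import dns.name
+                import dns.rdatatype
+                import dns.renderer
+
+                r = dns.renderer.Renderer(id=1234, flags=0x0100,
+                                          max_size=512)
+                r.add_question(dns.name.from_text('example.'),
+                               dns.rdatatype.SOA)
+                r.write_header()
+                wire = r.get_wire()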
+
+2003-07-02  Bob Halley  <halley@dnspython.org>
+
+        * dns/name.py (_validate_labels): The NameTooLong test was
+        incorrect.
+
+        * dns/message.py (Message.to_wire): dns.exception.TooBig is
+        now raised if the wire encoding exceeds the specified
+        maximum size.
+
+2003-07-01  Bob Halley  <halley@dnspython.org>
+
+        * dns/message.py: EDNS encoding was broken.  from_text()
+        didn't parse rcodes, flags, or eflags correctly.  Comparing
+        messages with other types of objects didn't work.
+
+2003-06-30  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.0.0 released)
+
+2003-06-30  Bob Halley  <halley@dnspython.org>
+
+        * dns/rdata.py: Rdatas now implement rich comparisons instead of
+        __cmp__.
+
+        * dns/name.py: Names now implement rich comparisons instead of
+        __cmp__.
+
+        * dns/inet.py (inet_ntop): Always use our code, since the code
+        in the socket module doesn't support AF_INET6 conversions if
+        IPv6 sockets are not available on the system.
+
+        * dns/resolver.py (Answer.__init__): A dangling CNAME chain was
+        not raising NoAnswer.
+
+        * Added a simple resolver Cache class.
+
+        * Added an expiration attribute to answer instances.
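+
+        Sketch of both additions (the queried name is arbitrary):
+
+                import dns.resolver
+
+                resolver = dns.resolver.Resolver()
+                resolver.cache = dns.resolver.Cache()
+                answer = resolver.query('dnspython.org', 'A')
+                print(answer.expiration)  # absolute expiry time of the answer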
+
+2003-06-24  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.0.0b3 released)
+
+2003-06-24  Bob Halley  <halley@dnspython.org>
+
+        * Renamed module "DNS" to "dns" to avoid conflicting with
+        PyDNS.
+
+2003-06-23  Bob Halley  <halley@dnspython.org>
+
+        * The from_text() relativization controls now work the same way as
+        the to_text() controls.
+
+        * DNS/rdata.py: The parsing of generic rdata was broken.
+
+2003-06-21  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.0.0b2 released)
+
+2003-06-21  Bob Halley  <halley@dnspython.org>
+
+        * The Python 2.2 socket.inet_aton() doesn't seem to like
+        '255.255.255.255'.  We work around this.
+
+        * Fixed bugs in rdata to_wire() and from_wire() routines of a few
+        types.  These bugs were discovered by running the tests/zone.py
+        Torture1 test.
+
+        * Added implementation of type APL.
+
+2003-06-20  Bob Halley  <halley@dnspython.org>
+
+        * DNS/rdtypes/IN/AAAA.py: Use our own versions of inet_ntop and
+        inet_pton if the socket module doesn't provide them for us.
+
+        * The resolver now does a better job handling exceptions.  In
+        particular, it no longer eats all exceptions; rather it handles
+        those exceptions it understands, and leaves the rest uncaught.
+
+        * Exceptions have been pulled into their own module.  Almost all
+        exceptions raised by the code are now subclasses of
+        DNS.exception.DNSException.  All form errors are subclasses of
+        DNS.exception.FormError (which is itself a subclass of
+        DNS.exception.DNSException).
+
+2003-06-19  Bob Halley  <halley@dnspython.org>
+
+        * Added implementations of types DS, NXT, SIG, and WKS.
+
+        * __cmp__ for types A and AAAA could produce incorrect results.
+
+2003-06-18  Bob Halley  <halley@dnspython.org>
+
+        * Started test suites for zone.py and tokenizer.py.
+
+        * Added implementation of type KEY.
+
+        * DNS/rdata.py(_base64ify): \n could be emitted erroneously.
+
+        * DNS/rdtypes/ANY/SOA.py (SOA.from_text): The SOA RNAME field could
+        be set to the value of MNAME in common cases.
+
+        * DNS/rdtypes/ANY/X25.py: __init__ was broken.
+
+        * DNS/zone.py (from_text): $TTL handling erroneously caused the
+        next line to be eaten.
+
+        * DNS/tokenizer.py (Tokenizer.get): parsing was broken for empty
+        quoted strings.  Quoted strings didn't handle \ddd escapes.  Such
+        escapes appear not to comply with RFC 1035, but BIND allows
+        them and they seem useful, so we allow them too.
+
+        * DNS/rdtypes/ANY/ISDN.py (ISDN.from_text): parsing was
+        broken for ISDN RRs without subaddresses.
+
+        * DNS/zone.py (from_file): from_file() didn't work because
+        some required parameters were not passed to from_text().
+
+2003-06-17  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.0.0b1 released)
+
+2003-06-17  Bob Halley  <halley@dnspython.org>
+
+        * Added implementation of type PX.
+
+2003-06-16  Bob Halley  <halley@dnspython.org>
+
+        * Added implementation of types CERT, GPOS, LOC, NSAP, NSAP-PTR.
+
+        * DNS/rdatatype.py (_by_value): A cut-and-paste error had broken
+        NSAP and NSAP-PTR.
+
+2003-06-12  Bob Halley  <halley@dnspython.org>
+
+        * Created a tests directory and started adding tests.
+
+        * Added "and its documentation" to the permission grant in the
+        license.
+
+2003-06-12  Bob Halley  <halley@dnspython.org>
+
+        * DNS/name.py (Name.is_wild): is_wild() erroneously raised IndexError
+        if the name was empty.
+
+2003-06-10  Bob Halley  <halley@dnspython.org>
+
+        * Added implementations of types AFSDB, X25, and ISDN.
+
+        * The documentation associated with the various rdata types has been
+        improved.  In particular, instance variables are now described.
+
+2003-06-09  Bob Halley  <halley@dnspython.org>
+
+        * Added implementations of types HINFO, RP, and RT.
+
+        * DNS/message.py (make_query): Document that make_query() sets
+        flags to DNS.flags.RD, and chooses a random query id.
+
+2003-06-05  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.0.0a2 released)
+
+2003-06-05  Bob Halley  <halley@dnspython.org>
+
+        * DNS/node.py: removed __getitem__ and __setitem__, since
+        they are not used by the codebase and were not useful in
+        general either.
+
+        * DNS/message.py (from_file): from_file() now allows a
+        filename to be specified instead of a file object.
+
+        * DNS/rdataset.py: The is_compatible() method of the
+        DNS.rdataset.Rdataset class was deleted.
+
+2003-06-04  Bob Halley  <halley@dnspython.org>
+
+        * DNS/name.py (class Name): Names are now immutable.
+
+        * DNS/name.py: the is_comparable() method has been removed, since
+        names are always comparable.
+
+        * DNS/resolver.py (Resolver.query): A query could run for up
+        to the lifetime + the timeout.  This has been corrected and the
+        query will now only run up to the lifetime.
+
+2003-06-03  Bob Halley  <halley@dnspython.org>
+
+        * DNS/resolver.py: removed the 'new' function since it is not the
+        style of the library to have such a function.  Call
+        DNS.resolver.Resolver() to make a new resolver.
+
+2003-06-03  Bob Halley  <halley@dnspython.org>
+
+        * DNS/resolver.py (Resolver._config_win32_fromkey): The DhcpServer
+        list is space separated, not comma separated.
+
+2003-06-03  Bob Halley  <halley@dnspython.org>
+
+        * DNS/update.py: Added an update module to make generating updates
+        easier.
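+
+        A hedged sketch (zone, record data, and server address are
+        placeholders):
+
+                import dns.query
+                import dns.update
+
+                update = dns.update.Update('example.')
+                update.replace('www', 300, 'A', '192.0.2.10')
+                response = dns.query.tcp(update, '192.0.2.1')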
+
+2003-06-03  Bob Halley  <halley@dnspython.org>
+
+        * Commas were missing in some of the __all__ entries in various
+        __init__.py files.
+
+2003-05-30  Bob Halley  <halley@dnspython.org>
+
+        * (Version 1.0.0a1 released)

+ 16 - 0
desktop/core/ext-py/dnspython-1.15.0/LICENSE

@@ -0,0 +1,16 @@
+ISC License
+
+Copyright (C) 2001-2003 Nominum, Inc.
+
+Permission to use, copy, modify, and distribute this software and its
+documentation for any purpose with or without fee is hereby granted,
+provided that the above copyright notice and this permission notice
+appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

+ 3 - 0
desktop/core/ext-py/dnspython-1.15.0/MANIFEST.in

@@ -0,0 +1,3 @@
+include LICENSE ChangeLog TODO
+recursive-include examples *.txt *.py
+recursive-include tests *.txt *.py Makefile *.good example

+ 35 - 0
desktop/core/ext-py/dnspython-1.15.0/PKG-INFO

@@ -0,0 +1,35 @@
+Metadata-Version: 1.1
+Name: dnspython
+Version: 1.15.0
+Summary: DNS toolkit
+Home-page: http://www.dnspython.org
+Author: Bob Halley
+Author-email: halley@dnspython.org
+License: BSD-like
+Download-URL: http://www.dnspython.org/kits/1.15.0/dnspython-1.15.0.tar.gz
+Description: dnspython is a DNS toolkit for Python. It supports almost all
+        record types. It can be used for queries, zone transfers, and dynamic
+        updates.  It supports TSIG authenticated messages and EDNS0.
+        
+        dnspython provides both high and low level access to DNS. The high
+        level classes perform queries for data of a given name, type, and
+        class, and return an answer set.  The low level classes allow
+        direct manipulation of DNS zones, messages, names, and records.
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: System Administrators
+Classifier: License :: Freeware
+Classifier: Operating System :: Microsoft :: Windows :: Windows 95/98/2000
+Classifier: Operating System :: POSIX
+Classifier: Programming Language :: Python
+Classifier: Topic :: Internet :: Name Service (DNS)
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 2.6
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.3
+Classifier: Programming Language :: Python :: 3.4
+Classifier: Programming Language :: Python :: 3.5
+Provides: dns

+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/__init__.py → desktop/core/ext-py/dnspython-1.15.0/dns/__init__.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/_compat.py → desktop/core/ext-py/dnspython-1.15.0/dns/_compat.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/dnssec.py → desktop/core/ext-py/dnspython-1.15.0/dns/dnssec.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/e164.py → desktop/core/ext-py/dnspython-1.15.0/dns/e164.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/edns.py → desktop/core/ext-py/dnspython-1.15.0/dns/edns.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/entropy.py → desktop/core/ext-py/dnspython-1.15.0/dns/entropy.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/exception.py → desktop/core/ext-py/dnspython-1.15.0/dns/exception.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/flags.py → desktop/core/ext-py/dnspython-1.15.0/dns/flags.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/grange.py → desktop/core/ext-py/dnspython-1.15.0/dns/grange.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/hash.py → desktop/core/ext-py/dnspython-1.15.0/dns/hash.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/inet.py → desktop/core/ext-py/dnspython-1.15.0/dns/inet.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/ipv4.py → desktop/core/ext-py/dnspython-1.15.0/dns/ipv4.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/ipv6.py → desktop/core/ext-py/dnspython-1.15.0/dns/ipv6.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/message.py → desktop/core/ext-py/dnspython-1.15.0/dns/message.py


+ 59 - 21
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/name.py → desktop/core/ext-py/dnspython-1.15.0/dns/name.py

@@ -35,7 +35,7 @@ except ImportError:
 import dns.exception
 import dns.wiredata
 
-from ._compat import long, binary_type, text_type, unichr
+from ._compat import long, binary_type, text_type, unichr, maybe_decode
 
 try:
     maxint = sys.maxint
@@ -104,7 +104,7 @@ class NoIDNA2008(dns.exception.DNSException):
 
 class IDNAException(dns.exception.DNSException):
 
-    """IDNA 2008 processing raised an exception."""
+    """IDNA processing raised an exception."""
 
     supp_kwargs = set(['idna_exception'])
     fmt = "IDNA processing exception: {idna_exception}"
@@ -113,16 +113,38 @@ class IDNACodec(object):
 
     """Abstract base class for IDNA encoder/decoders."""
 
+    def __init__(self):
+        pass
+
     def encode(self, label):
         raise NotImplementedError
 
     def decode(self, label):
-        raise NotImplementedError
+        # We do not apply any IDNA policy on decode; we just decode
+        # any punycode and escape the result.
+        downcased = label.lower()
+        if downcased.startswith(b'xn--'):
+            try:
+                label = downcased[4:].decode('punycode')
+            except Exception as e:
+                raise IDNAException(idna_exception=e)
+        else:
+            label = maybe_decode(label)
+        return _escapify(label, True)
 
 class IDNA2003Codec(IDNACodec):
 
     """IDNA 2003 encoder/decoder."""
 
+    def __init__(self, strict_decode=False):
+        """Initialize the IDNA 2003 encoder/decoder.
+        @param strict_decode: If True, then IDNA2003 checking is done when
+        decoding.  This can cause failures if the name was encoded with
+        IDNA2008.  The default is False.
+        @type strict_decode: bool
+        """
+        super(IDNA2003Codec, self).__init__()
+        self.strict_decode = strict_decode
+
     def encode(self, label):
         if label == '':
             return b''
@@ -132,16 +154,21 @@ class IDNA2003Codec(IDNACodec):
             raise LabelTooLong
 
     def decode(self, label):
+        if not self.strict_decode:
+            return super(IDNA2003Codec, self).decode(label)
         if label == b'':
             return u''
-        return _escapify(encodings.idna.ToUnicode(label), True)
+        try:
+            return _escapify(encodings.idna.ToUnicode(label), True)
+        except Exception as e:
+            raise IDNAException(idna_exception=e)
 
 class IDNA2008Codec(IDNACodec):
 
     """IDNA 2008 encoder/decoder."""
 
     def __init__(self, uts_46=False, transitional=False,
-                 allow_pure_ascii=False):
+                 allow_pure_ascii=False, strict_decode=False):
         """Initialize the IDNA 2008 encoder/decoder.
         @param uts_46: If True, apply Unicode IDNA compatibility processing
         as described in Unicode Technical Standard #46
@@ -159,10 +186,16 @@ class IDNA2008Codec(IDNACodec):
         e.g. a name starting with "_sip._tcp." and ending in an IDN
         suffix, which would otherwise be disallowed.  The default is False.
         @type allow_pure_ascii: bool
+        @param strict_decode: If True, then IDNA2008 checking is done when
+        decoding.  This can cause failures if the name was encoded with
+        IDNA2003.  The default is False.
+        @type strict_decode: bool
         """
+        super(IDNA2008Codec, self).__init__()
         self.uts_46 = uts_46
         self.transitional = transitional
         self.allow_pure_ascii = allow_pure_ascii
+        self.strict_decode = strict_decode
 
     def is_all_ascii(self, label):
         for c in label:
@@ -185,6 +218,8 @@ class IDNA2008Codec(IDNACodec):
             raise IDNAException(idna_exception=e)
 
     def decode(self, label):
+        if not self.strict_decode:
+            return super(IDNA2008Codec, self).decode(label)
         if label == b'':
             return u''
         if not have_idna_2008:
@@ -196,14 +231,15 @@ class IDNA2008Codec(IDNACodec):
         except idna.IDNAError as e:
             raise IDNAException(idna_exception=e)
 
-
 _escaped = bytearray(b'"().;\\@$')
 
-IDNA_2003 = IDNA2003Codec()
-IDNA_2008_Practical = IDNA2008Codec(True, False, True)
-IDNA_2008_UTS_46 = IDNA2008Codec(True, False, False)
-IDNA_2008_Strict = IDNA2008Codec(False, False, False)
-IDNA_2008_Transitional = IDNA2008Codec(True, True, False)
+IDNA_2003_Practical = IDNA2003Codec(False)
+IDNA_2003_Strict = IDNA2003Codec(True)
+IDNA_2003 = IDNA_2003_Practical
+IDNA_2008_Practical = IDNA2008Codec(True, False, True, False)
+IDNA_2008_UTS_46 = IDNA2008Codec(True, False, False, False)
+IDNA_2008_Strict = IDNA2008Codec(False, False, False, True)
+IDNA_2008_Transitional = IDNA2008Codec(True, True, False, False)
 IDNA_2008 = IDNA_2008_Practical
 
 def _escapify(label, unicode_mode=False):
@@ -466,7 +502,7 @@ class Name(object):
         return '<DNS name ' + self.__str__() + '>'
 
     def __str__(self):
-        return self.to_text(False).decode()
+        return self.to_text(False)
 
     def to_text(self, omit_final_dot=False):
         """Convert name to text format.
@@ -476,15 +512,15 @@ class Name(object):
         """
 
         if len(self.labels) == 0:
-            return b'@'
+            return maybe_decode(b'@')
         if len(self.labels) == 1 and self.labels[0] == b'':
-            return b'.'
+            return maybe_decode(b'.')
         if omit_final_dot and self.is_absolute():
             l = self.labels[:-1]
         else:
             l = self.labels
         s = b'.'.join(map(_escapify, l))
-        return s
+        return maybe_decode(s)
 
     def to_unicode(self, omit_final_dot=False, idna_codec=None):
         """Convert name to Unicode text format.
@@ -494,9 +530,11 @@ class Name(object):
         @param omit_final_dot: If True, don't emit the final dot (denoting the
         root label) for absolute names.  The default is False.
         @type omit_final_dot: bool
-        @param: idna_codec: IDNA encoder/decoder.  If None, the default IDNA
-        2003
-        encoder/decoder is used.
+        @param idna_codec: IDNA encoder/decoder.  If None, the
+        IDNA_2003_Practical encoder/decoder is used.  The IDNA_2003_Practical
+        decoder does not impose any policy, it just decodes punycode, so if
+        you don't want checking for compliance, you can use this decoder for
+        IDNA2008 as well.
         @type idna_codec: dns.name.IDNA
         @rtype: string
         """
@@ -510,7 +548,7 @@ class Name(object):
         else:
             l = self.labels
         if idna_codec is None:
-            idna_codec = IDNA_2003
+            idna_codec = IDNA_2003_Practical
         return u'.'.join([idna_codec.decode(x) for x in l])
 
     def to_digestable(self, origin=None):
@@ -705,7 +743,7 @@ def from_unicode(text, origin=root, idna_codec=None):
     @type text: Unicode string
     @param origin: The origin to append to non-absolute names.
     @type origin: dns.name.Name
-    @param: idna_codec: IDNA encoder/decoder.  If None, the default IDNA 2003
+    @param idna_codec: IDNA encoder/decoder.  If None, the default IDNA 2003
     encoder/decoder is used.
     @type idna_codec: dns.name.IDNA
     @rtype: dns.name.Name object
@@ -775,7 +813,7 @@ def from_text(text, origin=root, idna_codec=None):
     @type text: string
     @param origin: The origin to append to non-absolute names.
     @type origin: dns.name.Name
-    @param: idna_codec: IDNA encoder/decoder.  If None, the default IDNA 2003
+    @param idna_codec: IDNA encoder/decoder.  If None, the default IDNA 2003
     encoder/decoder is used.
     @type idna_codec: dns.name.IDNA
     @rtype: dns.name.Name object

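A short usage note on the name.py changes above: to_unicode() now
defaults to the policy-free IDNA_2003_Practical decoder, and the strict
codecs must be passed explicitly. A hedged sketch (the name is
arbitrary):

        import dns.name

        n = dns.name.from_unicode(u'könig.example.')
        print(n)               # the ACE (xn--...) form
        print(n.to_unicode())  # könig.example., via the practical decoder
        # Strict decoding may raise IDNAException for names encoded
        # under a different IDNA policy:
        n.to_unicode(idna_codec=dns.name.IDNA_2003_Strict)
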
+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/namedict.py → desktop/core/ext-py/dnspython-1.15.0/dns/namedict.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/node.py → desktop/core/ext-py/dnspython-1.15.0/dns/node.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/opcode.py → desktop/core/ext-py/dnspython-1.15.0/dns/opcode.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/query.py → desktop/core/ext-py/dnspython-1.15.0/dns/query.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rcode.py → desktop/core/ext-py/dnspython-1.15.0/dns/rcode.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdata.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdata.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdataclass.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdataclass.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdataset.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdataset.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdatatype.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdatatype.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdtypes/ANY/AFSDB.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/AFSDB.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdtypes/ANY/AVC.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/AVC.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdtypes/ANY/CAA.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CAA.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdtypes/ANY/CDNSKEY.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CDNSKEY.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdtypes/ANY/CDS.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CDS.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdtypes/ANY/CERT.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CERT.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdtypes/ANY/CNAME.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CNAME.py


+ 0 - 0
desktop/core/ext-py/eventlet-0.21.0/eventlet/support/dns/rdtypes/ANY/CSYNC.py → desktop/core/ext-py/dnspython-1.15.0/dns/rdtypes/ANY/CSYNC.py


Some files were not shown because too many files changed in this diff.