[libs] Fix the pydruid lib to compile with Python 2

Romain, 5 years ago
parent commit b68f53160c

28 changed files with 1787 additions and 1895 deletions
  1. desktop/core/ext-py/pydruid-0.5.11/CHANGELOG.md (+267 -0)
  2. desktop/core/ext-py/pydruid-0.5.11/LICENSE (+13 -0)
  3. desktop/core/ext-py/pydruid-0.5.11/MANIFEST.in (+5 -0)
  4. desktop/core/ext-py/pydruid-0.5.11/PKG-INFO (+303 -0)
  5. desktop/core/ext-py/pydruid-0.5.11/README.md (+278 -0)
  6. desktop/core/ext-py/pydruid-0.5.11/RELEASE.md (+23 -0)
  7. desktop/core/ext-py/pydruid-0.5.11/pydruid/db/__init__.py (+16 -16)
  8. desktop/core/ext-py/pydruid-0.5.11/pydruid/db/api.py (+164 -60)
  9. desktop/core/ext-py/pydruid-0.5.11/pydruid/db/exceptions.py (+0 -0)
  10. desktop/core/ext-py/pydruid-0.5.11/pydruid/db/sqlalchemy.py (+64 -66)
  11. desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/__init__.py (+0 -0)
  12. desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/aggregators.py (+14 -5)
  13. desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/dimensions.py (+50 -41)
  14. desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/filters.py (+293 -0)
  15. desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/having.py (+111 -0)
  16. desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/postaggregator.py (+77 -62)
  17. desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/query_utils.py (+3 -3)
  18. desktop/core/ext-py/pydruid-0.5.11/requirements-dev.txt (+7 -0)
  19. desktop/core/ext-py/pydruid-0.5.11/requirements.txt (+23 -0)
  20. desktop/core/ext-py/pydruid-0.5.11/setup.cfg (+15 -0)
  21. desktop/core/ext-py/pydruid-0.5.11/setup.py (+61 -0)
  22. desktop/core/ext-py/pydruid/async_client.py (+0 -162)
  23. desktop/core/ext-py/pydruid/client.py (+0 -560)
  24. desktop/core/ext-py/pydruid/console.py (+0 -202)
  25. desktop/core/ext-py/pydruid/query.py (+0 -446)
  26. desktop/core/ext-py/pydruid/utils/__init__.py (+0 -0)
  27. desktop/core/ext-py/pydruid/utils/filters.py (+0 -187)
  28. desktop/core/ext-py/pydruid/utils/having.py (+0 -85)
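
A note on the change itself: this commit re-vendors pydruid as a versioned 0.5.11 tree and patches it so the same sources run under both Python 2 and Python 3. One idiom visible in the `api.py` diff below is `from six import string_types` for cross-version string checks; a minimal sketch of that pattern (`is_text` is a hypothetical helper, purely illustrative):

```python
# Sketch of the six-based string-type idiom the vendored code relies on;
# `is_text` is a hypothetical helper, not a function from this commit.
from six import string_types


def is_text(value):
    # string_types is (str, unicode) on Python 2 and (str,) on Python 3,
    # so a single isinstance check covers both interpreters.
    return isinstance(value, string_types)


assert is_text("druid")
assert not is_text(42)
```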

+ 267 - 0
desktop/core/ext-py/pydruid-0.5.11/CHANGELOG.md

@@ -0,0 +1,267 @@
+## Change Log
+
+### 0.5.8 (2020/01/10 19:52 +00:00)
+- [#180](https://github.com/druid-io/pydruid/pull/180) [dbapi] Added ssl certificate (#180) (@TechGeekD)
+
+### 0.5.7 (2019/10/07 20:15 +00:00)
+- [#172](https://github.com/druid-io/pydruid/pull/172) [dbapi] Fixing type ordering (#172) (@john-bodley)
+- [#174](https://github.com/druid-io/pydruid/pull/174) [parameters] Fix empty parameter check (#174) (@john-bodley)
+- [#178](https://github.com/druid-io/pydruid/pull/178) [api] Remove duplicate line (#178) (@john-bodley)
+- [#176](https://github.com/druid-io/pydruid/pull/176) Black + various pre-commit hooks (#176) (@mistercrunch)
+- [#170](https://github.com/druid-io/pydruid/pull/170) Updated bound filter to latest Druid API specs (#170) (@wjdecorte)
+
+### 0.5.6 (2019/07/04 05:54 +00:00)
+- [#171](https://github.com/druid-io/pydruid/pull/171) [dbapi] Fixing pyformat parameters (#171) (@john-bodley)
+- [#163](https://github.com/druid-io/pydruid/pull/163) Added http_client parameter to AsyncPyDruid class init to specify which client to use (#163) (@wjdecorte)
+- [#168](https://github.com/druid-io/pydruid/pull/168) [dbapi] Fixing header description (#168) (@john-bodley)
+- [#167](https://github.com/druid-io/pydruid/pull/167) Add README.md to pypi (#167) (@mistercrunch)
+
+### 0.5.4 (2019/06/10 01:40 +00:00)
+- [#146](https://github.com/druid-io/pydruid/pull/146) Add proxies support to BaseDruidClient (#146) (@jobar)
+- [#165](https://github.com/druid-io/pydruid/pull/165) CHANGELOG 0.5.0 to 0.5.3 (#165) (@mistercrunch)
+- [#159](https://github.com/druid-io/pydruid/pull/159) F timeout issue 140 (#159) (@wjdecorte)
+- [#161](https://github.com/druid-io/pydruid/pull/161) Parse actual response body for HTTP error (#161) (@haltwise)
+- [#139](https://github.com/druid-io/pydruid/pull/139) SubQueries Support (#139) (@pantlavanya)
+- [#164](https://github.com/druid-io/pydruid/pull/164) Add Trove classifiers for supported Python versions. (#164) (@jezdez)
+
+### 0.5.3 (2019/05/29 21:23 +00:00)
+- [#153](https://github.com/druid-io/pydruid/pull/153) Registered lookups (#153) (@srggrs)
+- [#156](https://github.com/druid-io/pydruid/pull/156) Support for search and like filters (#156) (@Makesh-Gmak)
+- [#149](https://github.com/druid-io/pydruid/pull/149) Add support for Druid Basic Auth to SQLAlchemy (#149) (@donbowman)
+- [#155](https://github.com/druid-io/pydruid/pull/155) [api] Adding support for headers (#155) (@john-bodley)
+
+### pydruid-0.5.2 (2019/03/08 01:16 +00:00)
+- [#150](https://github.com/druid-io/pydruid/pull/150) Improve error message (#150) (@betodealmeida)
+
+### pydruid-0.5.1 (2019/03/05 17:11 +00:00)
+- [#152](https://github.com/druid-io/pydruid/pull/152) Pass context to Druid (#152) (@betodealmeida)
+- [#148](https://github.com/druid-io/pydruid/pull/148) scan query: use columns instead of dimensions (#148) (@adelcast)
+- [#147](https://github.com/druid-io/pydruid/pull/147) Update console.py (#147) (@john-bodley)
+- [#145](https://github.com/druid-io/pydruid/pull/145) Dummy version number for master branch + RELEASE.md docs (#145) (@mistercrunch)
+
+### 0.5.0 (2018/11/28 06:16 +00:00)
+- [#144](https://github.com/druid-io/pydruid/pull/144) [db-api] Performance improvements (#144) (@john-bodley)
+- [#142](https://github.com/druid-io/pydruid/pull/142) [prompt_toolkit] Enforcing pre-2.0 (#142) (@john-bodley)
+- [#141](https://github.com/druid-io/pydruid/pull/141) [console] Updating filters/keywords (#141) (@john-bodley)
+- [#143](https://github.com/druid-io/pydruid/pull/143) [travis] Pinning flake8 (#143) (@john-bodley)
+- [#138](https://github.com/druid-io/pydruid/pull/138) Reserve original column sequence in SQL when reading data (#138) (@xqliu)
+- [#132](https://github.com/druid-io/pydruid/pull/132) [cli] add an quit/exit/bye commands (#132) (@mistercrunch)
+
+### pydruid-0.4.4 (2018/06/21 18:25 +00:00)
+- [#133](https://github.com/druid-io/pydruid/pull/133) Fix empty result (#133) (@betodealmeida)
+
+### pydruid-0.4.3 (2018/05/18 18:22 +00:00)
+- [#131](https://github.com/druid-io/pydruid/pull/131) Support hyperUnique type (#131) (@betodealmeida)
+- [#127](https://github.com/druid-io/pydruid/pull/127) Implement Filtered DimensionSpecs (#127) (@jeffreythewang)
+
+### 0.4.2 (2018/04/03 00:23 +00:00)
+- [#121](https://github.com/druid-io/pydruid/pull/121) Surface HTML errors (#121) (@betodealmeida)
+- [#76](https://github.com/druid-io/pydruid/pull/76) Filters extraction function (#76) (@gaetano-guerriero)
+- [#119](https://github.com/druid-io/pydruid/pull/119) fix access to class name when raising connection error (#119) (@danfrankj)
+
+### pydruid-0.4.1 (2018/02/08 00:20 +00:00)
+- [#120](https://github.com/druid-io/pydruid/pull/120) Remove 'enum' package requirements (#120) (@mistercrunch)
+- [#118](https://github.com/druid-io/pydruid/pull/118) Add shortcuts to CLI (#118) (@betodealmeida)
+- [#117](https://github.com/druid-io/pydruid/pull/117) Small fixes (#117) (@betodealmeida)
+- [#116](https://github.com/druid-io/pydruid/pull/116) Get b64encoding correct for python 2 & 3 (#116) (@boorad)
+
+### pydruid-0.4.0 (2018/01/30 18:59 +00:00)
+- [#100](https://github.com/druid-io/pydruid/pull/100) thetaSketchEstimate fix py2.* (#100) (@Dylan1312)
+- [#80](https://github.com/druid-io/pydruid/pull/80) Methode for http basic auth with username and password added (#80) (@DPiontek)
+- [#113](https://github.com/druid-io/pydruid/pull/113) Check if connection is closed before execute (#113) (@betodealmeida)
+- [#108](https://github.com/druid-io/pydruid/pull/108) Add support for 'scan' queries (#108) (@mistercrunch)
+- [#111](https://github.com/druid-io/pydruid/pull/111) Linting with flake8 (#111) (@mistercrunch)
+- [#112](https://github.com/druid-io/pydruid/pull/112) Fix reserved keyword (#112) (@betodealmeida)
+- [#110](https://github.com/druid-io/pydruid/pull/110) Merge druiddb into pydruid (#110) (@betodealmeida)
+- [#107](https://github.com/druid-io/pydruid/pull/107) Implement pandas export for 'select' queries (#107) (@mistercrunch)
+- [#106](https://github.com/druid-io/pydruid/pull/106) [py2] fix str check in parse_datasource (#106) (@mistercrunch)
+- [#103](https://github.com/druid-io/pydruid/pull/103) Add a way to add context->queryid. (#103) (@lionaneesh)
+- [#87](https://github.com/druid-io/pydruid/pull/87) Support for 'interval' filter (#87) (@var23rav)
+- [#74](https://github.com/druid-io/pydruid/pull/74) Support for Union datasource (#74) (@RichRadics)
+- [#72](https://github.com/druid-io/pydruid/pull/72) thetaSketch support (#72) (@RichRadics)
+- [#77](https://github.com/druid-io/pydruid/pull/77) Add LICENSE to MANIFEST.in (#77) (@pmlandwehr)
+- [#66](https://github.com/druid-io/pydruid/pull/66) __str__ returns dict (#66) (@onesuper)
+- [#82](https://github.com/druid-io/pydruid/pull/82) Add columnComparison filter support (#82) (@erikdubbelboer)
+- [#73](https://github.com/druid-io/pydruid/pull/73) Add new Greatest and Least post aggregators (#73) (@erikdubbelboer)
+- [#84](https://github.com/druid-io/pydruid/pull/84) Add min and max aggregators for long and double (#84) (@azymnis)
+
+### pydruid-0.3.1 (2016/12/22 21:55 +00:00)
+- [#70](https://github.com/druid-io/pydruid/pull/70) Prepare for 0.3.1 release. (#70) (@gianm)
+- [#68](https://github.com/druid-io/pydruid/pull/68) add quantile and quantiles post aggregators support (#68) (@hexchain)
+- [#69](https://github.com/druid-io/pydruid/pull/69) query: add support for search query (#69) (@hexchain)
+- [#60](https://github.com/druid-io/pydruid/pull/60) Add support for bound filter (#60) (@psalaberria002)
+- [#67](https://github.com/druid-io/pydruid/pull/67) Add merge option to segment_metadata (#67) (@noppanit)
+- [#62](https://github.com/druid-io/pydruid/pull/62) Bugfix when building `not` filter multiple times (#62) (@dakra)
+- [#58](https://github.com/druid-io/pydruid/pull/58) Don't raise exception when filter/having/dimension is None (#58) (@dakra)
+- [#53](https://github.com/druid-io/pydruid/pull/53) only import pandas when `export_pandas` gets called (#53) (@dakra)
+- [#54](https://github.com/druid-io/pydruid/pull/54) Adds support for "in" filter (#54) (@se7entyse7en)
+- [#57](https://github.com/druid-io/pydruid/pull/57) Add `analysisTypes` to segment metadata query (#57) (@drcrallen)
+- [#55](https://github.com/druid-io/pydruid/pull/55) allow `descending` attribute in timeseries query (#55) (@dakra)
+
+### pydruid-0.3.0 (2016/05/24 17:09 +00:00)
+- [9a802a3](https://github.com/druid-io/pydruid/commit/9a802a3c45a1126fcb7961a32ef41f74543d06b3) bump version to 0.3.0 (@xvrl)
+- [#50](https://github.com/druid-io/pydruid/pull/50) Add JavaScript aggregator support (#50) (@sologoub)
+- [#51](https://github.com/druid-io/pydruid/pull/51) bugfix nested `and`/`or` filters inside `not` (#51) (@dakra)
+- [#52](https://github.com/druid-io/pydruid/pull/52) Adding support for regex filter (#52) (@mistercrunch)
+- [d0763f1](https://github.com/druid-io/pydruid/commit/d0763f1a17b324ef91d2fe6b20b2264500185e47) add `NamespaceLookupExtraction` (@dakra)
+- [1bf68ae](https://github.com/druid-io/pydruid/commit/1bf68ae9f9bd99bbcbe4625c5a3a5d7bb78691f0) add `TimeFormatExtraction` (@dakra)
+- [#45](https://github.com/druid-io/pydruid/pull/45) Merge pull request #45 from dakra/nested-filter-aggregates (@dakra)
+- [#39](https://github.com/druid-io/pydruid/pull/39) Merge pull request #39 from DreamLab/async_support (@DreamLab)
+- [166983e](https://github.com/druid-io/pydruid/commit/166983ececa9f6ea4ab242dbf39619731388886e) * added support for asynchronous client (@turu)
+- [#47](https://github.com/druid-io/pydruid/pull/47) Merge pull request #47 from dakra/ne-dimension (@dakra)
+- [#46](https://github.com/druid-io/pydruid/pull/46) Merge pull request #46 from dakra/or-filter (@dakra)
+- [5290823](https://github.com/druid-io/pydruid/commit/5290823cbcad223fd56300487b7051835acb5be5) add __ne__ to `Dimension` so you can `filter = Dimension('dim') != val` (@dakra)
+- [415d954](https://github.com/druid-io/pydruid/commit/415d954d68c9e58cb28ab393e43d92bbdf0517f9) add support for `and`,`or` filters with more then 2 values. (@dakra)
+- [8b08a91](https://github.com/druid-io/pydruid/commit/8b08a91816e67724b7497fba998db9a4b89f1446) add support for nested filtered aggregators (@dakra)
+- [#40](https://github.com/druid-io/pydruid/pull/40) Merge pull request #40 from se7entyse7en/dimensions_specs (@se7entyse7en)
+- [#41](https://github.com/druid-io/pydruid/pull/41) Merge pull request #41 from se7entyse7en/hyperuniquecardinality_postaggregator (@se7entyse7en)
+- [9b3aaad](https://github.com/druid-io/pydruid/commit/9b3aaad624e6de6185997c3c2e2660b8c21807b8) Added support for HyperUniqueCardinality (@se7entyse7en)
+- [9e1ef9e](https://github.com/druid-io/pydruid/commit/9e1ef9eff193a65cce7ff53c4a96bc1930150a69) Added tests for dimensions module (@se7entyse7en)
+- [382d610](https://github.com/druid-io/pydruid/commit/382d610aa18daf297ae14c918fc2f72729e2b0d1) Handled dimensions building in PyDruid client (@se7entyse7en)
+- [fe0eaac](https://github.com/druid-io/pydruid/commit/fe0eaac2cffcccbe0043aa9aee2f5baf50fe73de) Added dimensions specs and some extraction functions (@se7entyse7en)
+- [#38](https://github.com/druid-io/pydruid/pull/38) Merge pull request #38 from nmckoy/js-filter (@nmckoy)
+- [10a8340](https://github.com/druid-io/pydruid/commit/10a8340af24eddbab955acbb304e18a24c418e1a) camel case JavaScript (@nmckoy)
+- [e1604d6](https://github.com/druid-io/pydruid/commit/e1604d6219d293530e0d7bc0a869fe41f90f79a7) support javascript filter (@nmckoy)
+- [#37](https://github.com/druid-io/pydruid/pull/37) Merge pull request #37 from gianm/cardinality (@gianm)
+- [fa2ecf9](https://github.com/druid-io/pydruid/commit/fa2ecf9f83d0bfb01324dad6c58281e131919216) Bump version to 0.2.4 (@gianm)
+- [c37b7d4](https://github.com/druid-io/pydruid/commit/c37b7d4d2620c1eccdc7d525a772dbb6869a81a0) Cardinality aggregator (@gianm)
+
+### pydruid-0.2.3 (2015/10/25 17:23 +00:00)
+- [e9d6648](https://github.com/druid-io/pydruid/commit/e9d6648bb2ac1fbbda07151f01a4ccf070a4ea14) version 0.2.3 (@xvrl)
+- [#36](https://github.com/druid-io/pydruid/pull/36) Merge pull request #36 from druid-io/update-links (@druid-io)
+- [4481ac5](https://github.com/druid-io/pydruid/commit/4481ac5087fff1005823a1b99805d741c63dc625) update links (@xvrl)
+- [0300447](https://github.com/druid-io/pydruid/commit/03004471d0fd8b586bf6be3fff3232fa84361350) clean up .gitignore (@xvrl)
+- [af4d5ec](https://github.com/druid-io/pydruid/commit/af4d5ec6cdacbb2f6e3bcd9b04df028977614d56) s/bard/broker/ (@xvrl)
+- [#35](https://github.com/druid-io/pydruid/pull/35) Merge pull request #35 from se7entyse7en/filters_tests (@se7entyse7en)
+- [e629e8b](https://github.com/druid-io/pydruid/commit/e629e8ba1586f443b2b4efa635ec3f808ebedddb) Fixed error raised in Filter class for invalid type (@se7entyse7en)
+- [74435d2](https://github.com/druid-io/pydruid/commit/74435d223d776f6b34e267dff2e3bfe933d0f00e) Added tests for filters module (@se7entyse7en)
+- [#28](https://github.com/druid-io/pydruid/pull/28) Merge pull request #28 from se7entyse7en/filtered_aggregation (@se7entyse7en)
+- [8b8b138](https://github.com/druid-io/pydruid/commit/8b8b1387900521040ba166dd735f8a50174a87d4) Added tests for filtered aggregation (@se7entyse7en)
+- [85bc393](https://github.com/druid-io/pydruid/commit/85bc39361390f414ff505459a4048553e24db337) Added support for filtered aggregator (@se7entyse7en)
+- [#34](https://github.com/druid-io/pydruid/pull/34) Merge pull request #34 from se7entyse7en/travis_integration (@se7entyse7en)
+- [#32](https://github.com/druid-io/pydruid/pull/32) Merge pull request #32 from se7entyse7en/some_tests (@se7entyse7en)
+- [5b8e9fe](https://github.com/druid-io/pydruid/commit/5b8e9fefb7b5acbb1dcc224b4932dd6b6901bb8e) Added travis configuration file (@se7entyse7en)
+- [261efdb](https://github.com/druid-io/pydruid/commit/261efdb6cf93eda51566e66d563414c50db8f49d) Added test_aggregators (@se7entyse7en)
+- [00bb0b0](https://github.com/druid-io/pydruid/commit/00bb0b0da45ac1462ada2bfde32a4cd6f0092218) Removed unused import in test_query_utils (@se7entyse7en)
+- [d6af01d](https://github.com/druid-io/pydruid/commit/d6af01db9a265ae477e143bc7f30cf4814815dd6) Fixed flake8 F403 errors in test_query_utils (@se7entyse7en)
+- [deb7299](https://github.com/druid-io/pydruid/commit/deb729999bc98433f00e3b5cc171ff881f285ad7) Fixed pep8 E302 errors in test_query_utils (@se7entyse7en)
+- [3e9a100](https://github.com/druid-io/pydruid/commit/3e9a1009370c00ed9dbcbe623dfb932ef8bb867f) Fixed flake8 F403 errors in test_client (@se7entyse7en)
+- [c169777](https://github.com/druid-io/pydruid/commit/c169777397d3dee7d4b26e761f485523351eaacd) Added some items to .gitignore for emacs (@se7entyse7en)
+- [41c1190](https://github.com/druid-io/pydruid/commit/41c1190214ed7cc5289846ace7957f514d236b98) Fixed pep8 E711 errors in test_client (@se7entyse7en)
+- [c009bca](https://github.com/druid-io/pydruid/commit/c009bca263083499853fed1efd1946623fe2722d) Fixed test_client so that it doesn't fail if pandas is not installed given that it is optional (@se7entyse7en)
+
+### pydruid-0.2.2 (2015/07/24 23:12 +00:00)
+- [5636a9e](https://github.com/druid-io/pydruid/commit/5636a9eb47238ff213c982cce33fec83c8a8e182) version 0.2.2, fix pypi version conflict (@xvrl)
+
+### pydruid-0.2.1 (2015/07/24 18:00 +00:00)
+- [41ea841](https://github.com/druid-io/pydruid/commit/41ea841eb54f5b30776c6ba0bd7f414adcc002e4) update version to 0.2.1 and fix license string (@xvrl)
+- [#24](https://github.com/druid-io/pydruid/pull/24) Merge pull request #24 from mistercrunch/limit_spec (@mistercrunch)
+- [#21](https://github.com/druid-io/pydruid/pull/21) Merge pull request #21 from griffy/master (@griffy)
+- [616a93e](https://github.com/druid-io/pydruid/commit/616a93e81488503967501ae6b799ec7f0b113005) Fix regressions introduced by support for Python 3 and add initial test coverage (@griffy)
+- [97397d3](https://github.com/druid-io/pydruid/commit/97397d3ca84e1f9903752f04c06ad0375d5767c1) Adding limitSpec support to groupby query (@mistercrunch)
+- [#19](https://github.com/druid-io/pydruid/pull/19) Merge pull request #19 from graphaelli/simplejson-optional (@graphaelli)
+- [faccbc0](https://github.com/druid-io/pydruid/commit/faccbc00dac4cd3e9da1b27febb69d0f21e28f40) make simplejson optional, except on python < 2.6 (@graphaelli)
+- [#18](https://github.com/druid-io/pydruid/pull/18) Merge pull request #18 from griffy/master (@griffy)
+- [28eeae4](https://github.com/druid-io/pydruid/commit/28eeae465c22b4362dad3e77c8fcc21833b25407) Add support for Python 3 (@griffy)
+- [#17](https://github.com/druid-io/pydruid/pull/17) Merge pull request #17 from mruwnik/allow_setting_of_context_options (@mruwnik)
+- [f2d1a24](https://github.com/druid-io/pydruid/commit/f2d1a24e24a81286973975356c14f7f40a0d667b) Allow the setting of context properties in queries (@mruwnik)
+- [#15](https://github.com/druid-io/pydruid/pull/15) Merge pull request #15 from seanv507/druid_error_output (@seanv507)
+- [#14](https://github.com/druid-io/pydruid/pull/14) Merge pull request #14 from seanv507/having (@seanv507)
+- [93b65be](https://github.com/druid-io/pydruid/commit/93b65bea3c1e592e94d649a97fc1a60557091f31) report druid error (@seanv507)
+- [4226a66](https://github.com/druid-io/pydruid/commit/4226a66c47242348113a2882bf8090ec5e3723e5) fixed aggregation equalTo and simplified and or nesting (@seanv507)
+- [92275bf](https://github.com/druid-io/pydruid/commit/92275bf64d7c3f0b4cbfb6179ecd9cf5bfa0b5d7) added having clause to groupby queries (@seanv507)
+- [#13](https://github.com/druid-io/pydruid/pull/13) Merge pull request #13 from whitehats/new_query_types (@whitehats)
+- [602cc04](https://github.com/druid-io/pydruid/commit/602cc04ca4e527203b34790cbe16f49817ecb4cb) queryType 'select' (@KenjiTakahashi)
+- [#12](https://github.com/druid-io/pydruid/pull/12) Merge pull request #12 from davideanastasia/hyperunique (@davideanastasia)
+- [090eb03](https://github.com/druid-io/pydruid/commit/090eb032884a3652affaeeb18f6ea40cc66f6db9) Add support for HyperUnique aggregator (@davideanastasia)
+
+### pydruid-0.2.0 (2014/04/14 20:07 +00:00)
+- [#10](https://github.com/druid-io/pydruid/pull/10) Merge pull request #10 from metamx/use-relative-paths (@metamx)
+- [ffebcb8](https://github.com/druid-io/pydruid/commit/ffebcb89361a59d79600416ac2d6525ef06d9248) restored updated intro, with restructered heads and info on finding more examples
+- [d5e84a7](https://github.com/druid-io/pydruid/commit/d5e84a7cbc8d4ce966f403d96db2cd488d53d0e4) intro now focused just on PyDruid class
+- [1667132](https://github.com/druid-io/pydruid/commit/1667132f312f7507bebd53caa4e584f16d178e3b) ignoring these generated files
+- [e51391a](https://github.com/druid-io/pydruid/commit/e51391a6df968ab9e43015b4e07191f99d722cb4) rm this intruder, a file specific to Mac OSX
+- [4192010](https://github.com/druid-io/pydruid/commit/4192010223a86f35532c0a33ae71b01f83421b62) rm this intruder, a file specific to Mac OSX
+- [af7c90f](https://github.com/druid-io/pydruid/commit/af7c90f4f600e117a807566b4deae227efb01cf4) substituting relative paths for local paths
+- [752772d](https://github.com/druid-io/pydruid/commit/752772d142ee2c1999562b880b4260f8c5dcf59e) Update README.md (@dganguli)
+- [f91a5dc](https://github.com/druid-io/pydruid/commit/f91a5dca2da29f02e737a793bcec8b7af4e19f50) Update README.md (@dganguli)
+- [#8](https://github.com/druid-io/pydruid/pull/8) Merge pull request #8 from metamx/igalpd (@metamx)
+- [c1b3423](https://github.com/druid-io/pydruid/commit/c1b3423c5f48448c2717b688fcb9fca747e37185) changed 'bard' to 'druid'
+- [c4ba739](https://github.com/druid-io/pydruid/commit/c4ba739734b20ad8693ee185ae351615ad38ed74) Update README.md (@dganguli)
+- [b398f45](https://github.com/druid-io/pydruid/commit/b398f45f896dac31a91bca548499286ecebc9789) Update README.md (@dganguli)
+- [844047e](https://github.com/druid-io/pydruid/commit/844047e057a0e03b950e4775cadf5e54299cbbd7) Update README.md (@dganguli)
+- [#7](https://github.com/druid-io/pydruid/pull/7) Merge pull request #7 from metamx/pyipyi (@metamx)
+- [823c55c](https://github.com/druid-io/pydruid/commit/823c55c5af0881520e85651a02f8192b1ff55758) fix whitespace ftw
+- [1fabb51](https://github.com/druid-io/pydruid/commit/1fabb518a1e14a13f5d5e403fac826e5f3a2086a) setup.cfg to make it easy to upload docs to pythonhosted.org
+- [9471077](https://github.com/druid-io/pydruid/commit/9471077cdd82e9a7940e2bc062299c94d378b6a7) docs: rename pyDruid -> pydruid
+- [bc625b5](https://github.com/druid-io/pydruid/commit/bc625b5acfa50f7f8d6090f22dba53dbac709488) re-name pypi package from pyDruid -> pydruid
+- [d64a56b](https://github.com/druid-io/pydruid/commit/d64a56b255c5f63a9e5539c56b5d4908d16e4f1b) setup.py: upgrade version to 0.2.0, remove extraneous dependencies
+- [5e6dfdb](https://github.com/druid-io/pydruid/commit/5e6dfdbdb1e3a57e78e554a9cddbe400dc718761) Update README.md (@dganguli)
+- [abfbcf2](https://github.com/druid-io/pydruid/commit/abfbcf224feb3edce0ab40fb2f39b7977cf076a5) Update README.md (@dganguli)
+- [d579de1](https://github.com/druid-io/pydruid/commit/d579de1e1d821decdb02549c21c119279018870b) Update README.md (@dganguli)
+- [#6](https://github.com/druid-io/pydruid/pull/6) Merge pull request #6 from metamx/docs (@metamx)
+- [3df572f](https://github.com/druid-io/pydruid/commit/3df572f603e620a4e514bdc2caec9bff7c8fc3aa) Update README.md (@dganguli)
+- [d109749](https://github.com/druid-io/pydruid/commit/d109749aa17a2aadec1bc885ba25975d0f2e244b) Update README.md (@dganguli)
+- [3c1b726](https://github.com/druid-io/pydruid/commit/3c1b7263ceb58f4dd2979bf54889535d3b5d93ed) Merge branch 'docs' of github.com:metamx/pydruid into docs
+- [5da69b1](https://github.com/druid-io/pydruid/commit/5da69b128294cbdf2e2eb904e1faa5d22a931332) twitter graph figure for groupby example
+- [1c97eb3](https://github.com/druid-io/pydruid/commit/1c97eb3b081c8f362583dcebe94be0b9319180ef) Update README.md (@dganguli)
+- [5a19cc1](https://github.com/druid-io/pydruid/commit/5a19cc104440375a09f5cd2a9af18b7356cbb5fe) Update README.md (@dganguli)
+- [32ae0f8](https://github.com/druid-io/pydruid/commit/32ae0f88bfb417e15ff12f837a4704f4235821c3) re-size avg tweet length figure part deux
+- [a39e9bc](https://github.com/druid-io/pydruid/commit/a39e9bc7f9353fc1e49d6e0d607c1bad75851955) re-size avg tweet length figure
+- [8f9b45d](https://github.com/druid-io/pydruid/commit/8f9b45dc41853f9ec52d72b761dd39e225835795) Merge branch 'docs' of github.com:metamx/pydruid into docs
+- [df90783](https://github.com/druid-io/pydruid/commit/df907837782f76bf4d0142c93aff942b5c17c2b7) docs: created figures directory
+- [eb4f4a2](https://github.com/druid-io/pydruid/commit/eb4f4a23d6806d38f498be3f3a9a0bf885eb3db9) Update README.md (@dganguli)
+- [458b82e](https://github.com/druid-io/pydruid/commit/458b82e80d26259ee3654ac8d92293831fcc0596) README back to md from txt
+- [6c18175](https://github.com/druid-io/pydruid/commit/6c18175b10a4d09506b49ec764956db269dcc203) updated built documentation
+- [3b61bbd](https://github.com/druid-io/pydruid/commit/3b61bbd76691b3846ebaf23bc89beb7d6f059612) documented export methods and big fix to export_tsv for topn queries
+- [4961495](https://github.com/druid-io/pydruid/commit/4961495c22417f073a552964736ff3802f6d3c19) fix topn docstring
+- [1b1f00b](https://github.com/druid-io/pydruid/commit/1b1f00b9a104a1ca917e98dfa5a222534fb1235a) Filter and Postaggregator builder methods are static members of their respective classes
+- [2f92ec7](https://github.com/druid-io/pydruid/commit/2f92ec78ce92a117621c53bb1dbf55bb2ec2f491) more sensical docstring for PyDruid
+- [f49265d](https://github.com/druid-io/pydruid/commit/f49265d6fee07685b989197a02b7dd3d4a0969e8) documented PyDruid class
+- [c71844b](https://github.com/druid-io/pydruid/commit/c71844bbd8bb033d5f3ec179269ea20baded5bea) commit initial version of built docs
+- [8d570e4](https://github.com/druid-io/pydruid/commit/8d570e42eda367529f8f95373cccc675865f644a) documented segment_metadata. fixed a bug in it too
+- [1e050bc](https://github.com/druid-io/pydruid/commit/1e050bcb14e8495d2b4dec64de61304a2e039deb) make post and parse private, made build and validate public again
+- [69e9784](https://github.com/druid-io/pydruid/commit/69e9784b58e072e874576039de77641df796d11f) documented time_boundary
+- [9f595df](https://github.com/druid-io/pydruid/commit/9f595dfe35fe27f5ae2e7974156a30e15b066889) re-name kwargs dataSource -> datasource and postAggregations -> post_aggregations
+- [9a995dd](https://github.com/druid-io/pydruid/commit/9a995dd06428cf229ebab6a58bcfe4f463767871) Added documentation for groupby and timeseries queries
+- [7552dcb](https://github.com/druid-io/pydruid/commit/7552dcb42477e28adad1a2dcda8983bcf634d1b8) topN has sphinx compatible example in docstring
+- [59522a7](https://github.com/druid-io/pydruid/commit/59522a7416bf8e6bf2ca1d43af278d3180671311) Ignore .DS_store
+- [01a8f88](https://github.com/druid-io/pydruid/commit/01a8f884ee11c0e718717108e344b1c40f2decbb) build and validate query methods now private
+- [b44a8dc](https://github.com/druid-io/pydruid/commit/b44a8dc1a289199e13860d48f9a26df56d3caf57) sphinx compatible documentation for client.topN
+- [5464a2c](https://github.com/druid-io/pydruid/commit/5464a2c7643bbd25cfaaa6373e8960eafce5f07d) Working documentatin using sphinx-quickstart
+- [3023707](https://github.com/druid-io/pydruid/commit/30237073fe717c3d400a2a31f09911be3e9b3ae2) Each query sets its query type instead of passing it to build_query()
+- [06d00da](https://github.com/druid-io/pydruid/commit/06d00da4bf416f00798cb3c3b10c4554290bc5c9) postaggregator.py: Field and Const postaggregators call super constructor
+- [8605393](https://github.com/druid-io/pydruid/commit/8605393e5e010b4d62afcb98f04285299ce389f1) filters.py: bug fix to show() method
+- [6c9b7d5](https://github.com/druid-io/pydruid/commit/6c9b7d5b9df199ccbf81b40b32ea7255ae823f1a) aggregators.py: re-name doublesum -> sum
+- [51aedd1](https://github.com/druid-io/pydruid/commit/51aedd1c5b79f69dfd614a3106b4c60e1da4e521) client.py: documentatino for topN query
+- [ba18670](https://github.com/druid-io/pydruid/commit/ba18670d5b6a01917ef4b1bb80cab636ae5eab14) export_tsv raises NotImplementedError if necessary
+- [d8bc3a7](https://github.com/druid-io/pydruid/commit/d8bc3a78cc2dc6c49d5a0b1c2c0aa32f38863f2e) client.py:
+- [3b53fec](https://github.com/druid-io/pydruid/commit/3b53fec3ebc5cbc2a4411611723c484e69348dea) Finally read the pep-8 style guide. Fix whitespace ensued
+- [61300c9](https://github.com/druid-io/pydruid/commit/61300c9dc50e8272768b27411cabf4c99185a080) fix whitespace ftw
+- [b3d269a](https://github.com/druid-io/pydruid/commit/b3d269a67e1977208d999132af793aed731fac6e) only parse query results once
+- [0a15aeb](https://github.com/druid-io/pydruid/commit/0a15aeb7749f81fd847d867ae663f3ab2f7ffe89) client.py: bug fix export_tsv for groupBy queries
+- [0d56b23](https://github.com/druid-io/pydruid/commit/0d56b2302ffe87c2ab1003da5f1ff80378f573eb) client.py:
+- [ac74bf6](https://github.com/druid-io/pydruid/commit/ac74bf63f06f0a272b2803a50dc65ab1e32d910d) implemented topN queries. made more informative error messages
+- [b660ba7](https://github.com/druid-io/pydruid/commit/b660ba71166e84e02413aa48d2f190580fd775d0) client.py: check if bard_url ends with a / before constructing full url
+- [27e87cd](https://github.com/druid-io/pydruid/commit/27e87cd6fff3379a9d5939095006b2730c797456) query implementations share more post related code
+- [ed5a9a6](https://github.com/druid-io/pydruid/commit/ed5a9a6d1e08c46d8cea79dbd6f26ea25c6996f0) whitespace ftw
+- [11321e1](https://github.com/druid-io/pydruid/commit/11321e1c3457406d3fa5b3f6ce5c647b274ffda6) Made pyDruid interoperate with PostAggregators objects
+- [c845eb5](https://github.com/druid-io/pydruid/commit/c845eb59244a6ac4198f25ce2732d939d63494d7) Removed extraneous comments and whitespace
+- [3fae2b7](https://github.com/druid-io/pydruid/commit/3fae2b721fdcc6e62bd1489cb360080fe4f2505c) Updated post-aggs to be easier to express and use
+- [71e2753](https://github.com/druid-io/pydruid/commit/71e27530c3faca12a95469ad8088775aa5ac0ee0) Bug fix to export_pandas
+- [9bc5ba6](https://github.com/druid-io/pydruid/commit/9bc5ba62d5311983d03d27a4ff9c3de46c84ace5) Implemented pandas export for groupby queries
+- [229355b](https://github.com/druid-io/pydruid/commit/229355b118cd68a53929846a54c90d6b4ac4dce6) Remove extraneous dependencies on matplotlib
+- [#1](https://github.com/druid-io/pydruid/pull/1) Merge pull request #1 from rjurney/master (@rjurney)
+- [5fb737e](https://github.com/druid-io/pydruid/commit/5fb737ee5c6abc867c18e3ab739e20c5bb3321fe) Working 1.7 (@rjurney)
+- [efa2786](https://github.com/druid-io/pydruid/commit/efa2786debac9a9d7d8a05a9800c205890bbecb0) Added __init__.py to pydruid and updated to version 0.1.5 for release. (@rjurney)
+- [cc99963](https://github.com/druid-io/pydruid/commit/cc99963f15a76073c12492267660238eff8d29ad) Trying to re-create source code. Something weird and terrible went on. (@rjurney)
+- [3c9e1a0](https://github.com/druid-io/pydruid/commit/3c9e1a0fc98a8d9de9076995c6c3ef8579d9d33f) Renaming readme. (@rjurney)
+- [441e7aa](https://github.com/druid-io/pydruid/commit/441e7aa5e4291316ea9b40bb23c8ea1b0897a677) Still trying to make pypi work, but I can't get the build to update on pypi. (@rjurney)
+- [1cf2971](https://github.com/druid-io/pydruid/commit/1cf29718ee53d8a290839d54ddf9a752bd6b3554) Trying to make pip install work. (@rjurney)
+- [4347e69](https://github.com/druid-io/pydruid/commit/4347e69127d7564319beda73dbe5ba53d86cee74) Crap, whats going on? (@rjurney)
+- [978eeaa](https://github.com/druid-io/pydruid/commit/978eeaad75eeed7e79070349730cba2623f6b828) Massive cleanup (@rjurney)
+- [0e6ee14](https://github.com/druid-io/pydruid/commit/0e6ee149483561e4a2fc86c594356436e417bc45) Made the module into a package, some re-org (@rjurney)
+- [b43b95d](https://github.com/druid-io/pydruid/commit/b43b95d3371cc2e9beaa3c7a2d756806bf483469) Making this into a python project (@rjurney)
+- [21f3b2b](https://github.com/druid-io/pydruid/commit/21f3b2b649d14619ed5269ba69ac14a58b301261) First code commit
+- [35cd39e](https://github.com/druid-io/pydruid/commit/35cd39e755cd4136923973ae1594f8b3f4336f63) Add license headers
+- [d00f822](https://github.com/druid-io/pydruid/commit/d00f822f259bc8bf87e8248be2458328ed8aeabf) Update LICENSE (@xvrl)

+ 13 - 0
desktop/core/ext-py/pydruid-0.5.11/LICENSE

@@ -0,0 +1,13 @@
+Copyright 2013 Metamarkets Group Inc.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.

+ 5 - 0
desktop/core/ext-py/pydruid-0.5.11/MANIFEST.in

@@ -0,0 +1,5 @@
+include *.txt
+include *.md
+recursive-include docs
+recursive-include pydruid
+include LICENSE

+ 303 - 0
desktop/core/ext-py/pydruid-0.5.11/PKG-INFO

@@ -0,0 +1,303 @@
+Metadata-Version: 2.1
+Name: pydruid
+Version: 0.5.11
+Summary: A Python connector for Druid.
+Home-page: https://druid.apache.org
+Author: Druid Developers
+Author-email: druid-development@googlegroups.com
+License: Apache License, Version 2.0
+Project-URL: Bug Tracker, https://github.com/druid-io/pydruid/issues
+Project-URL: Documentation, https://pythonhosted.org/pydruid/
+Project-URL: Source Code, https://github.com/druid-io/pydruid
+Description: # pydruid
+        
+        pydruid exposes a simple API to create, execute, and analyze [Druid](http://druid.io/) queries. pydruid can parse query results into [Pandas](http://pandas.pydata.org/) DataFrame objects for subsequent data analysis -- this offers a tight integration between [Druid](http://druid.io/), the [SciPy](http://www.scipy.org/stackspec.html) stack (for scientific computing) and [scikit-learn](http://scikit-learn.org/stable/) (for machine learning). pydruid can export query results into TSV or JSON for further processing with your favorite tool, e.g., R, Julia, Matlab, Excel. It provides both synchronous and asynchronous clients.
+        
+        Additionally, pydruid implements the [Python DB API 2.0](https://www.python.org/dev/peps/pep-0249/), a [SQLAlchemy dialect](http://docs.sqlalchemy.org/en/latest/dialects/), and provides a command line interface to interact with Druid.
+        
+        To install:
+        ```python
+        pip install pydruid
+        # or, if you intend to use asynchronous client
+        pip install pydruid[async]
+        # or, if you intend to export query results into pandas
+        pip install pydruid[pandas]
+        # or, if you intend to do both
+        pip install pydruid[async, pandas]
+        # or, if you want to use the SQLAlchemy engine
+        pip install pydruid[sqlalchemy]
+        # or, if you want to use the CLI
+        pip install pydruid[cli]
+        ```
+        Documentation: https://pythonhosted.org/pydruid/.
+        
+        # examples
+        
+        The following examples show how to execute and analyze the results of three types of queries: timeseries, topN, and groupby. We will use these queries to ask simple questions about Twitter's public data set.
+        
+        ## timeseries
+        
+        What was the average tweet length, per day, surrounding the 2014 Sochi olympics?
+        
+        ```python
+        from pydruid.client import *
+        from pylab import plt
+        
+        query = PyDruid(druid_url_goes_here, 'druid/v2')
+        
+        ts = query.timeseries(
+            datasource='twitterstream',
+            granularity='day',
+            intervals='2014-02-02/p4w',
+            aggregations={'length': doublesum('tweet_length'), 'count': doublesum('count')},
+            post_aggregations={'avg_tweet_length': (Field('length') / Field('count'))},
+            filter=Dimension('first_hashtag') == 'sochi2014'
+        )
+        df = query.export_pandas()
+        df['timestamp'] = df['timestamp'].map(lambda x: x.split('T')[0])
+        df.plot(x='timestamp', y='avg_tweet_length', ylim=(80, 140), rot=20,
+                title='Sochi 2014')
+        plt.ylabel('avg tweet length (chars)')
+        plt.show()
+        ```
+        
+        ![alt text](https://github.com/metamx/pydruid/raw/master/docs/figures/avg_tweet_length.png "Avg. tweet length")
+        
+        ## topN
+        
+        Who were the top ten mentions (@user_name) during the 2014 Oscars?
+        
+        ```python
+        top = query.topn(
+            datasource='twitterstream',
+            granularity='all',
+            intervals='2014-03-03/p1d',  # utc time of 2014 oscars
+            aggregations={'count': doublesum('count')},
+            dimension='user_mention_name',
+            filter=(Dimension('user_lang') == 'en') & (Dimension('first_hashtag') == 'oscars') &
+                   (Dimension('user_time_zone') == 'Pacific Time (US & Canada)') &
+                   ~(Dimension('user_mention_name') == 'No Mention'),
+            metric='count',
+            threshold=10
+        )
+        
+        df = query.export_pandas()
+        print(df)
+        
+           count                 timestamp user_mention_name
+        0   1303  2014-03-03T00:00:00.000Z      TheEllenShow
+        1     44  2014-03-03T00:00:00.000Z        TheAcademy
+        2     21  2014-03-03T00:00:00.000Z               MTV
+        3     21  2014-03-03T00:00:00.000Z         peoplemag
+        4     17  2014-03-03T00:00:00.000Z               THR
+        5     16  2014-03-03T00:00:00.000Z      ItsQueenElsa
+        6     16  2014-03-03T00:00:00.000Z           eonline
+        7     15  2014-03-03T00:00:00.000Z       PerezHilton
+        8     14  2014-03-03T00:00:00.000Z     realjohngreen
+        9     12  2014-03-03T00:00:00.000Z       KevinSpacey
+        
+        ```
+        
+        ## groupby
+        
+        What does the social network of users replying to other users look like?
+        
+        ```python
+        from igraph import *
+        from cairo import *
+        from pandas import concat
+        
+        group = query.groupby(
+            datasource='twitterstream',
+            granularity='hour',
+            intervals='2013-10-04/pt12h',
+            dimensions=["user_name", "reply_to_name"],
+            filter=(~(Dimension("reply_to_name") == "Not A Reply")) &
+                   (Dimension("user_location") == "California"),
+            aggregations={"count": doublesum("count")}
+        )
+        
+        df = query.export_pandas()
+        
+        # map names to categorical variables with a lookup table
+        names = concat([df['user_name'], df['reply_to_name']]).unique()
+        nameLookup = dict([pair[::-1] for pair in enumerate(names)])
+        df['user_name_lookup'] = df['user_name'].map(nameLookup.get)
+        df['reply_to_name_lookup'] = df['reply_to_name'].map(nameLookup.get)
+        
+        # create the graph with igraph
+        g = Graph(len(names), directed=False)
+        vertices = zip(df['user_name_lookup'], df['reply_to_name_lookup'])
+        g.vs["name"] = names
+        g.add_edges(vertices)
+        layout = g.layout_fruchterman_reingold()
+        plot(g, "tweets.png", layout=layout, vertex_size=2, bbox=(400, 400), margin=25, edge_width=1, vertex_color="blue")
+        ```
+        
+        ![alt text](https://github.com/metamx/pydruid/raw/master/docs/figures/twitter_graph.png "Social Network")
+        
+        # asynchronous client
+        ```pydruid.async_client.AsyncPyDruid``` implements an asynchronous client. To achieve that, it utilizes an asynchronous
+        HTTP client from the ```Tornado``` framework. The asynchronous client is suitable for use with async frameworks such as Tornado
+        and provides much better performance at scale. It lets you serve multiple requests at the same time, without blocking on
+        Druid executing your queries.
+        
+        ## example
+        ```python
+        from tornado import gen
+        from pydruid.async_client import AsyncPyDruid
+        from pydruid.utils.aggregators import doublesum
+        from pydruid.utils.filters import Dimension
+        
+        client = AsyncPyDruid(url_to_druid_broker, 'druid/v2')
+        
+        @gen.coroutine
+        def your_asynchronous_method_serving_top10_mentions_for_day(day):
+            top_mentions = yield client.topn(
+                datasource='twitterstream',
+                granularity='all',
+                intervals="%s/p1d" % (day, ),
+                aggregations={'count': doublesum('count')},
+                dimension='user_mention_name',
+                filter=(Dimension('user_lang') == 'en') & (Dimension('first_hashtag') == 'oscars') &
+                       (Dimension('user_time_zone') == 'Pacific Time (US & Canada)') &
+                       ~(Dimension('user_mention_name') == 'No Mention'),
+                metric='count',
+                threshold=10)
+        
+            # asynchronously return results
+            # can simply be ```return top_mentions``` in Python 3.x
+            raise gen.Return(top_mentions)
+        ```
+        
+        
+        # thetaSketches
+        Theta sketch post aggregators are built slightly differently from normal post aggregators, as they have different operators.
+        Note: you must have the ```druid-datasketches``` extension loaded into your Druid cluster in order to use these.
+        See the [Druid datasketches](http://druid.io/docs/latest/development/extensions-core/datasketches-aggregators.html) documentation for details.
+        
+        ```python
+        from pydruid.client import *
+        from pydruid.utils import aggregators
+        from pydruid.utils import filters
+        from pydruid.utils import postaggregator
+        
+        query = PyDruid(url_to_druid_broker, 'druid/v2')
+        ts = query.groupby(
+            datasource='test_datasource',
+            granularity='all',
+            intervals='2016-09-01/P1M',
+            filter=filters.Dimension('product').in_(['product_A', 'product_B']),
+            aggregations={
+                'product_A_users': aggregators.filtered(
+                    filters.Dimension('product') == 'product_A',
+                    aggregators.thetasketch('user_id')
+                    ),
+                'product_B_users': aggregators.filtered(
+                    filters.Dimension('product') == 'product_B',
+                    aggregators.thetasketch('user_id')
+                    )
+            },
+            post_aggregations={
+                'both_A_and_B': postaggregator.ThetaSketchEstimate(
+                    postaggregator.ThetaSketch('product_A_users') & postaggregator.ThetaSketch('product_B_users')
+                    )
+            }
+        )
+        ```
+        
+        # DB API
+        
+        ```python
+        from pydruid.db import connect
+        
+        conn = connect(host='localhost', port=8082, path='/druid/v2/sql/', scheme='http')
+        curs = conn.cursor()
+        curs.execute("""
+            SELECT place,
+                   CAST(REGEXP_EXTRACT(place, '(.*),', 1) AS FLOAT) AS lat,
+                   CAST(REGEXP_EXTRACT(place, ',(.*)', 1) AS FLOAT) AS lon
+              FROM places
+             LIMIT 10
+        """)
+        for row in curs:
+            print(row)
+        ```
+        
+        # SQLAlchemy
+        
+        ```python
+        from sqlalchemy import *
+        from sqlalchemy.engine import create_engine
+        from sqlalchemy.schema import *
+        
+        engine = create_engine('druid://localhost:8082/druid/v2/sql/')  # uses HTTP by default :(
+        # engine = create_engine('druid+http://localhost:8082/druid/v2/sql/')
+        # engine = create_engine('druid+https://localhost:8082/druid/v2/sql/')
+        
+        places = Table('places', MetaData(bind=engine), autoload=True)
+        print(select([func.count('*')], from_obj=places).scalar())
+        ```
+        
+        
+        ## Column headers
+        
+        In version 0.13.0 Druid SQL added support for including the column names in the
+        response which can be requested via the "header" field in the request. This
+        helps to ensure that the cursor description is defined (which is a requirement
+        for SQLAlchemy query statements) regardless of whether the result set contains
+        any rows. Historically this was problematic for result sets which contained no
+        rows, as one could not infer the expected column names.
+        
+        The header can be enabled via the SQLAlchemy URI by using a query
+        parameter, i.e.,
+        
+        ```python
+        engine = create_engine('druid://localhost:8082/druid/v2/sql?header=true')
+        ```
+        
+        Note that the current default is `false` to ensure backwards compatibility,
+        but it should be set to `true` for Druid versions >= 0.13.0.
+        
+        
+        # Command line
+        
+        ```bash
+        $ pydruid http://localhost:8082/druid/v2/sql/
+        > SELECT COUNT(*) AS cnt FROM places
+          cnt
+        -----
+        12345
+        > SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES;
+        TABLE_NAME
+        ----------
+        test_table
+        COLUMNS
+        SCHEMATA
+        TABLES
+        > BYE;
+        GoodBye!
+        ```
+        
+        # Contributing
+        
+        Contributions are welcome, of course. We like to use `black` and `flake8`.
+        
+        ```bash
+        pip install -r requirements-dev.txt  # installs useful dev deps
+        pre-commit install  # installs useful commit hooks
+        ```
+        
+Platform: UNKNOWN
+Classifier: License :: OSI Approved :: Apache Software License
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Description-Content-Type: text/markdown
+Provides-Extra: pandas
+Provides-Extra: async
+Provides-Extra: sqlalchemy
+Provides-Extra: cli
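
The four `Provides-Extra` entries above (`pandas`, `async`, `sqlalchemy`, `cli`) line up with the optional `pip install pydruid[...]` commands in the description. Below is a plausible sketch of the `extras_require` mapping the newly added `setup.py` would declare to produce this metadata; the exact dependency names and pins are assumptions, not copied from the commit:

```python
# Hypothetical sketch of a setup.py producing the Provides-Extra metadata
# above; the dependency lists are assumed, not verbatim from this commit.
from setuptools import find_packages, setup

setup(
    name="pydruid",
    version="0.5.11",
    packages=find_packages(),
    install_requires=["requests", "six"],
    extras_require={
        "async": ["tornado"],          # asynchronous Tornado-based client
        "pandas": ["pandas"],          # DataFrame export
        "sqlalchemy": ["sqlalchemy"],  # SQLAlchemy dialect
        "cli": ["prompt_toolkit", "pygments", "tabulate"],  # pydruid shell
    },
)
```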

+ 278 - 0
desktop/core/ext-py/pydruid-0.5.11/README.md

@@ -0,0 +1,278 @@
+# pydruid
+
+pydruid exposes a simple API to create, execute, and analyze [Druid](http://druid.io/) queries. pydruid can parse query results into [Pandas](http://pandas.pydata.org/) DataFrame objects for subsequent data analysis -- this offers a tight integration between [Druid](http://druid.io/), the [SciPy](http://www.scipy.org/stackspec.html) stack (for scientific computing) and [scikit-learn](http://scikit-learn.org/stable/) (for machine learning). pydruid can export query results into TSV or JSON for further processing with your favorite tool, e.g., R, Julia, Matlab, Excel. It provides both synchronous and asynchronous clients.
+
+Additionally, pydruid implements the [Python DB API 2.0](https://www.python.org/dev/peps/pep-0249/), a [SQLAlchemy dialect](http://docs.sqlalchemy.org/en/latest/dialects/), and provides a command line interface to interact with Druid.
+
+To install:
+```python
+pip install pydruid
+# or, if you intend to use asynchronous client
+pip install pydruid[async]
+# or, if you intend to export query results into pandas
+pip install pydruid[pandas]
+# or, if you intend to do both
+pip install pydruid[async, pandas]
+# or, if you want to use the SQLAlchemy engine
+pip install pydruid[sqlalchemy]
+# or, if you want to use the CLI
+pip install pydruid[cli]
+```
+Documentation: https://pythonhosted.org/pydruid/.
+
+# examples
+
+The following examples show how to execute and analyze the results of three types of queries: timeseries, topN, and groupby. We will use these queries to ask simple questions about Twitter's public data set.
+
+## timeseries
+
+What was the average tweet length, per day, surrounding the 2014 Sochi olympics?
+
+```python
+from pydruid.client import *
+from pylab import plt
+
+query = PyDruid(druid_url_goes_here, 'druid/v2')
+
+ts = query.timeseries(
+    datasource='twitterstream',
+    granularity='day',
+    intervals='2014-02-02/p4w',
+    aggregations={'length': doublesum('tweet_length'), 'count': doublesum('count')},
+    post_aggregations={'avg_tweet_length': (Field('length') / Field('count'))},
+    filter=Dimension('first_hashtag') == 'sochi2014'
+)
+df = query.export_pandas()
+df['timestamp'] = df['timestamp'].map(lambda x: x.split('T')[0])
+df.plot(x='timestamp', y='avg_tweet_length', ylim=(80, 140), rot=20,
+        title='Sochi 2014')
+plt.ylabel('avg tweet length (chars)')
+plt.show()
+```
+
+![alt text](https://github.com/metamx/pydruid/raw/master/docs/figures/avg_tweet_length.png "Avg. tweet length")
+
+## topN
+
+Who were the top ten mentions (@user_name) during the 2014 Oscars?
+
+```python
+top = query.topn(
+    datasource='twitterstream',
+    granularity='all',
+    intervals='2014-03-03/p1d',  # utc time of 2014 oscars
+    aggregations={'count': doublesum('count')},
+    dimension='user_mention_name',
+    filter=(Dimension('user_lang') == 'en') & (Dimension('first_hashtag') == 'oscars') &
+           (Dimension('user_time_zone') == 'Pacific Time (US & Canada)') &
+           ~(Dimension('user_mention_name') == 'No Mention'),
+    metric='count',
+    threshold=10
+)
+
+df = query.export_pandas()
+print(df)
+
+   count                 timestamp user_mention_name
+0   1303  2014-03-03T00:00:00.000Z      TheEllenShow
+1     44  2014-03-03T00:00:00.000Z        TheAcademy
+2     21  2014-03-03T00:00:00.000Z               MTV
+3     21  2014-03-03T00:00:00.000Z         peoplemag
+4     17  2014-03-03T00:00:00.000Z               THR
+5     16  2014-03-03T00:00:00.000Z      ItsQueenElsa
+6     16  2014-03-03T00:00:00.000Z           eonline
+7     15  2014-03-03T00:00:00.000Z       PerezHilton
+8     14  2014-03-03T00:00:00.000Z     realjohngreen
+9     12  2014-03-03T00:00:00.000Z       KevinSpacey
+
+```
+
+## groupby
+
+What does the social network of users replying to other users look like?
+
+```python
+from igraph import *
+from cairo import *
+from pandas import concat
+
+group = query.groupby(
+    datasource='twitterstream',
+    granularity='hour',
+    intervals='2013-10-04/pt12h',
+    dimensions=["user_name", "reply_to_name"],
+    filter=(~(Dimension("reply_to_name") == "Not A Reply")) &
+           (Dimension("user_location") == "California"),
+    aggregations={"count": doublesum("count")}
+)
+
+df = query.export_pandas()
+
+# map names to categorical variables with a lookup table
+names = concat([df['user_name'], df['reply_to_name']]).unique()
+nameLookup = dict([pair[::-1] for pair in enumerate(names)])
+df['user_name_lookup'] = df['user_name'].map(nameLookup.get)
+df['reply_to_name_lookup'] = df['reply_to_name'].map(nameLookup.get)
+
+# create the graph with igraph
+g = Graph(len(names), directed=False)
+vertices = zip(df['user_name_lookup'], df['reply_to_name_lookup'])
+g.vs["name"] = names
+g.add_edges(vertices)
+layout = g.layout_fruchterman_reingold()
+plot(g, "tweets.png", layout=layout, vertex_size=2, bbox=(400, 400), margin=25, edge_width=1, vertex_color="blue")
+```
+
+![alt text](https://github.com/metamx/pydruid/raw/master/docs/figures/twitter_graph.png "Social Network")
+
+# asynchronous client
+```pydruid.async_client.AsyncPyDruid``` implements an asynchronous client. To achieve that, it utilizes an asynchronous
+HTTP client from the ```Tornado``` framework. The asynchronous client is suitable for use with async frameworks such as Tornado
+and provides much better performance at scale. It lets you serve multiple requests at the same time, without blocking on
+Druid executing your queries.
+
+## example
+```python
+from tornado import gen
+from pydruid.async_client import AsyncPyDruid
+from pydruid.utils.aggregators import doublesum
+from pydruid.utils.filters import Dimension
+
+client = AsyncPyDruid(url_to_druid_broker, 'druid/v2')
+
+@gen.coroutine
+def your_asynchronous_method_serving_top10_mentions_for_day(day):
+    top_mentions = yield client.topn(
+        datasource='twitterstream',
+        granularity='all',
+        intervals="%s/p1d" % (day, ),
+        aggregations={'count': doublesum('count')},
+        dimension='user_mention_name',
+        filter=(Dimension('user_lang') == 'en') & (Dimension('first_hashtag') == 'oscars') &
+               (Dimension('user_time_zone') == 'Pacific Time (US & Canada)') &
+               ~(Dimension('user_mention_name') == 'No Mention'),
+        metric='count',
+        threshold=10)
+
+    # asynchronously return results
+    # can simply be ```return top_mentions``` in Python 3.x
+    raise gen.Return(top_mentions)
+```
+
+
+# thetaSketches
+Theta sketch post aggregators are built slightly differently from normal post aggregators, as they have different operators.
+Note: you must have the ```druid-datasketches``` extension loaded into your Druid cluster in order to use these.
+See the [Druid datasketches](http://druid.io/docs/latest/development/extensions-core/datasketches-aggregators.html) documentation for details.
+
+```python
+from pydruid.client import *
+from pydruid.utils import aggregators
+from pydruid.utils import filters
+from pydruid.utils import postaggregator
+
+query = PyDruid(url_to_druid_broker, 'druid/v2')
+ts = query.groupby(
+    datasource='test_datasource',
+    granularity='all',
+    intervals='2016-09-01/P1M',
+    filter=filters.Dimension('product').in_(['product_A', 'product_B']),
+    aggregations={
+        'product_A_users': aggregators.filtered(
+            filters.Dimension('product') == 'product_A',
+            aggregators.thetasketch('user_id')
+            ),
+        'product_B_users': aggregators.filtered(
+            filters.Dimension('product') == 'product_B',
+            aggregators.thetasketch('user_id')
+            )
+    },
+    post_aggregations={
+        'both_A_and_B': postaggregator.ThetaSketchEstimate(
+            postaggregator.ThetaSketch('product_A_users') & postaggregator.ThetaSketch('product_B_users')
+            )
+    }
+)
+```
+
+# DB API
+
+```python
+from pydruid.db import connect
+
+conn = connect(host='localhost', port=8082, path='/druid/v2/sql/', scheme='http')
+curs = conn.cursor()
+curs.execute("""
+    SELECT place,
+           CAST(REGEXP_EXTRACT(place, '(.*),', 1) AS FLOAT) AS lat,
+           CAST(REGEXP_EXTRACT(place, ',(.*)', 1) AS FLOAT) AS lon
+      FROM places
+     LIMIT 10
+""")
+for row in curs:
+    print(row)
+```
+
+# SQLAlchemy
+
+```python
+from sqlalchemy import *
+from sqlalchemy.engine import create_engine
+from sqlalchemy.schema import *
+
+engine = create_engine('druid://localhost:8082/druid/v2/sql/')  # uses HTTP by default :(
+# engine = create_engine('druid+http://localhost:8082/druid/v2/sql/')
+# engine = create_engine('druid+https://localhost:8082/druid/v2/sql/')
+
+places = Table('places', MetaData(bind=engine), autoload=True)
+print(select([func.count('*')], from_obj=places).scalar())
+```
+
+
+## Column headers
+
+In version 0.13.0 Druid SQL added support for including the column names in the
+response which can be requested via the "header" field in the request. This
+helps to ensure that the cursor description is defined (which is a requirement
+for SQLAlchemy query statements) regardless of whether the result set contains
+any rows. Historically this was problematic for result sets which contained no
+rows, as one could not infer the expected column names.
+
+The header can be enabled via the SQLAlchemy URI by using a query
+parameter, i.e.,
+
+```python
+engine = create_engine('druid://localhost:8082/druid/v2/sql?header=true')
+```
+
+Note that the current default is `false` to ensure backwards compatibility,
+but it should be set to `true` for Druid versions >= 0.13.0.
+
+
+# Command line
+
+```bash
+$ pydruid http://localhost:8082/druid/v2/sql/
+> SELECT COUNT(*) AS cnt FROM places
+  cnt
+-----
+12345
+> SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES;
+TABLE_NAME
+----------
+test_table
+COLUMNS
+SCHEMATA
+TABLES
+> BYE;
+GoodBye!
+```
+
+# Contributing
+
+Contributions are welcome, of course. We like to use `black` and `flake8`.
+
+```bash
+pip install -r requirements-dev.txt  # installs useful dev deps
+pre-commit install  # installs useful commit hooks
+```

+ 23 - 0
desktop/core/ext-py/pydruid-0.5.11/RELEASE.md

@@ -0,0 +1,23 @@
+# How to craft a PyDruid release and ship to PyPI
+
+Prep
+* `git remote add druid-io git@github.com:druid-io/pydruid.git`
+
+New minor release:
+* branch off of master to minor `git checkout -b 0.X`
+* pick cherries if any
+
+New micro release:
+* checkout existing minor release branch `git checkout 0.X`
+* pick cherries
+
+Finally:
+* run tests
+* update version in `setup.py` to `0.X.N`
+* commit with commit message `0.X.N`
+* `git tag 0.X.N`
+* Push release to repo `git push druid-io 0.X 0.X.N`
+* Push to pypi `python setup.py sdist upload`
+
+Post changelog
+* `./gen_changelog.sh 0.0.0...0.X.N`

+ 16 - 16
desktop/core/ext-py/pydruid/db/__init__.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/db/__init__.py

@@ -14,24 +14,24 @@ from pydruid.db.exceptions import (
 
 
 __all__ = [
-    'connect',
-    'apilevel',
-    'threadsafety',
-    'paramstyle',
-    'DataError',
-    'DatabaseError',
-    'Error',
-    'IntegrityError',
-    'InterfaceError',
-    'InternalError',
-    'NotSupportedError',
-    'OperationalError',
-    'ProgrammingError',
-    'Warning',
+    "connect",
+    "apilevel",
+    "threadsafety",
+    "paramstyle",
+    "DataError",
+    "DatabaseError",
+    "Error",
+    "IntegrityError",
+    "InterfaceError",
+    "InternalError",
+    "NotSupportedError",
+    "OperationalError",
+    "ProgrammingError",
+    "Warning",
 ]
 
 
-apilevel = '2.0'
+apilevel = "2.0"
 # Threads may share the module and connections
 threadsafety = 2
-paramstyle = 'pyformat'
+paramstyle = "pyformat"

+ 164 - 60
desktop/core/ext-py/pydruid/db/api.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/db/api.py

@@ -3,7 +3,7 @@ from __future__ import division
 from __future__ import print_function
 from __future__ import unicode_literals
 
-from collections import namedtuple
+from collections import namedtuple, OrderedDict
 import itertools
 import json
 from six import string_types
@@ -20,7 +20,19 @@ class Type(object):
     BOOLEAN = 3
 
 
-def connect(host='localhost', port=8082, path='/druid/v2/sql/', scheme='http'):
+def connect(
+    host="localhost",
+    port=8082,
+    path="/druid/v2/sql/",
+    scheme="http",
+    user=None,
+    password=None,
+    context=None,
+    header=False,
+    ssl_verify_cert=True,
+    ssl_client_cert=None,
+    proxies=None,
+):  # noqa: E125
     """
     Constructor for creating a connection to the database.
 
@@ -28,7 +40,21 @@ def connect(host='localhost', port=8082, path='/druid/v2/sql/', scheme='http'):
         >>> curs = conn.cursor()
 
     """
-    return Connection(host, port, path, scheme)
+    context = context or {}
+
+    return Connection(
+        host,
+        port,
+        path,
+        scheme,
+        user,
+        password,
+        context,
+        header,
+        ssl_verify_cert,
+        ssl_client_cert,
+        proxies,
+    )
 
 
 def check_closed(f):
@@ -37,8 +63,10 @@ def check_closed(f):
     def g(self, *args, **kwargs):
         if self.closed:
             raise exceptions.Error(
-                '{klass} already closed'.format(klass=self.__class__.__name__))
+                "{klass} already closed".format(klass=self.__class__.__name__)
+            )
         return f(self, *args, **kwargs)
+
     return g
 
 
@@ -47,8 +75,9 @@ def check_result(f):
 
     def g(self, *args, **kwargs):
         if self._results is None:
-            raise exceptions.Error('Called before `execute`')
+            raise exceptions.Error("Called before `execute`")
         return f(self, *args, **kwargs)
+
     return g
 
 
@@ -62,12 +91,12 @@ def get_description_from_row(row):
     """
     return [
         (
-            name,                            # name
-            get_type(value),                 # type_code
-            None,                            # [display_size]
-            None,                            # [internal_size]
-            None,                            # [precision]
-            None,                            # [scale]
+            name,  # name
+            get_type(value),  # type_code
+            None,  # [display_size]
+            None,  # [internal_size]
+            None,  # [precision]
+            None,  # [scale]
             get_type(value) == Type.STRING,  # [null_ok]
         )
         for name, value in row.items()
@@ -75,34 +104,50 @@ def get_description_from_row(row):
 
 
 def get_type(value):
-    """Infer type from value."""
+    """
+    Infer type from value.
+
+    Note that bool is a subclass of int, so the order of these checks matters.
+    """
+
     if isinstance(value, string_types) or value is None:
         return Type.STRING
-    elif isinstance(value, (int, float)):
-        return Type.NUMBER
     elif isinstance(value, bool):
         return Type.BOOLEAN
+    elif isinstance(value, (int, float)):
+        return Type.NUMBER
 
-    raise exceptions.Error(
-        'Value of unknown type: {value}'.format(value=value))
+    raise exceptions.Error("Value of unknown type: {value}".format(value=value))
 
 
 class Connection(object):
-
     """Connection to a Druid database."""
 
     def __init__(
         self,
-        host='localhost',
+        host="localhost",
         port=8082,
-        path='/druid/v2/sql/',
-        scheme='http',
+        path="/druid/v2/sql/",
+        scheme="http",
+        user=None,
+        password=None,
+        context=None,
+        header=False,
+        ssl_verify_cert=True,
+        ssl_client_cert=None,
+        proxies=None,
     ):
-        netloc = '{host}:{port}'.format(host=host, port=port)
-        self.url = parse.urlunparse(
-            (scheme, netloc, path, None, None, None))
+        netloc = "{host}:{port}".format(host=host, port=port)
+        self.url = parse.urlunparse((scheme, netloc, path, None, None, None))
+        self.context = context or {}
         self.closed = False
         self.cursors = []
+        self.header = header
+        self.user = user
+        self.password = password
+        self.ssl_verify_cert = ssl_verify_cert
+        self.ssl_client_cert = ssl_client_cert
+        self.proxies = proxies
 
     @check_closed
     def close(self):
@@ -126,7 +171,18 @@ class Connection(object):
     @check_closed
     def cursor(self):
         """Return a new Cursor Object using the connection."""
-        cursor = Cursor(self.url)
+
+        cursor = Cursor(
+            self.url,
+            self.user,
+            self.password,
+            self.context,
+            self.header,
+            self.ssl_verify_cert,
+            self.ssl_client_cert,
+            self.proxies,
+        )
+
         self.cursors.append(cursor)
 
         return cursor
@@ -144,11 +200,27 @@ class Connection(object):
 
 
 class Cursor(object):
-
     """Connection cursor."""
 
-    def __init__(self, url):
+    def __init__(
+        self,
+        url,
+        user=None,
+        password=None,
+        context=None,
+        header=False,
+        ssl_verify_cert=True,
+        proxies=None,
+        ssl_client_cert=None,
+    ):
         self.url = url
+        self.context = context or {}
+        self.header = header
+        self.user = user
+        self.password = password
+        self.ssl_verify_cert = ssl_verify_cert
+        self.ssl_client_cert = ssl_client_cert
+        self.proxies = proxies
 
         # This read/write attribute specifies the number of rows to fetch at a
         # time with .fetchmany(). It defaults to 1 meaning to fetch a single
@@ -180,15 +252,17 @@ class Cursor(object):
 
     @check_closed
     def execute(self, operation, parameters=None):
-        query = apply_parameters(operation, parameters or {})
-
-        # `_stream_query` returns a generator that produces the rows; we need
-        # to consume the first row so that `description` is properly set, so
-        # let's consume it and insert it back.
+        query = apply_parameters(operation, parameters)
         results = self._stream_query(query)
+
+        # `_stream_query` returns a generator that produces the rows; we need to
+        # consume the first row so that `description` is properly set, so let's
+        # consume it and insert it back if it is not the header.
         try:
             first_row = next(results)
-            self._results = itertools.chain([first_row], results)
+            self._results = (
+                results if self.header else itertools.chain([first_row], results)
+            )
         except StopIteration:
             self._results = iter([])
 
@@ -197,7 +271,8 @@ class Cursor(object):
     @check_closed
     def executemany(self, operation, seq_of_parameters=None):
         raise exceptions.NotSupportedError(
-            '`executemany` is not supported, use `execute` instead')
+            "`executemany` is not supported, use `execute` instead"
+        )
 
     @check_result
     @check_closed
@@ -220,7 +295,7 @@ class Cursor(object):
         no more rows are available.
         """
         size = size or self.arraysize
-        return list(itertools.islice(self, size))
+        return list(itertools.islice(self._results, size))
 
     @check_result
     @check_closed
@@ -230,7 +305,7 @@ class Cursor(object):
         sequence of sequences (e.g. a list of tuples). Note that the cursor's
         arraysize attribute can affect the performance of this operation.
         """
-        return list(self)
+        return list(self._results)
 
     @check_closed
     def setinputsizes(self, sizes):
@@ -261,18 +336,36 @@ class Cursor(object):
         """
         self.description = None
 
-        headers = {'Content-Type': 'application/json'}
-        payload = {'query': query}
-        r = requests.post(self.url, stream=True, headers=headers, json=payload)
-        if r.encoding is None:
-            r.encoding = 'utf-8'
+        headers = {"Content-Type": "application/json"}
 
+        payload = {"query": query, "context": self.context, "header": self.header}
+
+        auth = (
+            requests.auth.HTTPBasicAuth(self.user, self.password) if self.user else None
+        )
+        r = requests.post(
+            self.url,
+            stream=True,
+            headers=headers,
+            json=payload,
+            auth=auth,
+            verify=self.ssl_verify_cert,
+            cert=self.ssl_client_cert,
+            proxies=self.proxies,
+        )
+        if r.encoding is None:
+            r.encoding = "utf-8"
         # raise any error messages
         if r.status_code != 200:
-            payload = r.json()
-            msg = (
-                '{error} ({errorClass}): {errorMessage}'.format(**payload)
-            )
+            try:
+                payload = r.json()
+            except Exception:
+                payload = {
+                    "error": "Unknown error",
+                    "errorClass": "Unknown",
+                    "errorMessage": r.text,
+                }
+            msg = "{error} ({errorClass}): {errorMessage}".format(**payload)
             raise exceptions.ProgrammingError(msg)
 
         # Druid will stream the data in chunks of 8k bytes, splitting the JSON
@@ -283,11 +376,13 @@ class Cursor(object):
         for row in rows_from_chunks(chunks):
             # update description
             if self.description is None:
-                self.description = get_description_from_row(row)
+                self.description = (
+                    list(row.items()) if self.header else get_description_from_row(row)
+                )
 
             # return row in namedtuple
             if Row is None:
-                Row = namedtuple('Row', row.keys(), rename=True)
+                Row = namedtuple("Row", row.keys(), rename=True)
             yield Row(*row.values())
 
 
@@ -299,10 +394,10 @@ def rows_from_chunks(chunks):
     JSON objects. This function will parse all complete rows inside each chunk,
     yielding them as soon as possible.
     """
-    body = ''
+    body = ""
     for chunk in chunks:
         if chunk:
-            body = ''.join((body, chunk))
+            body = "".join((body, chunk))
 
         # find last complete row
         boundary = 0
@@ -312,41 +407,50 @@ def rows_from_chunks(chunks):
             if char == '"':
                 if not in_string:
                     in_string = True
-                elif body[i - 1] != '\\':
+                elif body[i - 1] != "\\":
                     in_string = False
 
             if in_string:
                 continue
 
-            if char == '{':
+            if char == "{":
                 brackets += 1
-            elif char == '}':
+            elif char == "}":
                 brackets -= 1
                 if brackets == 0 and i > boundary:
                     boundary = i + 1
 
-        rows = body[:boundary].lstrip('[,')
+        rows = body[:boundary].lstrip("[,")
         body = body[boundary:]
 
-        for row in json.loads('[{rows}]'.format(rows=rows)):
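+        # parse with OrderedDict so Druid's column order is preserved in each row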
+        for row in json.loads(
+            "[{rows}]".format(rows=rows), object_pairs_hook=OrderedDict
+        ):
             yield row
 
 
 def apply_parameters(operation, parameters):
-    escaped_parameters = {
-        key: escape(value) for key, value in parameters.items()
-    }
+    if not parameters:
+        return operation
+
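+    # escape each value, then interpolate into the pyformat ("%(name)s") placeholders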
+    escaped_parameters = {key: escape(value) for key, value in parameters.items()}
     return operation % escaped_parameters
 
 
 def escape(value):
-    if value == '*':
+    """
+    Escape the parameter value.
+
+    Note that bool is a subclass of int, so the order of these checks matters.
+    """
+
+    if value == "*":
         return value
     elif isinstance(value, string_types):
         return "'{}'".format(value.replace("'", "''"))
+    elif isinstance(value, bool):
+        return "TRUE" if value else "FALSE"
     elif isinstance(value, (int, float)):
         return value
-    elif isinstance(value, bool):
-        return 'TRUE' if value else 'FALSE'
     elif isinstance(value, (list, tuple)):
-        return ', '.join(escape(element) for element in value)
+        return ", ".join(escape(element) for element in value)

+ 0 - 0
desktop/core/ext-py/pydruid/db/exceptions.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/db/exceptions.py


+ 64 - 66
desktop/core/ext-py/pydruid/db/sqlalchemy.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/db/sqlalchemy.py

@@ -11,24 +11,24 @@ import pydruid.db
 from pydruid.db import exceptions
 
 
-RESERVED_SCHEMAS = ['INFORMATION_SCHEMA']
+RESERVED_SCHEMAS = ["INFORMATION_SCHEMA"]
 
 
 type_map = {
-    'char': types.String,
-    'varchar': types.String,
-    'float': types.Float,
-    'decimal': types.Float,
-    'real': types.Float,
-    'double': types.Float,
-    'boolean': types.Boolean,
-    'tinyint': types.BigInteger,
-    'smallint': types.BigInteger,
-    'integer': types.BigInteger,
-    'bigint': types.BigInteger,
-    'timestamp': types.TIMESTAMP,
-    'date': types.DATE,
-    'other': types.BLOB,
+    "char": types.String,
+    "varchar": types.String,
+    "float": types.Float,
+    "decimal": types.Float,
+    "real": types.Float,
+    "double": types.Float,
+    "boolean": types.Boolean,
+    "tinyint": types.BigInteger,
+    "smallint": types.BigInteger,
+    "integer": types.BigInteger,
+    "bigint": types.BigInteger,
+    "timestamp": types.TIMESTAMP,
+    "date": types.DATE,
+    "other": types.BLOB,
 }
 
 
@@ -69,32 +69,34 @@ class DruidTypeCompiler(compiler.GenericTypeCompiler):
     visit_TEXT = visit_CHAR
 
     def visit_DATETIME(self, type_, **kwargs):
-        raise exceptions.NotSupportedError('Type DATETIME is not supported')
+        raise exceptions.NotSupportedError("Type DATETIME is not supported")
 
     def visit_TIME(self, type_, **kwargs):
-        raise exceptions.NotSupportedError('Type TIME is not supported')
+        raise exceptions.NotSupportedError("Type TIME is not supported")
 
     def visit_BINARY(self, type_, **kwargs):
-        raise exceptions.NotSupportedError('Type BINARY is not supported')
+        raise exceptions.NotSupportedError("Type BINARY is not supported")
 
     def visit_VARBINARY(self, type_, **kwargs):
-        raise exceptions.NotSupportedError('Type VARBINARY is not supported')
+        raise exceptions.NotSupportedError("Type VARBINARY is not supported")
 
     def visit_BLOB(self, type_, **kwargs):
-        raise exceptions.NotSupportedError('Type BLOB is not supported')
+        raise exceptions.NotSupportedError("Type BLOB is not supported")
 
     def visit_CLOB(self, type_, **kwargs):
-        raise exceptions.NotSupportedError('Type CBLOB is not supported')
+        raise exceptions.NotSupportedError("Type CBLOB is not supported")
 
     def visit_NCLOB(self, type_, **kwargs):
-        raise exceptions.NotSupportedError('Type NCBLOB is not supported')
+        raise exceptions.NotSupportedError("Type NCBLOB is not supported")
 
 
 class DruidDialect(default.DefaultDialect):
 
-    name = 'druid'
-    scheme = 'http'
-    driver = 'rest'
+    name = "druid"
+    scheme = "http"
+    driver = "rest"
+    user = None
+    password = None
     preparer = DruidIdentifierPreparer
     statement_compiler = DruidCompiler
     type_compiler = DruidTypeCompiler
@@ -108,16 +110,24 @@ class DruidDialect(default.DefaultDialect):
     description_encoding = None
     supports_native_boolean = True
 
+    def __init__(self, context=None, *args, **kwargs):
+        super(DruidDialect, self).__init__(*args, **kwargs)
+        self.context = context or {}
+
     @classmethod
     def dbapi(cls):
         return pydruid.db
 
     def create_connect_args(self, url):
         kwargs = {
-            'host': url.host,
-            'port': url.port or 8082,
-            'path': url.database,
-            'scheme': self.scheme,
+            "host": url.host,
+            "port": url.port or 8082,
+            "user": url.username or None,
+            "password": url.password or None,
+            "path": url.database,
+            "scheme": self.scheme,
+            "context": self.context,
+            "header": url.query.get("header") == "true",
         }
         return ([], kwargs)
 
@@ -126,11 +136,11 @@ class DruidDialect(default.DefaultDialect):
         # is also the default schema, so Druid datasources can be referenced as
         # either druid.dataSourceName or simply dataSourceName.
         result = connection.execute(
-            'SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA')
+            "SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA"
+        )
 
         return [
-            row.SCHEMA_NAME for row in result
-            if row.SCHEMA_NAME not in RESERVED_SCHEMAS
+            row.SCHEMA_NAME for row in result if row.SCHEMA_NAME not in RESERVED_SCHEMAS
         ]
 
     def has_table(self, connection, table_name, schema=None):
@@ -138,7 +148,9 @@ class DruidDialect(default.DefaultDialect):
             SELECT COUNT(*) > 0 AS exists_
               FROM INFORMATION_SCHEMA.TABLES
              WHERE TABLE_NAME = '{table_name}'
-        """.format(table_name=table_name)
+        """.format(
+            table_name=table_name
+        )
 
         result = connection.execute(query)
         return result.fetchone().exists_
@@ -147,7 +159,8 @@ class DruidDialect(default.DefaultDialect):
         query = "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES"
         if schema:
             query = "{query} WHERE TABLE_SCHEMA = '{schema}'".format(
-                query=query, schema=schema)
+                query=query, schema=schema
+            )
 
         result = connection.execute(query)
         return [row.TABLE_NAME for row in result]
@@ -166,60 +179,45 @@ class DruidDialect(default.DefaultDialect):
                    COLUMN_DEFAULT
               FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_NAME = '{table_name}'
-        """.format(table_name=table_name)
+        """.format(
+            table_name=table_name
+        )
         if schema:
             query = "{query} AND TABLE_SCHEMA = '{schema}'".format(
-                query=query, schema=schema)
+                query=query, schema=schema
+            )
 
         result = connection.execute(query)
 
         return [
             {
-                'name': row.COLUMN_NAME,
-                'type': type_map[row.DATA_TYPE.lower()],
-                'nullable': get_is_nullable(row.IS_NULLABLE),
-                'default': get_default(row.COLUMN_DEFAULT),
+                "name": row.COLUMN_NAME,
+                "type": type_map[row.DATA_TYPE.lower()],
+                "nullable": get_is_nullable(row.IS_NULLABLE),
+                "default": get_default(row.COLUMN_DEFAULT),
             }
             for row in result
         ]
 
     def get_pk_constraint(self, connection, table_name, schema=None, **kwargs):
-        return {'constrained_columns': [], 'name': None}
+        return {"constrained_columns": [], "name": None}
 
     def get_foreign_keys(self, connection, table_name, schema=None, **kwargs):
         return []
 
-    def get_check_constraints(
-        self,
-        connection,
-        table_name,
-        schema=None,
-        **kwargs
-    ):
+    def get_check_constraints(self, connection, table_name, schema=None, **kwargs):
         return []
 
     def get_table_comment(self, connection, table_name, schema=None, **kwargs):
-        return {'text': ''}
+        return {"text": ""}
 
     def get_indexes(self, connection, table_name, schema=None, **kwargs):
         return []
 
-    def get_unique_constraints(
-        self,
-        connection,
-        table_name,
-        schema=None,
-        **kwargs
-    ):
+    def get_unique_constraints(self, connection, table_name, schema=None, **kwargs):
         return []
 
-    def get_view_definition(
-        self,
-        connection,
-        view_name,
-        schema=None,
-        **kwargs
-    ):
+    def get_view_definition(self, connection, view_name, schema=None, **kwargs):
         pass
 
     def do_rollback(self, dbapi_connection):
@@ -237,14 +235,14 @@ DruidHTTPDialect = DruidDialect
 
 class DruidHTTPSDialect(DruidDialect):
 
-    scheme = 'https'
+    scheme = "https"
 
 
 def get_is_nullable(druid_is_nullable):
     # this should be 'YES' or 'NO'; we default to no
-    return druid_is_nullable.lower() == 'yes'
+    return druid_is_nullable.lower() == "yes"
 
 
 def get_default(druid_column_default):
     # currently unused, returns ''
-    return str(druid_column_default) if druid_column_default != '' else None
+    return str(druid_column_default) if druid_column_default != "" else None

+ 0 - 0
desktop/core/ext-py/pydruid/__init__.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/__init__.py


+ 14 - 5
desktop/core/ext-py/pydruid/utils/aggregators.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/aggregators.py

@@ -80,9 +80,11 @@ def cardinality(raw_column, by_row=False):
 
 
 def filtered(filter, agg):
-    return {"type": "filtered",
-            "filter": Filter.build_filter(filter),
-            "aggregator": agg}
+    return {
+        "type": "filtered",
+        "filter": Filter.build_filter(filter),
+        "aggregator": agg,
+    }
 
 
 def javascript(columns_list, fn_aggregate, fn_combine, fn_reset):
@@ -95,9 +97,16 @@ def javascript(columns_list, fn_aggregate, fn_combine, fn_reset):
     }
 
 
+def stringfirst(raw_metric):
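+    """Aggregate to the value of ``raw_metric`` with the earliest timestamp."""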
+    return {"type": "stringFirst", "fieldName": raw_metric}
+
+
+def stringlast(raw_metric):
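+    """Aggregate to the value of ``raw_metric`` with the latest timestamp."""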
+    return {"type": "stringLast", "fieldName": raw_metric}
+
+
 def build_aggregators(agg_input):
-    return [_build_aggregator(name, kwargs)
-            for (name, kwargs) in iteritems(agg_input)]
+    return [_build_aggregator(name, kwargs) for (name, kwargs) in iteritems(agg_input)]
 
 
 def _build_aggregator(name, kwargs):

+ 50 - 41
desktop/core/ext-py/pydruid/utils/dimensions.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/dimensions.py

@@ -6,9 +6,9 @@ def build_dimension(dim):
 
 
 class DimensionSpec(object):
-
-    def __init__(self, dimension, output_name,
-                 extraction_function=None, filter_spec=None):
+    def __init__(
+        self, dimension, output_name, extraction_function=None, filter_spec=None
+    ):
         self._dimension = dimension
         self._output_name = output_name
         self._extraction_function = extraction_function
@@ -16,14 +16,14 @@ class DimensionSpec(object):
 
     def build(self):
         dimension_spec = {
-            'type': 'default',
-            'dimension': self._dimension,
-            'outputName': self._output_name
+            "type": "default",
+            "dimension": self._dimension,
+            "outputName": self._output_name,
         }
 
         if self._extraction_function is not None:
-            dimension_spec['type'] = 'extraction'
-            dimension_spec['extractionFn'] = self._extraction_function.build()
+            dimension_spec["type"] = "extraction"
+            dimension_spec["extractionFn"] = self._extraction_function.build()
 
         if self._filter_spec is not None:
             dimension_spec = self._filter_spec.build(dimension_spec)
@@ -36,16 +36,13 @@ class FilteredSpec(object):
     filter_type = None
 
     def build(self, delegate):
-        dimension_spec = {
-            'type': self.filter_type,
-            'delegate': delegate,
-        }
+        dimension_spec = {"type": self.filter_type, "delegate": delegate}
         return dimension_spec
 
 
 class ListFilteredSpec(FilteredSpec):
 
-    filter_type = 'listFiltered'
+    filter_type = "listFiltered"
 
     def __init__(self, values, is_whitelist=True):
         self._values = values
@@ -53,24 +50,24 @@ class ListFilteredSpec(FilteredSpec):
 
     def build(self, dimension_spec):
         filtered_dimension_spec = super(ListFilteredSpec, self).build(dimension_spec)
-        filtered_dimension_spec['values'] = self._values
+        filtered_dimension_spec["values"] = self._values
 
         if not self._is_whitelist:
-            filtered_dimension_spec['isWhitelist'] = False
+            filtered_dimension_spec["isWhitelist"] = False
 
         return filtered_dimension_spec
 
 
 class RegexFilteredSpec(FilteredSpec):
 
-    filter_type = 'regexFiltered'
+    filter_type = "regexFiltered"
 
     def __init__(self, pattern):
         self._pattern = pattern
 
     def build(self, dimension_spec):
         filtered_dimension_spec = super(RegexFilteredSpec, self).build(dimension_spec)
-        filtered_dimension_spec['pattern'] = self._pattern
+        filtered_dimension_spec["pattern"] = self._pattern
 
         return filtered_dimension_spec
 
@@ -80,35 +77,34 @@ class ExtractionFunction(object):
     extraction_type = None
 
     def build(self):
-        return {'type': self.extraction_type}
+        return {"type": self.extraction_type}
 
 
 class BaseRegexExtraction(ExtractionFunction):
-
     def __init__(self, expr):
         super(BaseRegexExtraction, self).__init__()
         self._expr = expr
 
     def build(self):
         extractor = super(BaseRegexExtraction, self).build()
-        extractor['expr'] = self._expr
+        extractor["expr"] = self._expr
 
         return extractor
 
 
 class RegexExtraction(BaseRegexExtraction):
 
-    extraction_type = 'regex'
+    extraction_type = "regex"
 
 
 class PartialExtraction(BaseRegexExtraction):
 
-    extraction_type = 'partial'
+    extraction_type = "partial"
 
 
 class JavascriptExtraction(ExtractionFunction):
 
-    extraction_type = 'javascript'
+    extraction_type = "javascript"
 
     def __init__(self, func, injective=False):
         super(JavascriptExtraction, self).__init__()
@@ -117,15 +113,15 @@ class JavascriptExtraction(ExtractionFunction):
 
     def build(self):
         extractor = super(JavascriptExtraction, self).build()
-        extractor['function'] = self._func
-        extractor['injective'] = self._injective
+        extractor["function"] = self._func
+        extractor["injective"] = self._injective
 
         return extractor
 
 
 class TimeFormatExtraction(ExtractionFunction):
 
-    extraction_type = 'timeFormat'
+    extraction_type = "timeFormat"
 
     def __init__(self, format, locale=None, time_zone=None):
         super(TimeFormatExtraction, self).__init__()
@@ -135,22 +131,23 @@ class TimeFormatExtraction(ExtractionFunction):
 
     def build(self):
         extractor = super(TimeFormatExtraction, self).build()
-        extractor['format'] = self._format
+        extractor["format"] = self._format
         if self._locale:
-            extractor['locale'] = self._locale
+            extractor["locale"] = self._locale
         if self._time_zone:
-            extractor['timeZone'] = self._time_zone
+            extractor["timeZone"] = self._time_zone
 
         return extractor
 
 
 class LookupExtraction(ExtractionFunction):
 
-    extraction_type = 'lookup'
+    extraction_type = "lookup"
     lookup_type = None
 
-    def __init__(self, retain_missing_values=False,
-                 replace_missing_values=None, injective=False):
+    def __init__(
+        self, retain_missing_values=False, replace_missing_values=None, injective=False
+    ):
         super(LookupExtraction, self).__init__()
         self._retain_missing_values = retain_missing_values
         self._replace_missing_values = replace_missing_values
@@ -158,20 +155,20 @@ class LookupExtraction(ExtractionFunction):
 
     def build(self):
         extractor = super(LookupExtraction, self).build()
-        extractor['lookup'] = self.build_lookup()
-        extractor['retainMissingValue'] = self._retain_missing_values
-        extractor['replaceMissingValueWith'] = self._replace_missing_values
-        extractor['injective'] = self._injective
+        extractor["lookup"] = self.build_lookup()
+        extractor["retainMissingValue"] = self._retain_missing_values
+        extractor["replaceMissingValueWith"] = self._replace_missing_values
+        extractor["injective"] = self._injective
 
         return extractor
 
     def build_lookup(self):
-        return {'type': self.lookup_type}
+        return {"type": self.lookup_type}
 
 
 class MapLookupExtraction(LookupExtraction):
 
-    lookup_type = 'map'
+    lookup_type = "map"
 
     def __init__(self, mapping, **kwargs):
         super(MapLookupExtraction, self).__init__(**kwargs)
@@ -179,14 +176,14 @@ class MapLookupExtraction(LookupExtraction):
 
     def build_lookup(self):
         lookup = super(MapLookupExtraction, self).build_lookup()
-        lookup['map'] = self._mapping
+        lookup["map"] = self._mapping
 
         return lookup
 
 
 class NamespaceLookupExtraction(LookupExtraction):
 
-    lookup_type = 'namespace'
+    lookup_type = "namespace"
 
     def __init__(self, namespace, **kwargs):
         super(NamespaceLookupExtraction, self).__init__(**kwargs)
@@ -194,6 +191,18 @@ class NamespaceLookupExtraction(LookupExtraction):
 
     def build_lookup(self):
         lookup = super(NamespaceLookupExtraction, self).build_lookup()
-        lookup['namespace'] = self._namespace
+        lookup["namespace"] = self._namespace
 
         return lookup
+
+
+class RegisteredLookupExtraction(LookupExtraction):
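+    """Extraction via a lookup already registered on the cluster, referenced by name."""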
+
+    extraction_type = "registeredLookup"
+
+    def __init__(self, reglookup, **kwargs):
+        super(RegisteredLookupExtraction, self).__init__(**kwargs)
+        self._lookup = reglookup
+
+    def build_lookup(self):
+        return self._lookup

+ 293 - 0
desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/filters.py

@@ -0,0 +1,293 @@
+#
+# Copyright 2013 Metamarkets Group Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+try:
+    import simplejson as json
+except ImportError:
+    import json
+
+from .dimensions import build_dimension
+
+
+class Filter:
+
+    # filter types supporting extraction function
+    _FILTERS_WITH_EXTR_FN = (
+        "selector",
+        "regex",
+        "javascript",
+        "in",
+        "bound",
+        "interval",
+        "extraction",
+    )
+
+    def __init__(self, extraction_function=None, ordering="lexicographic", **args):
+
+        type_ = args.get("type", "selector")
+
+        if extraction_function is not None:
+            if type_ not in self._FILTERS_WITH_EXTR_FN:
+                raise ValueError(
+                    "Filter of type {0} doesn't support "
+                    "extraction function".format(type_)
+                )
+        elif type_ == "extraction":
+            raise ValueError(
+                "Filter of type extraction requires extraction " "function"
+            )
+
+        self.extraction_function = extraction_function
+
+        self.filter = {"filter": {"type": type_}}
+
+        if type_ == "selector":
+            self.filter["filter"].update(
+                {"dimension": args["dimension"], "value": args["value"]}
+            )
+        elif type_ == "javascript":
+            self.filter["filter"].update(
+                {"dimension": args["dimension"], "function": args["function"]}
+            )
+        elif type_ == "and":
+            self.filter["filter"].update({"fields": args["fields"]})
+        elif type_ == "or":
+            self.filter["filter"].update({"fields": args["fields"]})
+        elif type_ == "not":
+            self.filter["filter"].update({"field": args["field"]})
+        elif type_ == "in":
+            self.filter["filter"].update(
+                {"dimension": args["dimension"], "values": args["values"]}
+            )
+        elif type_ == "regex":
+            self.filter["filter"].update(
+                {"dimension": args["dimension"], "pattern": args["pattern"]}
+            )
+        elif type_ == "bound":
+            self.filter["filter"].update(
+                {
+                    "dimension": args["dimension"],
+                    "lower": args["lower"],
+                    "lowerStrict": args["lowerStrict"],
+                    "upper": args["upper"],
+                    "upperStrict": args["upperStrict"],
+                    "alphaNumeric": args["alphaNumeric"],
+                    "ordering": ordering,
+                }
+            )
+        elif type_ == "columnComparison":
+            self.filter["filter"].update({"dimensions": args["dimensions"]})
+        elif type_ == "interval":
+            self.filter["filter"].update(
+                {"dimension": args["dimension"], "intervals": args["intervals"]}
+            )
+        elif type_ == "extraction":
+            self.filter["filter"].update(
+                {"dimension": args["dimension"], "value": args["value"]}
+            )
+        elif type_ == "search":
+            self.filter["filter"].update(
+                {
+                    "dimension": args["dimension"],
+                    "query": {
+                        "type": "contains",
+                        "value": args["value"],
+                        "caseSensitive": args.get("caseSensitive", "false"),
+                    },
+                }
+            )
+        elif type_ == "like":
+            self.filter["filter"].update(
+                {"dimension": args["dimension"], "pattern": args["pattern"]}
+            )
+        elif type_ == "spatial":
+            self.filter["filter"].update(
+                {"dimension": args["dimension"], "bound": args["bound"]}
+            )
+        else:
+            raise NotImplementedError("Filter type: {0} does not exist".format(type_))
+
+    def show(self):
+        print(json.dumps(self.filter, indent=4))
+
+    def __and__(self, x):
+        if self.filter["filter"]["type"] == "and":
+            # if `self` is already `and`, don't create a new filter
+            # but just append `x` to the filter fields.
+            self.filter["filter"]["fields"].append(x)
+            return self
+        return Filter(type="and", fields=[self, x])
+
+    def __or__(self, x):
+        if self.filter["filter"]["type"] == "or":
+            # if `self` is already `or`, don't create a new filter
+            # but just append `x` to the filter fields.
+            self.filter["filter"]["fields"].append(x)
+            return self
+        return Filter(type="or", fields=[self, x])
+
+    def __invert__(self):
+        return Filter(type="not", field=self)
+
+    @staticmethod
+    def build_filter(filter_obj):
+        filter = filter_obj.filter["filter"]
+        if filter["type"] in ["and", "or"]:
+            filter = filter.copy()  # make a copy so we don't overwrite `fields`
+            filter["fields"] = [Filter.build_filter(f) for f in filter["fields"]]
+        elif filter["type"] in ["not"]:
+            filter = filter.copy()
+            filter["field"] = Filter.build_filter(filter["field"])
+        elif filter["type"] in ["columnComparison"]:
+            filter = filter.copy()
+            filter["dimensions"] = [build_dimension(d) for d in filter["dimensions"]]
+
+        if filter_obj.extraction_function is not None:
+            if filter is filter_obj.filter["filter"]:  # copy if not yet copied
+                filter = filter.copy()
+            filter["extractionFn"] = filter_obj.extraction_function.build()
+
+        return filter
+
+
+class Dimension:
+    def __init__(self, dim):
+        self.dimension = dim
+
+    def __eq__(self, other):
+        return Filter(dimension=self.dimension, value=other)
+
+    def __ne__(self, other):
+        return ~Filter(dimension=self.dimension, value=other)
+
+
+class JavaScript:
+    def __init__(self, dim):
+        self.dimension = dim
+
+    def __eq__(self, func):
+        return Filter(type="javascript", dimension=self.dimension, function=func)
+
+
+class Bound(Filter):
+    """
+    Bound filter can be used to filter by comparing dimension values to an
+    upper and/or a lower value.
+
+    :ivar str dimension: Dimension to filter on.
+    :ivar str lower: Lower bound.
+    :ivar str upper: Upper bound.
+    :ivar bool lowerStrict: Make the lower bound strict (exclusive). Initial value: False
+    :ivar bool upperStrict: Make the upper bound strict (exclusive). Initial value: False
+    :ivar bool alphaNumeric: Numeric comparison. Initial value: False
+        NOTE: For backwards compatibility - Use "ordering" instead.
+    :ivar str ordering: Sorting Order. Initial value: lexicographic
+        Specifies the sorting order to use when comparing values against the bound.
+        Can be one of the following values: "lexicographic", "alphanumeric", "numeric",
+        "strlen", "version". See Sorting Orders
+        https://druid.apache.org/docs/latest/querying/filters.html#bound-filter
+        for more details.
+    :ivar ExtractionFunction extraction_function: extraction function to use,
+                                                  if not None
+    """
+
+    def __init__(
+        self,
+        dimension,
+        lower=None,
+        upper=None,
+        lowerStrict=False,
+        upperStrict=False,
+        alphaNumeric=False,
+        ordering="lexicographic",
+        extraction_function=None,
+    ):
+        if not lower and not upper:
+            raise ValueError("Must include either lower or upper or both")
+        Filter.__init__(
+            self,
+            type="bound",
+            dimension=dimension,
+            lower=lower,
+            upper=upper,
+            lowerStrict=lowerStrict,
+            upperStrict=upperStrict,
+            alphaNumeric=alphaNumeric,
+            ordering=ordering,
+            extraction_function=extraction_function,
+        )
+
+
+class Interval(Filter):
+    """
+    Interval filter can be used to filter by comparing dimension (__time)
+    values to a list of intervals.
+
+    :ivar str dimension: Dimension to filter on.
+    :ivar list intervals: List of ISO-8601 intervals of data to filter out.
+    :ivar ExtractionFunction extraction_function: extraction function to use,
+                                                  if not None
+    """
+
+    def __init__(self, dimension, intervals, extraction_function=None):
+
+        Filter.__init__(
+            self,
+            type="interval",
+            dimension=dimension,
+            intervals=intervals,
+            extraction_function=extraction_function,
+        )
+
+
+class Spatial(Filter):
+    """
+    Spatial filter can be used to filter by spatial bounds.
+
+    :ivar str dimension: Dimension to filter on.
+    :ivar str bound_type: Spatial bound type: ['rectangle','radius','polygon'].
+    :param `**kwargs`: additional arguments required for the selected bound type:
+        'rectangle': 'minCoords' and 'maxCoords'
+        'radius': 'coords' and 'radius'
+        'polygon': 'abscissa' and 'ordinate'
+    """
+
+    def __init__(self, dimension, bound_type, **args):
+
+        _bound = {"type": bound_type}
+
+        if bound_type == "rectangle":
+            if not args["minCoords"] or not args["maxCoords"]:
+                raise ValueError(
+                    "Rectangle bound must include both minCoords and maxCoords"
+                )
+            _bound["minCoords"] = args["minCoords"]
+            _bound["maxCoords"] = args["maxCoords"]
+        elif bound_type == "radius":
+            if not args["coords"] or not args["radius"]:
+                raise ValueError("Radius bound must include both coords and radius")
+            _bound["coords"] = args["coords"]
+            _bound["radius"] = args["radius"]
+        elif bound_type == "polygon":
+            if not args["abscissa"] or not args["ordinate"]:
+                raise ValueError(
+                    "Polygon bound must include both abscissa and ordinate"
+                )
+            _bound["abscissa"] = args["abscissa"]
+            _bound["ordinate"] = args["ordinate"]
+        else:
+            raise ValueError("Unsupport Spatial Bound type: {0}".format(bound_type))
+
+        Filter.__init__(self, type="spatial", dimension=dimension, bound=_bound)

+ 111 - 0
desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/having.py

@@ -0,0 +1,111 @@
+#
+# Copyright 2013 Metamarkets Group Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+try:
+    import simplejson as json
+except ImportError:
+    import json
+
+
+class Having:
+    def __init__(self, **args):
+
+        if args["type"] in ("equalTo", "lessThan", "greaterThan"):
+            self.having = {
+                "having": {
+                    "type": args["type"],
+                    "aggregation": args["aggregation"],
+                    "value": args["value"],
+                }
+            }
+
+        elif args["type"] == "and":
+            self.having = {
+                "having": {"type": "and", "havingSpecs": args["havingSpecs"]}
+            }
+
+        elif args["type"] == "or":
+            self.having = {"having": {"type": "or", "havingSpecs": args["havingSpecs"]}}
+
+        elif args["type"] == "not":
+            self.having = {"having": {"type": "not", "havingSpec": args["havingSpec"]}}
+
+        elif args["type"] == "filter":
+            self.having = {"having": {"type": args["type"], "filter": args["filter"]}}
+
+        elif args["type"] == "dimSelector":
+            self.having = {
+                "having": {
+                    "type": args["type"],
+                    "dimension": args["dimension"],
+                    "value": args["value"],
+                }
+            }
+
+        else:
+            raise NotImplementedError(
+                "Having type: {0} does not exist".format(args["type"])
+            )
+
+    def show(self):
+        print(json.dumps(self.having, indent=4))
+
+    def _combine(self, typ, x):
+        # collapse nested and/ors
+        if self.having["having"]["type"] == typ:
+            havingSpecs = self.having["having"]["havingSpecs"] + [x.having["having"]]
+            return Having(type=typ, havingSpecs=havingSpecs)
+        elif x.having["having"]["type"] == typ:
+            havingSpecs = [self.having["having"]] + x.having["having"]["havingSpecs"]
+            return Having(type=typ, havingSpecs=havingSpecs)
+        else:
+            return Having(
+                type=typ, havingSpecs=[self.having["having"], x.having["having"]]
+            )
+
+    def __and__(self, x):
+        return self._combine("and", x)
+
+    def __or__(self, x):
+        return self._combine("or", x)
+
+    def __invert__(self):
+        return Having(type="not", havingSpec=self.having["having"])
+
+    @staticmethod
+    def build_having(having_obj):
+        return having_obj.having["having"]
+
+
+class Aggregation:
+    def __init__(self, agg):
+        self.aggregation = agg
+
+    def __eq__(self, other):
+        return Having(type="equalTo", aggregation=self.aggregation, value=other)
+
+    def __lt__(self, other):
+        return Having(type="lessThan", aggregation=self.aggregation, value=other)
+
+    def __gt__(self, other):
+        return Having(type="greaterThan", aggregation=self.aggregation, value=other)
+
+
+class Dimension:
+    def __init__(self, dim):
+        self.dimension = dim
+
+    def __eq__(self, other):
+        return Having(type="dimSelector", dimension=self.dimension, value=other)

+ 77 - 62
desktop/core/ext-py/pydruid/utils/postaggregator.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/postaggregator.py

@@ -20,27 +20,25 @@ import six
 
 class Postaggregator:
     def __init__(self, fn, fields, name):
-        self.post_aggregator = {'type': 'arithmetic',
-                                'name': name,
-                                'fn': fn,
-                                'fields': fields}
+        self.post_aggregator = {
+            "type": "arithmetic",
+            "name": name,
+            "fn": fn,
+            "fields": fields,
+        }
         self.name = name
 
     def __mul__(self, other):
-        return Postaggregator('*', self.fields(other),
-                              self.name + 'mul' + other.name)
+        return Postaggregator("*", self.fields(other), self.name + "mul" + other.name)
 
     def __sub__(self, other):
-        return Postaggregator('-', self.fields(other),
-                              self.name + 'sub' + other.name)
+        return Postaggregator("-", self.fields(other), self.name + "sub" + other.name)
 
     def __add__(self, other):
-        return Postaggregator('+', self.fields(other),
-                              self.name + 'add' + other.name)
+        return Postaggregator("+", self.fields(other), self.name + "add" + other.name)
 
     def __div__(self, other):
-        return Postaggregator('/', self.fields(other),
-                              self.name + 'div' + other.name)
+        return Postaggregator("/", self.fields(other), self.name + "div" + other.name)
 
     def __truediv__(self, other):
         return self.__div__(other)
@@ -51,134 +49,147 @@ class Postaggregator:
     @staticmethod
     def build_post_aggregators(postaggs):
         def rename_postagg(new_name, post_aggregator):
-            post_aggregator['name'] = new_name
+            post_aggregator["name"] = new_name
             return post_aggregator
 
-        return [rename_postagg(new_name, postagg.post_aggregator)
-                for (new_name, postagg) in six.iteritems(postaggs)]
+        return [
+            rename_postagg(new_name, postagg.post_aggregator)
+            for (new_name, postagg) in six.iteritems(postaggs)
+        ]
 
 
 class Quantile(Postaggregator):
     def __init__(self, name, probability):
         Postaggregator.__init__(self, None, None, name)
         self.post_aggregator = {
-            'type': 'quantile', 'fieldName': name, 'probability': probability}
+            "type": "quantile",
+            "fieldName": name,
+            "probability": probability,
+        }
 
 
 class Quantiles(Postaggregator):
     def __init__(self, name, probabilities):
         Postaggregator.__init__(self, None, None, name)
         self.post_aggregator = {
-            'type': 'quantiles', 'fieldName': name,
-            'probabilities': probabilities}
+            "type": "quantiles",
+            "fieldName": name,
+            "probabilities": probabilities,
+        }
 
 
 class Field(Postaggregator):
     def __init__(self, name):
         Postaggregator.__init__(self, None, None, name)
-        self.post_aggregator = {
-            'type': 'fieldAccess', 'fieldName': name}
+        self.post_aggregator = {"type": "fieldAccess", "fieldName": name}
 
 
 class Const(Postaggregator):
     def __init__(self, value, output_name=None):
 
         if output_name is None:
-            name = 'const'
+            name = "const"
         else:
             name = output_name
 
         Postaggregator.__init__(self, None, None, name)
-        self.post_aggregator = {
-            'type': 'constant', 'name': name, 'value': value}
+        self.post_aggregator = {"type": "constant", "name": name, "value": value}
 
 
 class HyperUniqueCardinality(Postaggregator):
     def __init__(self, name):
         Postaggregator.__init__(self, None, None, name)
-        self.post_aggregator = {
-            'type': 'hyperUniqueCardinality', 'fieldName': name}
+        self.post_aggregator = {"type": "hyperUniqueCardinality", "fieldName": name}
 
 
 class DoubleGreatest(Postaggregator):
     def __init__(self, fields, output_name=None):
 
         if output_name is None:
-            name = 'doubleGreatest'
+            name = "doubleGreatest"
         else:
             name = output_name
 
         Postaggregator.__init__(self, None, None, name)
         self.post_aggregator = {
-                'type': 'doubleGreatest',
-                'name': name,
-                'fields': [f.post_aggregator for f in fields]}
+            "type": "doubleGreatest",
+            "name": name,
+            "fields": [f.post_aggregator for f in fields],
+        }
 
 
 class DoubleLeast(Postaggregator):
     def __init__(self, fields, output_name=None):
 
         if output_name is None:
-            name = 'doubleLeast'
+            name = "doubleLeast"
         else:
             name = output_name
 
         Postaggregator.__init__(self, None, None, name)
         self.post_aggregator = {
-                'type': 'doubleLeast',
-                'name': name,
-                'fields': [f.post_aggregator for f in fields]}
+            "type": "doubleLeast",
+            "name": name,
+            "fields": [f.post_aggregator for f in fields],
+        }
 
 
 class LongGreatest(Postaggregator):
     def __init__(self, fields, output_name=None):
 
         if output_name is None:
-            name = 'longGreatest'
+            name = "longGreatest"
         else:
             name = output_name
 
         Postaggregator.__init__(self, None, None, name)
         self.post_aggregator = {
-                'type': 'longGreatest',
-                'name': name,
-                'fields': [f.post_aggregator for f in fields]}
+            "type": "longGreatest",
+            "name": name,
+            "fields": [f.post_aggregator for f in fields],
+        }
 
 
 class LongLeast(Postaggregator):
     def __init__(self, fields, output_name=None):
 
         if output_name is None:
-            name = 'longLeast'
+            name = "longLeast"
         else:
             name = output_name
 
         Postaggregator.__init__(self, None, None, name)
         self.post_aggregator = {
-                'type': 'longLeast',
-                'name': name,
-                'fields': [f.post_aggregator for f in fields]}
+            "type": "longLeast",
+            "name": name,
+            "fields": [f.post_aggregator for f in fields],
+        }
 
 
 class ThetaSketchOp(object):
     def __init__(self, fn, fields, name):
-        self.post_aggregator = {'type': 'thetaSketchSetOp',
-                                'name': name,
-                                'func': fn,
-                                'fields': fields}
+        self.post_aggregator = {
+            "type": "thetaSketchSetOp",
+            "name": name,
+            "func": fn,
+            "fields": fields,
+        }
         self.name = name
 
     def __or__(self, other):
-        return ThetaSketchOp('UNION', self.fields(other),
-                             self.name + '_OR_' + other.name)
+        return ThetaSketchOp(
+            "UNION", self.fields(other), self.name + "_OR_" + other.name
+        )
 
     def __and__(self, other):
-        return ThetaSketchOp('INTERSECT', self.fields(other),
-                             self.name + '_AND_' + other.name)
+        return ThetaSketchOp(
+            "INTERSECT", self.fields(other), self.name + "_AND_" + other.name
+        )
 
     def __ne__(self, other):
-        return ThetaSketchOp('NOT', self.fields(other),
-                             self.name + '_NOT_' + other.name)
+        return ThetaSketchOp(
+            "NOT", self.fields(other), self.name + "_NOT_" + other.name
+        )
 
     def fields(self, other):
         return [self.post_aggregator, other.post_aggregator]
@@ -186,27 +197,31 @@ class ThetaSketchOp(object):
     @staticmethod
     def build_post_aggregators(thetasketchops):
         def rename_thetasketchop(new_name, thetasketchop):
-            thetasketchop['name'] = new_name
+            thetasketchop["name"] = new_name
             return thetasketchop
 
-        return [rename_thetasketchop(new_name, thetasketchop.post_aggregator)
-                for (new_name, thetasketchop) in six.iteritems(thetasketchops)]
+        return [
+            rename_thetasketchop(new_name, thetasketchop.post_aggregator)
+            for (new_name, thetasketchop) in six.iteritems(thetasketchops)
+        ]
 
 
 class ThetaSketch(ThetaSketchOp):
     def __init__(self, name):
         ThetaSketchOp.__init__(self, None, None, name)
-        self.post_aggregator = {
-            'type': 'fieldAccess', 'fieldName': name}
+        self.post_aggregator = {"type": "fieldAccess", "fieldName": name}
 
 
 class ThetaSketchEstimate(Postaggregator):
     def __init__(self, fields):
-        field = fields.post_aggregator \
-            if type(fields) in [ThetaSketch, ThetaSketchOp] else fields
+        field = (
+            fields.post_aggregator
+            if type(fields) in [ThetaSketch, ThetaSketchOp]
+            else fields
+        )
         self.post_aggregator = {
-            'type': 'thetaSketchEstimate',
-            'name': 'thetasketchestimate',
-            'field': field,
+            "type": "thetaSketchEstimate",
+            "name": "thetasketchestimate",
+            "field": field,
         }
-        self.name = 'thetasketchestimate'
+        self.name = "thetasketchestimate"

+ 3 - 3
desktop/core/ext-py/pydruid/utils/query_utils.py → desktop/core/ext-py/pydruid-0.5.11/pydruid/utils/query_utils.py

@@ -16,6 +16,7 @@
 import csv
 import codecs
 import six
+
 # A special CSV writer which will write rows to TSV file "f", which is encoded in utf-8.
 # this is necessary because the values in druid are not all ASCII.
 
@@ -31,9 +32,8 @@ class UnicodeWriter(object):
     def __encode(self, data):
         data = str(data) if isinstance(data, six.integer_types) else data
         if not six.PY3:
-            data = data.encode('utf-8') \
-                if isinstance(data, unicode) else data  # noqa
-            data = data.decode('utf-8')
+            data = data.encode("utf-8") if isinstance(data, unicode) else data  # noqa
+            data = data.decode("utf-8")
             return self.encoder.encode(data)
         return data
 

+ 7 - 0
desktop/core/ext-py/pydruid-0.5.11/requirements-dev.txt

@@ -0,0 +1,7 @@
+black==19.10b0
+flake8-mypy==17.8.0
+flake8==3.8.2
+ipdb==0.12
+pip-tools==4.4.0
+pre-commit==1.17.0
+tox==3.11.1

+ 23 - 0
desktop/core/ext-py/pydruid-0.5.11/requirements.txt

@@ -0,0 +1,23 @@
+#
+# This file is autogenerated by pip-compile
+# To update, run:
+#
+#    pip-compile requirements.in
+#
+certifi==2020.4.5.1       # via requests
+chardet==3.0.4            # via requests
+idna==2.9                 # via requests
+numpy==1.18.3             # via pandas
+pandas==0.25.3
+prompt-toolkit==1.0.18
+pycurl==7.43.0.5
+pygments==2.6.1
+python-dateutil==2.8.1    # via pandas
+pytz==2019.3              # via pandas
+requests==2.23.0
+six==1.14.0               # via prompt-toolkit, python-dateutil
+sqlalchemy==1.3.16
+tabulate==0.8.7
+tornado==6.0.4
+urllib3==1.25.9           # via requests
+wcwidth==0.1.9            # via prompt-toolkit

+ 15 - 0
desktop/core/ext-py/pydruid-0.5.11/setup.cfg

@@ -0,0 +1,15 @@
+[build_sphinx]
+source-dir = docs/source
+build-dir = docs/build
+all_files = 1
+
+[upload_sphinx]
+upload-dir = docs/build/html
+
+[bdist_wheel]
+universal = 1
+
+[egg_info]
+tag_build = 
+tag_date = 0
+

+ 61 - 0
desktop/core/ext-py/pydruid-0.5.11/setup.py

@@ -0,0 +1,61 @@
+import io
+import sys
+from setuptools import find_packages, setup
+
+install_requires = ["six >= 1.9.0", "requests"]
+
+extras_require = {
+    "pandas": ["pandas<1.0.0"],
+    "async": ["tornado"],
+    "sqlalchemy": ["sqlalchemy"],
+    "cli": ["pygments", "prompt_toolkit<2.0.0", "tabulate"],
+}
+
+# only require simplejson on python < 2.6
+if sys.version_info < (2, 6):
+    install_requires.append("simplejson >= 3.3.0")
+
+with io.open("README.md", encoding="utf-8") as f:
+    long_description = f.read()
+
+setup(
+    name="pydruid",
+    version="0.5.11",
+    author="Druid Developers",
+    author_email="druid-development@googlegroups.com",
+    packages=find_packages(where='pydruid'),
+    package_dir={
+        '': 'pydruid',
+    },
+    url="https://druid.apache.org",
+    project_urls={
+        "Bug Tracker": "https://github.com/druid-io/pydruid/issues",
+        "Documentation": "https://pythonhosted.org/pydruid/",
+        "Source Code": "https://github.com/druid-io/pydruid",
+    },
+    license="Apache License, Version 2.0",
+    description="A Python connector for Druid.",
+    long_description=long_description,
+    long_description_content_type="text/markdown",
+    install_requires=install_requires,
+    extras_require=extras_require,
+    tests_require=["pytest", "six", "mock"],
+    entry_points={
+        "console_scripts": ["pydruid = pydruid.console:main"],
+        "sqlalchemy.dialects": [
+            "druid = pydruid.db.sqlalchemy:DruidHTTPDialect",
+            "druid.http = pydruid.db.sqlalchemy:DruidHTTPDialect",
+            "druid.https = pydruid.db.sqlalchemy:DruidHTTPSDialect",
+        ],
+    },
+    include_package_data=True,
+    classifiers=[
+        "License :: OSI Approved :: Apache Software License",
+        "Programming Language :: Python",
+        "Programming Language :: Python :: 3",
+        "Programming Language :: Python :: 3.5",
+        "Programming Language :: Python :: 3.6",
+        "Programming Language :: Python :: 3.7",
+        "Programming Language :: Python :: 3.8",
+    ],
+)
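
The `sqlalchemy.dialects` entry points registered above are what let `druid://` URLs resolve without manual dialect registration. A hedged sketch of the consumer side, assuming a broker at `localhost:8082` and a datasource named `my_datasource` (SQLAlchemy is pinned to 1.3.x in requirements.txt, where plain-string statements are still accepted):

```python
from sqlalchemy import create_engine

# "druid" and "druid.http" resolve to DruidHTTPDialect, "druid.https"
# to DruidHTTPSDialect, per the entry_points block above.
engine = create_engine("druid://localhost:8082/druid/v2/sql/")
with engine.connect() as conn:
    for row in conn.execute("SELECT COUNT(*) AS cnt FROM my_datasource"):
        print(row.cnt)
```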

+ 0 - 162
desktop/core/ext-py/pydruid/async_client.py

@@ -1,162 +0,0 @@
-#
-# Copyright 2016 Metamarkets Group Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-from __future__ import division
-from __future__ import absolute_import
-
-import json
-from pydruid.client import BaseDruidClient
-
-try:
-    from tornado import gen
-    from tornado.httpclient import AsyncHTTPClient, HTTPError
-except ImportError:
-    print('Warning: unable to import Tornado. The asynchronous client will not work.')
-
-
-class AsyncPyDruid(BaseDruidClient):
-    """
-    Asynchronous PyDruid client which mirrors functionality of the synchronous
-    PyDruid, but it executes queries
-    asynchronously (using an asynchronous http client from Tornado framework).
-
-    Returns Query objects that can be used for exporting query results into
-    TSV files or pandas.DataFrame objects
-    for subsequent analysis.
-
-    :param str url: URL of Broker node in the Druid cluster
-    :param str endpoint: Endpoint that Broker listens for queries on
-
-    Example
-
-    .. code-block:: python
-        :linenos:
-
-            >>> from pydruid.async_client import *
-
-            >>> query = AsyncPyDruid('http://localhost:8083', 'druid/v2/')
-
-            >>> top = yield query.topn(
-                    datasource='twitterstream',
-                    granularity='all',
-                    intervals='2013-10-04/pt1h',
-                    aggregations={"count": doublesum("count")},
-                    dimension='user_name',
-                    filter = Dimension('user_lang') == 'en',
-                    metric='count',
-                    threshold=2
-                )
-
-            >>> print json.dumps(top.query_dict, indent=2)
-            >>> {
-                  "metric": "count",
-                  "aggregations": [
-                    {
-                      "type": "doubleSum",
-                      "fieldName": "count",
-                      "name": "count"
-                    }
-                  ],
-                  "dimension": "user_name",
-                  "filter": {
-                    "type": "selector",
-                    "dimension": "user_lang",
-                    "value": "en"
-                  },
-                  "intervals": "2013-10-04/pt1h",
-                  "dataSource": "twitterstream",
-                  "granularity": "all",
-                  "threshold": 2,
-                  "queryType": "topN"
-                }
-
-            >>> print top.result
-            >>> [{'timestamp': '2013-10-04T00:00:00.000Z',
-                'result': [{'count': 7.0, 'user_name': 'user_1'},
-                {'count': 6.0, 'user_name': 'user_2'}]}]
-
-            >>> df = top.export_pandas()
-            >>> print df
-            >>>    count                 timestamp      user_name
-                0      7  2013-10-04T00:00:00.000Z         user_1
-                1      6  2013-10-04T00:00:00.000Z         user_2
-    """
-
-    def __init__(self, url, endpoint):
-        super(AsyncPyDruid, self).__init__(url, endpoint)
-
-    @gen.coroutine
-    def _post(self, query):
-        http_client = AsyncHTTPClient()
-        try:
-            headers, querystr, url = self._prepare_url_headers_and_body(query)
-            response = yield http_client.fetch(
-                url, method='POST', headers=headers, body=querystr)
-        except HTTPError as e:
-            self.__handle_http_error(e, query)
-        else:
-            query.parse(response.body.decode("utf-8"))
-            raise gen.Return(query)
-
-    @staticmethod
-    def __handle_http_error(e, query):
-        err = None
-        if e.code == 500:
-            # has Druid returned an error?
-            try:
-                err = json.loads(e.response.body.decode("utf-8"))
-            except ValueError:
-                pass
-            else:
-                err = err.get('error', None)
-        raise IOError('{0} \n Druid Error: {1} \n Query is: {2}'.format(
-                e, err, json.dumps(query.query_dict, indent=4)))
-
-    @gen.coroutine
-    def topn(self, **kwargs):
-        query = self.query_builder.topn(kwargs)
-        result = yield self._post(query)
-        raise gen.Return(result)
-
-    @gen.coroutine
-    def timeseries(self, **kwargs):
-        query = self.query_builder.timeseries(kwargs)
-        result = yield self._post(query)
-        raise gen.Return(result)
-
-    @gen.coroutine
-    def groupby(self, **kwargs):
-        query = self.query_builder.groupby(kwargs)
-        result = yield self._post(query)
-        raise gen.Return(result)
-
-    @gen.coroutine
-    def segment_metadata(self, **kwargs):
-        query = self.query_builder.segment_metadata(kwargs)
-        result = yield self._post(query)
-        raise gen.Return(result)
-
-    @gen.coroutine
-    def time_boundary(self, **kwargs):
-        query = self.query_builder.time_boundary(kwargs)
-        result = yield self._post(query)
-        raise gen.Return(result)
-
-    @gen.coroutine
-    def select(self, **kwargs):
-        query = self.query_builder.select(kwargs)
-        result = yield self._post(query)
-        raise gen.Return(result)
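
The removed async client predates `async`/`await`: every query method is a `tornado.gen` coroutine, and values come back via `raise gen.Return(...)` because Python 2 generators cannot `return` a value. A standalone sketch of that pattern with no Druid involved (`gen.sleep` stands in for the `AsyncHTTPClient.fetch` round-trip):

```python
from tornado import gen, ioloop

@gen.coroutine
def fetch_count():
    yield gen.sleep(0)               # placeholder for the async HTTP fetch
    raise gen.Return({"count": 7})   # pre-await way to return from a coroutine

result = ioloop.IOLoop.current().run_sync(fetch_count)
print(result)  # {'count': 7}
```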

+ 0 - 560
desktop/core/ext-py/pydruid/client.py

@@ -1,560 +0,0 @@
-#
-# Copyright 2013 Metamarkets Group Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-from __future__ import division
-from __future__ import absolute_import
-
-import json
-import re
-
-from six.moves import urllib
-
-from pydruid.query import QueryBuilder
-from base64 import b64encode
-
-
-# extract error from the <PRE> tag inside the HTML response
-HTML_ERROR = re.compile('<pre>\s*(.*?)\s*</pre>', re.IGNORECASE)
-
-
-class BaseDruidClient(object):
-    def __init__(self, url, endpoint):
-        self.url = url
-        self.endpoint = endpoint
-        self.query_builder = QueryBuilder()
-        self.username = None
-        self.password = None
-
-    def set_basic_auth_credentials(self, username, password):
-        self.username = username
-        self.password = password
-
-    def _prepare_url_headers_and_body(self, query):
-        querystr = json.dumps(query.query_dict).encode('utf-8')
-        if self.url.endswith('/'):
-            url = self.url + self.endpoint
-        else:
-            url = self.url + '/' + self.endpoint
-        headers = {'Content-Type': 'application/json'}
-        if (self.username is not None) and (self.password is not None):
-            authstring = '{}:{}'.format(self.username, self.password)
-            b64string = b64encode(authstring.encode()).decode()
-            headers['Authorization'] = 'Basic {}'.format(b64string)
-
-        return headers, querystr, url
-
-    def _post(self, query):
-        """
-        Fills Query object with results.
-
-        :param Query query: query to execute
-
-        :return: Query filled with results
-        :rtype: Query
-        """
-        raise NotImplementedError("Subclasses must implement this method")
-
-    # --------- Query implementations ---------
-
-    def topn(self, **kwargs):
-        """
-        A TopN query returns a set of the values in a given dimension,
-        sorted by a specified metric. Conceptually, a topN can be
-        thought of as an approximate GroupByQuery over a single
-        dimension with an Ordering spec. TopNs are
-        faster and more resource efficient than GroupBy for this use case.
-
-        Required key/value pairs:
-
-        :param str datasource: Data source to query
-        :param str granularity: Aggregate data by hour, day, minute, etc.,
-        :param intervals: ISO-8601 intervals of data to query
-        :type intervals: str or list
-        :param dict aggregations: A map from aggregator name to one of
-          the pydruid.utils.aggregators e.g., doublesum
-        :param str dimension: Dimension to run the query against
-        :param str metric: Metric over which to sort the specified dimension by
-        :param int threshold: How many of the top items to return
-
-        :return: The query result
-        :rtype: Query
-
-        Optional key/value pairs:
-
-        :param pydruid.utils.filters.Filter filter: Indicates which rows
-          of data to include in the query
-        :param post_aggregations:   A dict with string key = 'post_aggregator_name',
-          and value pydruid.utils.PostAggregator
-        :param dict context: A dict of query context options
-
-        Example:
-
-        .. code-block:: python
-            :linenos:
-
-                >>> top = client.topn(
-                            datasource='twitterstream',
-                            granularity='all',
-                            intervals='2013-06-14/pt1h',
-                            aggregations={"count": doublesum("count")},
-                            dimension='user_name',
-                            metric='count',
-                            filter=Dimension('user_lang') == 'en',
-                            threshold=1,
-                            context={"timeout": 1000}
-                        )
-                >>> print top
-                >>> [{'timestamp': '2013-06-14T00:00:00.000Z',
-                    'result': [{'count': 22.0, 'user': "cool_user"}}]}]
-        """
-        query = self.query_builder.topn(kwargs)
-        return self._post(query)
-
-    def timeseries(self, **kwargs):
-        """
-        A timeseries query returns the values of the requested metrics (in aggregate)
-        for each timestamp.
-
-        Required key/value pairs:
-
-        :param str datasource: Data source to query
-        :param str granularity: Time bucket to aggregate data by hour, day, minute, etc.,
-        :param intervals: ISO-8601 intervals for which to run the query on
-        :type intervals: str or list
-        :param dict aggregations: A map from aggregator name to one of the
-          ``pydruid.utils.aggregators`` e.g., ``doublesum``
-
-        :return: The query result
-        :rtype: Query
-
-        Optional key/value pairs:
-
-        :param pydruid.utils.filters.Filter filter: Indicates which rows of
-          data to include in the query
-        :param post_aggregations:   A dict with string key =
-          'post_aggregator_name', and value pydruid.utils.PostAggregator
-        :param dict context: A dict of query context options
-
-        Example:
-
-        .. code-block:: python
-            :linenos:
-
-                >>> counts = client.timeseries(
-                        datasource=twitterstream,
-                        granularity='hour',
-                        intervals='2013-06-14/pt1h',
-                        aggregations=\
-                            {"count": doublesum("count"), "rows": count("rows")},
-                        post_aggregations=\
-                            {'percent': (Field('count') / Field('rows')) * Const(100))},
-                        context={"timeout": 1000}
-                    )
-                >>> print counts
-                >>> [{'timestamp': '2013-06-14T00:00:00.000Z',
-                    'result': {'count': 9619.0, 'rows': 8007,
-                    'percent': 120.13238416385663}}]
-        """
-        query = self.query_builder.timeseries(kwargs)
-        return self._post(query)
-
-    def groupby(self, **kwargs):
-        """
-        A group-by query groups a results set (the requested aggregate
-        metrics) by the specified dimension(s).
-
-        Required key/value pairs:
-
-        :param str datasource: Data source to query
-        :param str granularity: Time bucket to aggregate data by hour, day, minute, etc.,
-        :param intervals: ISO-8601 intervals for which to run the query on
-        :type intervals: str or list
-        :param dict aggregations: A map from aggregator name to one of the
-          ``pydruid.utils.aggregators`` e.g., ``doublesum``
-        :param list dimensions: The dimensions to group by
-
-        :return: The query result
-        :rtype: Query
-
-        Optional key/value pairs:
-
-        :param pydruid.utils.filters.Filter filter: Indicates which rows of
-          data to include in the query
-        :param pydruid.utils.having.Having having: Indicates which groups
-          in results set of query to keep
-        :param post_aggregations:   A dict with string key = 'post_aggregator_name',
-          and value pydruid.utils.PostAggregator
-        :param dict context: A dict of query context options
-        :param dict limit_spec: A dict of parameters defining how to limit
-          the rows returned, as specified in the Druid api documentation
-
-        Example:
-
-        .. code-block:: python
-            :linenos:
-
-                >>> group = client.groupby(
-                        datasource='twitterstream',
-                        granularity='hour',
-                        intervals='2013-10-04/pt1h',
-                        dimensions=["user_name", "reply_to_name"],
-                        filter=~(Dimension("reply_to_name") == "Not A Reply"),
-                        aggregations={"count": doublesum("count")},
-                        context={"timeout": 1000}
-                        limit_spec={
-                            "type": "default",
-                            "limit": 50,
-                            "columns" : ["count"]
-                        }
-                    )
-                >>> for k in range(2):
-                    ...     print group[k]
-                >>> {
-                    'timestamp': '2013-10-04T00:00:00.000Z',
-                    'version': 'v1',
-                    'event': {
-                        'count': 1.0,
-                        'user_name': 'user_1',
-                        'reply_to_name': 'user_2',
-                    }
-                }
-                >>> {
-                    'timestamp': '2013-10-04T00:00:00.000Z',
-                    'version': 'v1',
-                    'event': {
-                        'count': 1.0,
-                        'user_name': 'user_2',
-                        'reply_to_name':
-                        'user_3',
-                    }
-                }
-        """
-        query = self.query_builder.groupby(kwargs)
-        return self._post(query)
-
-    def segment_metadata(self, **kwargs):
-        """
-        A segment meta-data query returns per segment information about:
-
-        * Cardinality of all the columns present
-        * Column type
-        * Estimated size in bytes
-        * Estimated size in bytes of each column
-        * Interval the segment covers
-        * Segment ID
-
-        Required key/value pairs:
-
-        :param str datasource: Data source to query
-        :param intervals: ISO-8601 intervals for which to run the query on
-        :type intervals: str or list
-
-        Optional key/value pairs:
-
-        :param dict context: A dict of query context options
-
-        :return: The query result
-        :rtype: Query
-
-        Example:
-
-        .. code-block:: python
-            :linenos:
-
-                >>> meta = client.segment_metadata(
-                    datasource='twitterstream', intervals = '2013-10-04/pt1h')
-                >>> print meta[0].keys()
-                >>> ['intervals', 'id', 'columns', 'size']
-                >>> print meta[0]['columns']['tweet_length']
-                >>> {
-                    'errorMessage': None,
-                    'cardinality': None,
-                    'type': 'FLOAT',
-                    'size': 30908008,
-                }
-
-        """
-        query = self.query_builder.segment_metadata(kwargs)
-        return self._post(query)
-
-    def time_boundary(self, **kwargs):
-        """
-        A time boundary query returns the min and max timestamps present in a data source.
-
-        Required key/value pairs:
-
-        :param str datasource: Data source to query
-
-        Optional key/value pairs:
-
-        :param dict context: A dict of query context options
-
-        :return: The query result
-        :rtype: Query
-
-        Example:
-
-        .. code-block:: python
-            :linenos:
-
-                >>> bound = client.time_boundary(datasource='twitterstream')
-                >>> print bound
-                >>> [{
-                    'timestamp': '2011-09-14T15:00:00.000Z',
-                    'result': {
-                        'minTime': '2011-09-14T15:00:00.000Z',
-                        'maxTime': '2014-03-04T23:44:00.000Z',
-                    }
-                }]
-        """
-        query = self.query_builder.time_boundary(kwargs)
-        return self._post(query)
-
-    def select(self, **kwargs):
-        """
-        A select query returns raw Druid rows and supports pagination.
-
-        Required key/value pairs:
-
-        :param str datasource: Data source to query
-        :param str granularity: Time bucket to aggregate data by hour, day, minute, etc.
-        :param dict paging_spec: Indicates offsets into different scanned segments
-        :param intervals: ISO-8601 intervals for which to run the query on
-        :type intervals: str or list
-
-        Optional key/value pairs:
-
-        :param pydruid.utils.filters.Filter filter: Indicates which rows of
-          data to include in the query
-        :param list dimensions: The list of dimensions to select. If left
-          empty, all dimensions are returned
-        :param list metrics: The list of metrics to select. If left empty,
-          all metrics are returned
-        :param dict context: A dict of query context options
-
-        :return: The query result
-        :rtype: Query
-
-        Example:
-
-        .. code-block:: python
-            :linenos:
-
-                >>> raw_data = client.select(
-                        datasource=twitterstream,
-                        granularity='all',
-                        intervals='2013-06-14/pt1h',
-                        paging_spec={'pagingIdentifies': {}, 'threshold': 1},
-                        context={"timeout": 1000}
-                    )
-                >>> print(raw_data)
-                >>> [{
-                    'timestamp': '2013-06-14T00:00:00.000Z',
-                    'result': {
-                        'pagingIdentifiers': {
-                            'twitterstream_...08:00:00.000Z_v1': 1,
-                            'events': [{
-                                'segmentId': 'twitterstr...000Z_v1',
-                                'offset': 0,
-                                'event': {
-                                    'timestamp': '2013-06-14T00:00:00.000Z',
-                                    'dim': 'value',
-                                }
-                            }]
-                        }
-                }]
-        """
-        query = self.query_builder.select(kwargs)
-        return self._post(query)
-
-    def export_tsv(self, dest_path):
-        """
-        Export the current query result to a tsv file.
-
-        .. deprecated::
-            Use Query.export_tsv() method instead.
-        """
-        if self.query_builder.last_query is None:
-            raise AttributeError(
-                "There was no query executed by this client yet. Can't export!")
-        else:
-            return self.query_builder.last_query.export_tsv(dest_path)
-
-    def export_pandas(self):
-        """
-        Export the current query result to a Pandas DataFrame object.
-
-        .. deprecated::
-            Use Query.export_pandas() method instead
-        """
-        if self.query_builder.last_query is None:
-            raise AttributeError(
-                "There was no query executed by this client yet. Can't export!")
-        else:
-            return self.query_builder.last_query.export_pandas()
-
-
-class PyDruid(BaseDruidClient):
-    """
-    PyDruid contains the functions for creating and executing Druid queries.
-    Returns Query objects that can be used for exporting query results
-    into TSV files or pandas.DataFrame objects for subsequent analysis.
-
-    :param str url: URL of Broker node in the Druid cluster
-    :param str endpoint: Endpoint that Broker listens for queries on
-
-    Example
-
-    .. code-block:: python
-        :linenos:
-
-            >>> from pydruid.client import *
-
-            >>> query = PyDruid('http://localhost:8083', 'druid/v2/')
-
-            >>> top = query.topn(
-                    datasource='twitterstream',
-                    granularity='all',
-                    intervals='2013-10-04/pt1h',
-                    aggregations={"count": doublesum("count")},
-                    dimension='user_name',
-                    filter = Dimension('user_lang') == 'en',
-                    metric='count',
-                    threshold=2
-                )
-
-            >>> print json.dumps(top.query_dict, indent=2)
-            >>> {
-                  "metric": "count",
-                  "aggregations": [
-                    {
-                      "type": "doubleSum",
-                      "fieldName": "count",
-                      "name": "count"
-                    }
-                  ],
-                  "dimension": "user_name",
-                  "filter": {
-                    "type": "selector",
-                    "dimension": "user_lang",
-                    "value": "en"
-                  },
-                  "intervals": "2013-10-04/pt1h",
-                  "dataSource": "twitterstream",
-                  "granularity": "all",
-                  "threshold": 2,
-                  "queryType": "topN"
-                }
-
-            >>> print top.result
-            >>> [{
-                'timestamp': '2013-10-04T00:00:00.000Z',
-                'result': [
-                    {
-                        'count': 7.0,
-                        'user_name': 'user_1',
-                    },
-                    {
-                        'count': 6.0,
-                        'user_name': 'user_2',
-                    },
-                ]}]
-
-            >>> df = top.export_pandas()
-            >>> print df
-            >>>    count                 timestamp      user_name
-                0      7  2013-10-04T00:00:00.000Z         user_1
-                1      6  2013-10-04T00:00:00.000Z         user_2
-    """
-    def __init__(self, url, endpoint):
-        super(PyDruid, self).__init__(url, endpoint)
-
-    def _post(self, query):
-        try:
-            headers, querystr, url = self._prepare_url_headers_and_body(query)
-            req = urllib.request.Request(url, querystr, headers)
-            res = urllib.request.urlopen(req)
-            data = res.read().decode("utf-8")
-            res.close()
-        except urllib.error.HTTPError as e:
-            err = e.reason
-            if e.code == 500:
-                # has Druid returned an error?
-                try:
-                    err = json.loads(err)
-                except ValueError:
-                    if HTML_ERROR.search(err):
-                        err = HTML_ERROR.search(err).group(1)
-                except (ValueError, AttributeError, KeyError):
-                    pass
-
-            raise IOError('{0} \n Druid Error: {1} \n Query is: {2}'.format(
-                    e, err, json.dumps(
-                        query.query_dict,
-                        indent=4,
-                        sort_keys=True,
-                        separators=(',', ': '))))
-        else:
-            query.parse(data)
-            return query
-
-    def scan(self, **kwargs):
-        """
-        A scan query returns raw Druid rows
-
-        Required key/value pairs:
-
-        :param str datasource: Data source to query
-        :param str granularity: Time bucket to aggregate data by hour, day, minute, etc.
-        :param int limit: The maximum number of rows to return
-        :param intervals: ISO-8601 intervals for which to run the query on
-        :type intervals: str or list
-
-        Optional key/value pairs:
-
-        :param pydruid.utils.filters.Filter filter: Indicates which rows of
-          data to include in the query
-        :param list dimensions: The list of dimensions to select. If left
-          empty, all dimensions are returned
-        :param list metrics: The list of metrics to select. If left empty,
-          all metrics are returned
-        :param dict context: A dict of query context options
-
-        :return: The query result
-        :rtype: Query
-
-        Example:
-
-        .. code-block:: python
-            :linenos:
-
-                >>> raw_data = client.scan(
-                        datasource=twitterstream,
-                        granularity='all',
-                        intervals='2013-06-14/pt1h',
-                        limit=1,
-                        context={"timeout": 1000}
-                    )
-                >>> print raw_data
-                >>> [{
-                    u'segmentId': u'zzzz',
-                    u'columns': [u'__time', 'status', 'region'],
-                    'events': [{
-                        u'status': u'ok', 'region': u'SF', u'__time': 1509494400000,
-                    }]
-                }]
-        """
-        query = self.query_builder.scan(kwargs)
-        return self._post(query)
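
One portable detail from the removed client worth keeping in mind: `_prepare_url_headers_and_body` built the HTTP Basic-Auth header with `base64.b64encode` over encoded bytes, which is the spelling that works on both Python 2 and 3. The same logic as a standalone helper (credentials are placeholders):

```python
from base64 import b64encode

def basic_auth_header(username, password):
    # encode() -> bytes for b64encode, decode() -> text for the header value,
    # exactly as the removed BaseDruidClient did.
    authstring = "{}:{}".format(username, password)
    b64string = b64encode(authstring.encode()).decode()
    return {"Authorization": "Basic {}".format(b64string)}

print(basic_auth_header("druid", "s3cret"))
# {'Authorization': 'Basic ZHJ1aWQ6czNjcmV0'}
```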

+ 0 - 202
desktop/core/ext-py/pydruid/console.py

@@ -1,202 +0,0 @@
-from __future__ import unicode_literals
-
-import os
-import re
-import sys
-
-from prompt_toolkit import prompt, AbortAction
-from prompt_toolkit.history import FileHistory
-from prompt_toolkit.contrib.completers import WordCompleter
-from pygments.lexers import SqlLexer
-from pygments.style import Style
-from pygments.token import Token
-from pygments.styles.default import DefaultStyle
-from six.moves.urllib import parse
-from tabulate import tabulate
-
-from pydruid.db.api import connect
-
-
-keywords = [
-    'EXPLAIN PLAN FOR',
-    'WITH',
-    'SELECT',
-    'ALL',
-    'DISTINCT',
-    'FROM',
-    'WHERE',
-    'GROUP BY',
-    'HAVING',
-    'ORDER BY',
-    'ASC',
-    'DESC',
-    'LIMIT',
-]
-
-aggregate_functions = [
-    'COUNT',
-    'SUM',
-    'MIN',
-    'MAX',
-    'AVG',
-    'APPROX_COUNT_DISTINCT',
-    'APPROX_QUANTILE',
-]
-
-numeric_functions = [
-    'ABS',
-    'CEIL',
-    'EXP',
-    'FLOOR',
-    'LN',
-    'LOG10',
-    'POW',
-    'SQRT',
-]
-
-string_functions = [
-    'CHARACTER_LENGTH',
-    'LOOKUP',
-    'LOWER',
-    'REGEXP_EXTRACT',
-    'REPLACE',
-    'SUBSTRING',
-    'TRIM',
-    'BTRIM',
-    'RTRIM',
-    'LTRIM',
-    'UPPER',
-]
-
-time_functions = [
-    'CURRENT_TIMESTAMP',
-    'CURRENT_DATE',
-    'TIME_FLOOR',
-    'TIME_SHIFT',
-    'TIME_EXTRACT',
-    'TIME_PARSE',
-    'TIME_FORMAT',
-    'MILLIS_TO_TIMESTAMP',
-    'TIMESTAMP_TO_MILLIS',
-    'EXTRACT',
-    'FLOOR',
-    'CEIL',
-]
-
-other_functions = [
-    'CAST',
-    'CASE',
-    'WHEN',
-    'THEN',
-    'END',
-    'NULLIF',
-    'COALESCE',
-]
-
-
-replacements = {
-    '^SHOW SCHEMAS': 'SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA',
-    '^SHOW TABLES': 'SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES',
-    '^DESC (?P<table>[^;\s]*)': r"""
-        SELECT COLUMN_NAME,
-               ORDINAL_POSITION,
-               COLUMN_DEFAULT,
-               IS_NULLABLE,
-               DATA_TYPE
-          FROM INFORMATION_SCHEMA.COLUMNS
-         WHERE TABLE_NAME='\1'
-    """.strip(),
-}
-
-
-class DocumentStyle(Style):
-    styles = {
-        Token.Menu.Completions.Completion.Current: 'bg:#00aaaa #000000',
-        Token.Menu.Completions.Completion: 'bg:#008888 #ffffff',
-        Token.Menu.Completions.ProgressButton: 'bg:#003333',
-        Token.Menu.Completions.ProgressBar: 'bg:#00aaaa',
-    }
-    styles.update(DefaultStyle.styles)
-
-
-def get_connection_kwargs(url):
-    parts = parse.urlparse(url)
-    if ':' in parts.netloc:
-        host, port = parts.netloc.split(':', 1)
-        port = int(port)
-    else:
-        host = parts.netloc
-        port = 8082
-
-    return {
-        'host': host,
-        'port': port,
-        'path': parts.path,
-        'scheme': parts.scheme,
-    }
-
-
-def get_tables(connection):
-    cursor = connection.cursor()
-    return [
-        row.TABLE_NAME for row in
-        cursor.execute('SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES')
-    ]
-
-
-def get_autocomplete(connection):
-    return (
-        keywords +
-        aggregate_functions +
-        numeric_functions +
-        string_functions +
-        time_functions +
-        other_functions +
-        get_tables(connection)
-    )
-
-
-def main():
-    history = FileHistory(os.path.expanduser('~/.pydruid_history'))
-
-    try:
-        url = sys.argv[1]
-    except IndexError:
-        url = 'http://localhost:8082/druid/v2/sql/'
-    kwargs = get_connection_kwargs(url)
-    connection = connect(**kwargs)
-    cursor = connection.cursor()
-
-    words = get_autocomplete(connection)
-    sql_completer = WordCompleter(words, ignore_case=True)
-
-    while True:
-        try:
-            query = prompt(
-                '> ', lexer=SqlLexer, completer=sql_completer,
-                style=DocumentStyle, history=history,
-                on_abort=AbortAction.RETRY)
-        except EOFError:
-            break  # Control-D pressed.
-
-        # run query
-        query = query.strip('; ')
-        if query:
-            # shortcuts
-            for pattern, repl in replacements.items():
-                query = re.sub(pattern, repl, query)
-
-            try:
-                result = cursor.execute(query)
-            except Exception as e:
-                print(e)
-                continue
-
-            headers = [t[0] for t in cursor.description or []]
-            print(tabulate(result, headers=headers))
-
-    print('GoodBye!')
-
-
-if __name__ == '__main__':
-    main()
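
The removed console was a thin REPL over the DB-API layer that survives in `pydruid/db/api.py`; its core loop reduces to connect, cursor, execute. A minimal sketch assuming a broker on `localhost:8082` (the kwargs match `get_connection_kwargs` above, and rows are named tuples, which is what `row.TABLE_NAME` in `get_tables` relied on):

```python
from pydruid.db.api import connect

conn = connect(host="localhost", port=8082, path="/druid/v2/sql/", scheme="http")
cursor = conn.cursor()

# execute() returns the cursor itself, so it can be iterated directly,
# just as the console's get_tables helper did.
for row in cursor.execute("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES"):
    print(row.TABLE_NAME)
```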

+ 0 - 446
desktop/core/ext-py/pydruid/query.py

@@ -1,446 +0,0 @@
-#
-# Copyright 2016 Metamarkets Group Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-import six
-import json
-import collections
-from pydruid.utils.aggregators import build_aggregators
-from pydruid.utils.filters import Filter
-from pydruid.utils.having import Having
-from pydruid.utils.dimensions import build_dimension
-from pydruid.utils.postaggregator import Postaggregator
-from pydruid.utils.query_utils import UnicodeWriter
-
-
-class Query(collections.MutableSequence):
-    """
-    Query objects are produced by PyDruid clients and can be used for
-    exporting query results into TSV files or
-    pandas.DataFrame objects for subsequent analysis. They also hold
-    information about the issued query.
-
-    Query acts as a wrapper over raw result list of dictionaries.
-
-    :ivar str result_json: JSON object representing a query result. Initial value: None
-    :ivar list result: Query result parsed into a list of dicts. Initial value: None
-    :ivar str query_type: Name of most recently run query, e.g., topN. Initial value: None
-    :ivar dict query_dict: JSON object representing the query. Initial value: None
-    """
-
-    def __init__(self, query_dict, query_type):
-        super(Query, self).__init__()
-        self.query_dict = query_dict
-        self.query_type = query_type
-        self.result = None
-        self.result_json = None
-
-    def parse(self, data):
-        if data:
-            self.result_json = data
-            res = json.loads(self.result_json)
-            self.result = res
-        else:
-            raise IOError('{Error parsing result: {0} for {1} query'.format(
-                    self.result_json, self.query_type))
-
-    def export_tsv(self, dest_path):
-        """
-        Export the current query result to a tsv file.
-
-        :param str dest_path: file to write query results to
-        :raise NotImplementedError:
-
-        Example
-
-        .. code-block:: python
-            :linenos:
-
-                >>> top = client.topn(
-                        datasource='twitterstream',
-                        granularity='all',
-                        intervals='2013-10-04/pt1h',
-                        aggregations={"count": doublesum("count")},
-                        dimension='user_name',
-                        filter = Dimension('user_lang') == 'en',
-                        metric='count',
-                        threshold=2
-                    )
-
-                >>> top.export_tsv('top.tsv')
-                >>> !cat top.tsv
-                >>> count	user_name	timestamp
-                    7.0	user_1	2013-10-04T00:00:00.000Z
-                    6.0	user_2	2013-10-04T00:00:00.000Z
-        """
-        if six.PY3:
-            f = open(dest_path, 'w', newline='', encoding='utf-8')
-        else:
-            f = open(dest_path, 'wb')
-        w = UnicodeWriter(f)
-
-        if self.query_type == "timeseries":
-            header = list(self.result[0]['result'].keys())
-            header.append('timestamp')
-        elif self.query_type == 'topN':
-            header = list(self.result[0]['result'][0].keys())
-            header.append('timestamp')
-        elif self.query_type == "groupBy":
-            header = list(self.result[0]['event'].keys())
-            header.append('timestamp')
-            header.append('version')
-        else:
-            raise NotImplementedError(
-                'TSV export not implemented for query type: {0}'.format(self.query_type))
-
-        w.writerow(header)
-
-        if self.result:
-            if self.query_type == "topN" or self.query_type == "timeseries":
-                for item in self.result:
-                    timestamp = item['timestamp']
-                    result = item['result']
-                    if type(result) is list:  # topN
-                        for line in result:
-                            w.writerow(list(line.values()) + [timestamp])
-                    else:  # timeseries
-                        w.writerow(list(result.values()) + [timestamp])
-            elif self.query_type == "groupBy":
-                for item in self.result:
-                    timestamp = item['timestamp']
-                    version = item['version']
-                    w.writerow(
-                        list(item['event'].values()) + [timestamp] + [version])
-
-        f.close()
-
-    def export_pandas(self):
-        """
-        Export the current query result to a Pandas DataFrame object.
-
-        :return: The DataFrame representing the query result
-        :rtype: DataFrame
-        :raise NotImplementedError:
-
-        Example
-
-        .. code-block:: python
-            :linenos:
-
-                >>> top = client.topn(
-                        datasource='twitterstream',
-                        granularity='all',
-                        intervals='2013-10-04/pt1h',
-                        aggregations={"count": doublesum("count")},
-                        dimension='user_name',
-                        filter = Dimension('user_lang') == 'en',
-                        metric='count',
-                        threshold=2
-                    )
-
-                >>> df = top.export_pandas()
-                >>> print df
-                >>>    count                 timestamp      user_name
-                    0      7  2013-10-04T00:00:00.000Z         user_1
-                    1      6  2013-10-04T00:00:00.000Z         user_2
-        """
-        import pandas
-
-        if self.result:
-            if self.query_type == "timeseries":
-                nres = [list(v['result'].items()) + [('timestamp', v['timestamp'])]
-                        for v in self.result]
-                nres = [dict(v) for v in nres]
-            elif self.query_type == "topN":
-                nres = []
-                for item in self.result:
-                    timestamp = item['timestamp']
-                    results = item['result']
-                    tres = [dict(list(res.items()) + [('timestamp', timestamp)])
-                            for res in results]
-                    nres += tres
-            elif self.query_type == "groupBy":
-                nres = [list(v['event'].items()) + [('timestamp', v['timestamp'])]
-                        for v in self.result]
-                nres = [dict(v) for v in nres]
-            elif self.query_type == "select":
-                nres = []
-                for item in self.result:
-                    nres += [e.get('event') for e in item['result'].get('events')]
-            elif self.query_type == "scan":
-                nres = []
-                for item in self.result:
-                    nres += [e for e in item.get('events')]
-            else:
-                raise NotImplementedError(
-                    'Pandas export not implemented for query '
-                    'type: {0}'.format(self.query_type))
-
-            df = pandas.DataFrame(nres)
-            return df
-
-    def __str__(self):
-        return str(self.result)
-
-    def __len__(self):
-        return len(self.result)
-
-    def __delitem__(self, index):
-        del self.result[index]
-
-    def insert(self, index, value):
-        self.result.insert(index, value)
-
-    def __setitem__(self, index, value):
-        self.result[index] = value
-
-    def __getitem__(self, index):
-        return self.result[index]
-
-
-class QueryBuilder(object):
-    def __init__(self):
-        self.last_query = None
-
-    @staticmethod
-    def parse_datasource(datasource, query_type):
-        """
-        Parse an input datasource object into valid dictionary
-
-        Input can be a string, in which case it is simply returned, or a
-        list, when it is turned into a UNION datasource.
-
-        :param datasource: datasource parameter
-        :param string query_type: query type
-        :raise ValueError: if input is not string or list of strings
-        """
-        if not (
-                    isinstance(datasource, six.string_types) or
-                    (
-                        isinstance(datasource, list) and
-                        all([isinstance(x, six.string_types) for x in datasource])
-                    )
-                ):
-            raise ValueError(
-                'Datasource definition not valid. Must be string or list of strings')
-        if isinstance(datasource, six.string_types):
-            return datasource
-        else:
-            return {'type': 'union', 'dataSources': datasource}
-
-    @staticmethod
-    def validate_query(query_type, valid_parts, args):
-        """
-        Validate the query parts so only allowed objects are sent.
-
-        Each query type can have an optional 'context' object attached which
-        is used to set certain query context settings, etc. timeout or
-        priority. As each query can have this object, there's
-        no need for it to be sent - it might as well be added here.
-
-        :param string query_type: a type of query
-        :param list valid_parts: a list of valid object names
-        :param dict args: the dict of args to be sent
-        :raise ValueError: if an invalid object is given
-        """
-        valid_parts = valid_parts[:] + ['context']
-        for key, val in six.iteritems(args):
-            if key not in valid_parts:
-                raise ValueError(
-                        'Query component: {0} is not valid for query type: {1}.'
-                        .format(key, query_type) +
-                        'The list of valid components is: \n {0}'
-                        .format(valid_parts))
-
-    def build_query(self, query_type, args):
-        """
-        Build query based on given query type and arguments.
-
-        :param string query_type: a type of query
-        :param dict args: the dict of args to be sent
-        :return: the resulting query
-        :rtype: Query
-        """
-        query_dict = {'queryType': query_type}
-
-        for key, val in six.iteritems(args):
-            if key == 'aggregations':
-                query_dict[key] = build_aggregators(val)
-            elif key == 'post_aggregations':
-                query_dict['postAggregations'] = \
-                    Postaggregator.build_post_aggregators(val)
-            elif key == 'context':
-                query_dict['context'] = val
-            elif key == 'datasource':
-                query_dict['dataSource'] = self.parse_datasource(val, query_type)
-            elif key == 'paging_spec':
-                query_dict['pagingSpec'] = val
-            elif key == 'limit_spec':
-                query_dict['limitSpec'] = val
-            elif key == "filter" and val is not None:
-                query_dict[key] = Filter.build_filter(val)
-            elif key == "having" and val is not None:
-                query_dict[key] = Having.build_having(val)
-            elif key == 'dimension' and val is not None:
-                query_dict[key] = build_dimension(val)
-            elif key == 'dimensions':
-                query_dict[key] = [build_dimension(v) for v in val]
-            else:
-                query_dict[key] = val
-
-        self.last_query = Query(query_dict, query_type)
-        return self.last_query
-
-    def topn(self, args):
-        """
-        A TopN query returns a set of the values in a given dimension,
-        sorted by a specified metric. Conceptually, a
-        topN can be thought of as an approximate GroupByQuery over a
-        single dimension with an Ordering spec. TopNs are
-        faster and more resource efficient than GroupBy for this use case.
-
-        :param dict args: dict of arguments
-
-        :return: topn query
-        :rtype: Query
-        """
-        query_type = 'topN'
-        valid_parts = [
-            'datasource', 'granularity', 'filter', 'aggregations',
-            'post_aggregations', 'intervals', 'dimension', 'threshold',
-            'metric'
-        ]
-        self.validate_query(query_type, valid_parts, args)
-        return self.build_query(query_type, args)
-
-    def timeseries(self, args):
-        """
-        A timeseries query returns the values of the requested metrics
-        (in aggregate) for each timestamp.
-
-        :param dict args: dict of args
-
-        :return: timeseries query
-        :rtype: Query
-        """
-        query_type = 'timeseries'
-        valid_parts = [
-            'datasource', 'granularity', 'filter', 'aggregations', 'descending',
-            'post_aggregations', 'intervals'
-        ]
-        self.validate_query(query_type, valid_parts, args)
-        return self.build_query(query_type, args)
-
-    def groupby(self, args):
-        """
-        A group-by query groups a results set (the requested aggregate
-        metrics) by the specified dimension(s).
-
-        :param dict args: dict of args
-
-        :return: group by query
-        :rtype: Query
-        """
-        query_type = 'groupBy'
-        valid_parts = [
-            'datasource', 'granularity', 'filter', 'aggregations',
-            'having', 'post_aggregations', 'intervals', 'dimensions',
-            'limit_spec',
-        ]
-        self.validate_query(query_type, valid_parts, args)
-        return self.build_query(query_type, args)
-
-    def segment_metadata(self, args):
-        """
-        * Column type
-        * Estimated size in bytes
-        * Estimated size in bytes of each column
-        * Interval the segment covers
-        * Segment ID
-
-        :param dict args: dict of args
-
-        :return: segment metadata query
-        :rtype: Query
-        """
-        query_type = 'segmentMetadata'
-        valid_parts = ['datasource', 'intervals', 'analysisTypes', 'merge']
-        self.validate_query(query_type, valid_parts, args)
-        return self.build_query(query_type, args)
-
-    def time_boundary(self, args):
-        """
-        A time boundary query returns the min and max timestamps present in a data source.
-
-        :param dict args: dict of args
-
-        :return: time boundary query
-        :rtype: Query
-        """
-        query_type = 'timeBoundary'
-        valid_parts = ['datasource']
-        self.validate_query(query_type, valid_parts, args)
-        return self.build_query(query_type, args)
-
-    def select(self, args):
-        """
-        A select query returns raw Druid rows and supports pagination.
-
-        :param dict args: dict of args
-
-        :return: select query
-        :rtype: Query
-        """
-        query_type = 'select'
-        valid_parts = [
-            'datasource', 'granularity', 'filter', 'dimensions', 'metrics',
-            'paging_spec', 'intervals'
-        ]
-        self.validate_query(query_type, valid_parts, args)
-        return self.build_query(query_type, args)
-
-    def search(self, args):
-        """
-        A search query returns dimension values that match the search specification.
-
-        :param dict args: dict of args
-
-        :return: search query
-        :rtype: Query
-        """
-        query_type = 'search'
-        valid_parts = [
-            'datasource', 'granularity', 'filter', 'searchDimensions', 'query',
-            'limit', 'intervals', 'sort'
-        ]
-        self.validate_query(query_type, valid_parts, args)
-        return self.build_query(query_type, args)
-
-    def scan(self, args):
-        """
-        A scan query returns raw Druid rows
-
-        :param dict args: dict of args
-
-        :return: select query
-        :rtype: Query
-        """
-        query_type = 'scan'
-        valid_parts = [
-            'datasource', 'granularity', 'filter', 'dimensions', 'metrics',
-            'intervals', 'limit',
-        ]
-        self.validate_query(query_type, valid_parts, args)
-        return self.build_query(query_type, args)
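
A behaviour of the removed `QueryBuilder` that callers depended on: `parse_datasource` accepts either a single datasource name or a list of names, turning the latter into a Druid union datasource. The branch restated as a standalone sketch:

```python
import six

def parse_datasource(datasource):
    # Mirrors QueryBuilder.parse_datasource: strings pass through, lists of
    # strings become a union datasource, anything else is rejected.
    if isinstance(datasource, six.string_types):
        return datasource
    if isinstance(datasource, list) and all(
        isinstance(x, six.string_types) for x in datasource
    ):
        return {"type": "union", "dataSources": datasource}
    raise ValueError(
        "Datasource definition not valid. Must be string or list of strings"
    )

print(parse_datasource(["tweets", "retweets"]))
# {'type': 'union', 'dataSources': ['tweets', 'retweets']}
```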

+ 0 - 0
desktop/core/ext-py/pydruid/utils/__init__.py


+ 0 - 187
desktop/core/ext-py/pydruid/utils/filters.py

@@ -1,187 +0,0 @@
-#
-# Copyright 2013 Metamarkets Group Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-try:
-    import simplejson as json
-except ImportError:
-    import json
-
-from .dimensions import build_dimension
-
-
-class Filter:
-
-    # filter types supporting extraction function
-    _FILTERS_WITH_EXTR_FN = ('selector', 'regex', 'javascript', 'in', 'bound',
-                             'interval', 'extraction')
-
-    def __init__(self, extraction_function=None, **args):
-
-        type_ = args.get('type', 'selector')
-
-        if extraction_function is not None:
-            if type_ not in self._FILTERS_WITH_EXTR_FN:
-                raise ValueError('Filter of type {0} doesn\'t support '
-                                 'extraction function'.format(type_))
-        elif type_ == 'extraction':
-            raise ValueError('Filter of type extraction requires extraction '
-                             'function')
-
-        self.extraction_function = extraction_function
-
-        self.filter = {"filter": {"type": type_}}
-
-        if type_ == "selector":
-            self.filter["filter"].update({"dimension": args["dimension"],
-                                          "value": args["value"]})
-        elif type_ == "javascript":
-            self.filter["filter"].update({"dimension": args["dimension"],
-                                          "function": args["function"]})
-        elif type_ == "and":
-            self.filter["filter"].update({"fields": args['fields']})
-        elif type_ == "or":
-            self.filter["filter"].update({"fields": args["fields"]})
-        elif type_ == "not":
-            self.filter["filter"].update({"field": args["field"]})
-        elif type_ == "in":
-            self.filter["filter"].update({"dimension": args["dimension"],
-                                          "values": args["values"]})
-        elif type_ == "regex":
-            self.filter['filter'].update({"dimension": args["dimension"],
-                                          "pattern": args["pattern"]})
-        elif type_ == "bound":
-            self.filter["filter"].update({
-                "dimension": args["dimension"],
-                "lower": args["lower"],
-                "lowerStrict": args["lowerStrict"],
-                "upper": args["upper"],
-                "upperStrict": args["upperStrict"],
-                "alphaNumeric": args["alphaNumeric"]
-            })
-        elif type_ == "columnComparison":
-            self.filter['filter'].update({'dimensions': args['dimensions']})
-        elif type_ == "interval":
-            self.filter['filter'].update({'dimension': args['dimension'],
-                                          'intervals': args['intervals']})
-        elif type_ == "extraction":
-            self.filter["filter"].update({"dimension": args["dimension"],
-                                          "value": args["value"]})
-        else:
-            raise NotImplementedError(
-                'Filter type: {0} does not exist'.format(type_))
-
-    def show(self):
-        print(json.dumps(self.filter, indent=4))
-
-    def __and__(self, x):
-        if self.filter['filter']['type'] == 'and':
-            # if `self` is already `and`, don't create a new filter
-            # but just append `x` to the filter fields.
-            self.filter['filter']['fields'].append(x)
-            return self
-        return Filter(type="and", fields=[self, x])
-
-    def __or__(self, x):
-        if self.filter['filter']['type'] == 'or':
-            # if `self` is already `or`, don't create a new filter
-            # but just append `x` to the filter fields.
-            self.filter['filter']['fields'].append(x)
-            return self
-        return Filter(type="or", fields=[self, x])
-
-    def __invert__(self):
-        return Filter(type="not", field=self)
-
-    @staticmethod
-    def build_filter(filter_obj):
-        filter = filter_obj.filter['filter']
-        if filter['type'] in ['and', 'or']:
-            filter = filter.copy()  # make a copy so we don't overwrite `fields`
-            filter['fields'] = [Filter.build_filter(f) for f in filter['fields']]
-        elif filter['type'] in ['not']:
-            filter = filter.copy()
-            filter['field'] = Filter.build_filter(filter['field'])
-        elif filter['type'] in ['columnComparison']:
-            filter = filter.copy()
-            filter['dimensions'] = [build_dimension(d) for d in filter['dimensions']]
-
-        if filter_obj.extraction_function is not None:
-            if filter is filter_obj.filter['filter']:  # copy if not yet copied
-                filter = filter.copy()
-            filter['extractionFn'] = filter_obj.extraction_function.build()
-
-        return filter
-
-
-class Dimension:
-    def __init__(self, dim):
-        self.dimension = dim
-
-    def __eq__(self, other):
-        return Filter(dimension=self.dimension, value=other)
-
-    def __ne__(self, other):
-        return ~Filter(dimension=self.dimension, value=other)
-
-
-class JavaScript:
-    def __init__(self, dim):
-        self.dimension = dim
-
-    def __eq__(self, func):
-        return Filter(type='javascript', dimension=self.dimension, function=func)
-
-
-class Bound(Filter):
-    """
-    Bound filter can be used to filter by comparing dimension values to an
-    upper value or/and a lower value.
-
-    :ivar str dimension: Dimension to filter on.
-    :ivar str lower: Lower bound.
-    :ivar str upper: Upper bound.
-    :ivar bool lowerStrict: Make the lower bound exclusive (strict
-                            comparison). Default: False
-    :ivar bool upperStrict: Make the upper bound exclusive (strict
-                            comparison). Default: False
-    :ivar bool alphaNumeric: Compare values alphanumerically instead of
-                             lexicographically. Default: False
-    :ivar ExtractionFunction extraction_function: extraction function to use,
-                                                  if not None
-    """
-    def __init__(
-            self, dimension, lower, upper, lowerStrict=False,
-            upperStrict=False, alphaNumeric=False, extraction_function=None):
-        Filter.__init__(
-            self,
-            type='bound', dimension=dimension,
-            lower=lower, upper=upper,
-            lowerStrict=lowerStrict, upperStrict=upperStrict,
-            alphaNumeric=alphaNumeric, extraction_function=extraction_function)
-
-
-class Interval(Filter):
-    """
-    Interval filter can be used to filter by comparing dimension values
-    (typically __time) to a list of intervals.
-
-    :ivar str dimension: Dimension to filter on.
-    :ivar list intervals: List of ISO-8601 intervals; rows whose values fall
-                          inside any of the intervals match the filter.
-    :ivar ExtractionFunction extraction_function: extraction function to use,
-                                                  if not None
-    """
-    def __init__(self, dimension, intervals, extraction_function=None):
-
-        Filter.__init__(
-            self,
-            type='interval', dimension=dimension,
-            intervals=intervals, extraction_function=extraction_function)
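
For reference, a minimal sketch of how the operator API of the removed `filters.py` was typically used. It assumes the pre-0.5.11 import path `pydruid.utils.filters` implied by the file location above; the dimension names and interval are made up for illustration.

```python
# Minimal usage sketch for the removed filters module (assumed import
# path pydruid.utils.filters; dimension names are illustrative only).
from pydruid.utils.filters import Bound, Dimension, Filter, Interval

# Dimension overloads == and != to produce selector filters, and Filter
# overloads &, | and ~ for boolean composition.
combined = (Dimension("country") == "US") & ~(Dimension("device") == "tablet")

# Bound keeps rows whose values lie between the bounds; upperStrict=True
# excludes the upper bound itself.
age_range = Bound(dimension="age", lower="18", upper="65", upperStrict=True)

# Interval matches rows whose __time falls inside any listed interval.
last_month = Interval(
    dimension="__time",
    intervals=["2019-12-01T00:00:00/2020-01-01T00:00:00"],
)

# `&` on a filter that is already an `and` appends to its fields
# instead of nesting a new filter.
query_filter = combined & age_range & last_month

# build_filter recursively serializes the tree into the JSON filter
# spec sent to Druid.
spec = Filter.build_filter(query_filter)
print(spec["type"])  # "and"
```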

+ 0 - 85
desktop/core/ext-py/pydruid/utils/having.py

@@ -1,85 +0,0 @@
-#
-# Copyright 2013 Metamarkets Group Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-try:
-    import simplejson as json
-except ImportError:
-    import json
-
-
-class Having:
-    def __init__(self, **args):
-
-        if args['type'] in ('equalTo', 'lessThan', 'greaterThan'):
-            self.having = {'having': {'type': args['type'],
-                                      'aggregation': args['aggregation'],
-                                      'value': args['value']}}
-
-        elif args['type'] == 'and':
-            self.having = {'having': {'type': 'and',
-                                      'havingSpecs': args['havingSpecs']}}
-
-        elif args['type'] == 'or':
-            self.having = {'having': {'type': 'or',
-                                      'havingSpecs': args['havingSpecs']}}
-
-        elif args['type'] == 'not':
-            self.having = {'having': {'type': 'not',
-                                      'havingSpec': args['havingSpec']}}
-        else:
-            raise NotImplementedError(
-                'Having type: {0} does not exist'.format(args['type']))
-
-    def show(self):
-        print(json.dumps(self.having, indent=4))
-
-    def _combine(self, typ, x):
-        # collapse nested and/ors
-        if self.having['having']['type'] == typ:
-            havingSpecs = self.having['having']['havingSpecs'] + [x.having['having']]
-            return Having(type=typ, havingSpecs=havingSpecs)
-        elif x.having['having']['type'] == typ:
-            havingSpecs = [self.having['having']] + x.having['having']['havingSpecs']
-            return Having(type=typ, havingSpecs=havingSpecs)
-        else:
-            return Having(type=typ,
-                          havingSpecs=[self.having['having'], x.having['having']])
-
-    def __and__(self, x):
-        return self._combine('and', x)
-
-    def __or__(self, x):
-        return self._combine('or', x)
-
-    def __invert__(self):
-        return Having(type='not', havingSpec=self.having['having'])
-
-    @staticmethod
-    def build_having(having_obj):
-        return having_obj.having['having']
-
-
-class Aggregation:
-    def __init__(self, agg):
-        self.aggregation = agg
-
-    def __eq__(self, other):
-        return Having(type='equalTo', aggregation=self.aggregation, value=other)
-
-    def __lt__(self, other):
-        return Having(type='lessThan', aggregation=self.aggregation, value=other)
-
-    def __gt__(self, other):
-        return Having(type='greaterThan', aggregation=self.aggregation, value=other)
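
And a matching sketch for the removed `having.py`: Aggregation's comparison operators build having specs, and Having's `&`/`|`/`~` compose them, with `_combine` collapsing nested and/or clauses into a flat list. The import path and aggregator names below are assumptions for illustration.

```python
# Minimal usage sketch for the removed having module (assumed import
# path pydruid.utils.having; aggregator names are illustrative only).
from pydruid.utils.having import Aggregation, Having

# Aggregation overloads ==, < and > into equalTo/lessThan/greaterThan
# having specs; Having overloads &, | and ~ for composition.
clause = (Aggregation("total_revenue") > 1000) & (Aggregation("rows") < 50)

# Nested and/or clauses are collapsed, so this stays a single flat 'and'.
spec = Having.build_having(clause)
print(spec)
# Roughly:
# {'type': 'and', 'havingSpecs': [
#     {'type': 'greaterThan', 'aggregation': 'total_revenue', 'value': 1000},
#     {'type': 'lessThan', 'aggregation': 'rows', 'value': 50}]}
```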