Metadata-Version: 2.1
Name: pydruid
Version: 0.5.11
Summary: A Python connector for Druid.
Home-page: https://druid.apache.org
Author: Druid Developers
Author-email: druid-development@googlegroups.com
License: Apache License, Version 2.0
Project-URL: Bug Tracker, https://github.com/druid-io/pydruid/issues
Project-URL: Documentation, https://pythonhosted.org/pydruid/
Project-URL: Source Code, https://github.com/druid-io/pydruid
Description: # pydruid
pydruid exposes a simple API to create, execute, and analyze [Druid](http://druid.io/) queries. pydruid can parse query results into [Pandas](http://pandas.pydata.org/) DataFrame objects for subsequent data analysis -- this offers a tight integration between [Druid](http://druid.io/), the [SciPy](http://www.scipy.org/stackspec.html) stack (for scientific computing) and [scikit-learn](http://scikit-learn.org/stable/) (for machine learning). pydruid can export query results into TSV or JSON for further processing with your favorite tool, e.g., R, Julia, Matlab, Excel. It provides both synchronous and asynchronous clients.

Additionally, pydruid implements the [Python DB API 2.0](https://www.python.org/dev/peps/pep-0249/), a [SQLAlchemy dialect](http://docs.sqlalchemy.org/en/latest/dialects/), and a command-line interface to interact with Druid.

To install:
```bash
pip install pydruid
# or, if you intend to use the asynchronous client
pip install pydruid[async]
# or, if you intend to export query results into pandas
pip install pydruid[pandas]
# or, if you intend to do both
pip install pydruid[async, pandas]
# or, if you want to use the SQLAlchemy engine
pip install pydruid[sqlalchemy]
# or, if you want to use the CLI
pip install pydruid[cli]
```
Documentation: https://pythonhosted.org/pydruid/.

# examples

The following examples show how to execute and analyze the results of three types of queries: timeseries, topN, and groupby. We will use these queries to ask simple questions about Twitter's public data set.

## timeseries

What was the average tweet length, per day, surrounding the 2014 Sochi Olympics?
```python
from pydruid.client import *
from pydruid.utils.aggregators import doublesum
from pydruid.utils.filters import Dimension
from pydruid.utils.postaggregator import Field
from pylab import plt

query = PyDruid(druid_url_goes_here, 'druid/v2')

ts = query.timeseries(
    datasource='twitterstream',
    granularity='day',
    intervals='2014-02-02/p4w',
    aggregations={'length': doublesum('tweet_length'), 'count': doublesum('count')},
    post_aggregations={'avg_tweet_length': (Field('length') / Field('count'))},
    filter=Dimension('first_hashtag') == 'sochi2014'
)

df = query.export_pandas()
df['timestamp'] = df['timestamp'].map(lambda x: x.split('T')[0])
df.plot(x='timestamp', y='avg_tweet_length', ylim=(80, 140), rot=20,
        title='Sochi 2014')
plt.ylabel('avg tweet length (chars)')
plt.show()
```
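The intro mentions TSV export as well. As a minimal sketch, assuming the client's `export_tsv` helper mirrors `export_pandas` (the output path is illustrative only), the same result set can be written to disk for use in R, Julia, etc.:

```python
# Write the most recent query result to a TSV file
# (assumed helper; path name is illustrative).
query.export_tsv('avg_tweets.tsv')
```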
![alt text](https://github.com/metamx/pydruid/raw/master/docs/figures/avg_tweet_length.png "Avg. tweet length")

## topN

Who were the top ten mentions (@user_name) during the 2014 Oscars?
```python
top = query.topn(
    datasource='twitterstream',
    granularity='all',
    intervals='2014-03-03/p1d',  # utc time of 2014 oscars
    aggregations={'count': doublesum('count')},
    dimension='user_mention_name',
    filter=(Dimension('user_lang') == 'en') & (Dimension('first_hashtag') == 'oscars') &
           (Dimension('user_time_zone') == 'Pacific Time (US & Canada)') &
           ~(Dimension('user_mention_name') == 'No Mention'),
    metric='count',
    threshold=10
)

df = query.export_pandas()
print(df)

   count                 timestamp user_mention_name
0   1303  2014-03-03T00:00:00.000Z      TheEllenShow
1     44  2014-03-03T00:00:00.000Z        TheAcademy
2     21  2014-03-03T00:00:00.000Z               MTV
3     21  2014-03-03T00:00:00.000Z         peoplemag
4     17  2014-03-03T00:00:00.000Z               THR
5     16  2014-03-03T00:00:00.000Z      ItsQueenElsa
6     16  2014-03-03T00:00:00.000Z           eonline
7     15  2014-03-03T00:00:00.000Z       PerezHilton
8     14  2014-03-03T00:00:00.000Z     realjohngreen
9     12  2014-03-03T00:00:00.000Z       KevinSpacey
```
## groupby

What does the social network of users replying to other users look like?
```python
from igraph import *
from cairo import *
from pandas import concat

group = query.groupby(
    datasource='twitterstream',
    granularity='hour',
    intervals='2013-10-04/pt12h',
    dimensions=["user_name", "reply_to_name"],
    filter=(~(Dimension("reply_to_name") == "Not A Reply")) &
           (Dimension("user_location") == "California"),
    aggregations={"count": doublesum("count")}
)

df = query.export_pandas()

# map names to categorical variables with a lookup table
names = concat([df['user_name'], df['reply_to_name']]).unique()
nameLookup = dict([pair[::-1] for pair in enumerate(names)])
df['user_name_lookup'] = df['user_name'].map(nameLookup.get)
df['reply_to_name_lookup'] = df['reply_to_name'].map(nameLookup.get)

# create the graph with igraph
g = Graph(len(names), directed=False)
vertices = zip(df['user_name_lookup'], df['reply_to_name_lookup'])
g.vs["name"] = names
g.add_edges(vertices)
layout = g.layout_fruchterman_reingold()
plot(g, "tweets.png", layout=layout, vertex_size=2, bbox=(400, 400), margin=25, edge_width=1, vertex_color="blue")
```
![alt text](https://github.com/metamx/pydruid/raw/master/docs/figures/twitter_graph.png "Social Network")

# asynchronous client

```pydruid.async_client.AsyncPyDruid``` implements an asynchronous client. To achieve that, it utilizes an asynchronous
HTTP client from the ```Tornado``` framework. The asynchronous client is suitable for use with async frameworks such as Tornado
and provides much better performance at scale. It lets you serve multiple requests at the same time, without blocking on
Druid executing your queries.

## example
```python
from tornado import gen
from pydruid.async_client import AsyncPyDruid
from pydruid.utils.aggregators import doublesum
from pydruid.utils.filters import Dimension

client = AsyncPyDruid(url_to_druid_broker, 'druid/v2')

@gen.coroutine
def your_asynchronous_method_serving_top10_mentions_for_day(day):
    top_mentions = yield client.topn(
        datasource='twitterstream',
        granularity='all',
        intervals="%s/p1d" % (day, ),
        aggregations={'count': doublesum('count')},
        dimension='user_mention_name',
        filter=(Dimension('user_lang') == 'en') & (Dimension('first_hashtag') == 'oscars') &
               (Dimension('user_time_zone') == 'Pacific Time (US & Canada)') &
               ~(Dimension('user_mention_name') == 'No Mention'),
        metric='count',
        threshold=10)
    # asynchronously return results
    # can be simply ```return top_mentions``` in python 3.x
    raise gen.Return(top_mentions)
```
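Outside of a running Tornado application, you can drive such a coroutine to completion with Tornado's `IOLoop`; a minimal sketch (the date argument is illustrative):

```python
from tornado.ioloop import IOLoop

# run_sync starts the loop, runs the coroutine, and returns its result
top = IOLoop.current().run_sync(
    lambda: your_asynchronous_method_serving_top10_mentions_for_day('2014-03-03'))
```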
# thetaSketches

Theta sketch post-aggregators are built slightly differently from normal post-aggregators, as they have different operators.
Note: you must have the ```druid-datasketches``` extension loaded into your Druid cluster in order to use these.
See the [Druid datasketches](http://druid.io/docs/latest/development/extensions-core/datasketches-aggregators.html) documentation for details.
```python
from pydruid.client import *
from pydruid.utils import aggregators
from pydruid.utils import filters
from pydruid.utils import postaggregator

query = PyDruid(url_to_druid_broker, 'druid/v2')

ts = query.groupby(
    datasource='test_datasource',
    granularity='all',
    intervals='2016-09-01/P1M',
    filter=(filters.Dimension('product').in_(['product_A', 'product_B'])),
    aggregations={
        'product_A_users': aggregators.filtered(
            filters.Dimension('product') == 'product_A',
            aggregators.thetasketch('user_id')
        ),
        'product_B_users': aggregators.filtered(
            filters.Dimension('product') == 'product_B',
            aggregators.thetasketch('user_id')
        )
    },
    post_aggregations={
        'both_A_and_B': postaggregator.ThetaSketchEstimate(
            postaggregator.ThetaSketch('product_A_users') & postaggregator.ThetaSketch('product_B_users')
        )
    }
)
```
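The example above uses `&` to intersect the two sketches before estimating. For a union instead, a minimal sketch (assuming the `|` operator maps to a sketch UNION the same way `&` maps to INTERSECT here):

```python
# estimated count of users who used product A or product B
# (assumes `|` builds a theta-sketch UNION post-aggregation)
either_A_or_B = postaggregator.ThetaSketchEstimate(
    postaggregator.ThetaSketch('product_A_users') | postaggregator.ThetaSketch('product_B_users')
)
```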
# DB API

```python
from pydruid.db import connect

conn = connect(host='localhost', port=8082, path='/druid/v2/sql/', scheme='http')
curs = conn.cursor()
curs.execute("""
    SELECT place,
           CAST(REGEXP_EXTRACT(place, '(.*),', 1) AS FLOAT) AS lat,
           CAST(REGEXP_EXTRACT(place, ',(.*)', 1) AS FLOAT) AS lon
      FROM places
     LIMIT 10
""")
for row in curs:
    print(row)
```
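Since the connection implements DB API 2.0, the usual cursor helpers apply as well; a short sketch:

```python
# fetchall() returns the remaining rows as a list, and
# description lists the result's column metadata (name first)
curs.execute("SELECT place FROM places LIMIT 3")
rows = curs.fetchall()
print([col[0] for col in curs.description])
```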
# SQLAlchemy

```python
from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *

engine = create_engine('druid://localhost:8082/druid/v2/sql/')  # uses HTTP by default :(
# engine = create_engine('druid+http://localhost:8082/druid/v2/sql/')
# engine = create_engine('druid+https://localhost:8082/druid/v2/sql/')

places = Table('places', MetaData(bind=engine), autoload=True)
print(select([func.count('*')], from_obj=places).scalar())
```
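Because this is a regular SQLAlchemy engine, it also plugs into tools that accept one; for example, loading a query straight into pandas:

```python
import pandas as pd

# read_sql accepts any SQLAlchemy engine/connection
df = pd.read_sql("SELECT place FROM places LIMIT 10", engine)
```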
## Column headers

In version 0.13.0 Druid SQL added support for including the column names in the
response, which can be requested via the "header" field in the request. This
helps to ensure that the cursor description is defined (which is a requirement
for SQLAlchemy query statements) regardless of whether the result set contains
any rows. Historically this was problematic for result sets which contained no
rows, as one could not infer the expected column names.

Enabling the header can be configured via the SQLAlchemy URI by using the query
parameter, i.e.,
```python
engine = create_engine('druid://localhost:8082/druid/v2/sql?header=true')
```

Note that the current default is `false` to ensure backwards compatibility, but it
should be set to `true` for Druid versions >= 0.13.0.
# Command line

```bash
$ pydruid http://localhost:8082/druid/v2/sql/
> SELECT COUNT(*) AS cnt FROM places
  cnt
-----
12345
> SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES;
TABLE_NAME
----------
test_table
COLUMNS
SCHEMATA
TABLES
> BYE;
GoodBye!
```
# Contributing

Contributions are of course welcome. We like to use `black` and `flake8`.

```bash
pip install -r requirements-dev.txt  # installs useful dev deps
pre-commit install  # installs useful commit hooks
```
Platform: UNKNOWN
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Description-Content-Type: text/markdown
Provides-Extra: pandas
Provides-Extra: async
Provides-Extra: sqlalchemy
Provides-Extra: cli