.. include:: header.txt

==============
Introduction
==============

Threads, processes and the GIL
==============================

To run more than one piece of code at the same time on the same
computer one has the choice of either using multiple processes or
multiple threads.

Although a program can be made up of multiple processes, these
processes are in effect completely independent of one another:
different processes are not able to cooperate with one another unless
one sets up some means of communication between them (such as by using
sockets). If a lot of data must be transferred between processes then
this can be inefficient.

On the other hand, multiple threads within a single process are
intimately connected: they share their data but often can interfere
badly with one another. It is often argued that the only way to make
multithreaded programming "easy" is to avoid relying on any shared
state and for the threads to only communicate by passing messages to
each other.

CPython has a *Global Interpreter Lock* (GIL) which in many ways makes
threading easier than it is in most languages by making sure that only
one thread can manipulate the interpreter's objects at a time. As a
result, it is often safe to let multiple threads access data without
using any additional locking as one would need to in a language such
as C.

One downside of the GIL is that on multi-processor (or multi-core)
systems a multithreaded Python program can only make use of one
processor at a time. This is a problem that can be overcome by using
multiple processes instead.

Python gives little direct support for writing programs using multiple
processes. This package allows one to write multi-process programs
using much the same API that one uses for writing threaded programs.
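
To give a feel for how close the two APIs are, a minimal sketch (using
only the `Process` constructor introduced below and its
`threading.Thread` counterpart) can run the same function once in a
thread and once in a child process; only the class being instantiated
changes::

    from threading import Thread
    from processing import Process

    def worker(name):
        print 'hello from', name

    if __name__ == '__main__':
        t = Thread(target=worker, args=('a thread',))
        p = Process(target=worker, args=('a child process',))
        t.start(); p.start()
        t.join(); p.join()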

Forking and spawning
====================

There are two ways of creating a new process in Python:

* The current process can *fork* a new child process by using the
  `os.fork()` function. This effectively creates an identical copy
  of the current process which is now able to go off and perform some
  task set by the parent process. This means that the child process
  inherits *copies* of all variables that the parent process had.
  (A bare-bones sketch of calling `os.fork()` directly appears after
  this list.)

  However, `os.fork()` is not available on every platform: in
  particular Windows does not support it.

* Alternatively, the current process can spawn a completely new Python
  interpreter by using the `subprocess` module or one of the
  `os.spawn*()` functions.

  Getting this new interpreter into a fit state to perform the task
  set for it by its parent process is, however, a bit of a challenge.
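
As promised above, a bare-bones sketch of the first approach, using
only standard `os` functions, looks something like this; the child
starts out with a copy of the parent's state and the two processes
then run independently::

    import os

    pid = os.fork()           # returns 0 in the child, the child's pid in the parent
    if pid == 0:
        # child process: works with *copies* of the parent's variables
        print 'child running with pid', os.getpid()
        os._exit(0)           # leave the child without running the parent's cleanup
    else:
        # parent process: wait for the child to finish
        os.waitpid(pid, 0)
        print 'child', pid, 'has finished'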

The `processing` package uses `os.fork()` if it is available since
it makes life a lot simpler. Forking the process is also more
efficient in terms of memory usage and the time needed to create the
new process.

The Process class
=================

In the `processing` package processes are spawned by creating a
`Process` object and then calling its `start()` method.
`processing.Process` follows the API of `threading.Thread`. A
trivial example of a multiprocess program is ::

    from processing import Process

    def f(name):
        print 'hello', name

    if __name__ == '__main__':
        p = Process(target=f, args=('bob',))
        p.start()
        p.join()

Here the function `f` is run in a child process.

For an explanation of why (on Windows) the `if __name__ == '__main__'`
part is necessary see `Programming guidelines
<programming-guidelines.html>`_.

Exchanging objects between processes
====================================

`processing` supports two types of communication channel between
processes:

**Queues**:

    The function `Queue()` returns a near clone of `Queue.Queue`
    -- see the Python standard documentation. For example ::

        from processing import Process, Queue

        def f(q):
            q.put([42, None, 'hello'])

        if __name__ == '__main__':
            q = Queue()
            p = Process(target=f, args=(q,))
            p.start()
            print q.get()    # prints "[42, None, 'hello']"
            p.join()

    Queues are thread and process safe. See `Queues
    <processing-ref.html#pipes-and-queues>`_.

**Pipes**:

    The `Pipe()` function returns a pair of connection objects
    connected by a pipe which by default is duplex (two-way). For
    example ::

        from processing import Process, Pipe

        def f(conn):
            conn.send([42, None, 'hello'])
            conn.close()

        if __name__ == '__main__':
            parent_conn, child_conn = Pipe()
            p = Process(target=f, args=(child_conn,))
            p.start()
            print parent_conn.recv()    # prints "[42, None, 'hello']"
            p.join()

    The two connection objects returned by `Pipe()` represent the two
    ends of the pipe. Each connection object has `send()` and
    `recv()` methods (among others). Note that data in a pipe may
    become corrupted if two processes (or threads) try to read from or
    write to the *same* end of the pipe at the same time. Of course
    there is no risk of corruption from processes using different ends
    of the pipe at the same time. See `Pipes
    <processing-ref.html#pipes-and-queues>`_.

Synchronization between processes
=================================

`processing` contains equivalents of all the synchronization
primitives from `threading`. For instance one can use a lock to
ensure that only one process prints to standard output at a time::

    from processing import Process, Lock

    def f(l, i):
        l.acquire()
        print 'hello world', i
        l.release()

    if __name__ == '__main__':
        lock = Lock()
        for num in range(10):
            Process(target=f, args=(lock, num)).start()

Without using the lock output from the different processes is liable
to get all mixed up.
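
The same primitive can also serialize access to one end of a pipe: as
noted earlier, corruption is only a risk when two processes use the
*same* end simultaneously, so one possible pattern (a minimal sketch
combining the `Pipe()` and `Lock()` objects shown above) is to wrap
each `send()` in an acquire/release pair::

    from processing import Process, Pipe, Lock

    def worker(lock, conn, i):
        # several workers share the *same* write end, so take the lock first
        lock.acquire()
        try:
            conn.send(['hello from worker', i])
        finally:
            lock.release()

    if __name__ == '__main__':
        parent_conn, child_conn = Pipe()
        lock = Lock()
        workers = [Process(target=worker, args=(lock, child_conn, i))
                   for i in range(3)]
        for w in workers:
            w.start()
        for i in range(3):
            print parent_conn.recv()
        for w in workers:
            w.join()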

Sharing state between processes
===============================

As mentioned above, when doing concurrent programming it is usually
best to avoid using shared state as far as possible. This is
particularly true when using multiple processes.

However, if you really do need to use some shared data then
`processing` provides a couple of ways of doing so.

**Shared memory**:

    Data can be stored in a shared memory map using `Value` or `Array`.
    For example the following code ::

        from processing import Process, Value, Array

        def f(n, a):
            n.value = 3.1415927
            for i in range(len(a)):
                a[i] = -a[i]

        if __name__ == '__main__':
            num = Value('d', 0.0)
            arr = Array('i', range(10))
            p = Process(target=f, args=(num, arr))
            p.start()
            p.join()
            print num.value
            print arr[:]

    will print ::

        3.1415927
        [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]

    The `'d'` and `'i'` arguments used when creating `num` and `arr`
    are typecodes of the kind used by the `array` module: `'d'`
    indicates a double precision float and `'i'` indicates a signed
    integer. These shared objects will be process and thread safe.

    For more flexibility in using shared memory one can use the
    `processing.sharedctypes` module which supports the creation of
    arbitrary `ctypes objects allocated from shared memory
    <sharedctypes.html>`_.
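
    As a rough sketch of how that might look -- assuming, as the linked
    page describes, that the module provides a `RawArray()` factory
    accepting a ctypes type and an initializer -- one could write ::

        from ctypes import c_double
        from processing import Process
        from processing.sharedctypes import RawArray   # name assumed from the linked page

        def f(arr):
            # the child doubles every element of the shared array in place
            for i in range(len(arr)):
                arr[i] = arr[i] * 2

        if __name__ == '__main__':
            arr = RawArray(c_double, [0.0, 0.5, 1.0])   # shared array of C doubles
            p = Process(target=f, args=(arr,))
            p.start()
            p.join()
            print list(arr)    # expected to print "[0.0, 1.0, 2.0]"

    A raw object like this is unsynchronized, so any locking it needs
    has to be supplied by the caller.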

**Server process**:

    A manager object returned by `Manager()` controls a server process
    which holds python objects and allows other processes to manipulate
    them using proxies.

    A manager returned by `Manager()` will support types `list`,
    `dict`, `Namespace`, `Lock`, `RLock`, `Semaphore`,
    `BoundedSemaphore`, `Condition`, `Event`, `Queue`, `Value`
    and `Array`. For example::

        from processing import Process, Manager

        def f(d, l):
            d[1] = '1'
            d['2'] = 2
            d[0.25] = None
            l.reverse()

        if __name__ == '__main__':
            manager = Manager()
            d = manager.dict()
            l = manager.list(range(10))
            p = Process(target=f, args=(d, l))
            p.start()
            p.join()
            print d
            print l

    will print ::

        {0.25: None, 1: '1', '2': 2}
        [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

    Creating managers which support other types is not hard --- see
    `Customized managers <manager-objects.html#customized-managers>`_.

    Server process managers are more flexible than using shared memory
    objects because they can be made to support arbitrary object types.
    Also, a single manager can be shared by processes on different
    computers over a network. They are, however, slower than using
    shared memory. See `Server process managers
    <manager-objects.html#server-process-managers>`_.

Using a pool of workers
=======================

The `Pool()` function returns an object representing a pool of worker
processes. It has methods which allow tasks to be offloaded to the
worker processes in a few different ways.

For example::

    from processing import Pool

    def f(x):
        return x*x

    if __name__ == '__main__':
        pool = Pool(processes=4)            # start 4 worker processes
        result = pool.applyAsync(f, [10])   # evaluate "f(10)" asynchronously
        print result.get(timeout=1)         # prints "100" unless your computer is *very* slow
        print pool.map(f, range(10))        # prints "[0, 1, 4,..., 81]"

See `Process pools <pool-objects.html>`_.

Speed
=====

The following benchmarks were performed on a single core Pentium 4,
2.5GHz laptop running Windows XP and Ubuntu Linux 6.10 --- see
`benchmarks.py <../examples/benchmarks.py>`_.

*Number of 256 byte string objects passed between processes/threads per sec*:

================================== ========== ==================
Connection type                     Windows    Linux
================================== ========== ==================
Queue.Queue                         49,000     17,000-50,000 [1]_
processing.Queue                    22,000     21,000
Queue managed by server             6,900      6,500
processing.Pipe                     52,000     57,000
================================== ========== ==================

.. [1] For some reason the performance of `Queue.Queue` is very
   variable on Linux.

*Number of acquires/releases of a lock per sec*:

============================== ========== ==========
Lock type                       Windows    Linux
============================== ========== ==========
threading.Lock                  850,000    560,000
processing.Lock                 420,000    510,000
Lock managed by server          10,000     8,400
threading.RLock                 93,000     76,000
processing.RLock                420,000    500,000
RLock managed by server         8,800      7,400
============================== ========== ==========

*Number of interleaved waits/notifies per sec on a
condition variable by two processes*:

============================== ========== ==========
Condition type                  Windows    Linux
============================== ========== ==========
threading.Condition             27,000     31,000
processing.Condition            26,000     25,000
Condition managed by server     6,600      6,000
============================== ========== ==========

*Number of integers retrieved from a sequence per sec*:

============================== ========== ==========
Sequence type                   Windows    Linux
============================== ========== ==========
list                            6,400,000  5,100,000
unsynchronized shared array     3,900,000  3,100,000
synchronized shared array       200,000    220,000
list managed by server          20,000     17,000
============================== ========== ==========

.. _Prev: index.html
.. _Up: index.html
.. _Next: processing-ref.html