# [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") --- Process-based parallelism
**Source code:** [Lib/multiprocessing/](https://github.com/python/cpython/tree/3.7/Lib/multiprocessing/)
- - - - - -
## Introduction
[`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") is a package that supports spawning processes using an API similar to the [`threading`](threading.xhtml#module-threading "threading: Thread-based parallelism.") module. The [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") package offers both local and remote concurrency, effectively side-stepping the [Global Interpreter Lock](../glossary.xhtml#term-global-interpreter-lock) by using subprocesses instead of threads. Due to this, the [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
The [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") module also introduces APIs which do not have analogs in the [`threading`](threading.xhtml#module-threading "threading: Thread-based parallelism.") module. A prime example of this is the [`Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool") object, which offers a convenient means of parallelizing the execution of a function across multiple input values, distributing the input data across processes (data parallelism). The following example demonstrates the common practice of defining such functions in a module so that child processes can successfully import that module. This basic example of data parallelism using [`Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool"),
```
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
```
will print to standard output
```
[1, 4, 9]
```
### The `Process` class
In [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism."), processes are spawned by creating a [`Process`](#multiprocessing.Process "multiprocessing.Process") object and then calling its `start()` method. [`Process`](#multiprocessing.Process "multiprocessing.Process") follows the API of [`threading.Thread`](threading.xhtml#threading.Thread "threading.Thread"). A trivial example of a multiprocess program is:
```
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
```
To show the individual process IDs involved, here is an expanded example:
```
from multiprocessing import Process
import os

def info(title):
    print(title)
    print('module name:', __name__)
    print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    print('hello', name)

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
```
For an explanation of why the `if __name__ == '__main__'` part is necessary, see [Programming guidelines](#multiprocessing-programming).
### Contexts and start methods
Depending on the platform, [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") supports three ways to start a process. These *start methods* are
> *spawn*: The parent process starts a fresh Python interpreter process. The child process will only inherit those resources necessary to run the process object's [`run()`](#multiprocessing.Process.run "multiprocessing.Process.run") method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using *fork* or *forkserver*.
>
> Available on Unix and Windows. The default on Windows.
>
> *fork*: The parent process uses [`os.fork()`](os.xhtml#os.fork "os.fork") to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic.
>
> Available on Unix only. The default on Unix.
>
> *forkserver*: When the program starts and selects the *forkserver* start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded, so it is safe for it to use [`os.fork()`](os.xhtml#os.fork "os.fork"). No unnecessary resources are inherited.
>
> Available on Unix platforms which support passing file descriptors over Unix pipes.
Changed in version 3.4: *spawn* added on all Unix platforms, and *forkserver* added for some Unix platforms. Child processes no longer inherit all of the parent's inheritable handles on Windows.
On Unix, using the *spawn* or *forkserver* start methods will also start a *semaphore tracker* process which tracks the unlinked named semaphores created by processes of the program. When all processes have exited, the semaphore tracker unlinks any remaining semaphores. Usually there should be none, but if a process was killed by a signal there may be some "leaked" semaphores. (Unlinking the named semaphores is a serious matter since the system allows only a limited number, and they will not be automatically unlinked until the next reboot.)
To select a start method, you use [`set_start_method()`](#multiprocessing.set_start_method "multiprocessing.set_start_method") in the `if __name__ == '__main__'` clause of the main module. For example:
```
import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.Queue()
    p = mp.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()
```
[`set_start_method()`](#multiprocessing.set_start_method "multiprocessing.set_start_method") should not be used more than once in the program.
Alternatively, you can use [`get_context()`](#multiprocessing.get_context "multiprocessing.get_context") to obtain a context object. Context objects have the same API as the multiprocessing module, and allow one to use multiple start methods in the same program:
```
import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()
```
Note that objects related to one context may not be compatible with processes for a different context. In particular, locks created using the *fork* context cannot be passed to processes started using the *spawn* or *forkserver* start methods.
A library which wants to use a particular start method should probably use [`get_context()`](#multiprocessing.get_context "multiprocessing.get_context") to avoid interfering with the choice of the library user.
Warning
The `'spawn'` and `'forkserver'` start methods cannot currently be used with "frozen" executables (i.e., binaries produced by packages like **PyInstaller** and **cx\_Freeze**) on Unix. The `'fork'` start method does work.
### Exchanging objects between processes
[`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") supports two types of communication channel between processes:
**Queues**
> The [`Queue`](#multiprocessing.Queue "multiprocessing.Queue") class is a near clone of [`queue.Queue`](queue.xhtml#queue.Queue "queue.Queue"). For example:
>
>
> ```
> from multiprocessing import Process, Queue
>
> def f(q):
> q.put([42, None, 'hello'])
>
> if __name__ == '__main__':
> q = Queue()
> p = Process(target=f, args=(q,))
> p.start()
> print(q.get()) # prints "[42, None, 'hello']"
> p.join()
>
> ```
>
>
>
>
> Queues are thread and process safe.
**Pipes**
> The [`Pipe()`](#multiprocessing.Pipe "multiprocessing.Pipe") function returns a pair of connection objects connected by a pipe which by default is duplex (two-way). For example:
>
>
> ```
> from multiprocessing import Process, Pipe
>
> def f(conn):
> conn.send([42, None, 'hello'])
> conn.close()
>
> if __name__ == '__main__':
> parent_conn, child_conn = Pipe()
> p = Process(target=f, args=(child_conn,))
> p.start()
> print(parent_conn.recv()) # prints "[42, None, 'hello']"
> p.join()
>
> ```
>
>
>
>
> The two connection objects returned by [`Pipe()`](#multiprocessing.Pipe "multiprocessing.Pipe") represent the two ends of the pipe. Each connection object has `send()` and `recv()` methods (among others). Note that data in a pipe may become corrupted if two processes (or threads) try to read from or write to the *same* end of the pipe at the same time. Of course there is no risk of corruption from processes using different ends of the pipe at the same time.
### Synchronization between processes
[`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") contains equivalents of all the synchronization primitives from [`threading`](threading.xhtml#module-threading "threading: Thread-based parallelism."). For instance, one can use a lock to ensure that only one process prints to standard output at a time:
```
from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    try:
        print('hello world', i)
    finally:
        l.release()

if __name__ == '__main__':
    lock = Lock()

    for num in range(10):
        Process(target=f, args=(lock, num)).start()
```
Without using the lock, output from the different processes is liable to get all mixed up.
### Sharing state between processes
As mentioned above, when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes.
However, if you really do need to use some shared data, then [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") provides a couple of ways of doing so.
**Shared memory**
> Data can be stored in a shared memory map using [`Value`](#multiprocessing.Value "multiprocessing.Value") or [`Array`](#multiprocessing.Array "multiprocessing.Array"). For example, the following code:
>
>
> ```
> from multiprocessing import Process, Value, Array
>
> def f(n, a):
> n.value = 3.1415927
> for i in range(len(a)):
> a[i] = -a[i]
>
> if __name__ == '__main__':
> num = Value('d', 0.0)
> arr = Array('i', range(10))
>
> p = Process(target=f, args=(num, arr))
> p.start()
> p.join()
>
> print(num.value)
> print(arr[:])
>
> ```
>
>
>
>
> will print
>
>
> ```
> 3.1415927
> [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
>
> ```
>
>
>
>
> The `'d'` and `'i'` arguments used when creating `num` and `arr` are typecodes of the kind used by the [`array`](array.xhtml#module-array "array: Space efficient arrays of uniformly typed numeric values.") module: `'d'` indicates a double precision float and `'i'` indicates a signed integer. These shared objects will be process and thread safe.
>
> For more flexibility in using shared memory, one can use the [`multiprocessing.sharedctypes`](#module-multiprocessing.sharedctypes "multiprocessing.sharedctypes: Allocate ctypes objects from shared memory.") module, which supports the creation of arbitrary ctypes objects allocated from shared memory.
**Server process**
> A manager object returned by `Manager()` controls a server process which holds Python objects and allows other processes to manipulate them using proxies.
>
> A manager returned by `Manager()` will support types [`list`](stdtypes.xhtml#list "list"), [`dict`](stdtypes.xhtml#dict "dict"), [`Namespace`](#multiprocessing.managers.Namespace "multiprocessing.managers.Namespace"), [`Lock`](#multiprocessing.Lock "multiprocessing.Lock"), [`RLock`](#multiprocessing.RLock "multiprocessing.RLock"), [`Semaphore`](#multiprocessing.Semaphore "multiprocessing.Semaphore"), [`BoundedSemaphore`](#multiprocessing.BoundedSemaphore "multiprocessing.BoundedSemaphore"), [`Condition`](#multiprocessing.Condition "multiprocessing.Condition"), [`Event`](#multiprocessing.Event "multiprocessing.Event"), [`Barrier`](#multiprocessing.Barrier "multiprocessing.Barrier"), [`Queue`](#multiprocessing.Queue "multiprocessing.Queue"), [`Value`](#multiprocessing.Value "multiprocessing.Value") and [`Array`](#multiprocessing.Array "multiprocessing.Array"). For example,
>
>
> ```
> from multiprocessing import Process, Manager
>
> def f(d, l):
> d[1] = '1'
> d['2'] = 2
> d[0.25] = None
> l.reverse()
>
> if __name__ == '__main__':
> with Manager() as manager:
> d = manager.dict()
> l = manager.list(range(10))
>
> p = Process(target=f, args=(d, l))
> p.start()
> p.join()
>
> print(d)
> print(l)
>
> ```
>
>
>
>
> will print
>
>
> ```
> {0.25: None, 1: '1', '2': 2}
> [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
>
> ```
>
>
>
>
> Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory.
### Using a pool of workers
The [`Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool") class represents a pool of worker processes. It has methods which allow tasks to be offloaded to the worker processes in a few different ways.
For example:
```
from multiprocessing import Pool, TimeoutError
import time
import os

def f(x):
    return x*x

if __name__ == '__main__':
    # start 4 worker processes
    with Pool(processes=4) as pool:

        # print "[0, 1, 4,..., 81]"
        print(pool.map(f, range(10)))

        # print same numbers in arbitrary order
        for i in pool.imap_unordered(f, range(10)):
            print(i)

        # evaluate "f(20)" asynchronously
        res = pool.apply_async(f, (20,))      # runs in *only* one process
        print(res.get(timeout=1))             # prints "400"

        # evaluate "os.getpid()" asynchronously
        res = pool.apply_async(os.getpid, ()) # runs in *only* one process
        print(res.get(timeout=1))             # prints the PID of that process

        # launching multiple evaluations asynchronously *may* use more processes
        multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
        print([res.get(timeout=1) for res in multiple_results])

        # make a single worker sleep for 10 secs
        res = pool.apply_async(time.sleep, (10,))
        try:
            print(res.get(timeout=1))
        except TimeoutError:
            print("We lacked patience and got a multiprocessing.TimeoutError")

        print("For the moment, the pool remains available for more work")

    # exiting the 'with'-block has stopped the pool
    print("Now the pool is closed and no longer available")
```
Note that the methods of a pool should only ever be used by the process which created it.
Note
Functionality within this package requires that the `__main__` module be importable by the children. This is covered in [Programming guidelines](#multiprocessing-programming), however it is worth pointing out here. This means that some examples, such as the [`multiprocessing.pool.Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool") examples, will not work in the interactive interpreter. For example:
```
>>> from multiprocessing import Pool
>>> p = Pool(5)
>>> def f(x):
... return x*x
...
>>> p.map(f, [1,2,3])
Process PoolWorker-1:
Process PoolWorker-2:
Process PoolWorker-3:
Traceback (most recent call last):
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
```
(If you try this it will actually output three full tracebacks interleaved in a semi-random fashion, and then you may have to stop the master process somehow.)
## Reference
The [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") package mostly replicates the API of the [`threading`](threading.xhtml#module-threading "threading: Thread-based parallelism.") module.
### `Process` and exceptions
*class* `multiprocessing.``Process`(*group=None*, *target=None*, *name=None*, *args=()*, *kwargs={}*, *\**, *daemon=None*)Process objects represent activity that is run in a separate process. The [`Process`](#multiprocessing.Process "multiprocessing.Process") class has equivalents of all the methods of [`threading.Thread`](threading.xhtml#threading.Thread "threading.Thread").
The constructor should always be called with keyword arguments. *group* should always be `None`; it exists solely for compatibility with [`threading.Thread`](threading.xhtml#threading.Thread "threading.Thread"). *target* is the callable object to be invoked by the [`run()`](#multiprocessing.Process.run "multiprocessing.Process.run") method. It defaults to `None`, meaning nothing is called. *name* is the process name (see [`name`](#multiprocessing.Process.name "multiprocessing.Process.name") for more details). *args* is the argument tuple for the target invocation. *kwargs* is a dictionary of keyword arguments for the target invocation. If provided, the keyword-only *daemon* argument sets the process [`daemon`](#multiprocessing.Process.daemon "multiprocessing.Process.daemon") flag to `True` or `False`. If `None` (the default), this flag will be inherited from the creating process.
By default, no arguments are passed to *target*.
If a subclass overrides the constructor, it must make sure it invokes the base class constructor (`Process.__init__()`) before doing anything else to the process.
Changed in version 3.3: Added the *daemon* argument.
`run`()Method representing the process's activity.
You may override this method in a subclass. The standard [`run()`](#multiprocessing.Process.run "multiprocessing.Process.run") method invokes the callable object passed to the object's constructor as the target argument, if any, with sequential and keyword arguments taken from the *args* and *kwargs* arguments, respectively.
`start`()Start the process's activity.
This must be called at most once per process object. It arranges for the object's [`run()`](#multiprocessing.Process.run "multiprocessing.Process.run") method to be invoked in a separate process.
`join`(\[*timeout*\])If the optional argument *timeout* is `None` (the default), the method blocks until the process whose [`join()`](#multiprocessing.Process.join "multiprocessing.Process.join") method is called terminates. If *timeout* is a positive number, it blocks at most *timeout* seconds. Note that the method returns `None` if its process terminates or if the method times out. Check the process's [`exitcode`](#multiprocessing.Process.exitcode "multiprocessing.Process.exitcode") to determine if it terminated.
A process can be joined many times.
A process cannot join itself because this would cause a deadlock. It is an error to attempt to join a process before it has been started.
`name`The process's name. The name is a string used for identification purposes only. It has no semantics. Multiple processes may be given the same name.
The initial name is set by the constructor. If no explicit name is provided to the constructor, a name of the form 'Process-N1:N2:...:Nk' is constructed, where each Nk is the N-th child of its parent.
`is_alive`()Return whether the process is alive.
Roughly, a process object is alive from the moment the [`start()`](#multiprocessing.Process.start "multiprocessing.Process.start") method returns until the child process terminates.
`daemon`The process's daemon flag, a Boolean value. This must be set before [`start()`](#multiprocessing.Process.start "multiprocessing.Process.start") is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are **not** Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
In addition to the [`threading.Thread`](threading.xhtml#threading.Thread "threading.Thread") API, [`Process`](#multiprocessing.Process "multiprocessing.Process") objects also support the following attributes and methods:
`pid`Return the process ID. Before the process is spawned, this will be `None`.
`exitcode`The child's exit code. This will be `None` if the process has not yet terminated. A negative value *-N* indicates that the child was terminated by signal *N*.
`authkey`The process's authentication key (a byte string).
When [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") is initialized, the main process is assigned a random string using [`os.urandom()`](os.xhtml#os.urandom "os.urandom").
When a [`Process`](#multiprocessing.Process "multiprocessing.Process") object is created, it will inherit the authentication key of its parent process, although this may be changed by setting [`authkey`](#multiprocessing.Process.authkey "multiprocessing.Process.authkey") to another byte string.
See [Authentication keys](#multiprocessing-auth-keys).
`sentinel`A numeric handle of a system object which will become "ready" when the process ends.
You can use this value if you want to wait on several events at once using [`multiprocessing.connection.wait()`](#multiprocessing.connection.wait "multiprocessing.connection.wait"). Otherwise calling [`join()`](#multiprocessing.Process.join "multiprocessing.Process.join") is simpler.
On Windows, this is an OS handle usable with the `WaitForSingleObject` and `WaitForMultipleObjects` family of API calls. On Unix, this is a file descriptor usable with primitives from the [`select`](select.xhtml#module-select "select: Wait for I/O completion on multiple streams.") module.
New in version 3.3.
`terminate`()Terminate the process. On Unix this is done using the `SIGTERM` signal; on Windows `TerminateProcess()` is used. Note that exit handlers and finally clauses, etc., will not be executed.
Note that descendant processes of the process will *not* be terminated -- they will simply become orphaned.
Warning
If this method is used when the associated process is using a pipe or queue, then the pipe or queue is liable to become corrupted and may become unusable by other processes. Similarly, if the process has acquired a lock or semaphore etc., then terminating it is liable to cause other processes to deadlock.
`kill`()Same as [`terminate()`](#multiprocessing.Process.terminate "multiprocessing.Process.terminate") but using the `SIGKILL` signal on Unix.
New in version 3.7.
`close`()Close the [`Process`](#multiprocessing.Process "multiprocessing.Process") object, releasing all resources associated with it. [`ValueError`](exceptions.xhtml#ValueError "ValueError") is raised if the underlying process is still running. Once [`close()`](#multiprocessing.Process.close "multiprocessing.Process.close") returns successfully, most other methods and attributes of the [`Process`](#multiprocessing.Process "multiprocessing.Process") object will raise [`ValueError`](exceptions.xhtml#ValueError "ValueError").
New in version 3.7.
Note that the [`start()`](#multiprocessing.Process.start "multiprocessing.Process.start"), [`join()`](#multiprocessing.Process.join "multiprocessing.Process.join"), [`is_alive()`](#multiprocessing.Process.is_alive "multiprocessing.Process.is_alive"), [`terminate()`](#multiprocessing.Process.terminate "multiprocessing.Process.terminate") and [`exitcode`](#multiprocessing.Process.exitcode "multiprocessing.Process.exitcode") methods should only be called by the process that created the process object.
Example usage of some of the methods of [`Process`](#multiprocessing.Process "multiprocessing.Process"):
```
>>> import multiprocessing, time, signal
>>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
>>> print(p, p.is_alive())
<Process(Process-1, initial)> False
>>> p.start()
>>> print(p, p.is_alive())
<Process(Process-1, started)> True
>>> p.terminate()
>>> time.sleep(0.1)
>>> print(p, p.is_alive())
<Process(Process-1, stopped[SIGTERM])> False
>>> p.exitcode == -signal.SIGTERM
True
```
*exception* `multiprocessing.``ProcessError`The base class of all [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") exceptions.
*exception* `multiprocessing.``BufferTooShort`Exception raised by `Connection.recv_bytes_into()` when the supplied buffer object is too small for the message read.
If `e` is an instance of [`BufferTooShort`](#multiprocessing.BufferTooShort "multiprocessing.BufferTooShort"), then `e.args[0]` will give the message as a byte string.
*exception* `multiprocessing.``AuthenticationError`Raised when there is an authentication error.
*exception* `multiprocessing.``TimeoutError`Raised by methods with a timeout when the timeout expires.
### Pipes and Queues
When using multiple processes, one generally uses message passing for communication between processes and avoids having to use any synchronization primitives like locks.
For passing messages one can use [`Pipe()`](#multiprocessing.Pipe "multiprocessing.Pipe") (for a connection between two processes) or a queue (which allows multiple producers and consumers).
The [`Queue`](#multiprocessing.Queue "multiprocessing.Queue"), [`SimpleQueue`](#multiprocessing.SimpleQueue "multiprocessing.SimpleQueue") and [`JoinableQueue`](#multiprocessing.JoinableQueue "multiprocessing.JoinableQueue") types are multi-producer, multi-consumer FIFO queues modelled on the [`queue.Queue`](queue.xhtml#queue.Queue "queue.Queue") class in the standard library. They differ in that [`Queue`](#multiprocessing.Queue "multiprocessing.Queue") lacks the [`task_done()`](queue.xhtml#queue.Queue.task_done "queue.Queue.task_done") and [`join()`](queue.xhtml#queue.Queue.join "queue.Queue.join") methods introduced into Python 2.5's [`queue.Queue`](queue.xhtml#queue.Queue "queue.Queue") class.
If you use [`JoinableQueue`](#multiprocessing.JoinableQueue "multiprocessing.JoinableQueue"), then you **must** call [`JoinableQueue.task_done()`](#multiprocessing.JoinableQueue.task_done "multiprocessing.JoinableQueue.task_done") for each task removed from the queue, or else the semaphore used to count the number of unfinished tasks may eventually overflow, raising an exception.
Note that one can also create a shared queue by using a manager object -- see [Managers](#multiprocessing-managers).
Note
[`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") uses the usual [`queue.Empty`](queue.xhtml#queue.Empty "queue.Empty") and [`queue.Full`](queue.xhtml#queue.Full "queue.Full") exceptions to signal a timeout. They are not available in the [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") namespace, so you need to import them from [`queue`](queue.xhtml#module-queue "queue: A synchronized queue class.").
Note
When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences which are a little surprising, but should not cause any practical difficulties -- if they really bother you, then you can instead use a queue created with a [manager](#multiprocessing-managers).
1. After putting an object on an empty queue there may be an infinitesimal delay before the queue's [`empty()`](#multiprocessing.Queue.empty "multiprocessing.Queue.empty") method returns [`False`](constants.xhtml#False "False") and [`get_nowait()`](#multiprocessing.Queue.get_nowait "multiprocessing.Queue.get_nowait") can return without raising [`queue.Empty`](queue.xhtml#queue.Empty "queue.Empty").
2. If multiple processes are enqueuing objects, it is possible for the objects to be received at the other end out-of-order. However, objects enqueued by the same process will always be in the expected order with respect to each other.
Warning
If a process is killed using [`Process.terminate()`](#multiprocessing.Process.terminate "multiprocessing.Process.terminate") or [`os.kill()`](os.xhtml#os.kill "os.kill") while it is trying to use a [`Queue`](#multiprocessing.Queue "multiprocessing.Queue"), then the data in the queue is likely to become corrupted. This may cause any other process to get an exception when it tries to use the queue later on.
Warning
As mentioned above, if a child process has put items on a queue (and it has not used [`JoinableQueue.cancel_join_thread`](#multiprocessing.Queue.cancel_join_thread "multiprocessing.Queue.cancel_join_thread")), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic, then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See [Programming guidelines](#multiprocessing-programming).
For an example of the usage of queues for interprocess communication see [Examples](#multiprocessing-examples).
`multiprocessing.``Pipe`(\[*duplex*\])Returns a pair `(conn1, conn2)` of `Connection` objects representing the ends of a pipe.
If *duplex* is `True` (the default), then the pipe is bidirectional. If *duplex* is `False`, then the pipe is unidirectional: `conn1` can only be used for receiving messages and `conn2` can only be used for sending messages.
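As a minimal illustrative sketch (not from the original docs) of the unidirectional case, with `duplex=False` the first connection receives and the second sends:

```
from multiprocessing import Pipe

# duplex=False: recv_conn can only receive, send_conn can only send
recv_conn, send_conn = Pipe(duplex=False)
send_conn.send('ping')    # small messages are buffered by the underlying pipe
print(recv_conn.recv())   # prints 'ping'
```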
*class* `multiprocessing.``Queue`(\[*maxsize*\])Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue, a feeder thread is started which transfers objects from a buffer into the pipe.
The usual [`queue.Empty`](queue.xhtml#queue.Empty "queue.Empty") and [`queue.Full`](queue.xhtml#queue.Full "queue.Full") exceptions from the standard library's [`queue`](queue.xhtml#module-queue "queue: A synchronized queue class.") module are raised to signal timeouts.
[`Queue`](#multiprocessing.Queue "multiprocessing.Queue") implements all the methods of [`queue.Queue`](queue.xhtml#queue.Queue "queue.Queue") except for [`task_done()`](queue.xhtml#queue.Queue.task_done "queue.Queue.task_done") and [`join()`](queue.xhtml#queue.Queue.join "queue.Queue.join").
`qsize`()Return the approximate size of the queue. Because of multithreading/multiprocessing semantics, this number is not reliable.
Note that this may raise [`NotImplementedError`](exceptions.xhtml#NotImplementedError "NotImplementedError") on Unix platforms like Mac OS X where `sem_getvalue()` is not implemented.
`empty`()Return `True` if the queue is empty, `False` otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
`full`()Return `True` if the queue is full, `False` otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
`put`(*obj*\[, *block*\[, *timeout*\]\])Put obj into the queue. If the optional argument *block* is `True` (the default) and *timeout* is `None` (the default), block if necessary until a free slot is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the [`queue.Full`](queue.xhtml#queue.Full "queue.Full") exception if no free slot was available within that time. Otherwise (*block* is `False`), put an object on the queue if a free slot is immediately available, else raise the [`queue.Full`](queue.xhtml#queue.Full "queue.Full") exception (*timeout* is ignored in that case).
`put_nowait`(*obj*)Equivalent to `put(obj, False)`.
`get`(\[*block*\[, *timeout*\]\])Remove and return an item from the queue. If the optional argument *block* is `True` (the default) and *timeout* is `None` (the default), block if necessary until an item is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the [`queue.Empty`](queue.xhtml#queue.Empty "queue.Empty") exception if no item was available within that time. Otherwise (*block* is `False`), return an item if one is immediately available, else raise the [`queue.Empty`](queue.xhtml#queue.Empty "queue.Empty") exception (*timeout* is ignored in that case).
`get_nowait`()Equivalent to `get(False)`.
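The non-blocking variants can be sketched as follows (an illustrative example; note that `queue.Full` and `queue.Empty` must be imported from the standard `queue` module):

```
from multiprocessing import Queue
import queue  # the Empty/Full exceptions live here, not in multiprocessing

q = Queue(maxsize=1)
q.put_nowait('a')
try:
    q.put_nowait('b')   # only one slot: raises queue.Full immediately
except queue.Full:
    print('queue is full')

print(q.get())          # blocks until the feeder thread delivers 'a'
try:
    q.get_nowait()      # nothing left: raises queue.Empty
except queue.Empty:
    print('queue is empty')
```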
[`multiprocessing.Queue`](#multiprocessing.Queue "multiprocessing.Queue") has a few additional methods not found in [`queue.Queue`](queue.xhtml#queue.Queue "queue.Queue"). These methods are usually unnecessary for most code:
`close`()Indicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected.
`join_thread`()Join the background thread. This can only be used after [`close()`](#multiprocessing.Queue.close "multiprocessing.Queue.close") has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe.
By default, if a process is not the creator of the queue, then on exit it will attempt to join the queue's background thread. The process can call [`cancel_join_thread()`](#multiprocessing.Queue.cancel_join_thread "multiprocessing.Queue.cancel_join_thread") to make [`join_thread()`](#multiprocessing.Queue.join_thread "multiprocessing.Queue.join_thread") do nothing.
`cancel_join_thread`()Prevent [`join_thread()`](#multiprocessing.Queue.join_thread "multiprocessing.Queue.join_thread") from blocking. In particular, this prevents the background thread from being joined automatically when the process exits -- see [`join_thread()`](#multiprocessing.Queue.join_thread "multiprocessing.Queue.join_thread").
A better name for this method might be `allow_exit_without_flush()`. It is likely to cause enqueued data to be lost, and you almost certainly will not need to use it. It is really only there if you need the current process to exit immediately without waiting to flush enqueued data to the underlying pipe, and you don't care about lost data.
Note
This class's functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the functionality in this class will be disabled, and attempts to instantiate a [`Queue`](#multiprocessing.Queue "multiprocessing.Queue") will result in an [`ImportError`](exceptions.xhtml#ImportError "ImportError"). See [bpo-3770](https://bugs.python.org/issue3770) for additional information. The same holds true for any of the specialized queue types listed below.
*class* `multiprocessing.``SimpleQueue`It is a simplified [`Queue`](#multiprocessing.Queue "multiprocessing.Queue") type, very close to a locked [`Pipe`](#multiprocessing.Pipe "multiprocessing.Pipe").
`empty`()Return `True` if the queue is empty, `False` otherwise.
`get`()Remove and return an item from the queue.
`put`(*item*)Put *item* into the queue.
*class* `multiprocessing.``JoinableQueue`(\[*maxsize*\])[`JoinableQueue`](#multiprocessing.JoinableQueue "multiprocessing.JoinableQueue"), a [`Queue`](#multiprocessing.Queue "multiprocessing.Queue") subclass, is a queue which additionally has [`task_done()`](#multiprocessing.JoinableQueue.task_done "multiprocessing.JoinableQueue.task_done") and [`join()`](#multiprocessing.JoinableQueue.join "multiprocessing.JoinableQueue.join") methods.
`task_done`()Indicate that a formerly enqueued task is complete. Used by queue consumers. For each [`get()`](#multiprocessing.Queue.get "multiprocessing.Queue.get") used to fetch a task, a subsequent call to [`task_done()`](#multiprocessing.JoinableQueue.task_done "multiprocessing.JoinableQueue.task_done") tells the queue that the processing on the task is complete.
If a [`join()`](queue.xhtml#queue.Queue.join "queue.Queue.join") is currently blocking, it will resume when all items have been processed (meaning that a [`task_done()`](#multiprocessing.JoinableQueue.task_done "multiprocessing.JoinableQueue.task_done") call was received for every item that had been [`put()`](#multiprocessing.Queue.put "multiprocessing.Queue.put") into the queue).
Raises a [`ValueError`](exceptions.xhtml#ValueError "ValueError") if called more times than there were items placed in the queue.
`join`()Block until all items in the queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer calls [`task_done()`](#multiprocessing.JoinableQueue.task_done "multiprocessing.JoinableQueue.task_done") to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, `join()` unblocks.
### Miscellaneous
`multiprocessing.``active_children`()Return a list of all live children of the current process.
Calling this has the side effect of "joining" any processes which have already finished.
`multiprocessing.``cpu_count`()Return the number of CPUs in the system.
This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with `len(os.sched_getaffinity(0))`.
May raise [`NotImplementedError`](exceptions.xhtml#NotImplementedError "NotImplementedError").
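The distinction between the two counts can be sketched as follows (illustrative; `os.sched_getaffinity` exists only on some Unix platforms, hence the fallback):

```
import multiprocessing
import os

total = multiprocessing.cpu_count()        # CPUs in the system
try:
    usable = len(os.sched_getaffinity(0))  # CPUs this process may run on
except AttributeError:                     # e.g. on Windows or macOS
    usable = total
print(total, usable)
```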
See also
[`os.cpu_count()`](os.xhtml#os.cpu_count "os.cpu_count")
`multiprocessing.``current_process`()Return the [`Process`](#multiprocessing.Process "multiprocessing.Process") object corresponding to the current process.
An analogue of [`threading.current_thread()`](threading.xhtml#threading.current_thread "threading.current_thread").
`multiprocessing.``freeze_support`()Add support for when a program which uses [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") has been frozen to produce a Windows executable. (Has been tested with **py2exe**, **PyInstaller** and **cx\_Freeze**.)
One needs to call this function straight after the `if __name__ == '__main__'` line of the main module. For example:
```
from multiprocessing import Process, freeze_support

def f():
    print('hello world!')

if __name__ == '__main__':
    freeze_support()
    Process(target=f).start()
```
If the `freeze_support()` line is omitted, then trying to run the frozen executable will raise [`RuntimeError`](exceptions.xhtml#RuntimeError "RuntimeError").
Calling `freeze_support()` has no effect when invoked on any operating system other than Windows. In addition, if the module is being run normally by the Python interpreter on Windows (the program has not been frozen), then `freeze_support()` has no effect.
`multiprocessing.``get_all_start_methods`()Returns a list of the supported start methods, the first of which is the default. The possible start methods are `'fork'`, `'spawn'` and `'forkserver'`. On Windows only `'spawn'` is available. On Unix, `'fork'` and `'spawn'` are always supported, with `'fork'` being the default.
New in version 3.4.
`multiprocessing.``get_context`(*method=None*)Return a context object which has the same attributes as the [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") module.
If *method* is `None`, then the default context is returned. Otherwise *method* should be `'fork'`, `'spawn'` or `'forkserver'`. [`ValueError`](exceptions.xhtml#ValueError "ValueError") is raised if the specified start method is not available.
New in version 3.4.
`multiprocessing.``get_start_method`(*allow\_none=False*)Return the name of the start method used for starting processes.
If the start method has not been fixed and *allow\_none* is `False`, then the start method is fixed to the default and the name is returned. If the start method has not been fixed and *allow\_none* is `True`, then `None` is returned.
The return value can be `'fork'`, `'spawn'`, `'forkserver'` or `None`. `'fork'` is the default on Unix, while `'spawn'` is the default on Windows.
New in version 3.4.
`multiprocessing.``set_executable`()Set the path of the Python interpreter to use when starting a child process. (By default [`sys.executable`](sys.xhtml#sys.executable "sys.executable") is used.) Embedders will probably need to do something like
```
set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
```
before they can create child processes.
Changed in version 3.4: Now supported on Unix when the `'spawn'` start method is used.
`multiprocessing.``set_start_method`(*method*)Set the method which should be used to start child processes. *method* can be `'fork'`, `'spawn'` or `'forkserver'`.
Note that this should be called at most once, and it should be protected inside the `if __name__ == '__main__'` clause of the main module.
New in version 3.4.
Note
[`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") contains no analogues of [`threading.active_count()`](threading.xhtml#threading.active_count "threading.active_count"), [`threading.enumerate()`](threading.xhtml#threading.enumerate "threading.enumerate"), [`threading.settrace()`](threading.xhtml#threading.settrace "threading.settrace"), [`threading.setprofile()`](threading.xhtml#threading.setprofile "threading.setprofile"), [`threading.Timer`](threading.xhtml#threading.Timer "threading.Timer"), or [`threading.local`](threading.xhtml#threading.local "threading.local").
### Connection objects
Connection objects allow the sending and receiving of picklable objects or strings. They can be thought of as message oriented connected sockets.
Connection objects are usually created using [`Pipe`](#multiprocessing.Pipe "multiprocessing.Pipe") -- see also [Listeners and Clients](#multiprocessing-listeners-clients).
*class* `multiprocessing.connection.``Connection``send`(*obj*)Send an object to the other end of the connection, which should be read using [`recv()`](#multiprocessing.connection.Connection.recv "multiprocessing.connection.Connection.recv").
The object must be picklable. Very large pickles (approximately 32 MiB+, though it depends on the OS) may raise a [`ValueError`](exceptions.xhtml#ValueError "ValueError") exception.
`recv`()Return an object sent from the other end of the connection using [`send()`](#multiprocessing.connection.Connection.send "multiprocessing.connection.Connection.send"). Blocks until there is something to receive. Raises [`EOFError`](exceptions.xhtml#EOFError "EOFError") if there is nothing left to receive and the other end was closed.
`fileno`()Return the file descriptor or handle used by the connection.
`close`()Close the connection.
This is called automatically when the connection is garbage collected.
`poll`(\[*timeout*\])Return whether there is any data available to be read.
If *timeout* is not specified, then it will return immediately. If *timeout* is a number, then this specifies the maximum time in seconds to block. If *timeout* is `None`, then an infinite timeout is used.
Note that multiple connection objects may be polled at once by using [`multiprocessing.connection.wait()`](#multiprocessing.connection.wait "multiprocessing.connection.wait").
`send_bytes`(*buffer*\[, *offset*\[, *size*\]\])Send byte data from a [bytes-like object](../glossary.xhtml#term-bytes-like-object) as a complete message.
If *offset* is given, then data is read from that position in *buffer*. If *size* is given, then that many bytes will be read from the buffer. Very large buffers (approximately 32 MiB+, though it depends on the OS) may raise a [`ValueError`](exceptions.xhtml#ValueError "ValueError") exception.
`recv_bytes`(\[*maxlength*\])Return a complete message of byte data sent from the other end of the connection as a string. Blocks until there is something to receive. Raises [`EOFError`](exceptions.xhtml#EOFError "EOFError") if there is nothing left to receive and the other end has closed.
If *maxlength* is specified and the message is longer than *maxlength*, then [`OSError`](exceptions.xhtml#OSError "OSError") is raised and the connection will no longer be readable.
Changed in version 3.3: This function used to raise [`IOError`](exceptions.xhtml#IOError "IOError"), which is now an alias of [`OSError`](exceptions.xhtml#OSError "OSError").
`recv_bytes_into`(*buffer*\[, *offset*\])Read into *buffer* a complete message of byte data sent from the other end of the connection and return the number of bytes in the message. Blocks until there is something to receive. Raises [`EOFError`](exceptions.xhtml#EOFError "EOFError") if there is nothing left to receive and the other end was closed.
*buffer* must be a writable [bytes-like object](../glossary.xhtml#term-bytes-like-object). If *offset* is given, then the message will be written into the buffer from that position. Offset must be a non-negative integer less than the length of *buffer* (in bytes).
If the buffer is too short, then a `BufferTooShort` exception is raised and the complete message is available as `e.args[0]` where `e` is the exception instance.
Changed in version 3.3: Connection objects themselves can now be transferred between processes using [`Connection.send()`](#multiprocessing.connection.Connection.send "multiprocessing.connection.Connection.send") and [`Connection.recv()`](#multiprocessing.connection.Connection.recv "multiprocessing.connection.Connection.recv").
New in version 3.3: Connection objects now support the context management protocol -- see [Context Manager Types](stdtypes.xhtml#typecontextmanager). [`__enter__()`](stdtypes.xhtml#contextmanager.__enter__ "contextmanager.__enter__") returns the connection object, and [`__exit__()`](stdtypes.xhtml#contextmanager.__exit__ "contextmanager.__exit__") calls [`close()`](#multiprocessing.connection.Connection.close "multiprocessing.connection.Connection.close").
For example:
```
>>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
>>> b.recv()
[1, 'hello', None]
>>> b.send_bytes(b'thank you')
>>> a.recv_bytes()
b'thank you'
>>> import array
>>> arr1 = array.array('i', range(5))
>>> arr2 = array.array('i', [0] * 10)
>>> a.send_bytes(arr1)
>>> count = b.recv_bytes_into(arr2)
>>> assert count == len(arr1) * arr1.itemsize
>>> arr2
array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
```
Warning
The [`Connection.recv()`](#multiprocessing.connection.Connection.recv "multiprocessing.connection.Connection.recv") method automatically unpickles the data it receives, which can be a security risk unless you can trust the process which sent the message.
Therefore, unless the connection object was produced using `Pipe()`, you should only use the [`recv()`](#multiprocessing.connection.Connection.recv "multiprocessing.connection.Connection.recv") and [`send()`](#multiprocessing.connection.Connection.send "multiprocessing.connection.Connection.send") methods after performing some sort of authentication. See [Authentication keys](#multiprocessing-auth-keys).
Warning
If a process is killed while it is trying to read or write to a pipe, then the data in the pipe is likely to become corrupted, because it may become impossible to be sure where the message boundaries lie.
### Synchronization primitives
Generally, synchronization primitives are not as necessary in a multiprocess program as they are in a multithreaded program. See the documentation for the [`threading`](threading.xhtml#module-threading "threading: Thread-based parallelism.") module.
Note that one can also create synchronization primitives by using a manager object -- see [Managers](#multiprocessing-managers).
*class* `multiprocessing.``Barrier`(*parties*\[, *action*\[, *timeout*\]\])A barrier object: a clone of [`threading.Barrier`](threading.xhtml#threading.Barrier "threading.Barrier").
New in version 3.3.
*class* `multiprocessing.``BoundedSemaphore`(\[*value*\])A bounded semaphore object: a close analog of [`threading.BoundedSemaphore`](threading.xhtml#threading.BoundedSemaphore "threading.BoundedSemaphore").
A solitary difference from its close analog exists: its `acquire` method's first argument is named *block*, as is consistent with [`Lock.acquire()`](#multiprocessing.Lock.acquire "multiprocessing.Lock.acquire").
Note
On Mac OS X, this is indistinguishable from [`Semaphore`](#multiprocessing.Semaphore "multiprocessing.Semaphore") because `sem_getvalue()` is not implemented on that platform.
*class* `multiprocessing.``Condition`(\[*lock*\])A condition variable: an alias for [`threading.Condition`](threading.xhtml#threading.Condition "threading.Condition").
If *lock* is specified then it should be a [`Lock`](#multiprocessing.Lock "multiprocessing.Lock") or [`RLock`](#multiprocessing.RLock "multiprocessing.RLock") object from [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.").
Changed in version 3.3: The [`wait_for()`](threading.xhtml#threading.Condition.wait_for "threading.Condition.wait_for") method was added.
*class* `multiprocessing.``Event`A clone of [`threading.Event`](threading.xhtml#threading.Event "threading.Event").
*class* `multiprocessing.``Lock`A non-recursive lock object: a close analog of [`threading.Lock`](threading.xhtml#threading.Lock "threading.Lock"). Once a process or thread has acquired a lock, subsequent attempts to acquire it from any process or thread will block until it is released; any process or thread may release it. Except where noted, the concepts and behaviors of [`threading.Lock`](threading.xhtml#threading.Lock "threading.Lock") as it applies to threads are replicated here in [`multiprocessing.Lock`](#multiprocessing.Lock "multiprocessing.Lock") as it applies to either processes or threads.
Note that [`Lock`](#multiprocessing.Lock "multiprocessing.Lock") is actually a factory function which returns an instance of `multiprocessing.synchronize.Lock` initialized with a default context.
[`Lock`](#multiprocessing.Lock "multiprocessing.Lock") supports the [context manager](../glossary.xhtml#term-context-manager) protocol and thus may be used in [`with`](../reference/compound_stmts.xhtml#with) statements.
`acquire`(*block=True*, *timeout=None*)Acquire a lock, blocking or non-blocking.
With the *block* argument set to `True` (the default), the method call will block until the lock is in an unlocked state, then set it to locked and return `True`. Note that the name of this first argument differs from that in [`threading.Lock.acquire()`](threading.xhtml#threading.Lock.acquire "threading.Lock.acquire").
With the *block* argument set to `False`, the method call does not block. If the lock is currently in a locked state, return `False`; otherwise set the lock to a locked state and return `True`.
When invoked with a positive, floating-point value for *timeout*, block for at most the number of seconds specified by *timeout* as long as the lock can not be acquired. Invocations with a negative value for *timeout* are equivalent to a *timeout* of zero. Invocations with a *timeout* value of `None` (the default) set the timeout period to infinite. Note that the treatment of negative or `None` values for *timeout* differs from the implemented behavior in [`threading.Lock.acquire()`](threading.xhtml#threading.Lock.acquire "threading.Lock.acquire"). The *timeout* argument has no practical implications if the *block* argument is set to `False` and is thus ignored. Returns `True` if the lock has been acquired or `False` if the timeout period has elapsed.
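The block/timeout semantics above can be sketched within a single process, since a non-recursive lock cannot be re-acquired even by its owner (the `demo` helper is illustrative):

```python
from multiprocessing import Lock

def demo():
    # multiprocessing.Lock is non-recursive, so a second acquire from the
    # same process fails (non-blocking) or times out instead of succeeding.
    lock = Lock()
    results = []
    results.append(lock.acquire())                 # True: lock was free
    results.append(lock.acquire(block=False))      # False: already held
    results.append(lock.acquire(timeout=0.1))      # False: waits 0.1s, times out
    lock.release()
    results.append(lock.acquire(block=False))      # True: free again
    lock.release()
    return results

if __name__ == '__main__':
    print(demo())                                  # [True, False, False, True]
```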
`release`()Release a lock. This can be called from any process or thread, not only the process or thread which originally acquired the lock.
Behavior is the same as in [`threading.Lock.release()`](threading.xhtml#threading.Lock.release "threading.Lock.release") except that when invoked on an unlocked lock, a [`ValueError`](exceptions.xhtml#ValueError "ValueError") is raised.
*class* `multiprocessing.``RLock`A recursive lock object: a close analog of [`threading.RLock`](threading.xhtml#threading.RLock "threading.RLock"). A recursive lock must be released by the process or thread that acquired it. Once a process or thread has acquired a recursive lock, the same process or thread may acquire it again without blocking; that process or thread must release it once for each time it has been acquired.
Note that [`RLock`](#multiprocessing.RLock "multiprocessing.RLock") is actually a factory function which returns an instance of `multiprocessing.synchronize.RLock` initialized with a default context.
[`RLock`](#multiprocessing.RLock "multiprocessing.RLock") supports the [context manager](../glossary.xhtml#term-context-manager) protocol and thus may be used in [`with`](../reference/compound_stmts.xhtml#with) statements.
`acquire`(*block=True*, *timeout=None*)Acquire a lock, blocking or non-blocking.
When invoked with the *block* argument set to `True`, block until the lock is in an unlocked state (not owned by any process or thread) unless the lock is already owned by the current process or thread. The current process or thread then takes ownership of the lock (if it does not already have ownership) and the recursion level inside the lock increments by one, resulting in a return value of `True`. Note that there are several differences in this first argument's behavior compared to the implementation of [`threading.RLock.acquire()`](threading.xhtml#threading.RLock.acquire "threading.RLock.acquire"), starting with the name of the argument itself.
When invoked with the *block* argument set to `False`, do not block. If the lock has already been acquired (and thus is owned) by another process or thread, the current process or thread does not take ownership and the recursion level within the lock is not changed, resulting in a return value of `False`. If the lock is in an unlocked state, the current process or thread takes ownership and the recursion level is incremented, resulting in a return value of `True`.
Use and behaviors of the *timeout* argument are the same as in [`Lock.acquire()`](#multiprocessing.Lock.acquire "multiprocessing.Lock.acquire"). Note that some of these behaviors of *timeout* differ from the implemented behaviors in [`threading.RLock.acquire()`](threading.xhtml#threading.RLock.acquire "threading.RLock.acquire").
`release`()Release a lock, decrementing the recursion level. If after the decrement the recursion level is zero, reset the lock to unlocked (not owned by any process or thread) and if any other processes or threads are blocked waiting for the lock to become unlocked, allow exactly one of them to proceed. If after the decrement the recursion level is still nonzero, the lock remains locked and owned by the calling process or thread.
Only call this method when the calling process or thread owns the lock. An [`AssertionError`](exceptions.xhtml#AssertionError "AssertionError") is raised if this method is called by a process or thread other than the owner or if the lock is in an unlocked (unowned) state. Note that the type of exception raised in this situation differs from the implemented behavior in [`threading.RLock.release()`](threading.xhtml#threading.RLock.release "threading.RLock.release").
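A short sketch of the recursion-level bookkeeping described above, including the `AssertionError` raised when releasing an unowned lock (the `demo` helper is illustrative):

```python
from multiprocessing import RLock

def demo():
    rlock = RLock()
    results = []
    results.append(rlock.acquire())    # True: recursion level becomes 1
    results.append(rlock.acquire())    # True: same owner, level becomes 2
    rlock.release()                    # level back to 1, still owned
    rlock.release()                    # level 0, lock is now unowned
    try:
        rlock.release()                # releasing an unowned RLock raises
    except AssertionError:
        results.append('AssertionError')
    return results

if __name__ == '__main__':
    print(demo())
```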
*class* `multiprocessing.``Semaphore`(\[*value*\])A semaphore object: a close analog of [`threading.Semaphore`](threading.xhtml#threading.Semaphore "threading.Semaphore").
A solitary difference from its close analog exists: its `acquire` method's first argument is named *block*, as is consistent with [`Lock.acquire()`](#multiprocessing.Lock.acquire "multiprocessing.Lock.acquire").
Note
On Mac OS X, `sem_timedwait` is unsupported, so calling `acquire()` with a timeout will emulate that function's behavior using a sleeping loop.
Note
If the SIGINT signal generated by Ctrl-C arrives while the main thread is blocked by a call to `BoundedSemaphore.acquire()`, [`Lock.acquire()`](#multiprocessing.Lock.acquire "multiprocessing.Lock.acquire"), [`RLock.acquire()`](#multiprocessing.RLock.acquire "multiprocessing.RLock.acquire"), `Semaphore.acquire()`, `Condition.acquire()` or `Condition.wait()`, then the call will be immediately interrupted and [`KeyboardInterrupt`](exceptions.xhtml#KeyboardInterrupt "KeyboardInterrupt") will be raised.
This differs from the behaviour of [`threading`](threading.xhtml#module-threading "threading: Thread-based parallelism.") where SIGINT will be ignored while the equivalent blocking calls are in progress.
Note
Some of this package's functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the `multiprocessing.synchronize` module will be disabled, and attempts to import it will result in an [`ImportError`](exceptions.xhtml#ImportError "ImportError"). See [bpo-3770](https://bugs.python.org/issue3770) \[https://bugs.python.org/issue3770\] for additional information.
### Shared [`ctypes`](ctypes.xhtml#module-ctypes "ctypes: A foreign function library for Python.") Objects
It is possible to create shared objects using shared memory which can be inherited by child processes.
`multiprocessing.``Value`(*typecode\_or\_type*, *\*args*, *lock=True*)Return a [`ctypes`](ctypes.xhtml#module-ctypes "ctypes: A foreign function library for Python.") object allocated from shared memory. By default the return value is actually a synchronized wrapper for the object. The object itself can be accessed via the *value* attribute of a [`Value`](#multiprocessing.Value "multiprocessing.Value").
*typecode\_or\_type* determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the [`array`](array.xhtml#module-array "array: Space efficient arrays of uniformly typed numeric values.") module. *\*args* is passed on to the constructor for the type.
If *lock* is `True` (the default) then a new recursive lock object is created to synchronize access to the value. If *lock* is a [`Lock`](#multiprocessing.Lock "multiprocessing.Lock") or [`RLock`](#multiprocessing.RLock "multiprocessing.RLock") object then that will be used to synchronize access to the value. If *lock* is `False` then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
Operations like `+=` which involve a read and write are not atomic. So if, for instance, you want to atomically increment a shared value it is insufficient to just do
```
counter.value += 1
```
Assuming the associated lock is recursive (which it is by default) you can instead do
```
with counter.get_lock():
counter.value += 1
```
Note that *lock* is a keyword-only argument.
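Putting the pieces together, a sketch of a process-safe counter guarded by `get_lock()`; the worker count, iteration count, and `demo` helper are arbitrary choices for illustration:

```python
from multiprocessing import Process, Value

def add_100(counter):
    for _ in range(100):
        # += is a read-modify-write, so guard it with the value's own lock
        with counter.get_lock():
            counter.value += 1

def demo():
    counter = Value('i', 0)   # shared C int with a recursive lock (the default)
    workers = [Process(target=add_100, args=(counter,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value      # 400: no increments were lost

if __name__ == '__main__':
    print(demo())
```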
`multiprocessing.``Array`(*typecode\_or\_type*, *size\_or\_initializer*, *\**, *lock=True*)Return a ctypes array allocated from shared memory. By default the return value is actually a synchronized wrapper for the array.
*typecode\_or\_type* determines the type of the elements of the returned array: it is either a ctypes type or a one character typecode of the kind used by the [`array`](array.xhtml#module-array "array: Space efficient arrays of uniformly typed numeric values.") module. If *size\_or\_initializer* is an integer, then it determines the length of the array, and the array will be initially zeroed. Otherwise, *size\_or\_initializer* is a sequence which is used to initialize the array and whose length determines the length of the array.
If *lock* is `True` (the default) then a new lock object is created to synchronize access to the value. If *lock* is a [`Lock`](#multiprocessing.Lock "multiprocessing.Lock") or [`RLock`](#multiprocessing.RLock "multiprocessing.RLock") object then that will be used to synchronize access to the value. If *lock* is `False` then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
Note that *lock* is a keyword-only argument.
Note that an array of [`ctypes.c_char`](ctypes.xhtml#ctypes.c_char "ctypes.c_char") has *value* and *raw* attributes which allow one to use it to store and retrieve strings.
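A small sketch of the *value* and *raw* attributes on a `c_char` array; the buffer size and `demo` helper are illustrative:

```python
from multiprocessing import Array

def demo():
    buf = Array('c', 12)      # 12 zeroed c_char elements, lock=True by default
    buf.value = b'hello'      # writes the bytes plus a trailing NUL
    # .value reads up to the first NUL byte; .raw exposes the whole buffer
    return buf.value, buf.raw

if __name__ == '__main__':
    print(demo())
```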
#### The [`multiprocessing.sharedctypes`](#module-multiprocessing.sharedctypes "multiprocessing.sharedctypes: Allocate ctypes objects from shared memory.") module
The [`multiprocessing.sharedctypes`](#module-multiprocessing.sharedctypes "multiprocessing.sharedctypes: Allocate ctypes objects from shared memory.") module provides functions for allocating [`ctypes`](ctypes.xhtml#module-ctypes "ctypes: A foreign function library for Python.") objects from shared memory which can be inherited by child processes.
Note
Although it is possible to store a pointer in shared memory remember that this will refer to a location in the address space of a specific process. However, the pointer is quite likely to be invalid in the context of a second process and trying to dereference the pointer from the second process may cause a crash.
`multiprocessing.sharedctypes.``RawArray`(*typecode\_or\_type*, *size\_or\_initializer*)Return a ctypes array allocated from shared memory.
*typecode\_or\_type* determines the type of the elements of the returned array: it is either a ctypes type or a one character typecode of the kind used by the [`array`](array.xhtml#module-array "array: Space efficient arrays of uniformly typed numeric values.") module. If *size\_or\_initializer* is an integer then it determines the length of the array, and the array will be initially zeroed. Otherwise *size\_or\_initializer* is a sequence which is used to initialize the array and whose length determines the length of the array.
Note that setting and getting an element is potentially non-atomic -- use [`Array()`](#multiprocessing.sharedctypes.Array "multiprocessing.sharedctypes.Array") instead to make sure that access is automatically synchronized using a lock.
`multiprocessing.sharedctypes.``RawValue`(*typecode\_or\_type*, *\*args*)Return a ctypes object allocated from shared memory.
*typecode\_or\_type* determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the [`array`](array.xhtml#module-array "array: Space efficient arrays of uniformly typed numeric values.") module. *\*args* is passed on to the constructor for the type.
Note that setting and getting the value is potentially non-atomic -- use [`Value()`](#multiprocessing.sharedctypes.Value "multiprocessing.sharedctypes.Value") instead to make sure that access is automatically synchronized using a lock.
Note that an array of [`ctypes.c_char`](ctypes.xhtml#ctypes.c_char "ctypes.c_char") has `value` and `raw` attributes which allow one to use it to store and retrieve strings -- see documentation for [`ctypes`](ctypes.xhtml#module-ctypes "ctypes: A foreign function library for Python.").
`multiprocessing.sharedctypes.``Array`(*typecode\_or\_type*, *size\_or\_initializer*, *\**, *lock=True*)The same as [`RawArray()`](#multiprocessing.sharedctypes.RawArray "multiprocessing.sharedctypes.RawArray") except that depending on the value of *lock* a process-safe synchronization wrapper may be returned instead of a raw ctypes array.
If *lock* is `True` (the default) then a new lock object is created to synchronize access to the value. If *lock* is a [`Lock`](#multiprocessing.Lock "multiprocessing.Lock") or [`RLock`](#multiprocessing.RLock "multiprocessing.RLock") object then that will be used to synchronize access to the value. If *lock* is `False` then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
Note that *lock* is a keyword-only argument.
`multiprocessing.sharedctypes.``Value`(*typecode\_or\_type*, *\*args*, *lock=True*)The same as [`RawValue()`](#multiprocessing.sharedctypes.RawValue "multiprocessing.sharedctypes.RawValue") except that depending on the value of *lock* a process-safe synchronization wrapper may be returned instead of a raw ctypes object.
If *lock* is `True` (the default) then a new lock object is created to synchronize access to the value. If *lock* is a [`Lock`](#multiprocessing.Lock "multiprocessing.Lock") or [`RLock`](#multiprocessing.RLock "multiprocessing.RLock") object then that will be used to synchronize access to the value. If *lock* is `False` then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
Note that *lock* is a keyword-only argument.
`multiprocessing.sharedctypes.``copy`(*obj*)Return a ctypes object allocated from shared memory which is a copy of the ctypes object *obj*.
`multiprocessing.sharedctypes.``synchronized`(*obj*\[, *lock*\])Return a process-safe wrapper object for a ctypes object which uses *lock* to synchronize access. If *lock* is `None` (the default) then a [`multiprocessing.RLock`](#multiprocessing.RLock "multiprocessing.RLock") object is created automatically.
A synchronized wrapper will have two methods in addition to those of the object it wraps: `get_obj()` returns the wrapped object and `get_lock()` returns the lock object used for synchronization.
Note that accessing the ctypes object through the wrapper can be a lot slower than accessing the raw ctypes object.
Changed in version 3.5: Synchronized objects support the [context manager](../glossary.xhtml#term-context-manager) protocol.
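A brief sketch combining `RawValue()`, `copy()` and `synchronized()`; the `demo` helper is illustrative:

```python
from ctypes import c_double
from multiprocessing.sharedctypes import RawValue, copy, synchronized

def demo():
    raw = RawValue(c_double, 2.5)   # plain shared-memory ctypes, no lock
    dup = copy(raw)                 # independent shared-memory copy
    sync = synchronized(raw)        # process-safe wrapper with a fresh RLock
    with sync:                      # context manager protocol (3.5+)
        sync.value *= 2
    # the wrapper and the raw object share storage; the copy does not
    return raw.value, dup.value, sync.get_obj() is raw

if __name__ == '__main__':
    print(demo())
```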
The table below compares the syntax for creating shared ctypes objects from shared memory with the normal ctypes syntax. (In the table `MyStruct` is some subclass of [`ctypes.Structure`](ctypes.xhtml#ctypes.Structure "ctypes.Structure").)
| ctypes | Shared ctypes using type | Shared ctypes using typecode |
| --- | --- | --- |
| `c_double(2.4)` | `RawValue(c_double, 2.4)` | `RawValue('d', 2.4)` |
| `MyStruct(4, 6)` | `RawValue(MyStruct, 4, 6)` | |
| `(c_short * 7)()` | `RawArray(c_short, 7)` | `RawArray('h', 7)` |
| `(c_int * 3)(9, 2, 8)` | `RawArray(c_int, (9, 2, 8))` | `RawArray('i', (9, 2, 8))` |
Below is an example where a number of ctypes objects are modified by a child process:
```
from multiprocessing import Process, Lock
from multiprocessing.sharedctypes import Value, Array
from ctypes import Structure, c_double
class Point(Structure):
_fields_ = [('x', c_double), ('y', c_double)]
def modify(n, x, s, A):
n.value **= 2
x.value **= 2
s.value = s.value.upper()
for a in A:
a.x **= 2
a.y **= 2
if __name__ == '__main__':
lock = Lock()
n = Value('i', 7)
x = Value(c_double, 1.0/3.0, lock=False)
s = Array('c', b'hello world', lock=lock)
A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
p = Process(target=modify, args=(n, x, s, A))
p.start()
p.join()
print(n.value)
print(x.value)
print(s.value)
print([(a.x, a.y) for a in A])
```
The results printed are
```
49
0.1111111111111111
HELLO WORLD
[(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
```
### Managers
Managers provide a way to create data which can be shared between different processes, including sharing over a network between processes running on different machines. A manager object controls a server process which manages *shared objects*. Other processes can access the shared objects by using proxies.
`multiprocessing.``Manager`()Returns a started [`SyncManager`](#multiprocessing.managers.SyncManager "multiprocessing.managers.SyncManager") object which can be used for sharing objects between processes. The returned manager object corresponds to a spawned child process and has methods which will create shared objects and return corresponding proxies.
Manager processes will be shut down as soon as they are garbage collected or their parent process exits. The manager classes are defined in the [`multiprocessing.managers`](#module-multiprocessing.managers "multiprocessing.managers: Share data between process with shared objects.") module:
*class* `multiprocessing.managers.``BaseManager`(\[*address*\[, *authkey*\]\])Create a BaseManager object.
Once created one should call [`start()`](#multiprocessing.managers.BaseManager.start "multiprocessing.managers.BaseManager.start") or `get_server().serve_forever()` to ensure that the manager object refers to a started manager process.
*address* is the address on which the manager process listens for new connections. If *address* is `None` then an arbitrary one is chosen.
*authkey* is the authentication key which will be used to check the validity of incoming connections to the server process. If *authkey* is `None` then `current_process().authkey` is used. Otherwise *authkey* is used and it must be a byte string.
`start`(\[*initializer*\[, *initargs*\]\])Start a subprocess to start the manager. If *initializer* is not `None` then the subprocess will call `initializer(*initargs)` when it starts.
`get_server`()Returns a `Server` object which represents the actual server under the control of the Manager. The `Server` object supports the `serve_forever()` method:
```
>>> from multiprocessing.managers import BaseManager
>>> manager = BaseManager(address=('', 50000), authkey=b'abc')
>>> server = manager.get_server()
>>> server.serve_forever()
```
`Server` additionally has an [`address`](#multiprocessing.managers.BaseManager.address "multiprocessing.managers.BaseManager.address") attribute.
`connect`()Connect a local manager object to a remote manager process:
```
>>> from multiprocessing.managers import BaseManager
>>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
>>> m.connect()
```
`shutdown`()Stop the process used by the manager. This is only available if [`start()`](#multiprocessing.managers.BaseManager.start "multiprocessing.managers.BaseManager.start") has been used to start the server process.
This can be called multiple times.
`register`(*typeid*\[, *callable*\[, *proxytype*\[, *exposed*\[, *method\_to\_typeid*\[, *create\_method*\]\]\]\]\])A classmethod which can be used for registering a type or callable with the manager class.
*typeid* is a "type identifier" which is used to identify a particular type of shared object. This must be a string.
*callable* is a callable used for creating objects for this type identifier. If a manager instance will be connected to the server using the [`connect()`](#multiprocessing.managers.BaseManager.connect "multiprocessing.managers.BaseManager.connect") method, or if the *create\_method* argument is `False` then this can be left as `None`.
*proxytype* is a subclass of [`BaseProxy`](#multiprocessing.managers.BaseProxy "multiprocessing.managers.BaseProxy") which is used to create proxies for shared objects with this *typeid*. If `None` then a proxy class is created automatically.
*exposed* is used to specify a sequence of method names which proxies for this typeid should be allowed to access using [`BaseProxy._callmethod()`](#multiprocessing.managers.BaseProxy._callmethod "multiprocessing.managers.BaseProxy._callmethod"). (If *exposed* is `None` then `proxytype._exposed_` is used instead if it exists.) In the case where no exposed list is specified, all "public methods" of the shared object will be accessible. (Here a "public method" means any attribute which has a [`__call__()`](../reference/datamodel.xhtml#object.__call__ "object.__call__") method and whose name does not begin with `'_'`.)
*method\_to\_typeid* is a mapping used to specify the return type of those exposed methods which should return a proxy. It maps method names to typeid strings. (If *method\_to\_typeid* is `None` then `proxytype._method_to_typeid_` is used instead if it exists.) If a method's name is not a key of this mapping or if the mapping is `None` then the object returned by the method will be copied by value.
*create\_method* determines whether a method should be created with name *typeid* which can be used to tell the server process to create a new shared object and return a proxy for it. By default it is `True`.
[`BaseManager`](#multiprocessing.managers.BaseManager "multiprocessing.managers.BaseManager") instances also have one read-only property:
`address`The address used by the manager.
Changed in version 3.3: Manager objects support the context management protocol -- see [Context Manager Types](stdtypes.xhtml#typecontextmanager). [`__enter__()`](stdtypes.xhtml#contextmanager.__enter__ "contextmanager.__enter__") starts the server process (if it has not already started) and then returns the manager object. [`__exit__()`](stdtypes.xhtml#contextmanager.__exit__ "contextmanager.__exit__") calls [`shutdown()`](#multiprocessing.managers.BaseManager.shutdown "multiprocessing.managers.BaseManager.shutdown").
In previous versions [`__enter__()`](stdtypes.xhtml#contextmanager.__enter__ "contextmanager.__enter__") did not start the manager's server process if it was not already started.
*class* `multiprocessing.managers.``SyncManager`A subclass of [`BaseManager`](#multiprocessing.managers.BaseManager "multiprocessing.managers.BaseManager") which can be used for the synchronization of processes. Objects of this type are returned by `multiprocessing.Manager()`.
Its methods create and return [Proxy Objects](#multiprocessing-proxy-objects) for a number of commonly used data types to be synchronized across processes. This notably includes shared lists and dictionaries.
`Barrier`(*parties*\[, *action*\[, *timeout*\]\])Create a shared [`threading.Barrier`](threading.xhtml#threading.Barrier "threading.Barrier") object and return a proxy for it.
New in version 3.3.
`BoundedSemaphore`(\[*value*\])Create a shared [`threading.BoundedSemaphore`](threading.xhtml#threading.BoundedSemaphore "threading.BoundedSemaphore") object and return a proxy for it.
`Condition`(\[*lock*\])Create a shared [`threading.Condition`](threading.xhtml#threading.Condition "threading.Condition") object and return a proxy for it.
If *lock* is supplied then it should be a proxy for a [`threading.Lock`](threading.xhtml#threading.Lock "threading.Lock") or [`threading.RLock`](threading.xhtml#threading.RLock "threading.RLock") object.
Changed in version 3.3: The [`wait_for()`](threading.xhtml#threading.Condition.wait_for "threading.Condition.wait_for") method was added.
`Event`()Create a shared [`threading.Event`](threading.xhtml#threading.Event "threading.Event") object and return a proxy for it.
`Lock`()Create a shared [`threading.Lock`](threading.xhtml#threading.Lock "threading.Lock") object and return a proxy for it.
`Namespace`()Create a shared [`Namespace`](#multiprocessing.managers.Namespace "multiprocessing.managers.Namespace") object and return a proxy for it.
`Queue`(\[*maxsize*\])Create a shared [`queue.Queue`](queue.xhtml#queue.Queue "queue.Queue") object and return a proxy for it.
`RLock`()Create a shared [`threading.RLock`](threading.xhtml#threading.RLock "threading.RLock") object and return a proxy for it.
`Semaphore`(\[*value*\])Create a shared [`threading.Semaphore`](threading.xhtml#threading.Semaphore "threading.Semaphore") object and return a proxy for it.
`Array`(*typecode*, *sequence*)Create an array and return a proxy for it.
`Value`(*typecode*, *value*)Create an object with a writable `value` attribute and return a proxy for it.
`dict`()`dict`(*mapping*)`dict`(*sequence*)Create a shared [`dict`](stdtypes.xhtml#dict "dict") object and return a proxy for it.
`list`()`list`(*sequence*)Create a shared [`list`](stdtypes.xhtml#list "list") object and return a proxy for it.
Changed in version 3.6: Shared objects are capable of being nested. For example, a shared container object such as a shared list can contain other shared objects which will all be managed and synchronized by the [`SyncManager`](#multiprocessing.managers.SyncManager "multiprocessing.managers.SyncManager").
*class* `multiprocessing.managers.``Namespace`A type that can register with [`SyncManager`](#multiprocessing.managers.SyncManager "multiprocessing.managers.SyncManager").
A namespace object has no public methods, but does have writable attributes. Its representation shows the values of its attributes.
However, when using a proxy for a namespace object, an attribute beginning with `'_'` will be an attribute of the proxy and not an attribute of the referent:
```
>>> manager = multiprocessing.Manager()
>>> Global = manager.Namespace()
>>> Global.x = 10
>>> Global.y = 'hello'
>>> Global._z = 12.3 # this is an attribute of the proxy
>>> print(Global)
Namespace(x=10, y='hello')
```
#### Customized managers
To create one's own manager, one creates a subclass of [`BaseManager`](#multiprocessing.managers.BaseManager "multiprocessing.managers.BaseManager") and uses the [`register()`](#multiprocessing.managers.BaseManager.register "multiprocessing.managers.BaseManager.register") classmethod to register new types or callables with the manager class. For example:
```
from multiprocessing.managers import BaseManager
class MathsClass:
def add(self, x, y):
return x + y
def mul(self, x, y):
return x * y
class MyManager(BaseManager):
pass
MyManager.register('Maths', MathsClass)
if __name__ == '__main__':
with MyManager() as manager:
maths = manager.Maths()
print(maths.add(4, 3)) # prints 7
print(maths.mul(7, 8)) # prints 56
```
#### Using a remote manager
It is possible to run a manager server on one machine and have clients use it from other machines (assuming that the firewalls involved allow it).
Running the following commands creates a server for a single shared queue which remote clients can access:
```
>>> from multiprocessing.managers import BaseManager
>>> from queue import Queue
>>> queue = Queue()
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue', callable=lambda:queue)
>>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
>>> s = m.get_server()
>>> s.serve_forever()
```
One client can access the server as follows:
```
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
>>> m.connect()
>>> queue = m.get_queue()
>>> queue.put('hello')
```
Another client can also use it:
```
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
>>> m.connect()
>>> queue = m.get_queue()
>>> queue.get()
'hello'
```
Local processes can also access that queue, using the code from above on the client to access it remotely:
```
>>> from multiprocessing import Process, Queue
>>> from multiprocessing.managers import BaseManager
>>> class Worker(Process):
... def __init__(self, q):
... self.q = q
... super(Worker, self).__init__()
... def run(self):
... self.q.put('local hello')
...
>>> queue = Queue()
>>> w = Worker(queue)
>>> w.start()
>>> class QueueManager(BaseManager): pass
...
>>> QueueManager.register('get_queue', callable=lambda: queue)
>>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
>>> s = m.get_server()
>>> s.serve_forever()
```
### Proxy Objects
A proxy is an object which *refers* to a shared object which lives (presumably) in a different process. The shared object is said to be the *referent* of the proxy. Multiple proxy objects may have the same referent.
A proxy object has methods which invoke corresponding methods of its referent (although not every method of the referent will necessarily be available through the proxy). In this way, a proxy can be used just like its referent can:
```
>>> from multiprocessing import Manager
>>> manager = Manager()
>>> l = manager.list([i*i for i in range(10)])
>>> print(l)
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> print(repr(l))
<ListProxy object, typeid 'list' at 0x...>
>>> l[4]
16
>>> l[2:5]
[4, 9, 16]
```
Notice that applying [`str()`](stdtypes.xhtml#str "str") to a proxy will return the representation of the referent, whereas applying [`repr()`](functions.xhtml#repr "repr") will return the representation of the proxy.
An important feature of proxy objects is that they are picklable so they can be passed between processes. As such, a referent can contain [Proxy Objects](#multiprocessing-proxy-objects). This permits nesting of these managed lists, dicts, and other [Proxy Objects](#multiprocessing-proxy-objects):
```
>>> a = manager.list()
>>> b = manager.list()
>>> a.append(b) # referent of a now contains referent of b
>>> print(a, b)
[<ListProxy object, typeid 'list' at ...>] []
>>> b.append('hello')
>>> print(a[0], b)
['hello'] ['hello']
```
Similarly, dict and list proxies may be nested inside one another:
```
>>> l_outer = manager.list([ manager.dict() for i in range(2) ])
>>> d_first_inner = l_outer[0]
>>> d_first_inner['a'] = 1
>>> d_first_inner['b'] = 2
>>> l_outer[1]['c'] = 3
>>> l_outer[1]['z'] = 26
>>> print(l_outer[0])
{'a': 1, 'b': 2}
>>> print(l_outer[1])
{'c': 3, 'z': 26}
```
If standard (non-proxy) [`list`](stdtypes.xhtml#list "list") or [`dict`](stdtypes.xhtml#dict "dict") objects are contained in a referent, modifications to those mutable values will not be propagated through the manager because the proxy has no way of knowing when the values contained within are modified. However, storing a value in a container proxy (which triggers a `__setitem__` on the proxy object) does propagate through the manager and so to effectively modify such an item, one could re-assign the modified value to the container proxy:
```
# create a list proxy and append a mutable object (a dictionary)
lproxy = manager.list()
lproxy.append({})
# now mutate the dictionary
d = lproxy[0]
d['a'] = 1
d['b'] = 2
# at this point, the changes to d are not yet synced, but by
# updating the dictionary, the proxy is notified of the change
lproxy[0] = d
```
This approach is perhaps less convenient than employing nested [proxy objects](#multiprocessing-proxy-objects) for most use cases but also demonstrates a level of control over the synchronization.
Note
The proxy types in [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") do nothing to support comparisons by value. So, for instance, we have:
```
>>> manager.list([1,2,3]) == [1,2,3]
False
```
One should just use a copy of the referent instead when making comparisons.
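A minimal sketch of that workaround, assuming a running manager: compare a plain copy of the referent rather than the proxy itself.

```
from multiprocessing import Manager

if __name__ == '__main__':
    with Manager() as manager:
        l = manager.list([1, 2, 3])
        print(l == [1, 2, 3])        # prints "False": the proxy object is compared
        print(list(l) == [1, 2, 3])  # prints "True": a copy of the referent is compared
```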
*class* `multiprocessing.managers.BaseProxy`
Proxy objects are instances of subclasses of [`BaseProxy`](#multiprocessing.managers.BaseProxy "multiprocessing.managers.BaseProxy").
`_callmethod`(*methodname*\[, *args*\[, *kwds*\]\])
Call and return the result of a method of the proxy's referent.
If `proxy` is a proxy whose referent is `obj` then the expression
```
proxy._callmethod(methodname, args, kwds)
```
will evaluate the expression
```
getattr(obj, methodname)(*args, **kwds)
```
in the manager's process.
The returned value will be a copy of the result of the call or a proxy to a new shared object -- see documentation for the *method\_to\_typeid* argument of [`BaseManager.register()`](#multiprocessing.managers.BaseManager.register "multiprocessing.managers.BaseManager.register").
If an exception is raised by the call, then it is re-raised by [`_callmethod()`](#multiprocessing.managers.BaseProxy._callmethod "multiprocessing.managers.BaseProxy._callmethod"). If some other exception is raised in the manager's process then this is converted into a `RemoteError` exception and is raised by [`_callmethod()`](#multiprocessing.managers.BaseProxy._callmethod "multiprocessing.managers.BaseProxy._callmethod").
Note in particular that an exception will be raised if *methodname* has not been *exposed*.
An example of the usage of [`_callmethod()`](#multiprocessing.managers.BaseProxy._callmethod "multiprocessing.managers.BaseProxy._callmethod"):
```
>>> l = manager.list(range(10))
>>> l._callmethod('__len__')
10
>>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
[2, 3, 4, 5, 6]
>>> l._callmethod('__getitem__', (20,)) # equivalent to l[20]
Traceback (most recent call last):
...
IndexError: list index out of range
```
`_getvalue`()
Return a copy of the referent.
If the referent is unpicklable then this will raise an exception.
`__repr__`()
Return a representation of the proxy object.
`__str__`()
Return the representation of the referent.
#### Cleanup
A proxy object uses a weakref callback so that when it gets garbage collected it deregisters itself from the manager which owns its referent.
A shared object gets deleted from the manager process when there are no longer any proxies referring to it.
### Process Pools
One can create a pool of processes which will carry out tasks submitted to it with the [`Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool") class.
*class* `multiprocessing.pool.Pool`(\[*processes*\[, *initializer*\[, *initargs*\[, *maxtasksperchild*\[, *context*\]\]\]\]\])
A process pool object which controls a pool of worker processes to which jobs can be submitted. It supports asynchronous results with timeouts and callbacks and has a parallel map implementation.
*processes* is the number of worker processes to use. If *processes* is `None` then the number returned by [`os.cpu_count()`](os.xhtml#os.cpu_count "os.cpu_count") is used.
If *initializer* is not `None` then each worker process will call `initializer(*initargs)` when it starts.
*maxtasksperchild* is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default *maxtasksperchild* is `None`, which means worker processes will live as long as the pool.
*context* can be used to specify the context used for starting the worker processes. Usually a pool is created using the function `multiprocessing.Pool()` or the [`Pool()`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool") method of a context object. In both cases *context* is set appropriately.
Note that the methods of the pool object should only be called by the process which created the pool.
New in version 3.2: *maxtasksperchild*
New in version 3.4: *context*
Note
Worker processes within a [`Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool") typically live for the complete duration of the Pool's work queue. A frequent pattern found in other systems (such as Apache, mod\_wsgi, etc) to free resources held by workers is to allow a worker within a pool to complete only a set amount of work before exiting, being cleaned up and a new process being spawned to replace the old one. The *maxtasksperchild* argument to the [`Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool") exposes this ability to the end user.
`apply`(*func*\[, *args*\[, *kwds*\]\])
Call *func* with arguments *args* and keyword arguments *kwds*. It blocks until the result is ready. Given this blocks, [`apply_async()`](#multiprocessing.pool.Pool.apply_async "multiprocessing.pool.Pool.apply_async") is better suited for performing work in parallel. Additionally, *func* is only executed in one of the workers of the pool.
`apply_async`(*func*\[, *args*\[, *kwds*\[, *callback*\[, *error\_callback*\]\]\]\])
A variant of the [`apply()`](#multiprocessing.pool.Pool.apply "multiprocessing.pool.Pool.apply") method which returns a result object.
If *callback* is specified then it should be a callable which accepts a single argument. When the result becomes ready *callback* is applied to it, that is unless the call failed, in which case the *error\_callback* is applied instead.
If *error\_callback* is specified then it should be a callable which accepts a single argument. If the target function fails, then the *error\_callback* is called with the exception instance.
Callbacks should complete immediately since otherwise the thread which handles the results will get blocked.
`map`(*func*, *iterable*\[, *chunksize*\])
A parallel equivalent of the [`map()`](functions.xhtml#map "map") built-in function (it supports only one *iterable* argument though). It blocks until the result is ready.
This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks. The (approximate) size of these chunks can be specified by setting *chunksize* to a positive integer.
Note that it may cause high memory usage for very long iterables. Consider using [`imap()`](#multiprocessing.pool.Pool.imap "multiprocessing.pool.Pool.imap") or [`imap_unordered()`](#multiprocessing.pool.Pool.imap_unordered "multiprocessing.pool.Pool.imap_unordered") with an explicit *chunksize* option for better efficiency.
`map_async`(*func*, *iterable*\[, *chunksize*\[, *callback*\[, *error\_callback*\]\]\])
A variant of the [`map()`](#multiprocessing.pool.Pool.map "multiprocessing.pool.Pool.map") method which returns a result object.
If *callback* is specified then it should be a callable which accepts a single argument. When the result becomes ready *callback* is applied to it, that is unless the call failed, in which case the *error\_callback* is applied instead.
If *error\_callback* is specified then it should be a callable which accepts a single argument. If the target function fails, then the *error\_callback* is called with the exception instance.
Callbacks should complete immediately since otherwise the thread which handles the results will get blocked.
`imap`(*func*, *iterable*\[, *chunksize*\])
A lazier version of [`map()`](#multiprocessing.pool.Pool.map "multiprocessing.pool.Pool.map").
The *chunksize* argument is the same as the one used by the [`map()`](#multiprocessing.pool.Pool.map "multiprocessing.pool.Pool.map") method. For very long iterables using a large value for *chunksize* can make the job complete **much** faster than using the default value of `1`.
Also if *chunksize* is `1` then the `next()` method of the iterator returned by the [`imap()`](#multiprocessing.pool.Pool.imap "multiprocessing.pool.Pool.imap") method has an optional *timeout* parameter: `next(timeout)` will raise [`multiprocessing.TimeoutError`](#multiprocessing.TimeoutError "multiprocessing.TimeoutError") if the result cannot be returned within *timeout* seconds.
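For example, a minimal sketch of lazy iteration with an explicit *chunksize* (the function and pool size here are illustrative):

```
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    with Pool(2) as pool:
        # results are yielded one at a time, in input order, instead of
        # being collected into one list as with map()
        for res in pool.imap(f, range(5), chunksize=2):
            print(res)
```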
`imap_unordered`(*func*, *iterable*\[, *chunksize*\])
The same as [`imap()`](#multiprocessing.pool.Pool.imap "multiprocessing.pool.Pool.imap") except that the ordering of the results from the returned iterator should be considered arbitrary. (Only when there is only one worker process is the order guaranteed to be "correct".)
`starmap`(*func*, *iterable*\[, *chunksize*\])
Like [`map()`](functions.xhtml#map "map") except that the elements of the *iterable* are expected to be iterables that are unpacked as arguments.
Hence an *iterable* of `[(1,2), (3, 4)]` results in:
```
[func(1,2),
func(3,4)]
```
New in version 3.3.
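For example, a small sketch (the `add` function and pool size are illustrative):

```
from multiprocessing import Pool

def add(a, b):
    return a + b

if __name__ == '__main__':
    with Pool(2) as pool:
        # each tuple in the iterable is unpacked into add's arguments
        print(pool.starmap(add, [(1, 2), (3, 4)]))  # prints "[3, 7]"
```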
`starmap_async`(*func*, *iterable*\[, *chunksize*\[, *callback*\[, *error\_callback*\]\]\])
A combination of [`starmap()`](#multiprocessing.pool.Pool.starmap "multiprocessing.pool.Pool.starmap") and [`map_async()`](#multiprocessing.pool.Pool.map_async "multiprocessing.pool.Pool.map_async") that iterates over *iterable* of iterables and calls *func* with the iterables unpacked. Returns a result object.
New in version 3.3.
`close`()
Prevents any more tasks from being submitted to the pool. Once all the tasks have been completed the worker processes will exit.
`terminate`()
Stops the worker processes immediately without completing outstanding work. When the pool object is garbage collected [`terminate()`](#multiprocessing.pool.Pool.terminate "multiprocessing.pool.Pool.terminate") will be called immediately.
`join`()
Wait for the worker processes to exit. One must call [`close()`](#multiprocessing.pool.Pool.close "multiprocessing.pool.Pool.close") or [`terminate()`](#multiprocessing.pool.Pool.terminate "multiprocessing.pool.Pool.terminate") before using [`join()`](#multiprocessing.pool.Pool.join "multiprocessing.pool.Pool.join").
New in version 3.3: Pool objects now support the context management protocol -- see [Context Manager Types](stdtypes.xhtml#typecontextmanager). [`__enter__()`](stdtypes.xhtml#contextmanager.__enter__ "contextmanager.__enter__") returns the pool object, and [`__exit__()`](stdtypes.xhtml#contextmanager.__exit__ "contextmanager.__exit__") calls [`terminate()`](#multiprocessing.pool.Pool.terminate "multiprocessing.pool.Pool.terminate").
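Outside a `with` block, the usual explicit shutdown sequence can be sketched as follows (the function and pool size are illustrative):

```
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    pool = Pool(2)
    result = pool.map_async(f, range(5))
    pool.close()         # no further tasks may be submitted
    pool.join()          # wait for the workers to exit
    print(result.get())  # prints "[0, 1, 4, 9, 16]"
```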
*class* `multiprocessing.pool.AsyncResult`
The class of the result returned by [`Pool.apply_async()`](#multiprocessing.pool.Pool.apply_async "multiprocessing.pool.Pool.apply_async") and [`Pool.map_async()`](#multiprocessing.pool.Pool.map_async "multiprocessing.pool.Pool.map_async").
`get`(\[*timeout*\])
Return the result when it arrives. If *timeout* is not `None` and the result does not arrive within *timeout* seconds then [`multiprocessing.TimeoutError`](#multiprocessing.TimeoutError "multiprocessing.TimeoutError") is raised. If the remote call raised an exception then that exception will be reraised by [`get()`](#multiprocessing.pool.AsyncResult.get "multiprocessing.pool.AsyncResult.get").
`wait`(\[*timeout*\])
Wait until the result is available or until *timeout* seconds pass.
`ready`()
Return whether the call has completed.
`successful`()
Return whether the call completed without raising an exception. Will raise [`AssertionError`](exceptions.xhtml#AssertionError "AssertionError") if the result is not ready.
The following example demonstrates the use of a pool:
```
from multiprocessing import Pool
import time
def f(x):
return x*x
if __name__ == '__main__':
with Pool(processes=4) as pool: # start 4 worker processes
result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
print(result.get(timeout=1)) # prints "100" unless your computer is *very* slow
print(pool.map(f, range(10))) # prints "[0, 1, 4,..., 81]"
it = pool.imap(f, range(10))
print(next(it)) # prints "0"
print(next(it)) # prints "1"
print(it.next(timeout=1)) # prints "4" unless your computer is *very* slow
result = pool.apply_async(time.sleep, (10,))
print(result.get(timeout=1)) # raises multiprocessing.TimeoutError
```
### Listeners and Clients
Usually message passing between processes is done using queues or by using [`Connection`](#multiprocessing.connection.Connection "multiprocessing.connection.Connection") objects returned by [`Pipe()`](#multiprocessing.Pipe "multiprocessing.Pipe").
However, the [`multiprocessing.connection`](#module-multiprocessing.connection "multiprocessing.connection: API for dealing with sockets.") module allows some extra flexibility. It basically gives a high level message oriented API for dealing with sockets or Windows named pipes. It also has support for *digest authentication* using the [`hmac`](hmac.xhtml#module-hmac "hmac: Keyed-Hashing for Message Authentication (HMAC) implementation") module, and for polling multiple connections at the same time.
`multiprocessing.connection.deliver_challenge`(*connection*, *authkey*)
Send a randomly generated message to the other end of the connection and wait for a reply.
If the reply matches the digest of the message using *authkey* as the key then a welcome message is sent to the other end of the connection. Otherwise [`AuthenticationError`](#multiprocessing.AuthenticationError "multiprocessing.AuthenticationError") is raised.
`multiprocessing.connection.answer_challenge`(*connection*, *authkey*)
Receive a message, calculate the digest of the message using *authkey* as the key, and then send the digest back.
If a welcome message is not received, then [`AuthenticationError`](#multiprocessing.AuthenticationError "multiprocessing.AuthenticationError") is raised.
`multiprocessing.connection.Client`(*address*\[, *family*\[, *authkey*\]\])
Attempt to set up a connection to the listener which is using address *address*, returning a [`Connection`](#multiprocessing.connection.Connection "multiprocessing.connection.Connection").
The type of the connection is determined by the *family* argument, but this can generally be omitted since it can usually be inferred from the format of *address*. (See [Address Formats](#multiprocessing-address-formats).)
If *authkey* is given and not None, it should be a byte string and will be used as the secret key for an HMAC-based authentication challenge. No authentication is done if *authkey* is None. [`AuthenticationError`](#multiprocessing.AuthenticationError "multiprocessing.AuthenticationError") is raised if authentication fails. See [Authentication keys](#multiprocessing-auth-keys).
*class* `multiprocessing.connection.Listener`(\[*address*\[, *family*\[, *backlog*\[, *authkey*\]\]\]\])
A wrapper for a bound socket or Windows named pipe which is 'listening' for connections.
*address* is the address to be used by the bound socket or named pipe of the listener object.
Note
If an address of '0.0.0.0' is used, the address will not be a connectable end point on Windows. If you require a connectable end-point, you should use '127.0.0.1'.
*family* is the type of socket (or named pipe) to use. This can be one of the strings `'AF_INET'` (for a TCP socket), `'AF_UNIX'` (for a Unix domain socket) or `'AF_PIPE'` (for a Windows named pipe). Of these only the first is guaranteed to be available. If *family* is `None` then the family is inferred from the format of *address*. If *address* is also `None` then a default is chosen. This default is the family which is assumed to be the fastest available. See [Address Formats](#multiprocessing-address-formats). Note that if *family* is `'AF_UNIX'` and address is `None` then the socket will be created in a private temporary directory created using [`tempfile.mkstemp()`](tempfile.xhtml#tempfile.mkstemp "tempfile.mkstemp").
If the listener object uses a socket then *backlog* (1 by default) is passed to the [`listen()`](socket.xhtml#socket.socket.listen "socket.socket.listen") method of the socket once it has been bound.
If *authkey* is given and not None, it should be a byte string and will be used as the secret key for an HMAC-based authentication challenge. No authentication is done if *authkey* is None. [`AuthenticationError`](#multiprocessing.AuthenticationError "multiprocessing.AuthenticationError") is raised if authentication fails. See [Authentication keys](#multiprocessing-auth-keys).
`accept`()
Accept a connection on the bound socket or named pipe of the listener object and return a [`Connection`](#multiprocessing.connection.Connection "multiprocessing.connection.Connection") object. If authentication is attempted and fails, then [`AuthenticationError`](#multiprocessing.AuthenticationError "multiprocessing.AuthenticationError") is raised.
`close`()
Close the bound socket or named pipe of the listener object. This is called automatically when the listener is garbage collected. However it is advisable to call it explicitly.
Listener objects have the following read-only properties:
`address`
The address which is being used by the Listener object.
`last_accepted`
The address from which the last accepted connection came. If this is unavailable then it is `None`.
New in version 3.3: Listener objects now support the context management protocol -- see [Context Manager Types](stdtypes.xhtml#typecontextmanager). [`__enter__()`](stdtypes.xhtml#contextmanager.__enter__ "contextmanager.__enter__") returns the listener object, and [`__exit__()`](stdtypes.xhtml#contextmanager.__exit__ "contextmanager.__exit__") calls [`close()`](#multiprocessing.connection.Listener.close "multiprocessing.connection.Listener.close").
`multiprocessing.connection.wait`(*object\_list*, *timeout=None*)
Wait till an object in *object\_list* is ready. Returns the list of those objects in *object\_list* which are ready. If *timeout* is a float then the call blocks for at most that many seconds. If *timeout* is `None` then it will block for an unlimited period. A negative timeout is equivalent to a zero timeout.
For both Unix and Windows, an object can appear in *object\_list* if it is
- a readable [`Connection`](#multiprocessing.connection.Connection "multiprocessing.connection.Connection") object;
- a connected and readable [`socket.socket`](socket.xhtml#socket.socket "socket.socket") object; or
- the [`sentinel`](#multiprocessing.Process.sentinel "multiprocessing.Process.sentinel") attribute of a [`Process`](#multiprocessing.Process "multiprocessing.Process") object.
A connection or socket object is ready when there is data available to be read from it, or the other end has been closed.
**Unix**: `wait(object_list, timeout)` is almost equivalent to `select.select(object_list, [], [], timeout)`. The difference is that, if [`select.select()`](select.xhtml#select.select "select.select") is interrupted by a signal, it can raise [`OSError`](exceptions.xhtml#OSError "OSError") with an error number of `EINTR`, whereas [`wait()`](#multiprocessing.connection.wait "multiprocessing.connection.wait") will not.
**Windows**: An item in *object\_list* must either be an integer handle which is waitable (according to the definition used by the documentation of the Win32 function `WaitForMultipleObjects()`) or it can be an object with a `fileno()` method which returns a socket handle or pipe handle. (Note that pipe handles and socket handles are **not** waitable handles.)
New in version 3.3.
**Examples**
The following server code creates a listener which uses `'secret password'` as an authentication key. It then waits for a connection and sends some data to the client:
```
from multiprocessing.connection import Listener
from array import array
address = ('localhost', 6000) # family is deduced to be 'AF_INET'
with Listener(address, authkey=b'secret password') as listener:
with listener.accept() as conn:
print('connection accepted from', listener.last_accepted)
conn.send([2.25, None, 'junk', float])
conn.send_bytes(b'hello')
conn.send_bytes(array('i', [42, 1729]))
```
The following code connects to the server and receives some data from the server:
```
from multiprocessing.connection import Client
from array import array
address = ('localhost', 6000)
with Client(address, authkey=b'secret password') as conn:
print(conn.recv()) # => [2.25, None, 'junk', float]
print(conn.recv_bytes()) # => 'hello'
arr = array('i', [0, 0, 0, 0, 0])
print(conn.recv_bytes_into(arr)) # => 8
print(arr) # => array('i', [42, 1729, 0, 0, 0])
```
The following code uses [`wait()`](#multiprocessing.connection.wait "multiprocessing.connection.wait") to wait for messages from multiple processes at once:
```
import time, random
from multiprocessing import Process, Pipe, current_process
from multiprocessing.connection import wait
def foo(w):
for i in range(10):
w.send((i, current_process().name))
w.close()
if __name__ == '__main__':
readers = []
for i in range(4):
r, w = Pipe(duplex=False)
readers.append(r)
p = Process(target=foo, args=(w,))
p.start()
# We close the writable end of the pipe now to be sure that
# p is the only process which owns a handle for it. This
# ensures that when p closes its handle for the writable end,
# wait() will promptly report the readable end as being ready.
w.close()
while readers:
for r in wait(readers):
try:
msg = r.recv()
except EOFError:
readers.remove(r)
else:
print(msg)
```
#### Address Formats
- An `'AF_INET'` address is a tuple of the form `(hostname, port)` where *hostname* is a string and *port* is an integer.
- An `'AF_UNIX'` address is a string representing a filename on the filesystem.
- An `'AF_PIPE'` address is a string of the form `r'\\.\pipe{PipeName}'`. To use [`Client()`](#multiprocessing.connection.Client "multiprocessing.connection.Client") to connect to a named pipe on a remote computer called *ServerName* one should use an address of the form `r'\\ServerName\pipe{PipeName}'` instead.
Note that any string beginning with two backslashes is assumed by default to be an `'AF_PIPE'` address rather than an `'AF_UNIX'` address.
### Authentication keys
When one uses [`Connection.recv`](#multiprocessing.connection.Connection.recv "multiprocessing.connection.Connection.recv"), the data received is automatically unpickled. Unfortunately unpickling data from an untrusted source is a security risk. Therefore [`Listener`](#multiprocessing.connection.Listener "multiprocessing.connection.Listener") and [`Client()`](#multiprocessing.connection.Client "multiprocessing.connection.Client") use the [`hmac`](hmac.xhtml#module-hmac "hmac: Keyed-Hashing for Message Authentication (HMAC) implementation") module to provide digest authentication.
An authentication key is a byte string which can be thought of as a password: once a connection is established both ends will demand proof that the other knows the authentication key. (Demonstrating that both ends are using the same key does **not** involve sending the key over the connection.)
If authentication is requested but no authentication key is specified then the return value of `current_process().authkey` is used (see [`Process`](#multiprocessing.Process "multiprocessing.Process")). This value will be automatically inherited by any [`Process`](#multiprocessing.Process "multiprocessing.Process") object that the current process creates. This means that (by default) all processes of a multi-process program will share a single authentication key which can be used when setting up connections between themselves.
Suitable authentication keys can also be generated by using [`os.urandom()`](os.xhtml#os.urandom "os.urandom").
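For instance, a sketch of generating and installing such a key (the 32-byte length is an arbitrary but reasonable choice):

```
import os
from multiprocessing import current_process

key = os.urandom(32)             # 32 unpredictable bytes
current_process().authkey = key  # inherited by child processes by default
```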
### Logging
Some support for logging is available. Note, however, that the [`logging`](logging.xhtml#module-logging "logging: Flexible event logging system for applications.") package does not use process shared locks so it is possible (depending on the handler type) for messages from different processes to get mixed up.
`multiprocessing.get_logger`()
Returns the logger used by [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism."). If necessary, a new one will be created.
When first created the logger has level `logging.NOTSET` and no default handler. Messages sent to this logger will not by default propagate to the root logger.
Note that on Windows child processes will only inherit the level of the parent process's logger -- any other customization of the logger will not be inherited.
`multiprocessing.log_to_stderr`()
This function performs a call to [`get_logger()`](#multiprocessing.get_logger "multiprocessing.get_logger") but in addition to returning the logger created by get\_logger, it adds a handler which sends output to [`sys.stderr`](sys.xhtml#sys.stderr "sys.stderr") using format `'[%(levelname)s/%(processName)s] %(message)s'`.
Below is an example session with logging turned on:
```
>>> import multiprocessing, logging
>>> logger = multiprocessing.log_to_stderr()
>>> logger.setLevel(logging.INFO)
>>> logger.warning('doomed')
[WARNING/MainProcess] doomed
>>> m = multiprocessing.Manager()
[INFO/SyncManager-...] child process calling self.run()
[INFO/SyncManager-...] created temp directory /.../pymp-...
[INFO/SyncManager-...] manager serving at '/.../listener-...'
>>> del m
[INFO/MainProcess] sending shutdown message to manager
[INFO/SyncManager-...] manager exiting with exitcode 0
```
For a full table of logging levels, see the [`logging`](logging.xhtml#module-logging "logging: Flexible event logging system for applications.") module.
### The [`multiprocessing.dummy`](#module-multiprocessing.dummy "multiprocessing.dummy: Dumb wrapper around threading.") module
[`multiprocessing.dummy`](#module-multiprocessing.dummy "multiprocessing.dummy: Dumb wrapper around threading.") replicates the API of [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") but is no more than a wrapper around the [`threading`](threading.xhtml#module-threading "threading: Thread-based parallelism.") module.
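Because its workers are threads rather than processes, no `if __name__ == '__main__'` guard is required and arguments need not be picklable; a minimal sketch (the pool size is illustrative):

```
from multiprocessing.dummy import Pool  # same API, backed by threads

def f(x):
    return x * x

with Pool(4) as pool:
    print(pool.map(f, range(5)))  # prints "[0, 1, 4, 9, 16]"
```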
## Programming guidelines
There are certain guidelines and idioms which should be adhered to when using [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.").
### All start methods
The following applies to all start methods.
Avoid shared state
> As far as possible one should try to avoid shifting large amounts of data between processes.
>
> It is probably best to stick to using queues or pipes for communication between processes rather than using the lower level synchronization primitives.
Picklability
> Ensure that the arguments to the methods of proxies are picklable.
Thread safety of proxies
> Do not use a proxy object from more than one thread unless you protect it with a lock.
>
> (There is never a problem with different processes using the *same* proxy.)
Joining zombie processes
> On Unix when a process finishes but has not been joined it becomes a zombie. There should never be very many because each time a new process starts (or [`active_children()`](#multiprocessing.active_children "multiprocessing.active_children") is called) all completed processes which have not yet been joined will be joined. Also calling a finished process's [`Process.is_alive`](#multiprocessing.Process.is_alive "multiprocessing.Process.is_alive") will join the process. Even so it is probably good practice to explicitly join all the processes that you start.
Better to inherit than pickle/unpickle
> When using the *spawn* or *forkserver* start methods many types from [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.
Avoid terminating processes
> Using the [`Process.terminate`](#multiprocessing.Process.terminate "multiprocessing.Process.terminate") method to stop a process is liable to cause any shared resources (such as locks, semaphores, pipes and queues) currently being used by the process to become broken or unavailable to other processes.
>
> Therefore it is probably best to only consider using [`Process.terminate`](#multiprocessing.Process.terminate "multiprocessing.Process.terminate") on processes which never use any shared resources.
Joining processes that use queues
> Bear in mind that a process that has put items in a queue will wait before terminating until all the buffered items are fed by the "feeder" thread to the underlying pipe. (The child process can call the [`Queue.cancel_join_thread`](#multiprocessing.Queue.cancel_join_thread "multiprocessing.Queue.cancel_join_thread") method of the queue to avoid this behaviour.)
>
> This means that whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined. Otherwise you cannot be sure that processes which have put items on the queue will terminate. Remember also that non-daemonic processes will be joined automatically.
>
> An example which will deadlock is the following:
>
>
> ```
> from multiprocessing import Process, Queue
>
> def f(q):
> q.put('X' * 1000000)
>
> if __name__ == '__main__':
> queue = Queue()
> p = Process(target=f, args=(queue,))
> p.start()
> p.join() # this deadlocks
> obj = queue.get()
>
> ```
>
>
>
>
> A fix here would be to swap the last two lines (or simply remove the `p.join()` line).
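> Applying that fix, a version of the example that does not deadlock reads:
>
>
> ```
> from multiprocessing import Process, Queue
>
> def f(q):
>     q.put('X' * 1000000)
>
> if __name__ == '__main__':
>     queue = Queue()
>     p = Process(target=f, args=(queue,))
>     p.start()
>     obj = queue.get()  # drain the queue first...
>     p.join()           # ...so joining cannot block on the feeder thread
>
> ```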
Explicitly pass resources to child processes
> On Unix using the *fork* start method, a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process.
>
> Apart from making the code (potentially) compatible with Windows and the other start methods this also ensures that as long as the child process is still alive the object will not be garbage collected in the parent process. This might be important if some resource is freed when the object is garbage collected in the parent process.
>
> So, for instance:
>
>
> ```
> from multiprocessing import Process, Lock
>
> def f():
> ... do something using "lock" ...
>
> if __name__ == '__main__':
> lock = Lock()
> for i in range(10):
> Process(target=f).start()
>
> ```
>
>
>
>
> should be rewritten as:
>
>
> ```
> from multiprocessing import Process, Lock
>
> def f(l):
> ... do something using "l" ...
>
> if __name__ == '__main__':
> lock = Lock()
> for i in range(10):
> Process(target=f, args=(lock,)).start()
>
> ```
Beware of replacing [`sys.stdin`](sys.xhtml#sys.stdin "sys.stdin") with a "file like object"
> [`multiprocessing`](#module-multiprocessing "multiprocessing: Process-based parallelism.") originally unconditionally called:
>
>
> ```
> os.close(sys.stdin.fileno())
>
> ```
>
>
>
>
> in the `multiprocessing.Process._bootstrap()` method --- this resulted in issues with processes-in-processes. This has been changed to:
>
>
> ```
> sys.stdin.close()
> sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)
>
> ```
>
>
>
>
> Which solves the fundamental issue of processes colliding with each other resulting in a bad file descriptor error, but introduces a potential danger to applications which replace [`sys.stdin()`](sys.xhtml#sys.stdin "sys.stdin") with a "file-like object" with output buffering. This danger is that if multiple processes call [`close()`](io.xhtml#io.IOBase.close "io.IOBase.close") on this file-like object, it could result in the same data being flushed to the object multiple times, resulting in corruption.
>
> If you write a file-like object and implement your own caching, you can make it fork-safe by storing the pid whenever you append to the cache, and discarding the cache when the pid changes. For example:
>
>
> ```
> @property
> def cache(self):
>     pid = os.getpid()
>     if pid != self._pid:
>         self._pid = pid
>         self._cache = []
>     return self._cache
>
>
>
>
>
> For more information, see [bpo-5155](https://bugs.python.org/issue5155) \[https://bugs.python.org/issue5155\], [bpo-5313](https://bugs.python.org/issue5313) \[https://bugs.python.org/issue5313\] and [bpo-5331](https://bugs.python.org/issue5331) \[https://bugs.python.org/issue5331\]
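The `cache` property above is shown out of context. A minimal self-contained sketch of the same pid-keyed pattern might look like the following; the class name `ForkSafeBuffer` and its `write()` method are illustrative, not part of the standard library:

```python
import os

class ForkSafeBuffer:
    """Hypothetical file-like object with a fork-safe write cache."""

    def __init__(self):
        self._pid = os.getpid()
        self._cache = []

    @property
    def cache(self):
        # After a fork the child sees a different pid, so any buffered
        # data inherited from the parent is discarded instead of being
        # flushed a second time.
        pid = os.getpid()
        if pid != self._pid:
            self._pid = pid
            self._cache = []
        return self._cache

    def write(self, data):
        self.cache.append(data)

buf = ForkSafeBuffer()
buf.write('hello')
assert buf.cache == ['hello']
```

Only the parent ever flushes data it buffered itself; a forked child starts from an empty cache.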
### The *spawn* and *forkserver* start methods
There are a few extra restrictions which don't apply to the *fork* start method.
More picklability
> Ensure that all arguments to `Process.__init__()` are picklable. Also, if you subclass [`Process`](#multiprocessing.Process "multiprocessing.Process") then make sure that instances will be picklable when the [`Process.start`](#multiprocessing.Process.start "multiprocessing.Process.start") method is called.
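Picklability can be checked without starting any processes at all, by round-tripping the intended target through [`pickle`](pickle.xhtml#module-pickle) directly. A quick sketch (the function name `work` is illustrative):

```python
import pickle

def work(x):
    # Module-level functions are pickled by reference (module name plus
    # qualified name), so they are safe to pass as a Process target
    # under the spawn and forkserver start methods.
    return x * x

payload = pickle.dumps(work)
assert pickle.loads(payload)(6) == 36

# A lambda has no importable name, so pickling it fails -- and so would
# passing it as Process(target=...) under spawn or forkserver.
try:
    pickle.dumps(lambda x: x * x)
    raise AssertionError('expected pickling to fail')
except (pickle.PicklingError, AttributeError):
    pass
```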
Global variables
> Bear in mind that if code run in a child process tries to access a global variable, then the value it sees (if any) may not be the same as the value in the parent process at the time that [`Process.start`](#multiprocessing.Process.start "multiprocessing.Process.start") was called.
>
> However, global variables which are just module level constants cause no problems.
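One start-method-independent way around this is to snapshot the value in the parent and pass it to the child through `args`. A minimal sketch (the names `retries` and `child` are illustrative):

```python
from multiprocessing import Process, Queue

retries = 3   # module-level global, mutated by the parent below

def child(q, retries_snapshot):
    # Don't rely on the global `retries` here: under spawn/forkserver
    # this module is re-imported in the child, so mutations the parent
    # made after import time are not visible.  The value received via
    # `args` is correct under every start method.
    q.put(retries_snapshot * 2)

if __name__ == '__main__':
    retries = 5
    q = Queue()
    p = Process(target=child, args=(q, retries))
    p.start()
    print(q.get())   # 10 -- computed from the explicitly passed value
    p.join()
```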
Safe importing of main module
> Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
>
> For example, using the *spawn* or *forkserver* start method running the following module would fail with a [`RuntimeError`](exceptions.xhtml#RuntimeError "RuntimeError"):
>
>
> ```
> from multiprocessing import Process
>
> def foo():
>     print('hello')
>
> p = Process(target=foo)
> p.start()
>
> ```
>
>
>
>
> Instead one should protect the "entry point" of the program by using `if __name__ == '__main__':` as follows:
>
>
> ```
> from multiprocessing import Process, freeze_support, set_start_method
>
> def foo():
>     print('hello')
>
> if __name__ == '__main__':
>     freeze_support()
>     set_start_method('spawn')
>     p = Process(target=foo)
>     p.start()
>
> ```
>
>
>
>
> (The `freeze_support()` line can be omitted if the program will be run normally instead of frozen.)
>
> This allows the newly spawned Python interpreter to safely import the module and then run the module's `foo()` function.
>
> Similar restrictions apply if a pool or manager is created in the main module.
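The same guard applies to pools. A minimal sketch of creating a [`Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool") safely in the main module:

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    # The pool is created only under the __main__ guard, so a freshly
    # spawned worker can import this module without creating another
    # pool (and another, recursively) as a side effect.
    with Pool(2) as pool:
        print(pool.map(square, range(5)))   # prints [0, 1, 4, 9, 16]
```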
## Examples
Demonstration of how to create and use customized managers and proxies:
```
from multiprocessing import freeze_support
from multiprocessing.managers import BaseManager, BaseProxy
import operator

##

class Foo:
    def f(self):
        print('you called Foo.f()')
    def g(self):
        print('you called Foo.g()')
    def _h(self):
        print('you called Foo._h()')

# A simple generator function
def baz():
    for i in range(10):
        yield i*i

# Proxy type for generator objects
class GeneratorProxy(BaseProxy):
    _exposed_ = ['__next__']
    def __iter__(self):
        return self
    def __next__(self):
        return self._callmethod('__next__')

# Function to return the operator module
def get_operator_module():
    return operator

##

class MyManager(BaseManager):
    pass

# register the Foo class; make `f()` and `g()` accessible via proxy
MyManager.register('Foo1', Foo)

# register the Foo class; make `g()` and `_h()` accessible via proxy
MyManager.register('Foo2', Foo, exposed=('g', '_h'))

# register the generator function baz; use `GeneratorProxy` to make proxies
MyManager.register('baz', baz, proxytype=GeneratorProxy)

# register get_operator_module(); make public functions accessible via proxy
MyManager.register('operator', get_operator_module)

##

def test():
    manager = MyManager()
    manager.start()

    print('-' * 20)

    f1 = manager.Foo1()
    f1.f()
    f1.g()
    assert not hasattr(f1, '_h')
    assert sorted(f1._exposed_) == sorted(['f', 'g'])

    print('-' * 20)

    f2 = manager.Foo2()
    f2.g()
    f2._h()
    assert not hasattr(f2, 'f')
    assert sorted(f2._exposed_) == sorted(['g', '_h'])

    print('-' * 20)

    it = manager.baz()
    for i in it:
        print('<%d>' % i, end=' ')
    print()

    print('-' * 20)

    op = manager.operator()
    print('op.add(23, 45) =', op.add(23, 45))
    print('op.pow(2, 94) =', op.pow(2, 94))
    print('op._exposed_ =', op._exposed_)

##

if __name__ == '__main__':
    freeze_support()
    test()
```
Using [`Pool`](#multiprocessing.pool.Pool "multiprocessing.pool.Pool"):
```
import multiprocessing
import time
import random
import sys

#
# Functions used by test code
#

def calculate(func, args):
    result = func(*args)
    return '%s says that %s%s = %s' % (
        multiprocessing.current_process().name,
        func.__name__, args, result
        )

def calculatestar(args):
    return calculate(*args)

def mul(a, b):
    time.sleep(0.5 * random.random())
    return a * b

def plus(a, b):
    time.sleep(0.5 * random.random())
    return a + b

def f(x):
    return 1.0 / (x - 5.0)

def pow3(x):
    return x ** 3

def noop(x):
    pass

#
# Test code
#

def test():
    PROCESSES = 4
    print('Creating pool with %d processes\n' % PROCESSES)

    with multiprocessing.Pool(PROCESSES) as pool:
        #
        # Tests
        #

        TASKS = [(mul, (i, 7)) for i in range(10)] + \
                [(plus, (i, 8)) for i in range(10)]

        results = [pool.apply_async(calculate, t) for t in TASKS]
        imap_it = pool.imap(calculatestar, TASKS)
        imap_unordered_it = pool.imap_unordered(calculatestar, TASKS)

        print('Ordered results using pool.apply_async():')
        for r in results:
            print('\t', r.get())
        print()

        print('Ordered results using pool.imap():')
        for x in imap_it:
            print('\t', x)
        print()

        print('Unordered results using pool.imap_unordered():')
        for x in imap_unordered_it:
            print('\t', x)
        print()

        print('Ordered results using pool.map() --- will block till complete:')
        for x in pool.map(calculatestar, TASKS):
            print('\t', x)
        print()

        #
        # Test error handling
        #

        print('Testing error handling:')

        try:
            print(pool.apply(f, (5,)))
        except ZeroDivisionError:
            print('\tGot ZeroDivisionError as expected from pool.apply()')
        else:
            raise AssertionError('expected ZeroDivisionError')

        try:
            print(pool.map(f, list(range(10))))
        except ZeroDivisionError:
            print('\tGot ZeroDivisionError as expected from pool.map()')
        else:
            raise AssertionError('expected ZeroDivisionError')

        try:
            print(list(pool.imap(f, list(range(10)))))
        except ZeroDivisionError:
            print('\tGot ZeroDivisionError as expected from list(pool.imap())')
        else:
            raise AssertionError('expected ZeroDivisionError')

        it = pool.imap(f, list(range(10)))
        for i in range(10):
            try:
                x = next(it)
            except ZeroDivisionError:
                if i == 5:
                    pass
            except StopIteration:
                break
            else:
                if i == 5:
                    raise AssertionError('expected ZeroDivisionError')
        assert i == 9
        print('\tGot ZeroDivisionError as expected from IMapIterator.next()')
        print()

        #
        # Testing timeouts
        #

        print('Testing ApplyResult.get() with timeout:', end=' ')
        res = pool.apply_async(calculate, TASKS[0])
        while 1:
            sys.stdout.flush()
            try:
                sys.stdout.write('\n\t%s' % res.get(0.02))
                break
            except multiprocessing.TimeoutError:
                sys.stdout.write('.')
        print()
        print()

        print('Testing IMapIterator.next() with timeout:', end=' ')
        it = pool.imap(calculatestar, TASKS)
        while 1:
            sys.stdout.flush()
            try:
                sys.stdout.write('\n\t%s' % it.next(0.02))
            except StopIteration:
                break
            except multiprocessing.TimeoutError:
                sys.stdout.write('.')
        print()
        print()

if __name__ == '__main__':
    multiprocessing.freeze_support()
    test()
```
An example showing how to use queues to feed tasks to a collection of worker processes and collect the results:
```
import time
import random

from multiprocessing import Process, Queue, current_process, freeze_support

#
# Function run by worker processes
#

def worker(input, output):
    for func, args in iter(input.get, 'STOP'):
        result = calculate(func, args)
        output.put(result)

#
# Function used to calculate result
#

def calculate(func, args):
    result = func(*args)
    return '%s says that %s%s = %s' % \
        (current_process().name, func.__name__, args, result)

#
# Functions referenced by tasks
#

def mul(a, b):
    time.sleep(0.5*random.random())
    return a * b

def plus(a, b):
    time.sleep(0.5*random.random())
    return a + b

#
#
#

def test():
    NUMBER_OF_PROCESSES = 4
    TASKS1 = [(mul, (i, 7)) for i in range(20)]
    TASKS2 = [(plus, (i, 8)) for i in range(10)]

    # Create queues
    task_queue = Queue()
    done_queue = Queue()

    # Submit tasks
    for task in TASKS1:
        task_queue.put(task)

    # Start worker processes
    for i in range(NUMBER_OF_PROCESSES):
        Process(target=worker, args=(task_queue, done_queue)).start()

    # Get and print results
    print('Unordered results:')
    for i in range(len(TASKS1)):
        print('\t', done_queue.get())

    # Add more tasks using `put()`
    for task in TASKS2:
        task_queue.put(task)

    # Get and print some more results
    for i in range(len(TASKS2)):
        print('\t', done_queue.get())

    # Tell child processes to stop
    for i in range(NUMBER_OF_PROCESSES):
        task_queue.put('STOP')

if __name__ == '__main__':
    freeze_support()
    test()
```
© [Copyright](../copyright.xhtml) 2001-2019, Python Software Foundation.
The Python Software Foundation is a non-profit organization. [Please donate.](https://www.python.org/psf/donations/)
Last updated on May 21, 2019. [Found a bug](../bugs.xhtml)?
Created using [Sphinx](http://sphinx.pocoo.org/) 1.8.4.
- Python文檔內容
- Python 有什么新變化?
- Python 3.7 有什么新變化
- 摘要 - 發布重點
- 新的特性
- 其他語言特性修改
- 新增模塊
- 改進的模塊
- C API 的改變
- 構建的改變
- 性能優化
- 其他 CPython 實現的改變
- 已棄用的 Python 行為
- 已棄用的 Python 模塊、函數和方法
- 已棄用的 C API 函數和類型
- 平臺支持的移除
- API 與特性的移除
- 移除的模塊
- Windows 專屬的改變
- 移植到 Python 3.7
- Python 3.7.1 中的重要變化
- Python 3.7.2 中的重要變化
- Python 3.6 有什么新變化A
- 摘要 - 發布重點
- 新的特性
- 其他語言特性修改
- 新增模塊
- 改進的模塊
- 性能優化
- Build and C API Changes
- 其他改進
- 棄用
- 移除
- 移植到Python 3.6
- Python 3.6.2 中的重要變化
- Python 3.6.4 中的重要變化
- Python 3.6.5 中的重要變化
- Python 3.6.7 中的重要變化
- Python 3.5 有什么新變化
- 摘要 - 發布重點
- 新的特性
- 其他語言特性修改
- 新增模塊
- 改進的模塊
- Other module-level changes
- 性能優化
- Build and C API Changes
- 棄用
- 移除
- Porting to Python 3.5
- Notable changes in Python 3.5.4
- What's New In Python 3.4
- 摘要 - 發布重點
- 新的特性
- 新增模塊
- 改進的模塊
- CPython Implementation Changes
- 棄用
- 移除
- Porting to Python 3.4
- Changed in 3.4.3
- What's New In Python 3.3
- 摘要 - 發布重點
- PEP 405: Virtual Environments
- PEP 420: Implicit Namespace Packages
- PEP 3118: New memoryview implementation and buffer protocol documentation
- PEP 393: Flexible String Representation
- PEP 397: Python Launcher for Windows
- PEP 3151: Reworking the OS and IO exception hierarchy
- PEP 380: Syntax for Delegating to a Subgenerator
- PEP 409: Suppressing exception context
- PEP 414: Explicit Unicode literals
- PEP 3155: Qualified name for classes and functions
- PEP 412: Key-Sharing Dictionary
- PEP 362: Function Signature Object
- PEP 421: Adding sys.implementation
- Using importlib as the Implementation of Import
- 其他語言特性修改
- A Finer-Grained Import Lock
- Builtin functions and types
- 新增模塊
- 改進的模塊
- 性能優化
- Build and C API Changes
- 棄用
- Porting to Python 3.3
- What's New In Python 3.2
- PEP 384: Defining a Stable ABI
- PEP 389: Argparse Command Line Parsing Module
- PEP 391: Dictionary Based Configuration for Logging
- PEP 3148: The concurrent.futures module
- PEP 3147: PYC Repository Directories
- PEP 3149: ABI Version Tagged .so Files
- PEP 3333: Python Web Server Gateway Interface v1.0.1
- 其他語言特性修改
- New, Improved, and Deprecated Modules
- 多線程
- 性能優化
- Unicode
- Codecs
- 文檔
- IDLE
- Code Repository
- Build and C API Changes
- Porting to Python 3.2
- What's New In Python 3.1
- PEP 372: Ordered Dictionaries
- PEP 378: Format Specifier for Thousands Separator
- 其他語言特性修改
- New, Improved, and Deprecated Modules
- 性能優化
- IDLE
- Build and C API Changes
- Porting to Python 3.1
- What's New In Python 3.0
- Common Stumbling Blocks
- Overview Of Syntax Changes
- Changes Already Present In Python 2.6
- Library Changes
- PEP 3101: A New Approach To String Formatting
- Changes To Exceptions
- Miscellaneous Other Changes
- Build and C API Changes
- 性能
- Porting To Python 3.0
- What's New in Python 2.7
- The Future for Python 2.x
- Changes to the Handling of Deprecation Warnings
- Python 3.1 Features
- PEP 372: Adding an Ordered Dictionary to collections
- PEP 378: Format Specifier for Thousands Separator
- PEP 389: The argparse Module for Parsing Command Lines
- PEP 391: Dictionary-Based Configuration For Logging
- PEP 3106: Dictionary Views
- PEP 3137: The memoryview Object
- 其他語言特性修改
- New and Improved Modules
- Build and C API Changes
- Other Changes and Fixes
- Porting to Python 2.7
- New Features Added to Python 2.7 Maintenance Releases
- Acknowledgements
- Python 2.6 有什么新變化
- Python 3.0
- Changes to the Development Process
- PEP 343: The 'with' statement
- PEP 366: Explicit Relative Imports From a Main Module
- PEP 370: Per-user site-packages Directory
- PEP 371: The multiprocessing Package
- PEP 3101: Advanced String Formatting
- PEP 3105: print As a Function
- PEP 3110: Exception-Handling Changes
- PEP 3112: Byte Literals
- PEP 3116: New I/O Library
- PEP 3118: Revised Buffer Protocol
- PEP 3119: Abstract Base Classes
- PEP 3127: Integer Literal Support and Syntax
- PEP 3129: Class Decorators
- PEP 3141: A Type Hierarchy for Numbers
- 其他語言特性修改
- New and Improved Modules
- Deprecations and Removals
- Build and C API Changes
- Porting to Python 2.6
- Acknowledgements
- What's New in Python 2.5
- PEP 308: Conditional Expressions
- PEP 309: Partial Function Application
- PEP 314: Metadata for Python Software Packages v1.1
- PEP 328: Absolute and Relative Imports
- PEP 338: Executing Modules as Scripts
- PEP 341: Unified try/except/finally
- PEP 342: New Generator Features
- PEP 343: The 'with' statement
- PEP 352: Exceptions as New-Style Classes
- PEP 353: Using ssize_t as the index type
- PEP 357: The 'index' method
- 其他語言特性修改
- New, Improved, and Removed Modules
- Build and C API Changes
- Porting to Python 2.5
- Acknowledgements
- What's New in Python 2.4
- PEP 218: Built-In Set Objects
- PEP 237: Unifying Long Integers and Integers
- PEP 289: Generator Expressions
- PEP 292: Simpler String Substitutions
- PEP 318: Decorators for Functions and Methods
- PEP 322: Reverse Iteration
- PEP 324: New subprocess Module
- PEP 327: Decimal Data Type
- PEP 328: Multi-line Imports
- PEP 331: Locale-Independent Float/String Conversions
- 其他語言特性修改
- New, Improved, and Deprecated Modules
- Build and C API Changes
- Porting to Python 2.4
- Acknowledgements
- What's New in Python 2.3
- PEP 218: A Standard Set Datatype
- PEP 255: Simple Generators
- PEP 263: Source Code Encodings
- PEP 273: Importing Modules from ZIP Archives
- PEP 277: Unicode file name support for Windows NT
- PEP 278: Universal Newline Support
- PEP 279: enumerate()
- PEP 282: The logging Package
- PEP 285: A Boolean Type
- PEP 293: Codec Error Handling Callbacks
- PEP 301: Package Index and Metadata for Distutils
- PEP 302: New Import Hooks
- PEP 305: Comma-separated Files
- PEP 307: Pickle Enhancements
- Extended Slices
- 其他語言特性修改
- New, Improved, and Deprecated Modules
- Pymalloc: A Specialized Object Allocator
- Build and C API Changes
- Other Changes and Fixes
- Porting to Python 2.3
- Acknowledgements
- What's New in Python 2.2
- 概述
- PEPs 252 and 253: Type and Class Changes
- PEP 234: Iterators
- PEP 255: Simple Generators
- PEP 237: Unifying Long Integers and Integers
- PEP 238: Changing the Division Operator
- Unicode Changes
- PEP 227: Nested Scopes
- New and Improved Modules
- Interpreter Changes and Fixes
- Other Changes and Fixes
- Acknowledgements
- What's New in Python 2.1
- 概述
- PEP 227: Nested Scopes
- PEP 236: future Directives
- PEP 207: Rich Comparisons
- PEP 230: Warning Framework
- PEP 229: New Build System
- PEP 205: Weak References
- PEP 232: Function Attributes
- PEP 235: Importing Modules on Case-Insensitive Platforms
- PEP 217: Interactive Display Hook
- PEP 208: New Coercion Model
- PEP 241: Metadata in Python Packages
- New and Improved Modules
- Other Changes and Fixes
- Acknowledgements
- What's New in Python 2.0
- 概述
- What About Python 1.6?
- New Development Process
- Unicode
- 列表推導式
- Augmented Assignment
- 字符串的方法
- Garbage Collection of Cycles
- Other Core Changes
- Porting to 2.0
- Extending/Embedding Changes
- Distutils: Making Modules Easy to Install
- XML Modules
- Module changes
- New modules
- IDLE Improvements
- Deleted and Deprecated Modules
- Acknowledgements
- 更新日志
- Python 下一版
- Python 3.7.3 最終版
- Python 3.7.3 發布候選版 1
- Python 3.7.2 最終版
- Python 3.7.2 發布候選版 1
- Python 3.7.1 最終版
- Python 3.7.1 RC 2版本
- Python 3.7.1 發布候選版 1
- Python 3.7.0 正式版
- Python 3.7.0 release candidate 1
- Python 3.7.0 beta 5
- Python 3.7.0 beta 4
- Python 3.7.0 beta 3
- Python 3.7.0 beta 2
- Python 3.7.0 beta 1
- Python 3.7.0 alpha 4
- Python 3.7.0 alpha 3
- Python 3.7.0 alpha 2
- Python 3.7.0 alpha 1
- Python 3.6.6 final
- Python 3.6.6 RC 1
- Python 3.6.5 final
- Python 3.6.5 release candidate 1
- Python 3.6.4 final
- Python 3.6.4 release candidate 1
- Python 3.6.3 final
- Python 3.6.3 release candidate 1
- Python 3.6.2 final
- Python 3.6.2 release candidate 2
- Python 3.6.2 release candidate 1
- Python 3.6.1 final
- Python 3.6.1 release candidate 1
- Python 3.6.0 final
- Python 3.6.0 release candidate 2
- Python 3.6.0 release candidate 1
- Python 3.6.0 beta 4
- Python 3.6.0 beta 3
- Python 3.6.0 beta 2
- Python 3.6.0 beta 1
- Python 3.6.0 alpha 4
- Python 3.6.0 alpha 3
- Python 3.6.0 alpha 2
- Python 3.6.0 alpha 1
- Python 3.5.5 final
- Python 3.5.5 release candidate 1
- Python 3.5.4 final
- Python 3.5.4 release candidate 1
- Python 3.5.3 final
- Python 3.5.3 release candidate 1
- Python 3.5.2 final
- Python 3.5.2 release candidate 1
- Python 3.5.1 final
- Python 3.5.1 release candidate 1
- Python 3.5.0 final
- Python 3.5.0 release candidate 4
- Python 3.5.0 release candidate 3
- Python 3.5.0 release candidate 2
- Python 3.5.0 release candidate 1
- Python 3.5.0 beta 4
- Python 3.5.0 beta 3
- Python 3.5.0 beta 2
- Python 3.5.0 beta 1
- Python 3.5.0 alpha 4
- Python 3.5.0 alpha 3
- Python 3.5.0 alpha 2
- Python 3.5.0 alpha 1
- Python 教程
- 課前甜點
- 使用 Python 解釋器
- 調用解釋器
- 解釋器的運行環境
- Python 的非正式介紹
- Python 作為計算器使用
- 走向編程的第一步
- 其他流程控制工具
- if 語句
- for 語句
- range() 函數
- break 和 continue 語句,以及循環中的 else 子句
- pass 語句
- 定義函數
- 函數定義的更多形式
- 小插曲:編碼風格
- 數據結構
- 列表的更多特性
- del 語句
- 元組和序列
- 集合
- 字典
- 循環的技巧
- 深入條件控制
- 序列和其它類型的比較
- 模塊
- 有關模塊的更多信息
- 標準模塊
- dir() 函數
- 包
- 輸入輸出
- 更漂亮的輸出格式
- 讀寫文件
- 錯誤和異常
- 語法錯誤
- 異常
- 處理異常
- 拋出異常
- 用戶自定義異常
- 定義清理操作
- 預定義的清理操作
- 類
- 名稱和對象
- Python 作用域和命名空間
- 初探類
- 補充說明
- 繼承
- 私有變量
- 雜項說明
- 迭代器
- 生成器
- 生成器表達式
- 標準庫簡介
- 操作系統接口
- 文件通配符
- 命令行參數
- 錯誤輸出重定向和程序終止
- 字符串模式匹配
- 數學
- 互聯網訪問
- 日期和時間
- 數據壓縮
- 性能測量
- 質量控制
- 自帶電池
- 標準庫簡介 —— 第二部分
- 格式化輸出
- 模板
- 使用二進制數據記錄格式
- 多線程
- 日志
- 弱引用
- 用于操作列表的工具
- 十進制浮點運算
- 虛擬環境和包
- 概述
- 創建虛擬環境
- 使用pip管理包
- 接下來?
- 交互式編輯和編輯歷史
- Tab 補全和編輯歷史
- 默認交互式解釋器的替代品
- 浮點算術:爭議和限制
- 表示性錯誤
- 附錄
- 交互模式
- 安裝和使用 Python
- 命令行與環境
- 命令行
- 環境變量
- 在Unix平臺中使用Python
- 獲取最新版本的Python
- 構建Python
- 與Python相關的路徑和文件
- 雜項
- 編輯器和集成開發環境
- 在Windows上使用 Python
- 完整安裝程序
- Microsoft Store包
- nuget.org 安裝包
- 可嵌入的包
- 替代捆綁包
- 配置Python
- 適用于Windows的Python啟動器
- 查找模塊
- 附加模塊
- 在Windows上編譯Python
- 其他平臺
- 在蘋果系統上使用 Python
- 獲取和安裝 MacPython
- IDE
- 安裝額外的 Python 包
- Mac 上的圖形界面編程
- 在 Mac 上分發 Python 應用程序
- 其他資源
- Python 語言參考
- 概述
- 其他實現
- 標注
- 詞法分析
- 行結構
- 其他形符
- 標識符和關鍵字
- 字面值
- 運算符
- 分隔符
- 數據模型
- 對象、值與類型
- 標準類型層級結構
- 特殊方法名稱
- 協程
- 執行模型
- 程序的結構
- 命名與綁定
- 異常
- 導入系統
- importlib
- 包
- 搜索
- 加載
- 基于路徑的查找器
- 替換標準導入系統
- Package Relative Imports
- 有關 main 的特殊事項
- 開放問題項
- 參考文獻
- 表達式
- 算術轉換
- 原子
- 原型
- await 表達式
- 冪運算符
- 一元算術和位運算
- 二元算術運算符
- 移位運算
- 二元位運算
- 比較運算
- 布爾運算
- 條件表達式
- lambda 表達式
- 表達式列表
- 求值順序
- 運算符優先級
- 簡單語句
- 表達式語句
- 賦值語句
- assert 語句
- pass 語句
- del 語句
- return 語句
- yield 語句
- raise 語句
- break 語句
- continue 語句
- import 語句
- global 語句
- nonlocal 語句
- 復合語句
- if 語句
- while 語句
- for 語句
- try 語句
- with 語句
- 函數定義
- 類定義
- 協程
- 最高層級組件
- 完整的 Python 程序
- 文件輸入
- 交互式輸入
- 表達式輸入
- 完整的語法規范
- Python 標準庫
- 概述
- 可用性注釋
- 內置函數
- 內置常量
- 由 site 模塊添加的常量
- 內置類型
- 邏輯值檢測
- 布爾運算 — and, or, not
- 比較
- 數字類型 — int, float, complex
- 迭代器類型
- 序列類型 — list, tuple, range
- 文本序列類型 — str
- 二進制序列類型 — bytes, bytearray, memoryview
- 集合類型 — set, frozenset
- 映射類型 — dict
- 上下文管理器類型
- 其他內置類型
- 特殊屬性
- 內置異常
- 基類
- 具體異常
- 警告
- 異常層次結構
- 文本處理服務
- string — 常見的字符串操作
- re — 正則表達式操作
- 模塊 difflib 是一個計算差異的助手
- textwrap — Text wrapping and filling
- unicodedata — Unicode 數據庫
- stringprep — Internet String Preparation
- readline — GNU readline interface
- rlcompleter — GNU readline的完成函數
- 二進制數據服務
- struct — Interpret bytes as packed binary data
- codecs — Codec registry and base classes
- 數據類型
- datetime — 基礎日期/時間數據類型
- calendar — General calendar-related functions
- collections — 容器數據類型
- collections.abc — 容器的抽象基類
- heapq — 堆隊列算法
- bisect — Array bisection algorithm
- array — Efficient arrays of numeric values
- weakref — 弱引用
- types — Dynamic type creation and names for built-in types
- copy — 淺層 (shallow) 和深層 (deep) 復制操作
- pprint — 數據美化輸出
- reprlib — Alternate repr() implementation
- enum — Support for enumerations
- 數字和數學模塊
- numbers — 數字的抽象基類
- math — 數學函數
- cmath — Mathematical functions for complex numbers
- decimal — 十進制定點和浮點運算
- fractions — 分數
- random — 生成偽隨機數
- statistics — Mathematical statistics functions
- 函數式編程模塊
- itertools — 為高效循環而創建迭代器的函數
- functools — 高階函數和可調用對象上的操作
- operator — 標準運算符替代函數
- 文件和目錄訪問
- pathlib — 面向對象的文件系統路徑
- os.path — 常見路徑操作
- fileinput — Iterate over lines from multiple input streams
- stat — Interpreting stat() results
- filecmp — File and Directory Comparisons
- tempfile — Generate temporary files and directories
- glob — Unix style pathname pattern expansion
- fnmatch — Unix filename pattern matching
- linecache — Random access to text lines
- shutil — High-level file operations
- macpath — Mac OS 9 路徑操作函數
- 數據持久化
- pickle —— Python 對象序列化
- copyreg — Register pickle support functions
- shelve — Python object persistence
- marshal — Internal Python object serialization
- dbm — Interfaces to Unix “databases”
- sqlite3 — SQLite 數據庫 DB-API 2.0 接口模塊
- 數據壓縮和存檔
- zlib — 與 gzip 兼容的壓縮
- gzip — 對 gzip 格式的支持
- bz2 — 對 bzip2 壓縮算法的支持
- lzma — 用 LZMA 算法壓縮
- zipfile — 在 ZIP 歸檔中工作
- tarfile — Read and write tar archive files
- 文件格式
- csv — CSV 文件讀寫
- configparser — Configuration file parser
- netrc — netrc file processing
- xdrlib — Encode and decode XDR data
- plistlib — Generate and parse Mac OS X .plist files
- 加密服務
- hashlib — 安全哈希與消息摘要
- hmac — 基于密鑰的消息驗證
- secrets — Generate secure random numbers for managing secrets
- 通用操作系統服務
- os — 操作系統接口模塊
- io — 處理流的核心工具
- time — 時間的訪問和轉換
- argparse — 命令行選項、參數和子命令解析器
- getopt — C-style parser for command line options
- 模塊 logging — Python 的日志記錄工具
- logging.config — 日志記錄配置
- logging.handlers — Logging handlers
- getpass — 便攜式密碼輸入工具
- curses — 終端字符單元顯示的處理
- curses.textpad — Text input widget for curses programs
- curses.ascii — Utilities for ASCII characters
- curses.panel — A panel stack extension for curses
- platform — Access to underlying platform's identifying data
- errno — Standard errno system symbols
- ctypes — Python 的外部函數庫
- 并發執行
- threading — 基于線程的并行
- multiprocessing — 基于進程的并行
- concurrent 包
- concurrent.futures — 啟動并行任務
- subprocess — 子進程管理
- sched — 事件調度器
- queue — 一個同步的隊列類
- _thread — 底層多線程 API
- _dummy_thread — _thread 的替代模塊
- dummy_threading — 可直接替代 threading 模塊。
- contextvars — Context Variables
- Context Variables
- Manual Context Management
- asyncio support
- 網絡和進程間通信
- asyncio — 異步 I/O
- socket — 底層網絡接口
- ssl — TLS/SSL wrapper for socket objects
- select — Waiting for I/O completion
- selectors — 高級 I/O 復用庫
- asyncore — 異步socket處理器
- asynchat — 異步 socket 指令/響應 處理器
- signal — Set handlers for asynchronous events
- mmap — Memory-mapped file support
- 互聯網數據處理
- email — 電子郵件與 MIME 處理包
- json — JSON 編碼和解碼器
- mailcap — Mailcap file handling
- mailbox — Manipulate mailboxes in various formats
- mimetypes — Map filenames to MIME types
- base64 — Base16, Base32, Base64, Base85 數據編碼
- binhex — 對binhex4文件進行編碼和解碼
- binascii — 二進制和 ASCII 碼互轉
- quopri — Encode and decode MIME quoted-printable data
- uu — Encode and decode uuencode files
- 結構化標記處理工具
- html — 超文本標記語言支持
- html.parser — 簡單的 HTML 和 XHTML 解析器
- html.entities — HTML 一般實體的定義
- XML處理模塊
- xml.etree.ElementTree — The ElementTree XML API
- xml.dom — The Document Object Model API
- xml.dom.minidom — Minimal DOM implementation
- xml.dom.pulldom — Support for building partial DOM trees
- xml.sax — Support for SAX2 parsers
- xml.sax.handler — Base classes for SAX handlers
- xml.sax.saxutils — SAX Utilities
- xml.sax.xmlreader — Interface for XML parsers
- xml.parsers.expat — Fast XML parsing using Expat
- 互聯網協議和支持
- webbrowser — 方便的Web瀏覽器控制器
- cgi — Common Gateway Interface support
- cgitb — Traceback manager for CGI scripts
- wsgiref — WSGI Utilities and Reference Implementation
- urllib — URL 處理模塊
- urllib.request — 用于打開 URL 的可擴展庫
- urllib.response — Response classes used by urllib
- urllib.parse — Parse URLs into components
- urllib.error — Exception classes raised by urllib.request
- urllib.robotparser — Parser for robots.txt
- http — HTTP 模塊
- http.client — HTTP協議客戶端
- ftplib — FTP protocol client
- poplib — POP3 protocol client
- imaplib — IMAP4 protocol client
- nntplib — NNTP protocol client
- smtplib —SMTP協議客戶端
- smtpd — SMTP Server
- telnetlib — Telnet client
- uuid — UUID objects according to RFC 4122
- socketserver — A framework for network servers
- http.server — HTTP 服務器
- http.cookies — HTTP state management
- http.cookiejar — Cookie handling for HTTP clients
- xmlrpc — XMLRPC 服務端與客戶端模塊
- xmlrpc.client — XML-RPC client access
- xmlrpc.server — Basic XML-RPC servers
- ipaddress — IPv4/IPv6 manipulation library
- 多媒體服務
- audioop — Manipulate raw audio data
- aifc — Read and write AIFF and AIFC files
- sunau — 讀寫 Sun AU 文件
- wave — 讀寫WAV格式文件
- chunk — Read IFF chunked data
- colorsys — Conversions between color systems
- imghdr — 推測圖像類型
- sndhdr — 推測聲音文件的類型
- ossaudiodev — Access to OSS-compatible audio devices
- 國際化
- gettext — 多語種國際化服務
- locale — 國際化服務
- 程序框架
- turtle — 海龜繪圖
- cmd — 支持面向行的命令解釋器
- shlex — Simple lexical analysis
- Tk圖形用戶界面(GUI)
- tkinter — Tcl/Tk的Python接口
- tkinter.ttk — Tk themed widgets
- tkinter.tix — Extension widgets for Tk
- tkinter.scrolledtext — 滾動文字控件
- IDLE
- 其他圖形用戶界面(GUI)包
- 開發工具
- typing — 類型標注支持
- pydoc — Documentation generator and online help system
- doctest — Test interactive Python examples
- unittest — 單元測試框架
- unittest.mock — mock object library
- unittest.mock 上手指南
- 2to3 - 自動將 Python 2 代碼轉為 Python 3 代碼
- test — Regression tests package for Python
- test.support — Utilities for the Python test suite
- test.support.script_helper — Utilities for the Python execution tests
- 調試和分析
- bdb — Debugger framework
- faulthandler — Dump the Python traceback
- pdb — The Python Debugger
- The Python Profilers
- timeit — 測量小代碼片段的執行時間
- trace — Trace or track Python statement execution
- tracemalloc — Trace memory allocations
- 軟件打包和分發
- distutils — 構建和安裝 Python 模塊
- ensurepip — Bootstrapping the pip installer
- venv — 創建虛擬環境
- zipapp — Manage executable Python zip archives
- Python運行時服務
- sys — 系統相關的參數和函數
- sysconfig — Provide access to Python's configuration information
- builtins — 內建對象
- main — 頂層腳本環境
- warnings — Warning control
- dataclasses — 數據類
- contextlib — Utilities for with-statement contexts
- abc — 抽象基類
- atexit — 退出處理器
- traceback — Print or retrieve a stack traceback
- future — Future 語句定義
- gc — 垃圾回收器接口
- inspect — 檢查對象
- site — Site-specific configuration hook
- 自定義 Python 解釋器
- code — Interpreter base classes
- codeop — Compile Python code
- 導入模塊
- zipimport — Import modules from Zip archives
- pkgutil — Package extension utility
- modulefinder — 查找腳本使用的模塊
- runpy — Locating and executing Python modules
- importlib — The implementation of import
- Python 語言服務
- parser — Access Python parse trees
- ast — 抽象語法樹
- symtable — Access to the compiler's symbol tables
- symbol — 與 Python 解析樹一起使用的常量
- token — 與Python解析樹一起使用的常量
- keyword — 檢驗Python關鍵字
- tokenize — Tokenizer for Python source
- tabnanny — 模糊縮進檢測
- pyclbr — Python class browser support
- py_compile — Compile Python source files
- compileall — Byte-compile Python libraries
- dis — Python 字節碼反匯編器
- pickletools — Tools for pickle developers
- 雜項服務
- formatter — Generic output formatting
- Windows系統相關模塊
- msilib — Read and write Microsoft Installer files
- msvcrt — Useful routines from the MS VC++ runtime
- winreg — Windows 注冊表訪問
- winsound — Sound-playing interface for Windows
- Unix 專有服務
- posix — The most common POSIX system calls
- pwd — 用戶密碼數據庫
- spwd — The shadow password database
- grp — The group database
- crypt — Function to check Unix passwords
- termios — POSIX style tty control
- tty — 終端控制功能
- pty — Pseudo-terminal utilities
- fcntl — The fcntl and ioctl system calls
- pipes — Interface to shell pipelines
- resource — Resource usage information
- nis — Interface to Sun's NIS (Yellow Pages)
- Unix syslog 庫例程
- 被取代的模塊
- optparse — Parser for command line options
- imp — Access the import internals
- 未創建文檔的模塊
- 平臺特定模塊
- 擴展和嵌入 Python 解釋器
- 推薦的第三方工具
- 不使用第三方工具創建擴展
- 使用 C 或 C++ 擴展 Python
- 自定義擴展類型:教程
- 定義擴展類型:已分類主題
- 構建C/C++擴展
- 在Windows平臺編譯C和C++擴展
- 在更大的應用程序中嵌入 CPython 運行時
- Embedding Python in Another Application
- Python/C API 參考手冊
- 概述
- 代碼標準
- 包含文件
- 有用的宏
- 對象、類型和引用計數
- 異常
- 嵌入Python
- 調試構建
- 穩定的應用程序二進制接口
- The Very High Level Layer
- Reference Counting
- 異常處理
- Printing and clearing
- 拋出異常
- Issuing warnings
- Querying the error indicator
- Signal Handling
- Exception Classes
- Exception Objects
- Unicode Exception Objects
- Recursion Control
- 標準異常
- 標準警告類別
- 工具
- 操作系統實用程序
- 系統功能
- 過程控制
- 導入模塊
- Data marshalling support
- 語句解釋及變量編譯
- 字符串轉換與格式化
- 反射
- 編解碼器注冊與支持功能
- 抽象對象層
- Object Protocol
- 數字協議
- Sequence Protocol
- Mapping Protocol
- 迭代器協議
- 緩沖協議
- Old Buffer Protocol
- 具體的對象層
- 基本對象
- 數值對象
- 序列對象
- 容器對象
- 函數對象
- 其他對象
- Initialization, Finalization, and Threads
- 在Python初始化之前
- 全局配置變量
- Initializing and finalizing the interpreter
- Process-wide parameters
- Thread State and the Global Interpreter Lock
- Sub-interpreter support
- Asynchronous Notifications
- Profiling and Tracing
- Advanced Debugger Support
- Thread Local Storage Support
- 內存管理
- 概述
- 原始內存接口
- Memory Interface
- 對象分配器
- 默認內存分配器
- Customize Memory Allocators
- The pymalloc allocator
- tracemalloc C API
- 示例
- 對象實現支持
- 在堆中分配對象
- Common Object Structures
- Type 對象
- Number Object Structures
- Mapping Object Structures
- Sequence Object Structures
- Buffer Object Structures
- Async Object Structures
- 使對象類型支持循環垃圾回收
- API 和 ABI 版本管理
- 分發 Python 模塊
- 關鍵術語
- 開源許可與協作
- 安裝工具
- 閱讀指南
- 我該如何...?
- ...為我的項目選擇一個名字?
- ...創建和分發二進制擴展?
- 安裝 Python 模塊
- 關鍵術語
- 基本使用
- 我應如何 ...?
- ... 在 Python 3.4 之前的 Python 版本中安裝 pip ?
- ... 只為當前用戶安裝軟件包?
- ... 安裝科學計算類 Python 軟件包?
- ... 使用并行安裝的多個 Python 版本?
- 常見的安裝問題
- 在 Linux 的系統 Python 版本上安裝
- 未安裝 pip
- 安裝二進制編譯擴展
- Python 常用指引
- 將 Python 2 代碼遷移到 Python 3
- 簡要說明
- 詳情
- 將擴展模塊移植到 Python 3
- 條件編譯
- 對象API的更改
- 模塊初始化和狀態
- CObject 替換為 Capsule
- 其他選項
- Curses Programming with Python
- What is curses?
- Starting and ending a curses application
- Windows and Pads
- Displaying Text
- User Input
- For More Information
- 實現描述器
- 摘要
- 定義和簡介
- 描述器協議
- 發起調用描述符
- 描述符示例
- Properties
- 函數和方法
- Static Methods and Class Methods
- 函數式編程指引
- 概述
- 迭代器
- 生成器表達式和列表推導式
- 生成器
- 內置函數
- itertools 模塊
- The functools module
- Small functions and the lambda expression
- Revision History and Acknowledgements
- 引用文獻
- 日志 HOWTO
- 日志基礎教程
- 進階日志教程
- 日志級別
- 有用的處理程序
- 記錄日志中引發的異常
- 使用任意對象作為消息
- 優化
- 日志操作手冊
- 在多個模塊中使用日志
- 在多線程中使用日志
- 使用多個日志處理器和多種格式化
- 在多個地方記錄日志
- 日志服務器配置示例
- 處理日志處理器的阻塞
- Sending and receiving logging events across a network
- Adding contextual information to your logging output
- Logging to a single file from multiple processes
- Using file rotation
- Use of alternative formatting styles
- Customizing LogRecord
- Subclassing QueueHandler - a ZeroMQ example
- Subclassing QueueListener - a ZeroMQ example
- An example dictionary-based configuration
- Using a rotator and namer to customize log rotation processing
- A more elaborate multiprocessing example
- Inserting a BOM into messages sent to a SysLogHandler
- Implementing structured logging
- Customizing handlers with dictConfig()
- Using particular formatting styles throughout your application
- Configuring filters with dictConfig()
- Customized exception formatting
- Speaking logging messages
- Buffering logging messages and outputting them conditionally
- Formatting times using UTC (GMT) via configuration
- Using a context manager for selective logging
- Regular Expression HOWTO
- Introduction
- Simple Patterns
- Using Regular Expressions
- More Pattern Power
- Modifying Strings
- Common Problems
- Feedback
- Socket Programming HOWTO
- Sockets
- Creating a Socket
- Using a Socket
- Disconnecting
- Non-blocking Sockets
- Sorting HOW TO
- Sorting Basics
- Key Functions
- Operator Module Functions
- Ascending and Descending
- Sort Stability and Complex Sorts
- The Old Way Using Decorate-Sort-Undecorate
- The Old Way Using the cmp Parameter
- Odd and Ends
- Unicode HOWTO
- Introduction to Unicode
- Python's Unicode Support
- Reading and Writing Unicode Data
- Acknowledgements
- HOWTO Fetch Internet Resources Using The urllib Package
- Introduction
- Fetching URLs
- 處理異常
- info and geturl
- Openers and Handlers
- Basic Authentication
- Proxies
- Sockets and Layers
- Footnotes
- Argparse Tutorial
- Concepts
- The basics
- Introducing Positional arguments
- Introducing Optional arguments
- Combining Positional and Optional arguments
- Getting a little more advanced
- Conclusion
- An introduction to the ipaddress module
- Creating Address/Network/Interface Objects
- Inspecting Address/Network/Interface Objects
- Networks as lists of Addresses
- Comparisons
- Using IP Addresses with other modules
- Getting more detail when instance creation fails
- Argument Clinic How-To
- The Goals Of Argument Clinic
- Basic Concepts And Usage
- Converting Your First Function
- Advanced Topics
- 使用 DTrace 和 SystemTap 檢測CPython
- Enabling the static markers
- Static DTrace probes
- Static SystemTap markers
- Available static markers
- SystemTap Tapsets
- Examples
- Python Frequently Asked Questions
- General Python FAQ
- General Information
- Python in the real world
- Programming FAQ
- General Questions
- Core Language
- Numbers and strings
- Performance
- Sequences (Tuples/Lists)
- Objects
- Modules
- Design and History FAQ
- Why does Python use indentation for grouping of statements?
- Why am I getting strange results with simple arithmetic operations?
- Why are floating-point calculations so inaccurate?
- Why are Python strings immutable?
- Why must 'self' be used explicitly in method definitions and calls?
- Why can't I use an assignment in an expression?
- Why does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))?
- Why is join() a string method instead of a list or tuple method?
- How fast are exceptions?
- Why isn't there a switch or case statement in Python?
- Can't you emulate threads in the interpreter instead of relying on an OS-specific thread implementation?
- Why can't lambda expressions contain statements?
- Can Python be compiled to machine code, C or some other language?
- How does Python manage memory?
- Why doesn't CPython use a more traditional garbage collection scheme?
- Why isn't all memory freed when CPython exits?
- Why are there separate tuple and list data types?
- How are lists implemented in CPython?
- How are dictionaries implemented in CPython?
- Why must dictionary keys be immutable?
- Why doesn't list.sort() return the sorted list?
- How do you specify and enforce an interface spec in Python?
- Why is there no goto?
- Why can't raw strings (r-strings) end with a backslash?
- Why doesn't Python have a "with" statement for attribute assignments?
- Why are colons required for the if/while/def/class statements?
- Why does Python allow commas at the end of lists and tuples?
- Library and Extension FAQ
- General Library Questions
- Common tasks
- Threads
- Input and Output
- Network/Internet Programming
- Databases
- Mathematics and Numerics
- Extending/Embedding FAQ
- Can I create my own functions in C?
- Can I create my own functions in C++?
- Writing C is hard; are there any alternatives?
- How can I execute arbitrary Python statements from C?
- How can I evaluate an arbitrary Python expression from C?
- How do I extract C values from a Python object?
- How do I use Py_BuildValue() to create a tuple of arbitrary length?
- How do I call an object's method from C?
- How do I catch the output from PyErr_Print() (or anything that prints to stdout/stderr)?
- How do I access a module written in Python from C?
- How do I interface to C++ objects from Python?
- I added a module using the Setup file and the make fails; why?
- How do I debug an extension?
- I want to compile a Python module on my Linux system, but some files are missing. Why?
- How do I tell "incomplete input" from "invalid input"?
- How do I find undefined g++ symbols __builtin_new or __pure_virtual?
- Can I create an object class with some methods implemented in C and others in Python (e.g. through inheritance)?
- Python on Windows FAQ
- How do I run a Python program under Windows?
- How do I make Python scripts executable?
- Why does Python sometimes take so long to start?
- How do I make an executable from a Python script?
- Is a *.pyd file the same as a DLL?
- How can I embed Python into a Windows application?
- How do I keep editors from inserting tabs into my Python source?
- How do I check for a keypress without blocking?
- Graphic User Interface FAQ
- General GUI Questions
- What platform-independent GUI toolkits exist for Python?
- What platform-specific GUI toolkits exist for Python?
- Tkinter questions
- "Why is Python Installed on my Computer?" FAQ
- What is Python?
- Why is Python installed on my machine?
- Can I delete Python?
- Glossary
- About these documents
- Contributors to the Python Documentation
- Dealing with Bugs
- Documentation bugs
- Using the Python issue tracker
- Getting started contributing to Python yourself
- Copyright
- History and License
- History of the software
- Terms and conditions for accessing or otherwise using Python
- PSF LICENSE AGREEMENT FOR PYTHON 3.7.3
- BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
- CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
- CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
- Licenses and Acknowledgements for Incorporated Software
- Mersenne Twister
- Sockets
- Asynchronous socket services
- Cookie management
- Execution tracing
- UUencode and UUdecode functions
- XML Remote Procedure Calls
- test_epoll
- Select kqueue
- SipHash24
- strtod and dtoa
- OpenSSL
- expat
- libffi
- zlib
- cfuhash
- libmpdec