Reference

Subpackages

Submodules

urllib3.connection module

class urllib3.connection.DummyConnection[source]

Bases: object

Used to detect a failed ConnectionCls import.

class urllib3.connection.HTTPConnection(host: str, port: int | None = None, *, timeout: float | _TYPE_DEFAULT | None = _TYPE_DEFAULT.token, source_address: tuple[str, int] | None = None, blocksize: int = 8192, socket_options: None | Sequence[Tuple[int, int, int | bytes]] = [(6, 1, 1)], proxy: Url | None = None, proxy_config: ProxyConfig | None = None)[source]

Bases: HTTPConnection

Based on http.client.HTTPConnection but provides an extra constructor backwards-compatibility layer between older and newer Pythons.

Additional keyword parameters are used to configure attributes of the connection. Accepted parameters include:

  • source_address: Set the source address for the current connection.

  • socket_options: Set specific options on the underlying socket. If not specified, then defaults are loaded from HTTPConnection.default_socket_options which includes disabling Nagle’s algorithm (sets TCP_NODELAY to 1) unless the connection is behind a proxy.

    For example, if you wish to enable TCP Keep Alive in addition to the defaults, you might pass:

    HTTPConnection.default_socket_options + [
        (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
    ]
    

    Or you may want to disable the defaults by passing an empty list (e.g., []).
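As a sketch (the host name is illustrative; constructing the object opens no socket), the combined options can be passed straight to the constructor:

```python
import socket

from urllib3.connection import HTTPConnection

# Defaults (TCP_NODELAY) plus TCP keep-alive; nothing connects yet.
keepalive_options = HTTPConnection.default_socket_options + [
    (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
]

conn = HTTPConnection("example.com", 80, socket_options=keepalive_options)
```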

blocksize: int
close() None[source]

Close the connection to the HTTP server.

connect() None[source]

Connect to the host and port specified in __init__.

default_port: ClassVar[int] = 80
default_socket_options: ClassVar[Sequence[Tuple[int, int, int | bytes]]] = [(6, 1, 1)]

Disable Nagle’s algorithm by default; (6, 1, 1) expands to (socket.IPPROTO_TCP, socket.TCP_NODELAY, 1).

getresponse() HTTPResponse[source]

Get the response from the server.

If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable.

If a request has not been sent or if a previous response has not been handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed.

property has_connected_to_proxy: bool
property host: str

Getter method to remove any trailing dots that indicate the hostname is an FQDN.

In general, SSL certificates don’t include the trailing dot indicating a fully-qualified domain name, and thus, they don’t validate properly when checked against a domain name that includes the dot. In addition, some servers may not expect to receive the trailing dot when provided.

However, the hostname with trailing dot is critical to DNS resolution; doing a lookup with the trailing dot will properly only resolve the appropriate FQDN, whereas a lookup without a trailing dot will search the system’s search domain list. Thus, it’s important to keep the original host around for use only in those cases where it’s appropriate (i.e., when doing DNS lookup to establish the actual TCP connection across which we’re going to send HTTP requests).
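For instance (no network I/O happens at construction time), the trailing dot is visible only on the value kept for DNS:

```python
from urllib3.connection import HTTPConnection

# An FQDN with its trailing dot; the host property strips it so that
# certificate matching and the Host header use "example.com", while the
# dotted form is retained internally for DNS resolution.
conn = HTTPConnection("example.com.")
```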

property is_closed: bool
property is_connected: bool
is_verified: bool = False

Whether this connection verifies the host’s certificate.

proxy_is_verified: bool | None = None
putheader(header: str, *values: str) None[source]
putrequest(method: str, url: str, skip_host: bool = False, skip_accept_encoding: bool = False) None[source]
request(method: str, url: str, body: bytes | IO[Any] | Iterable[bytes] | str | None = None, headers: Mapping[str, str] | None = None, *, chunked: bool = False, preload_content: bool = True, decode_content: bool = True, enforce_content_length: bool = True) None[source]

Send a complete request to the server.

request_chunked(method: str, url: str, body: bytes | IO[Any] | Iterable[bytes] | str | None = None, headers: Mapping[str, str] | None = None) None[source]

Alternative to the common request() method: sends the body with chunked transfer encoding rather than as a single block.

set_tunnel(host: str, port: int | None = None, headers: Mapping[str, str] | None = None, scheme: str = 'http') None[source]

Set up host and port for HTTP CONNECT tunnelling.

In a connection that uses HTTP CONNECT tunneling, the host passed to the constructor is used as a proxy server that relays all communication to the endpoint passed to set_tunnel. This is done by sending an HTTP CONNECT request to the proxy server when the connection is established.

This method must be called before the HTTP connection has been established.

The headers argument should be a mapping of extra HTTP headers to send with the CONNECT request.
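A minimal sketch (proxy host and port are hypothetical; nothing connects until connect() is called):

```python
from urllib3.connection import HTTPConnection

# The constructor host acts as the proxy; set_tunnel() names the endpoint
# the proxy should CONNECT to. Must be called before connecting.
conn = HTTPConnection("proxy.internal", 8080)
conn.set_tunnel("example.com", 80)
```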

socket_options: Sequence[Tuple[int, int, int | bytes]] | None
source_address: tuple[str, int] | None
class urllib3.connection.HTTPSConnection(host: str, port: int | None = None, *, timeout: _TYPE_TIMEOUT = _TYPE_DEFAULT.token, source_address: tuple[str, int] | None = None, blocksize: int = 8192, socket_options: None | connection._TYPE_SOCKET_OPTIONS = [(6, 1, 1)], proxy: Url | None = None, proxy_config: ProxyConfig | None = None, cert_reqs: int | str | None = None, assert_hostname: None | str | Literal[False] = None, assert_fingerprint: str | None = None, server_hostname: str | None = None, ssl_context: ssl.SSLContext | None = None, ca_certs: str | None = None, ca_cert_dir: str | None = None, ca_cert_data: None | str | bytes = None, ssl_minimum_version: int | None = None, ssl_maximum_version: int | None = None, ssl_version: int | str | None = None, cert_file: str | None = None, key_file: str | None = None, key_password: str | None = None)[source]

Bases: HTTPConnection

Many of the parameters to this constructor are passed to the underlying SSL socket by means of urllib3.util.ssl_wrap_socket().

assert_fingerprint: str | None = None
ca_cert_data: None | str | bytes = None
ca_cert_dir: str | None = None
ca_certs: str | None = None
cert_reqs: int | str | None = None
connect() None[source]

Connect to the host and port specified in __init__.

default_port: ClassVar[int] = 443
set_cert(key_file: str | None = None, cert_file: str | None = None, cert_reqs: int | str | None = None, key_password: str | None = None, ca_certs: str | None = None, assert_hostname: None | str | Literal[False] = None, assert_fingerprint: str | None = None, ca_cert_dir: str | None = None, ca_cert_data: None | str | bytes = None) None[source]

This method should only be called once, before the connection is used.

ssl_maximum_version: int | None = None
ssl_minimum_version: int | None = None
ssl_version: int | str | None = None
urllib3.connection.VerifiedHTTPSConnection

alias of HTTPSConnection

urllib3.connectionpool module

class urllib3.connectionpool.ConnectionPool(host: str, port: int | None = None)[source]

Bases: object

Base class for all connection pools, such as HTTPConnectionPool and HTTPSConnectionPool.

Note

ConnectionPool.urlopen() does not normalize or percent-encode target URIs, which is useful if your target server doesn’t support percent-encoded target URIs.

QueueCls

alias of LifoQueue

close() None[source]

Close all pooled connections and disable the pool.

scheme: str | None = None
class urllib3.connectionpool.HTTPConnectionPool(host: str, port: int | None = None, timeout: Timeout | float | _TYPE_DEFAULT | None = _TYPE_DEFAULT.token, maxsize: int = 1, block: bool = False, headers: Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, _proxy: Url | None = None, _proxy_headers: Mapping[str, str] | None = None, _proxy_config: ProxyConfig | None = None, **conn_kw: Any)[source]

Bases: ConnectionPool, RequestMethods

Thread-safe connection pool for one host.

Parameters:
  • host – Host used for this HTTP Connection (e.g. “localhost”), passed into http.client.HTTPConnection.

  • port – Port used for this HTTP Connection (None is equivalent to 80), passed into http.client.HTTPConnection.

  • timeout – Socket timeout in seconds for each individual connection. This can be a float or integer, which sets the timeout for the HTTP request, or an instance of urllib3.util.Timeout which gives you more fine-grained control over request timeouts. After the constructor runs, this is always a urllib3.util.Timeout object.

  • maxsize – Number of connections to save that can be reused. More than 1 is useful in multithreaded situations. If block is set to False, more connections will be created but they will not be saved once they’ve been used.

  • block – If set to True, no more than maxsize connections will be used at a time. When no free connections are available, the call will block until a connection has been released. This is a useful side effect for particular multithreaded situations where one does not want to use more than maxsize connections per host to prevent flooding.

  • headers – Headers to include with all requests, unless other headers are given explicitly.

  • retries – Retry configuration to use by default with requests in this pool.

  • _proxy – Parsed proxy URL, should not be used directly, instead, see urllib3.ProxyManager

  • _proxy_headers – A dictionary with proxy headers, should not be used directly, instead, see urllib3.ProxyManager

  • **conn_kw – Additional parameters are used to create fresh urllib3.connection.HTTPConnection, urllib3.connection.HTTPSConnection instances.

ConnectionCls

alias of HTTPConnection

close() None[source]

Close all pooled connections and disable the pool.

is_same_host(url: str) bool[source]

Check if the given url is a member of the same host as this connection pool.

scheme: str | None = 'http'
urlopen(method: str, url: str, body: bytes | IO[Any] | Iterable[bytes] | str | None = None, headers: Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: Timeout | float | _TYPE_DEFAULT | None = _TYPE_DEFAULT.token, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: int | _TYPE_FAILEDTELL | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: Any) BaseHTTPResponse[source]

Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you’ll need to specify all the raw details.

Note

More commonly, it’s appropriate to use a convenience method such as request().

Note

release_conn will only behave as expected if preload_content=False because we want to make preload_content=False the default behaviour someday soon without breaking backwards compatibility.

Parameters:
  • method – HTTP request method (such as GET, POST, PUT, etc.)

  • url – The URL to perform the request on.

  • body – Data to send in the request body, either str, bytes, an iterable of str/bytes, or a file-like object.

  • headers – Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers.

  • retries (Retry, False, or an int.) –

    Configure the number of retries to allow before raising a MaxRetryError exception.

    Pass None to retry until you receive a response. Pass a Retry object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry.

    If False, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned.

  • redirect – If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too.

  • assert_same_host – If True, ensures that the host of each request matches the pool’s host, raising HostChangedError otherwise. When False, you can use the pool on an HTTP proxy and request foreign hosts.

  • timeout – If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of urllib3.util.Timeout.

  • pool_timeout – If set and the pool is set to block=True, then this method will block for pool_timeout seconds and raise EmptyPoolError if no connection is available within the time period.

  • preload_content (bool) – If True, the response’s body will be preloaded into memory.

  • decode_content (bool) – If True, will attempt to decode the body based on the ‘content-encoding’ header.

  • release_conn – If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when preload_content=True). This is useful if you’re not preloading the response’s content immediately. You will need to call r.release_conn() on the response r to return the connection back into the pool. If None, it takes the value of preload_content which defaults to True.

  • chunked (bool) – If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False.

  • body_pos (int) – Position to seek to in file-like body in the event of a retry or redirect. Typically this won’t need to be set because urllib3 will auto-populate the value when needed.
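The retries parameter accepts a urllib3.util.Retry instance for fine-grained control; a sketch with illustrative values:

```python
from urllib3.util import Retry

# Up to 3 retries total, exponential backoff, and retry on 502/503
# responses; pass this as retries=retry to urlopen() or request().
retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503])
```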

class urllib3.connectionpool.HTTPSConnectionPool(host: str, port: int | None = None, timeout: _TYPE_TIMEOUT | None = _TYPE_DEFAULT.token, maxsize: int = 1, block: bool = False, headers: Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, _proxy: Url | None = None, _proxy_headers: Mapping[str, str] | None = None, key_file: str | None = None, cert_file: str | None = None, cert_reqs: int | str | None = None, key_password: str | None = None, ca_certs: str | None = None, ssl_version: int | str | None = None, ssl_minimum_version: ssl.TLSVersion | None = None, ssl_maximum_version: ssl.TLSVersion | None = None, assert_hostname: str | Literal[False] | None = None, assert_fingerprint: str | None = None, ca_cert_dir: str | None = None, **conn_kw: Any)[source]

Bases: HTTPConnectionPool

Same as HTTPConnectionPool, but HTTPS.

HTTPSConnection uses one of assert_fingerprint, assert_hostname and host in this order to verify connections. If assert_hostname is False, no verification is done.

The key_file, cert_file, cert_reqs, ca_certs, ca_cert_dir, ssl_version, key_password are only used if ssl is available and are fed into urllib3.util.ssl_wrap_socket() to upgrade the connection socket into an SSL socket.

ConnectionCls

alias of HTTPSConnection

scheme: str | None = 'https'
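Constructing the pool performs no network I/O, so TLS parameters can be set up front; a sketch (the host is illustrative):

```python
from urllib3 import HTTPSConnectionPool

# Connections are created lazily, on the first request.
pool = HTTPSConnectionPool(
    "example.com", 443, maxsize=4, cert_reqs="CERT_REQUIRED"
)
```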
urllib3.connectionpool.connection_from_url(url: str, **kw: Any) HTTPConnectionPool[source]

Given a url, return a ConnectionPool instance for its host.

This is a shortcut for not having to parse out the scheme, host, and port of the url before creating a ConnectionPool instance.

Parameters:
  • url – Absolute URL string that must include the scheme. Port is optional.

  • **kw – Passes additional parameters to the constructor of the appropriate ConnectionPool. Useful for specifying things like timeout, maxsize, headers, etc.

Example:

>>> conn = connection_from_url('http://google.com/')
>>> r = conn.request('GET', '/')

urllib3.exceptions module

exception urllib3.exceptions.BodyNotHttplibCompatible[source]

Bases: HTTPError

Body should be http.client.HTTPResponse like (have an fp attribute which returns raw chunks) for read_chunked().

exception urllib3.exceptions.ClosedPoolError(pool: ConnectionPool, message: str)[source]

Bases: PoolError

Raised when a request enters a pool after the pool has been closed.

exception urllib3.exceptions.ConnectTimeoutError[source]

Bases: TimeoutError

Raised when a socket timeout occurs while connecting to a server

urllib3.exceptions.ConnectionError

Renamed to ProtocolError but aliased for backwards compatibility.

exception urllib3.exceptions.DecodeError[source]

Bases: HTTPError

Raised when automatic decoding based on Content-Type fails.

exception urllib3.exceptions.DependencyWarning[source]

Bases: HTTPWarning

Warned when an attempt is made to import a module with missing optional dependencies.

exception urllib3.exceptions.EmptyPoolError(pool: ConnectionPool, message: str)[source]

Bases: PoolError

Raised when a pool runs out of connections and no more are allowed.

exception urllib3.exceptions.FullPoolError(pool: ConnectionPool, message: str)[source]

Bases: PoolError

Raised when we try to add a connection to a full pool in blocking mode.

exception urllib3.exceptions.HTTPError[source]

Bases: Exception

Base exception used by this module.

exception urllib3.exceptions.HTTPWarning[source]

Bases: Warning

Base warning used by this module.

exception urllib3.exceptions.HeaderParsingError(defects: list[email.errors.MessageDefect], unparsed_data: bytes | str | None)[source]

Bases: HTTPError

Raised by assert_header_parsing, but we convert it to a log.warning statement.

exception urllib3.exceptions.HostChangedError(pool: ConnectionPool, url: str, retries: Retry | int = 3)[source]

Bases: RequestError

Raised when an existing pool gets a request for a foreign host.

exception urllib3.exceptions.IncompleteRead(partial: int, expected: int)[source]

Bases: HTTPError, IncompleteRead

Response length doesn’t match expected Content-Length

Subclass of http.client.IncompleteRead to allow int value for partial to avoid creating large objects on streamed reads.

exception urllib3.exceptions.InsecurePlatformWarning[source]

Bases: SecurityWarning

Warned when certain TLS/SSL configuration is not available on a platform.

exception urllib3.exceptions.InsecureRequestWarning[source]

Bases: SecurityWarning

Warned when making an unverified HTTPS request.

exception urllib3.exceptions.InvalidChunkLength(response: HTTPResponse, length: bytes)[source]

Bases: HTTPError, IncompleteRead

Invalid chunk length in a chunked response.

exception urllib3.exceptions.InvalidHeader[source]

Bases: HTTPError

The header provided was somehow invalid.

exception urllib3.exceptions.LocationParseError(location: str)[source]

Bases: LocationValueError

Raised when get_host or similar fails to parse the URL input.

exception urllib3.exceptions.LocationValueError[source]

Bases: ValueError, HTTPError

Raised when there is something wrong with a given URL input.

exception urllib3.exceptions.MaxRetryError(pool: ConnectionPool, url: str, reason: Exception | None = None)[source]

Bases: RequestError

Raised when the maximum number of retries is exceeded.

Parameters:
  • pool (HTTPConnectionPool) – The connection pool

  • url (str) – The requested Url

  • reason (Exception) – The underlying error

exception urllib3.exceptions.NameResolutionError(host: str, conn: HTTPConnection, reason: socket.gaierror)[source]

Bases: NewConnectionError

Raised when host name resolution fails.

exception urllib3.exceptions.NewConnectionError(conn: HTTPConnection, message: str)[source]

Bases: ConnectTimeoutError, HTTPError

Raised when we fail to establish a new connection. Usually ECONNREFUSED.

property pool: HTTPConnection
exception urllib3.exceptions.NotOpenSSLWarning[source]

Bases: SecurityWarning

Warned when using unsupported SSL library

exception urllib3.exceptions.PoolError(pool: ConnectionPool, message: str)[source]

Bases: HTTPError

Base exception for errors caused within a pool.

exception urllib3.exceptions.ProtocolError[source]

Bases: HTTPError

Raised when something unexpected happens mid-request/response.

exception urllib3.exceptions.ProxyError(message: str, error: Exception)[source]

Bases: HTTPError

Raised when the connection to a proxy fails.

original_error: Exception
exception urllib3.exceptions.ProxySchemeUnknown(scheme: str | None)[source]

Bases: AssertionError, URLSchemeUnknown

ProxyManager does not support the supplied scheme

exception urllib3.exceptions.ProxySchemeUnsupported[source]

Bases: ValueError

Fetching HTTPS resources through HTTPS proxies is unsupported

exception urllib3.exceptions.ReadTimeoutError(pool: ConnectionPool, url: str, message: str)[source]

Bases: TimeoutError, RequestError

Raised when a socket timeout occurs while receiving data from a server

exception urllib3.exceptions.RequestError(pool: ConnectionPool, url: str, message: str)[source]

Bases: PoolError

Base exception for PoolErrors that have associated URLs.

exception urllib3.exceptions.ResponseError[source]

Bases: HTTPError

Used as a container for an error reason supplied in a MaxRetryError.

GENERIC_ERROR = 'too many error responses'
SPECIFIC_ERROR = 'too many {status_code} error responses'
exception urllib3.exceptions.ResponseNotChunked[source]

Bases: ProtocolError, ValueError

Response needs to be chunked in order to read it as chunks.

exception urllib3.exceptions.SSLError[source]

Bases: HTTPError

Raised when SSL certificate fails in an HTTPS connection.

exception urllib3.exceptions.SecurityWarning[source]

Bases: HTTPWarning

Warned when performing security-reducing actions

exception urllib3.exceptions.SystemTimeWarning[source]

Bases: SecurityWarning

Warned when system time is suspected to be wrong

exception urllib3.exceptions.TimeoutError[source]

Bases: HTTPError

Raised when a socket timeout error occurs.

Catching this error will catch both ReadTimeoutErrors and ConnectTimeoutErrors.
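The hierarchy can be relied on when catching; for example:

```python
from urllib3.exceptions import (
    ConnectTimeoutError,
    ReadTimeoutError,
    TimeoutError,  # urllib3's TimeoutError, not the builtin
)

# A single except clause for urllib3's TimeoutError covers both variants.
try:
    raise ConnectTimeoutError("timed out while connecting")
except TimeoutError as exc:
    caught = type(exc).__name__
```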

exception urllib3.exceptions.TimeoutStateError[source]

Bases: HTTPError

Raised when passing an invalid state to a timeout

exception urllib3.exceptions.URLSchemeUnknown(scheme: str)[source]

Bases: LocationValueError

Raised when a URL input has an unsupported scheme.

exception urllib3.exceptions.UnrewindableBodyError[source]

Bases: HTTPError

urllib3 encountered an error when trying to rewind a body

urllib3.fields module

class urllib3.fields.RequestField(name: str, data: str | bytes, filename: str | None = None, headers: Mapping[str, str] | None = None, header_formatter: Callable[[str, str | bytes], str] | None = None)[source]

Bases: object

A data container for request body parameters.

Parameters:
  • name – The name of this request field. Must be unicode.

  • data – The data/value body.

  • filename – An optional filename of the request field. Must be unicode.

  • headers – An optional dict-like object of headers to initially use for the field.

Changed in version 2.0.0: The header_formatter parameter is deprecated and will be removed in urllib3 v2.1.0.

_render_part(name: str, value: str | bytes) str[source]

Override this method to change how each multipart header parameter is formatted. By default, this calls format_multipart_header_param().

Parameters:
  • name – The name of the parameter, an ASCII-only str.

  • value – The value of the parameter, a str or UTF-8 encoded bytes.

classmethod from_tuples(fieldname: str, value: str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str], header_formatter: Callable[[str, str | bytes], str] | None = None) RequestField[source]

A RequestField factory from old-style tuple parameters.

Supports constructing a RequestField from key/value string parameters as well as key/filetuple parameters. A filetuple is a (filename, data, MIME type) tuple where the MIME type is optional. For example:

{
    'foo': 'bar',
    'fakefile': ('foofile.txt', 'contents of foofile'),
    'realfile': ('barfile.txt', open('realfile').read()),
    'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'),
    'nonamefile': 'contents of nonamefile field',
}

Field names and filenames must be unicode.
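For example, a typed filetuple produces a fully multipart-ready field (the data values are illustrative):

```python
from urllib3.fields import RequestField

# (filename, data, MIME type) — the MIME type element is optional.
field = RequestField.from_tuples(
    "typedfile", ("bazfile.bin", b"\x00\x01", "image/jpeg")
)
rendered = field.render_headers()
```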

make_multipart(content_disposition: str | None = None, content_type: str | None = None, content_location: str | None = None) None[source]

Makes this request field into a multipart request field.

This method sets the “Content-Disposition”, “Content-Type”, and “Content-Location” headers of the request field, overriding any existing values.

Parameters:
  • content_disposition – The ‘Content-Disposition’ of the request body. Defaults to ‘form-data’

  • content_type – The ‘Content-Type’ of the request body.

  • content_location – The ‘Content-Location’ of the request body.

render_headers() str[source]

Renders the headers for this request field.

urllib3.fields.format_header_param(name: str, value: str | bytes) str[source]

Deprecated since version 2.0.0: Renamed to format_multipart_header_param(). Will be removed in urllib3 v2.1.0.

urllib3.fields.format_header_param_html5(name: str, value: str | bytes) str[source]

Deprecated since version 2.0.0: Renamed to format_multipart_header_param(). Will be removed in urllib3 v2.1.0.

urllib3.fields.format_header_param_rfc2231(name: str, value: str | bytes) str[source]

Helper function to format and quote a single header parameter using the strategy defined in RFC 2231.

Particularly useful for header parameters which might contain non-ASCII values, like file names. This follows RFC 2388 Section 4.4.

Parameters:
  • name – The name of the parameter, a string expected to be ASCII only.

  • value – The value of the parameter, provided as bytes or str.

Returns:

An RFC-2231-formatted unicode string.

Deprecated since version 2.0.0: Will be removed in urllib3 v2.1.0. This is not valid for multipart/form-data header parameters.

urllib3.fields.format_multipart_header_param(name: str, value: str | bytes) str[source]

Format and quote a single multipart header parameter.

This follows the WHATWG HTML Standard as of 2021/06/10, matching the behavior of current browser and curl versions. Values are assumed to be UTF-8. The \n, \r, and " characters are percent encoded.

Parameters:
  • name – The name of the parameter, an ASCII-only str.

  • value – The value of the parameter, a str or UTF-8 encoded bytes.

Returns:

A string name="value" with the escaped value.

Changed in version 2.0.0: Matches the WHATWG HTML Standard as of 2021/06/10. Control characters are no longer percent encoded.

Changed in version 2.0.0: Renamed from format_header_param_html5 and format_header_param. The old names will be removed in urllib3 v2.1.0.

urllib3.fields.guess_content_type(filename: str | None, default: str = 'application/octet-stream') str[source]

Guess the “Content-Type” of a file.

Parameters:
  • filename – The filename to guess the “Content-Type” of using mimetypes.

  • default – If no “Content-Type” can be guessed, default to default.
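A quick sketch of both the guessed and fallback cases:

```python
from urllib3.fields import guess_content_type

jpeg = guess_content_type("photo.jpg")          # known extension
unknown = guess_content_type("blob.unknownext") # falls back to the default
```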

urllib3.filepost module

urllib3.filepost.choose_boundary() str[source]

Our embarrassingly-simple replacement for mimetools.choose_boundary.

urllib3.filepost.encode_multipart_formdata(fields: Sequence[Tuple[str, str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str]] | RequestField] | Mapping[str, str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str]], boundary: str | None = None) tuple[bytes, str][source]

Encode a dictionary of fields using the multipart/form-data MIME format.

Parameters:
  • fields – Dictionary of fields, or a sequence of (key, value) tuples or RequestField instances; plain values are converted via RequestField.from_tuples().

  • boundary – If not specified, a random boundary is generated using choose_boundary().
urllib3.filepost.iter_field_objects(fields: Sequence[Tuple[str, str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str]] | RequestField] | Mapping[str, str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str]]) Iterable[RequestField][source]

Iterate over fields.

Supports list of (k, v) tuples and dicts, and lists of RequestField.
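A sketch of encoding a mixed dictionary of plain values and filetuples (field names and boundary are illustrative):

```python
from urllib3.filepost import encode_multipart_formdata

# A fixed boundary keeps the output reproducible; omit it for a random one.
body, content_type = encode_multipart_formdata(
    {"field": "value", "file": ("hello.txt", b"hello", "text/plain")},
    boundary="formBoundary",
)
```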

urllib3.poolmanager module

class urllib3.poolmanager.PoolManager(num_pools: int = 10, headers: Mapping[str, str] | None = None, **connection_pool_kw: Any)[source]

Bases: RequestMethods

Allows for arbitrary requests while transparently keeping track of necessary connection pools for you.

Parameters:
  • num_pools – Number of connection pools to cache before discarding the least recently used pool.

  • headers – Headers to include with all requests, unless other headers are given explicitly.

  • **connection_pool_kw – Additional parameters are used to create fresh urllib3.connectionpool.ConnectionPool instances.

Example:

import urllib3

http = urllib3.PoolManager(num_pools=2)

resp1 = http.request("GET", "https://google.com/")
resp2 = http.request("GET", "https://google.com/mail")
resp3 = http.request("GET", "https://yahoo.com/")

print(len(http.pools))
# 2
clear() None[source]

Empty our store of pools and direct them all to close.

This will not affect in-flight connections, but they will not be re-used after completion.

connection_from_context(request_context: dict[str, Any]) HTTPConnectionPool[source]

Get a urllib3.connectionpool.ConnectionPool based on the request context.

request_context must at least contain the scheme key and its value must be a key in key_fn_by_scheme instance variable.

connection_from_host(host: str | None, port: int | None = None, scheme: str | None = 'http', pool_kwargs: dict[str, Any] | None = None) HTTPConnectionPool[source]

Get a urllib3.connectionpool.ConnectionPool based on the host, port, and scheme.

If port isn’t given, it will be derived from the scheme using urllib3.connectionpool.port_by_scheme. If pool_kwargs is provided, it is merged with the instance’s connection_pool_kw variable and used to create the new connection pool, if one is needed.

connection_from_pool_key(pool_key: PoolKey, request_context: dict[str, Any]) HTTPConnectionPool[source]

Get a urllib3.connectionpool.ConnectionPool based on the provided pool key.

pool_key should be a namedtuple that only contains immutable objects. At a minimum it must have the scheme, host, and port fields.

connection_from_url(url: str, pool_kwargs: dict[str, Any] | None = None) HTTPConnectionPool[source]

Similar to urllib3.connectionpool.connection_from_url().

If pool_kwargs is not provided and a new pool needs to be constructed, self.connection_pool_kw is used to initialize the urllib3.connectionpool.ConnectionPool. If pool_kwargs is provided, it is used instead. Note that if a new pool does not need to be created for the request, the provided pool_kwargs are not used.
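None of the connection_from_* methods perform network I/O, which makes the pooling behaviour easy to observe; a sketch:

```python
from urllib3 import PoolManager

pm = PoolManager(num_pools=10)

# Same scheme/host/port -> the same pool is reused; only the path differs.
a = pm.connection_from_url("https://example.com/a")
b = pm.connection_from_url("https://example.com/b")

# A different scheme gets its own pool.
c = pm.connection_from_url("http://example.com/")
```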

proxy: Url | None = None
proxy_config: ProxyConfig | None = None
urlopen(method: str, url: str, redirect: bool = True, **kw: Any) BaseHTTPResponse[source]

Same as urllib3.HTTPConnectionPool.urlopen() with custom cross-host redirect logic and only sends the request-uri portion of the url.

The given url parameter must be absolute, such that an appropriate urllib3.connectionpool.ConnectionPool can be chosen for it.

class urllib3.poolmanager.ProxyManager(proxy_url: str, num_pools: int = 10, headers: Mapping[str, str] | None = None, proxy_headers: Mapping[str, str] | None = None, proxy_ssl_context: ssl.SSLContext | None = None, use_forwarding_for_https: bool = False, proxy_assert_hostname: None | str | Literal[False] = None, proxy_assert_fingerprint: str | None = None, **connection_pool_kw: Any)[source]

Bases: PoolManager

Behaves just like PoolManager, but sends all requests through the defined proxy, using the CONNECT method for HTTPS URLs.

Parameters:
  • proxy_url – The URL of the proxy to be used.

  • proxy_headers – A dictionary containing headers that will be sent to the proxy. In case of HTTP they are being sent with each request, while in the HTTPS/CONNECT case they are sent only once. Could be used for proxy authentication.

  • proxy_ssl_context – The proxy SSL context is used to establish the TLS connection to the proxy when using HTTPS proxies.

  • use_forwarding_for_https – (Defaults to False) If set to True will forward requests to the HTTPS proxy to be made on behalf of the client instead of creating a TLS tunnel via the CONNECT method. Enabling this flag means that request and response headers and content will be visible from the HTTPS proxy whereas tunneling keeps request and response headers and content private. IP address, target hostname, SNI, and port are always visible to an HTTPS proxy even when this flag is disabled.

  • proxy_assert_hostname – The hostname of the certificate to verify against.

  • proxy_assert_fingerprint – The fingerprint of the certificate to verify against.

Example:

import urllib3

proxy = urllib3.ProxyManager("https://localhost:3128/")

resp1 = proxy.request("GET", "https://google.com/")
resp2 = proxy.request("GET", "https://httpbin.org/")

print(len(proxy.pools))
# 1

resp3 = proxy.request("GET", "https://httpbin.org/")
resp4 = proxy.request("GET", "https://twitter.com/")

print(len(proxy.pools))
# 3
connection_from_host(host: str | None, port: int | None = None, scheme: str | None = 'http', pool_kwargs: dict[str, Any] | None = None) HTTPConnectionPool[source]

Get a urllib3.connectionpool.ConnectionPool based on the host, port, and scheme.

If port isn’t given, it will be derived from the scheme using urllib3.connectionpool.port_by_scheme. If pool_kwargs is provided, it is merged with the instance’s connection_pool_kw variable and used to create the new connection pool, if one is needed.

urlopen(method: str, url: str, redirect: bool = True, **kw: Any) BaseHTTPResponse[source]

Same as HTTP(S)ConnectionPool.urlopen(); the given url must be absolute.

urllib3.poolmanager.proxy_from_url(url: str, **kw: Any) ProxyManager[source]

urllib3.request module

A convenience, top-level request method. It uses a module-global PoolManager instance; its state is therefore shared with any other code in the process that relies on it. To avoid these shared side effects, create your own PoolManager instance and use that instead. The method does not accept the low-level **urlopen_kw keyword arguments.

urllib3.response module

class urllib3.response.BaseHTTPResponse(*, headers: Mapping[str, str] | Mapping[bytes, bytes] | None = None, status: int, version: int, reason: str | None, decode_content: bool, request_url: str | None, retries: Retry | None = None)[source]

Bases: IOBase

CONTENT_DECODERS = ['gzip', 'deflate']
DECODER_ERROR_CLASSES: tuple[type[Exception], ...] = (<class 'OSError'>, <class 'zlib.error'>)
REDIRECT_STATUSES = [301, 302, 303, 307, 308]
close() None[source]

Flush and close the IO object.

This method has no effect if the file is already closed.

property connection: HTTPConnection | None
property data: bytes
drain_conn() None[source]
get_redirect_location() str | None | Literal[False][source]

Should we redirect and where to?

Returns:

Truthy redirect location string if we got a redirect status code and valid location. None if redirect status and no location. False if not a redirect status code.
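For illustration, a sketch using hand-built responses (standing in for ones returned by a live request):

```python
from urllib3.response import HTTPResponse

# A 302 response carrying a Location header:
redirect = HTTPResponse(status=302, headers={"location": "https://example.com/next"})
# A plain 200 response:
plain = HTTPResponse(status=200)

redirect.get_redirect_location()  # the Location header value
plain.get_redirect_location()     # False: not a redirect status code
```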

getheader(name: str, default: str | None = None) str | None[source]
getheaders() HTTPHeaderDict[source]
geturl() str | None[source]
info() HTTPHeaderDict[source]
json() Any[source]

Parses the body of the HTTP response as JSON.

To use a custom JSON decoder, pass the result of HTTPResponse.data to the decoder.

This method can raise either UnicodeDecodeError or json.JSONDecodeError.


read(amt: int | None = None, decode_content: bool | None = None, cache_content: bool = False) bytes[source]
read_chunked(amt: int | None = None, decode_content: bool | None = None) Iterator[bytes][source]
readinto(b: bytearray) int[source]
release_conn() None[source]
property retries: Retry | None
stream(amt: int | None = 65536, decode_content: bool | None = None) Iterator[bytes][source]
property url: str | None
class urllib3.response.BytesQueueBuffer[source]

Bases: object

Memory-efficient bytes buffer

To return decoded data in read() and still follow the BufferedIOBase API, we need a buffer to always return the correct amount of bytes.

This buffer should be filled using calls to put()

Our maximum memory usage is determined by the sum of the size of:

  • self.buffer, which contains the full data

  • the largest chunk that we will copy in get()

The worst case scenario is a single chunk, in which case we’ll make a full copy of the data inside get().

get(n: int) bytes[source]
put(data: bytes) None[source]
class urllib3.response.ContentDecoder[source]

Bases: object

decompress(data: bytes) bytes[source]
flush() bytes[source]
class urllib3.response.DeflateDecoder[source]

Bases: ContentDecoder

decompress(data: bytes) bytes[source]
flush() bytes[source]
class urllib3.response.GzipDecoder[source]

Bases: ContentDecoder

decompress(data: bytes) bytes[source]
flush() bytes[source]
class urllib3.response.GzipDecoderState[source]

Bases: object

FIRST_MEMBER = 0
OTHER_MEMBERS = 1
SWALLOW_DATA = 2
class urllib3.response.HTTPResponse(body: _TYPE_BODY = '', headers: Mapping[str, str] | Mapping[bytes, bytes] | None = None, status: int = 0, version: int = 0, reason: str | None = None, preload_content: bool = True, decode_content: bool = True, original_response: _HttplibHTTPResponse | None = None, pool: HTTPConnectionPool | None = None, connection: HTTPConnection | None = None, msg: _HttplibHTTPMessage | None = None, retries: Retry | None = None, enforce_content_length: bool = True, request_method: str | None = None, request_url: str | None = None, auto_close: bool = True)[source]

Bases: BaseHTTPResponse

HTTP Response container.

Backwards-compatible with http.client.HTTPResponse but the response body is loaded and decoded on-demand when the data property is accessed. This class is also compatible with the Python standard library’s io module, and can hence be treated as a readable object in the context of that framework.

Extra parameters for behaviour not present in http.client.HTTPResponse:

Parameters:
  • preload_content – If True, the response’s body will be preloaded during construction.

  • decode_content – If True, will attempt to decode the body based on the ‘content-encoding’ header.

  • original_response – When this HTTPResponse wrapper is generated from an http.client.HTTPResponse object, it’s convenient to include the original for debug purposes. It’s otherwise unused.

  • retries – The retries contains the last Retry that was used during the request.

  • enforce_content_length – Enforce content-length checking. The body returned by the server must match the value of the Content-Length header, if present; otherwise an error is raised.
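A sketch of decode_content in action, simulating a server that deflate-encoded the body (no network involved; the payload is illustrative):

```python
import zlib
from io import BytesIO

from urllib3.response import HTTPResponse

compressed = zlib.compress(b"payload")
resp = HTTPResponse(
    body=BytesIO(compressed),
    headers={"content-encoding": "deflate"},
    status=200,
    preload_content=False,  # don't read the body during construction
)

decoded = resp.read()  # decode_content defaults to True, so this is b"payload"
```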

close() None[source]

Flush and close the IO object.

This method has no effect if the file is already closed.

property closed: bool
property connection: HTTPConnection | None
property data: bytes
drain_conn() None[source]

Read and discard any remaining HTTP response data in the response connection.

Unread data in the HTTPResponse connection blocks the connection from being released back to the pool.

fileno() int[source]

Returns underlying file descriptor if one exists.

OSError is raised if the IO object does not use a file descriptor.

flush() None[source]

Flush write buffers, if applicable.

This is not implemented for read-only and non-blocking streams.

isclosed() bool[source]
read(amt: int | None = None, decode_content: bool | None = None, cache_content: bool = False) bytes[source]

Similar to http.client.HTTPResponse.read(), but with two additional parameters: decode_content and cache_content.

Parameters:
  • amt – How much of the content to read. If specified, caching is skipped because it doesn’t make sense to cache partial content as the full response.

  • decode_content – If True, will attempt to decode the body based on the ‘content-encoding’ header.

  • cache_content – If True, will save the returned data such that the same result is returned regardless of the state of the underlying file object. This is useful if you want the .data property to continue working after having .read() the file object. (Overridden if amt is set.)
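A sketch of reading in fixed-size pieces with amt, using a hand-built response in place of one from a live connection:

```python
from io import BytesIO

from urllib3.response import HTTPResponse

resp = HTTPResponse(body=BytesIO(b"0123456789"), status=200, preload_content=False)

chunks = []
while True:
    chunk = resp.read(4)  # at most 4 bytes per call
    if not chunk:         # empty bytes signals the body is exhausted
        break
    chunks.append(chunk)
```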

read_chunked(amt: int | None = None, decode_content: bool | None = None) Generator[bytes, None, None][source]

Similar to HTTPResponse.read(), but with an additional parameter: decode_content.

Parameters:
  • amt – How much of the content to read. If specified, caching is skipped because it doesn’t make sense to cache partial content as the full response.

  • decode_content – If True, will attempt to decode the body based on the ‘content-encoding’ header.

readable() bool[source]

Return whether object was opened for reading.

If False, read() will raise OSError.

release_conn() None[source]
stream(amt: int | None = 65536, decode_content: bool | None = None) Generator[bytes, None, None][source]

A generator wrapper for the read() method. A call will block until amt bytes have been read from the connection or until the connection is closed.

Parameters:
  • amt – How much of the content to read. The generator will return up to amt bytes of data per iteration, but may return less. This is particularly likely when using compressed data. However, an empty bytestring will never be yielded.

  • decode_content – If True, will attempt to decode the body based on the ‘content-encoding’ header.
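The same idea via the generator interface, again on a hand-built response:

```python
from io import BytesIO

from urllib3.response import HTTPResponse

resp = HTTPResponse(body=BytesIO(b"a" * 10), status=200, preload_content=False)

# Each iteration yields at most 4 bytes; empty chunks are never yielded.
parts = list(resp.stream(4))
```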

supports_chunked_reads() bool[source]

Checks if the underlying file-like object looks like a http.client.HTTPResponse object. We do this by testing for the fp attribute. If it is present we assume it returns raw chunks as processed by read_chunked().

tell() int[source]

Obtain the number of bytes pulled over the wire so far. May differ from the amount of content returned by urllib3.response.HTTPResponse.read() if bytes are encoded on the wire (e.g., compressed).

property url: str | None

Returns the URL that was the source of this response. If the request that generated this response redirected, this method will return the final redirect location.

class urllib3.response.MultiDecoder(modes: str)[source]

Bases: ContentDecoder

From RFC 7231:

If one or more encodings have been applied to a representation, the sender that applied the encodings MUST generate a Content-Encoding header field that lists the content codings in the order in which they were applied.

decompress(data: bytes) bytes[source]
flush() bytes[source]

Module contents

A user-friendly Python HTTP library with thread-safe connection pooling, file post support, and more

class urllib3.BaseHTTPResponse(*, headers: Mapping[str, str] | Mapping[bytes, bytes] | None = None, status: int, version: int, reason: str | None, decode_content: bool, request_url: str | None, retries: Retry | None = None)[source]

Bases: IOBase

CONTENT_DECODERS = ['gzip', 'deflate']
DECODER_ERROR_CLASSES: tuple[type[Exception], ...] = (<class 'OSError'>, <class 'zlib.error'>)
REDIRECT_STATUSES = [301, 302, 303, 307, 308]
close() None[source]

Flush and close the IO object.

This method has no effect if the file is already closed.

property connection: HTTPConnection | None
property data: bytes
drain_conn() None[source]
get_redirect_location() str | None | Literal[False][source]

Should we redirect and where to?

Returns:

Truthy redirect location string if we got a redirect status code and valid location. None if redirect status and no location. False if not a redirect status code.

getheader(name: str, default: str | None = None) str | None[source]
getheaders() HTTPHeaderDict[source]
geturl() str | None[source]
info() HTTPHeaderDict[source]
json() Any[source]

Parses the body of the HTTP response as JSON.

To use a custom JSON decoder, pass the result of HTTPResponse.data to the decoder.

This method can raise either UnicodeDecodeError or json.JSONDecodeError.


read(amt: int | None = None, decode_content: bool | None = None, cache_content: bool = False) bytes[source]
read_chunked(amt: int | None = None, decode_content: bool | None = None) Iterator[bytes][source]
readinto(b: bytearray) int[source]
release_conn() None[source]
property retries: Retry | None
stream(amt: int | None = 65536, decode_content: bool | None = None) Iterator[bytes][source]
property url: str | None
class urllib3.HTTPConnectionPool(host: str, port: int | None = None, timeout: Timeout | float | _TYPE_DEFAULT | None = _TYPE_DEFAULT.token, maxsize: int = 1, block: bool = False, headers: Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, _proxy: Url | None = None, _proxy_headers: Mapping[str, str] | None = None, _proxy_config: ProxyConfig | None = None, **conn_kw: Any)[source]

Bases: ConnectionPool, RequestMethods

Thread-safe connection pool for one host.

Parameters:
  • host – Host used for this HTTP Connection (e.g. “localhost”), passed into http.client.HTTPConnection.

  • port – Port used for this HTTP Connection (None is equivalent to 80), passed into http.client.HTTPConnection.

  • timeout – Socket timeout in seconds for each individual connection. This can be a float or integer, which sets the timeout for the HTTP request, or an instance of urllib3.util.Timeout which gives you more fine-grained control over request timeouts. Once the constructor has processed it, this is always a urllib3.util.Timeout object.

  • maxsize – Number of connections to save that can be reused. More than 1 is useful in multithreaded situations. If block is set to False, more connections will be created but they will not be saved once they’ve been used.

  • block – If set to True, no more than maxsize connections will be used at a time. When no free connections are available, the call will block until a connection has been released. This is a useful side effect for particular multithreaded situations where one does not want to use more than maxsize connections per host to prevent flooding.

  • headers – Headers to include with all requests, unless other headers are given explicitly.

  • retries – Retry configuration to use by default with requests in this pool.

  • _proxy – Parsed proxy URL. Should not be used directly; see urllib3.ProxyManager instead.

  • _proxy_headers – A dictionary with proxy headers. Should not be used directly; see urllib3.ProxyManager instead.

  • **conn_kw – Additional parameters are used to create fresh urllib3.connection.HTTPConnection, urllib3.connection.HTTPSConnection instances.
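A sketch of pool construction and host matching (the hostname is illustrative; constructing the pool opens no connection):

```python
import urllib3

pool = urllib3.HTTPConnectionPool("example.com", maxsize=4, block=True)

# is_same_host() only parses and compares URLs; it performs no I/O:
pool.is_same_host("http://example.com/index.html")  # same host, default port
pool.is_same_host("http://other.example/")          # different host
```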

ConnectionCls

alias of HTTPConnection

close() None[source]

Close all pooled connections and disable the pool.

is_same_host(url: str) bool[source]

Check if the given url is a member of the same host as this connection pool.

pool: queue.LifoQueue[Any] | None
scheme: str | None = 'http'
urlopen(method: str, url: str, body: bytes | IO[Any] | Iterable[bytes] | str | None = None, headers: Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: Timeout | float | _TYPE_DEFAULT | None = _TYPE_DEFAULT.token, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: int | _TYPE_FAILEDTELL | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: Any) BaseHTTPResponse[source]

Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you’ll need to specify all the raw details.

Note

More commonly, it’s appropriate to use a convenience method such as request().

Note

release_conn will only behave as expected if preload_content=False because we want to make preload_content=False the default behaviour someday soon without breaking backwards compatibility.

Parameters:
  • method – HTTP request method (such as GET, POST, PUT, etc.)

  • url – The URL to perform the request on.

  • body – Data to send in the request body, either str, bytes, an iterable of str/bytes, or a file-like object.

  • headers – Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers.

  • retries (Retry, False, or an int.) –

    Configure the number of retries to allow before raising a MaxRetryError exception.

    Pass None to retry until you receive a response. Pass a Retry object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry.

    If False, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned.

  • redirect – If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too.

  • assert_same_host – If True, ensures that the host of each request matches the pool’s host, raising HostChangedError otherwise. When False, you can use the pool on an HTTP proxy and request foreign hosts.

  • timeout – If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of urllib3.util.Timeout.

  • pool_timeout – If set and the pool is set to block=True, then this method will block for pool_timeout seconds and raise EmptyPoolError if no connection is available within the time period.

  • preload_content (bool) – If True, the response’s body will be preloaded into memory.

  • decode_content (bool) – If True, will attempt to decode the body based on the ‘content-encoding’ header.

  • release_conn – If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when preload_content=True). This is useful if you’re not preloading the response’s content immediately. You will need to call r.release_conn() on the response r to return the connection back into the pool. If None, it takes the value of preload_content which defaults to True.

  • chunked (bool) – If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False.

  • body_pos (int) – Position to seek to in file-like body in the event of a retry or redirect. Typically this won’t need to be set because urllib3 will auto-populate the value when needed.
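A sketch of the preload_content=False / release_conn interplay, using a throwaway local server so no external network is needed (the handler and payload are illustrative):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import urllib3

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

pool = urllib3.HTTPConnectionPool("127.0.0.1", server.server_address[1])
# With preload_content=False the connection is held until the body is
# consumed; release_conn=False makes the hand-back explicit.
resp = pool.urlopen("GET", "/", preload_content=False, release_conn=False)
data = resp.read()   # read the entire body ...
resp.release_conn()  # ... then return the connection to the pool
server.shutdown()
```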

class urllib3.HTTPHeaderDict(headers: ValidHTTPHeaderSource | None = None, **kwargs: str)[source]

Bases: MutableMapping[str, str]

Parameters:
  • headers – An iterable of field-value pairs. Must not contain multiple field names when compared case-insensitively.

  • kwargs – Additional field-value pairs to pass in to dict.update.

A dict-like container for storing HTTP headers.

Field names are stored and compared case-insensitively in compliance with RFC 7230. Iteration provides the first case-sensitive key seen for each case-insensitive pair.

Using __setitem__ syntax overwrites fields that compare equal case-insensitively, in order to maintain dict’s API. To keep multiple values for fields that compare equal, instead create a new HTTPHeaderDict and use .add in a loop.

If multiple fields that are equal case-insensitively are passed to the constructor or .update, the behavior is undefined and some will be lost.

>>> headers = HTTPHeaderDict()
>>> headers.add('Set-Cookie', 'foo=bar')
>>> headers.add('set-cookie', 'baz=quxx')
>>> headers['content-length'] = '7'
>>> headers['SET-cookie']
'foo=bar, baz=quxx'
>>> headers['Content-Length']
'7'
add(key: str, val: str, *, combine: bool = False) None[source]

Adds a (name, value) pair, doesn’t overwrite the value if it already exists.

If this is called with combine=True, instead of adding a new header value as a distinct item during iteration, this will instead append the value to any existing header value with a comma. If no existing header value exists for the key, then the value will simply be added, ignoring the combine parameter.

>>> headers = HTTPHeaderDict(foo='bar')
>>> headers.add('Foo', 'baz')
>>> headers['foo']
'bar, baz'
>>> list(headers.items())
[('foo', 'bar'), ('foo', 'baz')]
>>> headers.add('foo', 'quz', combine=True)
>>> list(headers.items())
[('foo', 'bar, baz, quz')]
copy() HTTPHeaderDict[source]
discard(key: str) None[source]
extend(*args: ValidHTTPHeaderSource, **kwargs: str) None[source]

Generic import function for any type of header-like object. Adapted version of MutableMapping.update in order to insert items with self.add instead of self.__setitem__

get_all(key: str, default: _Sentinel | _DT = _Sentinel.not_passed) list[str] | _DT

Returns a list of all the values for the named field. Returns an empty list if the key doesn’t exist.

getallmatchingheaders(key: str, default: _Sentinel | _DT = _Sentinel.not_passed) list[str] | _DT

Returns a list of all the values for the named field. Returns an empty list if the key doesn’t exist.

getheaders(key: str, default: _Sentinel | _DT = _Sentinel.not_passed) list[str] | _DT

Returns a list of all the values for the named field. Returns an empty list if the key doesn’t exist.

getlist(key: str) list[str][source]
getlist(key: str, default: _DT) list[str] | _DT

Returns a list of all the values for the named field. Returns an empty list if the key doesn’t exist.

iget(key: str, default: _Sentinel | _DT = _Sentinel.not_passed) list[str] | _DT

Returns a list of all the values for the named field. Returns an empty list if the key doesn’t exist.

items() → a set-like object providing a view on D's items[source]
iteritems() Iterator[tuple[str, str]][source]

Iterate over all header lines, including duplicate ones.

itermerged() Iterator[tuple[str, str]][source]

Iterate over all headers, merging duplicate ones together.

setdefault(k[, d]) → D.get(k, d), also set D[k] = d if k not in D[source]
class urllib3.HTTPResponse(body: _TYPE_BODY = '', headers: Mapping[str, str] | Mapping[bytes, bytes] | None = None, status: int = 0, version: int = 0, reason: str | None = None, preload_content: bool = True, decode_content: bool = True, original_response: _HttplibHTTPResponse | None = None, pool: HTTPConnectionPool | None = None, connection: HTTPConnection | None = None, msg: _HttplibHTTPMessage | None = None, retries: Retry | None = None, enforce_content_length: bool = True, request_method: str | None = None, request_url: str | None = None, auto_close: bool = True)[source]

Bases: BaseHTTPResponse

HTTP Response container.

Backwards-compatible with http.client.HTTPResponse but the response body is loaded and decoded on-demand when the data property is accessed. This class is also compatible with the Python standard library’s io module, and can hence be treated as a readable object in the context of that framework.

Extra parameters for behaviour not present in http.client.HTTPResponse:

Parameters:
  • preload_content – If True, the response’s body will be preloaded during construction.

  • decode_content – If True, will attempt to decode the body based on the ‘content-encoding’ header.

  • original_response – When this HTTPResponse wrapper is generated from an http.client.HTTPResponse object, it’s convenient to include the original for debug purposes. It’s otherwise unused.

  • retries – The retries contains the last Retry that was used during the request.

  • enforce_content_length – Enforce content-length checking. The body returned by the server must match the value of the Content-Length header, if present; otherwise an error is raised.

close() None[source]

Flush and close the IO object.

This method has no effect if the file is already closed.

property closed: bool
property connection: HTTPConnection | None
property data: bytes
drain_conn() None[source]

Read and discard any remaining HTTP response data in the response connection.

Unread data in the HTTPResponse connection blocks the connection from being released back to the pool.

fileno() int[source]

Returns underlying file descriptor if one exists.

OSError is raised if the IO object does not use a file descriptor.

flush() None[source]

Flush write buffers, if applicable.

This is not implemented for read-only and non-blocking streams.

isclosed() bool[source]
read(amt: int | None = None, decode_content: bool | None = None, cache_content: bool = False) bytes[source]

Similar to http.client.HTTPResponse.read(), but with two additional parameters: decode_content and cache_content.

Parameters:
  • amt – How much of the content to read. If specified, caching is skipped because it doesn’t make sense to cache partial content as the full response.

  • decode_content – If True, will attempt to decode the body based on the ‘content-encoding’ header.

  • cache_content – If True, will save the returned data such that the same result is returned regardless of the state of the underlying file object. This is useful if you want the .data property to continue working after having .read() the file object. (Overridden if amt is set.)

read_chunked(amt: int | None = None, decode_content: bool | None = None) Generator[bytes, None, None][source]

Similar to HTTPResponse.read(), but with an additional parameter: decode_content.

Parameters:
  • amt – How much of the content to read. If specified, caching is skipped because it doesn’t make sense to cache partial content as the full response.

  • decode_content – If True, will attempt to decode the body based on the ‘content-encoding’ header.

readable() bool[source]

Return whether object was opened for reading.

If False, read() will raise OSError.

release_conn() None[source]
stream(amt: int | None = 65536, decode_content: bool | None = None) Generator[bytes, None, None][source]

A generator wrapper for the read() method. A call will block until amt bytes have been read from the connection or until the connection is closed.

Parameters:
  • amt – How much of the content to read. The generator will return up to amt bytes of data per iteration, but may return less. This is particularly likely when using compressed data. However, an empty bytestring will never be yielded.

  • decode_content – If True, will attempt to decode the body based on the ‘content-encoding’ header.

supports_chunked_reads() bool[source]

Checks if the underlying file-like object looks like a http.client.HTTPResponse object. We do this by testing for the fp attribute. If it is present we assume it returns raw chunks as processed by read_chunked().

tell() int[source]

Obtain the number of bytes pulled over the wire so far. May differ from the amount of content returned by urllib3.response.HTTPResponse.read() if bytes are encoded on the wire (e.g., compressed).

property url: str | None

Returns the URL that was the source of this response. If the request that generated this response redirected, this method will return the final redirect location.

class urllib3.HTTPSConnectionPool(host: str, port: int | None = None, timeout: _TYPE_TIMEOUT | None = _TYPE_DEFAULT.token, maxsize: int = 1, block: bool = False, headers: Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, _proxy: Url | None = None, _proxy_headers: Mapping[str, str] | None = None, key_file: str | None = None, cert_file: str | None = None, cert_reqs: int | str | None = None, key_password: str | None = None, ca_certs: str | None = None, ssl_version: int | str | None = None, ssl_minimum_version: ssl.TLSVersion | None = None, ssl_maximum_version: ssl.TLSVersion | None = None, assert_hostname: str | Literal[False] | None = None, assert_fingerprint: str | None = None, ca_cert_dir: str | None = None, **conn_kw: Any)[source]

Bases: HTTPConnectionPool

Same as HTTPConnectionPool, but HTTPS.

HTTPSConnection uses one of assert_fingerprint, assert_hostname and host in this order to verify connections. If assert_hostname is False, no verification is done.

The key_file, cert_file, cert_reqs, ca_certs, ca_cert_dir, ssl_version, key_password are only used if ssl is available and are fed into urllib3.util.ssl_wrap_socket() to upgrade the connection socket into an SSL socket.

ConnectionCls

alias of HTTPSConnection

pool: queue.LifoQueue[Any] | None
scheme: str | None = 'https'
class urllib3.PoolManager(num_pools: int = 10, headers: Mapping[str, str] | None = None, **connection_pool_kw: Any)[source]

Bases: RequestMethods

Allows for arbitrary requests while transparently keeping track of necessary connection pools for you.

Parameters:
  • num_pools – Number of connection pools to cache before discarding the least recently used pool.

  • headers – Headers to include with all requests, unless other headers are given explicitly.

  • **connection_pool_kw – Additional parameters are used to create fresh urllib3.connectionpool.ConnectionPool instances.

Example:

import urllib3

http = urllib3.PoolManager(num_pools=2)

resp1 = http.request("GET", "https://google.com/")
resp2 = http.request("GET", "https://google.com/mail")
resp3 = http.request("GET", "https://yahoo.com/")

print(len(http.pools))
# 2
clear() None[source]

Empty our store of pools and direct them all to close.

This will not affect in-flight connections, but they will not be re-used after completion.

connection_from_context(request_context: dict[str, Any]) HTTPConnectionPool[source]

Get a urllib3.connectionpool.ConnectionPool based on the request context.

request_context must at least contain the scheme key and its value must be a key in key_fn_by_scheme instance variable.

connection_from_host(host: str | None, port: int | None = None, scheme: str | None = 'http', pool_kwargs: dict[str, Any] | None = None) HTTPConnectionPool[source]

Get a urllib3.connectionpool.ConnectionPool based on the host, port, and scheme.

If port isn’t given, it will be derived from the scheme using urllib3.connectionpool.port_by_scheme. If pool_kwargs is provided, it is merged with the instance’s connection_pool_kw variable and used to create the new connection pool, if one is needed.
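A sketch of port derivation from the scheme (the hostname is illustrative; the pool is created and cached without opening a connection):

```python
import urllib3

pm = urllib3.PoolManager()

pool = pm.connection_from_host("example.com", scheme="https")

pool.port             # 443, derived via port_by_scheme
type(pool).__name__   # an HTTPSConnectionPool for the https scheme
```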

connection_from_pool_key(pool_key: PoolKey, request_context: dict[str, Any]) HTTPConnectionPool[source]

Get a urllib3.connectionpool.ConnectionPool based on the provided pool key.

pool_key should be a namedtuple that only contains immutable objects. At a minimum it must have the scheme, host, and port fields.

connection_from_url(url: str, pool_kwargs: dict[str, Any] | None = None) HTTPConnectionPool[source]

Similar to urllib3.connectionpool.connection_from_url().

If pool_kwargs is not provided and a new pool needs to be constructed, self.connection_pool_kw is used to initialize the urllib3.connectionpool.ConnectionPool. If pool_kwargs is provided, it is used instead. Note that if a new pool does not need to be created for the request, the provided pool_kwargs are not used.

pools: RecentlyUsedContainer[PoolKey, HTTPConnectionPool]
proxy: Url | None = None
proxy_config: ProxyConfig | None = None
urlopen(method: str, url: str, redirect: bool = True, **kw: Any) BaseHTTPResponse[source]

Same as urllib3.HTTPConnectionPool.urlopen() with custom cross-host redirect logic and only sends the request-uri portion of the url.

The given url parameter must be absolute, such that an appropriate urllib3.connectionpool.ConnectionPool can be chosen for it.

class urllib3.ProxyManager(proxy_url: str, num_pools: int = 10, headers: Mapping[str, str] | None = None, proxy_headers: Mapping[str, str] | None = None, proxy_ssl_context: ssl.SSLContext | None = None, use_forwarding_for_https: bool = False, proxy_assert_hostname: None | str | Literal[False] = None, proxy_assert_fingerprint: str | None = None, **connection_pool_kw: Any)[source]

Bases: PoolManager

Behaves just like PoolManager, but sends all requests through the defined proxy, using the CONNECT method for HTTPS URLs.

Parameters:
  • proxy_url – The URL of the proxy to be used.

  • proxy_headers – A dictionary containing headers that will be sent to the proxy. For HTTP they are sent with each request; in the HTTPS/CONNECT case they are sent only once. This can be used for proxy authentication.

  • proxy_ssl_context – The proxy SSL context is used to establish the TLS connection to the proxy when using HTTPS proxies.

  • use_forwarding_for_https – (Defaults to False) If set to True will forward requests to the HTTPS proxy to be made on behalf of the client instead of creating a TLS tunnel via the CONNECT method. Enabling this flag means that request and response headers and content will be visible from the HTTPS proxy whereas tunneling keeps request and response headers and content private. IP address, target hostname, SNI, and port are always visible to an HTTPS proxy even when this flag is disabled.

  • proxy_assert_hostname – The hostname of the certificate to verify against.

  • proxy_assert_fingerprint – The fingerprint of the certificate to verify against.

Example:

import urllib3

proxy = urllib3.ProxyManager("https://localhost:3128/")

resp1 = proxy.request("GET", "http://google.com/")
resp2 = proxy.request("GET", "http://httpbin.org/")

print(len(proxy.pools))
# 1

resp3 = proxy.request("GET", "https://httpbin.org/")
resp4 = proxy.request("GET", "https://twitter.com/")

print(len(proxy.pools))
# 3
connection_from_host(host: str | None, port: int | None = None, scheme: str | None = 'http', pool_kwargs: dict[str, Any] | None = None) HTTPConnectionPool[source]

Get a urllib3.connectionpool.ConnectionPool based on the host, port, and scheme.

If port isn’t given, it will be derived from the scheme using urllib3.connectionpool.port_by_scheme. If pool_kwargs is provided, it is merged with the instance’s connection_pool_kw variable and used to create the new connection pool, if one is needed.
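The port derivation described above can be sketched in a few lines (a minimal stand-in for urllib3.connectionpool.port_by_scheme; resolve_port is an illustrative helper, not a urllib3 function):

```python
# Sketch (assumption: not urllib3's actual code) of the scheme-to-port
# lookup; the http/https values match the well-known defaults.
from typing import Optional

port_by_scheme = {"http": 80, "https": 443}

def resolve_port(scheme: str, port: Optional[int]) -> int:
    # An explicit port always wins; otherwise fall back to the scheme default.
    return port if port is not None else port_by_scheme[scheme.lower()]

print(resolve_port("https", None))  # 443
print(resolve_port("http", 8080))   # 8080
```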

urlopen(method: str, url: str, redirect: bool = True, **kw: Any) BaseHTTPResponse[source]

Same as HTTP(S)ConnectionPool.urlopen, url must be absolute.

class urllib3.Retry(total: bool | int | None = 10, connect: int | None = None, read: int | None = None, redirect: bool | int | None = None, status: int | None = None, other: int | None = None, allowed_methods: Collection[str] | None = frozenset({'DELETE', 'GET', 'HEAD', 'OPTIONS', 'PUT', 'TRACE'}), status_forcelist: Collection[int] | None = None, backoff_factor: float = 0, backoff_max: float = 120, raise_on_redirect: bool = True, raise_on_status: bool = True, history: tuple[urllib3.util.retry.RequestHistory, ...] | None = None, respect_retry_after_header: bool = True, remove_headers_on_redirect: Collection[str] = frozenset({'Authorization'}), backoff_jitter: float = 0.0)[source]

Bases: object

Retry configuration.

Each retry attempt will create a new Retry object with updated values, so they can be safely reused.

Retries can be defined as a default for a pool:

retries = Retry(connect=5, read=2, redirect=5)
http = PoolManager(retries=retries)
response = http.request("GET", "https://example.com/")

Or per-request (which overrides the default for the pool):

response = http.request("GET", "https://example.com/", retries=Retry(10))

Retries can be disabled by passing False:

response = http.request("GET", "https://example.com/", retries=False)

Errors will be wrapped in MaxRetryError unless retries are disabled, in which case the causing exception will be raised.

Parameters:
  • total (int) –

    Total number of retries to allow. Takes precedence over other counts.

    Set to None to remove this constraint and fall back on other counts.

    Set to 0 to fail on the first retry.

    Set to False to disable and imply raise_on_redirect=False.

  • connect (int) –

    How many connection-related errors to retry on.

    These are errors raised before the request is sent to the remote server, so we assume the server has not begun processing the request.

    Set to 0 to fail on the first retry of this type.

  • read (int) –

    How many times to retry on read errors.

    These errors are raised after the request was sent to the server, so the request may have side-effects.

    Set to 0 to fail on the first retry of this type.

  • redirect (int) –

    How many redirects to perform. Limit this to avoid infinite redirect loops.

    A redirect is an HTTP response with a status code of 301, 302, 303, 307, or 308.

    Set to 0 to fail on the first retry of this type.

    Set to False to disable and imply raise_on_redirect=False.

  • status (int) –

    How many times to retry on bad status codes.

    These are retries made on responses whose status code matches status_forcelist.

    Set to 0 to fail on the first retry of this type.

  • other (int) –

    How many times to retry on other errors.

    Other errors are errors that are not connect, read, redirect or status errors. These errors might be raised after the request was sent to the server, so the request might have side-effects.

    Set to 0 to fail on the first retry of this type.

    If total is not set, it’s a good idea to set this to 0 to account for unexpected edge cases and avoid infinite retry loops.

  • allowed_methods (Collection) –

    Set of uppercased HTTP method verbs that we should retry on.

    By default, we only retry on methods which are considered to be idempotent (multiple requests with the same parameters end with the same state). See Retry.DEFAULT_ALLOWED_METHODS.

    Set to None to retry on any verb.

  • status_forcelist (Collection) –

    A set of integer HTTP status codes that we should force a retry on. A retry is initiated if the request method is in allowed_methods and the response status code is in status_forcelist.

    By default, this is disabled with None.

  • backoff_factor (float) –

    A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). urllib3 will sleep for:

    {backoff factor} * (2 ** ({number of previous retries}))
    

    seconds. If backoff_jitter is non-zero, this sleep is extended by:

    random.uniform(0, {backoff jitter})
    

    seconds. For example, if the backoff_factor is 0.1, then Retry.sleep() will sleep for [0.0s, 0.2s, 0.4s, 0.8s, …] between retries. No backoff will ever be longer than backoff_max.

    By default, backoff is disabled (factor set to 0).

  • raise_on_redirect (bool) – Whether, if the number of redirects is exhausted, to raise a MaxRetryError, or to return a response with a response code in the 3xx range.

  • raise_on_status (bool) – Similar in meaning to raise_on_redirect: whether we should raise an exception, or return a response, if the status falls in status_forcelist and retries have been exhausted.

  • history (tuple) – The history of requests encountered during each call to increment(), in the order they occurred. Each item is an instance of RequestHistory.

  • respect_retry_after_header (bool) – Whether to respect Retry-After header on status codes defined as Retry.RETRY_AFTER_STATUS_CODES or not.

  • remove_headers_on_redirect (Collection) – Sequence of headers to remove from the request when a response indicating a redirect is returned before firing off the redirected request.
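The backoff schedule documented under backoff_factor can be sketched as follows (a simplified model of the documented formula, not urllib3's implementation; backoff_schedule is an illustrative helper):

```python
# Model of the documented schedule: no delay before the second try, then
# backoff_factor * 2**(previous retries) seconds, capped at backoff_max.
import random

def backoff_schedule(retries, backoff_factor=0.1, backoff_max=120.0, backoff_jitter=0.0):
    delays = []
    for attempt in range(retries):
        if attempt == 0:
            delay = 0.0  # the second try happens immediately
        else:
            delay = backoff_factor * (2 ** attempt)
        if backoff_jitter:
            delay += random.uniform(0, backoff_jitter)  # optional extra jitter
        delays.append(min(delay, backoff_max))  # never exceed backoff_max
    return delays

print(backoff_schedule(5))  # [0.0, 0.2, 0.4, 0.8, 1.6]
```

This reproduces the [0.0s, 0.2s, 0.4s, 0.8s, …] progression given in the description for backoff_factor=0.1.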

DEFAULT: ClassVar[Retry] = Retry(total=3, connect=None, read=None, redirect=None, status=None)
DEFAULT_ALLOWED_METHODS = frozenset({'DELETE', 'GET', 'HEAD', 'OPTIONS', 'PUT', 'TRACE'})

Default methods to be used for allowed_methods

DEFAULT_BACKOFF_MAX = 120

Default maximum backoff time.

DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset({'Authorization'})

Default headers to be used for remove_headers_on_redirect

RETRY_AFTER_STATUS_CODES = frozenset({413, 429, 503})

Status codes for which the Retry-After response header is respected by default (see respect_retry_after_header)

classmethod from_int(retries: Retry | bool | int | None, redirect: bool | int | None = True, default: Retry | bool | int | None = None) Retry[source]

Backwards-compatibility for the old retries format.

get_backoff_time() float[source]

Formula for computing the current backoff

Return type:

float

get_retry_after(response: BaseHTTPResponse) float | None[source]

Get the value of Retry-After in seconds.
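The parsing involved can be sketched with plain stdlib code (a stand-alone simplification, not urllib3's exact code, assuming the two standard forms of Retry-After: delta-seconds or an HTTP-date):

```python
# Sketch: a Retry-After value is either an integer number of seconds
# or an HTTP-date; a date in the past means "retry now".
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def parse_retry_after(value: str) -> float:
    try:
        seconds = float(int(value))  # e.g. "Retry-After: 120"
    except ValueError:
        # e.g. "Retry-After: Fri, 31 Dec 1999 23:59:59 GMT"
        retry_date = parsedate_to_datetime(value)
        if retry_date.tzinfo is None:
            retry_date = retry_date.replace(tzinfo=timezone.utc)
        seconds = (retry_date - datetime.now(timezone.utc)).total_seconds()
    return max(0.0, seconds)

print(parse_retry_after("120"))  # 120.0
```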

increment(method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None) Retry[source]

Return a new Retry object with incremented retry counters.

Parameters:
  • response (BaseHTTPResponse) – A response object, or None, if the server did not return a response.

  • error (Exception) – An error encountered during the request, or None if the response was received successfully.

Returns:

A new Retry object.

is_exhausted() bool[source]

Are we out of retries?

is_retry(method: str, status_code: int, has_retry_after: bool = False) bool[source]

Is this method/status code retryable? This is based on allowlists and control variables: the number of total retries to allow, whether to respect the Retry-After header, whether that header is present, and whether the returned status code is in the list of status codes to be retried upon in the presence of that header.

new(**kw: Any) Retry[source]
parse_retry_after(retry_after: str) float[source]
sleep(response: BaseHTTPResponse | None = None) None[source]

Sleep between retry attempts.

This method will respect a server’s Retry-After response header and sleep the duration of the time requested. If that is not present, it will use an exponential backoff. By default, the backoff factor is 0 and this method will return immediately.

sleep_for_retry(response: BaseHTTPResponse) bool[source]
class urllib3.Timeout(total: float | _TYPE_DEFAULT | None = None, connect: float | _TYPE_DEFAULT | None = _TYPE_DEFAULT.token, read: float | _TYPE_DEFAULT | None = _TYPE_DEFAULT.token)[source]

Bases: object

Timeout configuration.

Timeouts can be defined as a default for a pool:

import urllib3

timeout = urllib3.util.Timeout(connect=2.0, read=7.0)

http = urllib3.PoolManager(timeout=timeout)

resp = http.request("GET", "https://example.com/")

print(resp.status)

Or per-request (which overrides the default for the pool):

response = http.request("GET", "https://example.com/", timeout=Timeout(10))

Timeouts can be disabled by setting all the parameters to None:

no_timeout = Timeout(connect=None, read=None)
response = http.request("GET", "https://example.com/", timeout=no_timeout)
Parameters:
  • total (int, float, or None) –

    This combines the connect and read timeouts into one; the read timeout will be set to the time leftover from the connect attempt. In the event that both a connect timeout and a total are specified, or a read timeout and a total are specified, the shorter timeout will be applied.

    Defaults to None.

  • connect (int, float, or None) – The maximum amount of time (in seconds) to wait for a connection attempt to a server to succeed. Omitting the parameter will default the connect timeout to the system default, probably the global default timeout in socket.py. None will set an infinite timeout for connection attempts.

  • read (int, float, or None) –

    The maximum amount of time (in seconds) to wait between consecutive read operations for a response from the server. Omitting the parameter will default the read timeout to the system default, probably the global default timeout in socket.py. None will set an infinite timeout.

Note

Many factors can affect the total amount of time for urllib3 to return an HTTP response.

For example, Python’s DNS resolver does not obey the timeout specified on the socket. Other factors that can affect total request time include high CPU load, high swap, the program running at a low priority level, or other behaviors.

In addition, the read and total timeouts only measure the time between read operations on the socket connecting the client and the server, not the total amount of time for the request to return a complete response. For most requests, the timeout is raised because the server has not sent the first byte in the specified time. This is not always the case; if a server streams one byte every fifteen seconds, a timeout of 20 seconds will not trigger, even though the request will take several minutes to complete.

If your goal is to cut off any request after a set amount of wall clock time, consider having a second “watcher” thread to cut off a slow request.
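How a total budget interacts with the per-phase timeouts can be sketched like this (a simplified model of the documented semantics, not urllib3's code; effective_read_timeout is an illustrative helper):

```python
# Sketch: the read timeout is whatever remains of `total` after the
# connect phase, never exceeding the configured read timeout.
def effective_read_timeout(total, read, connect_elapsed):
    remaining = total - connect_elapsed  # budget left after connecting
    if read is None:
        return max(0.0, remaining)
    return max(0.0, min(read, remaining))

# total=10s, read=7s, connecting took 4s -> only 6s remain for reading
print(effective_read_timeout(total=10.0, read=7.0, connect_elapsed=4.0))  # 6.0
```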

DEFAULT_TIMEOUT: float | _TYPE_DEFAULT | None = -1

A sentinel object representing the default timeout value

clone() Timeout[source]

Create a copy of the timeout object

Timeout properties are stored per-pool but each request needs a fresh Timeout object to ensure each one has its own start/stop configured.

Returns:

a copy of the timeout object

Return type:

Timeout

property connect_timeout: float | _TYPE_DEFAULT | None

Get the value to use when setting a connection timeout.

This will be a positive float or integer, the value None (never timeout), or the default system timeout.

Returns:

Connect timeout.

Return type:

int, float, Timeout.DEFAULT_TIMEOUT or None

classmethod from_float(timeout: float | _TYPE_DEFAULT | None) Timeout[source]

Create a new Timeout from a legacy timeout value.

The timeout value used by httplib.py sets the same timeout on the connect() and recv() socket operations. This creates a Timeout object that sets the individual timeouts to the timeout value passed to this function.

Parameters:

timeout (integer, float, urllib3.util.Timeout.DEFAULT_TIMEOUT, or None) – The legacy timeout value.

Returns:

Timeout object

Return type:

Timeout

get_connect_duration() float[source]

Gets the time elapsed since the call to start_connect().

Returns:

Elapsed time in seconds.

Return type:

float

Raises:

urllib3.exceptions.TimeoutStateError – if you attempt to get duration for a timer that hasn’t been started.

property read_timeout: float | None

Get the value for the read timeout.

This assumes some time has elapsed in the connection timeout and computes the read timeout appropriately.

If self.total is set, the read timeout depends on the amount of time taken by the connect attempt. If the connect time has not been recorded, a TimeoutStateError will be raised.

Returns:

Value to use for the read timeout.

Return type:

int, float or None

Raises:

urllib3.exceptions.TimeoutStateError – If start_connect() has not yet been called on this object.

static resolve_default_timeout(timeout: float | _TYPE_DEFAULT | None) float | None[source]
start_connect() float[source]

Start the timeout clock, used during a connect() attempt

Raises:

urllib3.exceptions.TimeoutStateError – if you attempt to start a timer that has been started already.

urllib3.add_stderr_logger(level: int = 10) StreamHandler[TextIO][source]

Helper for quickly adding a StreamHandler to the logger. Useful for debugging.

Returns the handler after adding it.

urllib3.connection_from_url(url: str, **kw: Any) HTTPConnectionPool[source]

Given a url, return a ConnectionPool instance for its host.

This is a shortcut for not having to parse out the scheme, host, and port of the url before creating a ConnectionPool instance.

Parameters:
  • url – Absolute URL string that must include the scheme. Port is optional.

  • **kw – Passes additional parameters to the constructor of the appropriate ConnectionPool. Useful for specifying things like timeout, maxsize, headers, etc.

Example:

>>> conn = connection_from_url('http://google.com/')
>>> r = conn.request('GET', '/')
urllib3.disable_warnings(category: type[Warning] = <class 'urllib3.exceptions.HTTPWarning'>) None[source]

Helper for quickly disabling all urllib3 warnings.

urllib3.encode_multipart_formdata(fields: Sequence[Tuple[str, str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str]] | RequestField] | Mapping[str, str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str]], boundary: str | None = None) tuple[bytes, str][source]

Encode a dictionary of fields using the multipart/form-data MIME format.

Parameters:
  • fields – Dictionary of fields or list of (key, RequestField) tuples.

  • boundary – If not specified, a random boundary will be generated using urllib3.filepost.choose_boundary().
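The wire format this function produces can be sketched with plain stdlib code (a simplified stand-in; encode_form and the sample field are illustrative, not urllib3 code):

```python
# Sketch of the multipart/form-data framing: each field is wrapped in
# a boundary line plus a Content-Disposition header, and the body ends
# with a closing "--boundary--" line.
import binascii
import os

def encode_form(fields, boundary=None):
    if boundary is None:
        boundary = binascii.hexlify(os.urandom(16)).decode()  # random boundary
    body = b""
    for name, value in fields.items():
        body += (f"--{boundary}\r\n"
                 f'Content-Disposition: form-data; name="{name}"\r\n\r\n').encode()
        body += (value if isinstance(value, bytes) else value.encode()) + b"\r\n"
    body += f"--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

body, content_type = encode_form({"field": "value"}, boundary="xyz")
print(content_type)  # multipart/form-data; boundary=xyz
```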
urllib3.make_headers(keep_alive: bool | None = None, accept_encoding: bool | list[str] | str | None = None, user_agent: str | None = None, basic_auth: str | None = None, proxy_basic_auth: str | None = None, disable_cache: bool | None = None) dict[str, str][source]

Shortcuts for generating request headers.

Parameters:
  • keep_alive – If True, adds ‘connection: keep-alive’ header.

  • accept_encoding – Can be a boolean, list, or string. True translates to ‘gzip,deflate’. If either the brotli or brotlicffi package is installed ‘gzip,deflate,br’ is used instead. List will get joined by comma. String will be used as provided.

  • user_agent – String representing the user-agent you want, such as “python-urllib3/0.6”

  • basic_auth – Colon-separated username:password string for ‘authorization: basic …’ auth header.

  • proxy_basic_auth – Colon-separated username:password string for ‘proxy-authorization: basic …’ auth header.

  • disable_cache – If True, adds ‘cache-control: no-cache’ header.

Example:

import urllib3

print(urllib3.util.make_headers(keep_alive=True, user_agent="Batman/1.0"))
# {'connection': 'keep-alive', 'user-agent': 'Batman/1.0'}
print(urllib3.util.make_headers(accept_encoding=True))
# {'accept-encoding': 'gzip,deflate'}
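For the basic_auth and proxy_basic_auth shortcuts, the header value is the standard Base64 encoding of the colon-separated pair, which can be sketched as (basic_auth_header is an illustrative helper, not a urllib3 function):

```python
# Sketch: a "user:password" string becomes "Basic <base64>".
import base64

def basic_auth_header(userpass: str) -> str:
    return "Basic " + base64.b64encode(userpass.encode("utf-8")).decode()

print(basic_auth_header("user:secret"))  # Basic dXNlcjpzZWNyZXQ=
```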
urllib3.proxy_from_url(url: str, **kw: Any) ProxyManager[source]
urllib3.request(method: str, url: str, *, body: bytes | IO[Any] | Iterable[bytes] | str | None = None, fields: Sequence[Tuple[str, str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str]] | RequestField] | Mapping[str, str | bytes | Tuple[str, str | bytes] | Tuple[str, str | bytes, str]] | None = None, headers: Mapping[str, str] | None = None, preload_content: bool | None = True, decode_content: bool | None = True, redirect: bool | None = True, retries: Retry | bool | int | None = None, timeout: Timeout | float | int | None = 3, json: Any | None = None) BaseHTTPResponse[source]

A convenience, top-level request method. It uses a module-global PoolManager instance. Therefore, its side effects could be shared across dependencies relying on it. To avoid side effects, create a new PoolManager instance and use it instead. The method does not accept low-level **urlopen_kw keyword arguments.
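The shared-state caveat can be illustrated with a toy stand-in (PoolManagerSketch and this request stub are hypothetical, not urllib3 code):

```python
# Sketch: a module-level helper that delegates to one module-global
# manager object, so every caller in the process shares its state.
class PoolManagerSketch:
    def __init__(self, **kw):
        self.config = dict(kw)  # shared config (headers, retries) would live here

_default_manager = PoolManagerSketch()  # created once at import time

def request(method, url, **kw):
    # The real helper delegates to its module-global PoolManager; here we
    # return the manager itself to make the sharing visible.
    return _default_manager

mgr_a = request("GET", "https://example.com/")
mgr_b = request("GET", "https://example.org/")
print(mgr_a is mgr_b)  # True: both calls reuse the same shared instance
```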