Downloader Middleware¶
The downloader middleware is a framework of hooks into Scrapy’s request/response processing. It’s a light, low-level system for globally altering Scrapy’s requests and responses.
Activating a downloader middleware¶
To activate a downloader middleware component, add it to the
DOWNLOADER_MIDDLEWARES setting, which is a dict whose keys are the
middleware class paths and their values are the middleware orders.
Here’s an example:
DOWNLOADER_MIDDLEWARES = {
"myproject.middlewares.CustomDownloaderMiddleware": 543,
}
The DOWNLOADER_MIDDLEWARES setting is merged with the
DOWNLOADER_MIDDLEWARES_BASE setting defined in Scrapy (and not meant
to be overridden) and then sorted by order to get the final sorted list of
enabled middlewares: the first middleware is the one closer to the engine and
the last is the one closer to the downloader. In other words,
the process_request()
method of each middleware will be invoked in increasing
middleware order (100, 200, 300, …) and the process_response() method
of each middleware will be invoked in decreasing order.
To decide which order to assign to your middleware see the
DOWNLOADER_MIDDLEWARES_BASE setting and pick a value according to
where you want to insert the middleware. The order does matter because each
middleware performs a different action and your middleware could depend on some
previous (or subsequent) middleware being applied.
If you want to disable a built-in middleware (the ones defined in
DOWNLOADER_MIDDLEWARES_BASE and enabled by default) you must define it
in your project’s DOWNLOADER_MIDDLEWARES setting and assign None
as its value. For example, if you want to disable the user-agent middleware:
DOWNLOADER_MIDDLEWARES = {
"myproject.middlewares.CustomDownloaderMiddleware": 543,
"scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": None,
}
Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware documentation for more info.
Writing your own downloader middleware¶
Each downloader middleware is a Python class that defines one or more of the methods defined below.
The main entry point is the from_crawler class method, which receives a
Crawler instance. The Crawler
object gives you access, for example, to the settings.
- class scrapy.downloadermiddlewares.DownloaderMiddleware¶
Note
Any of the downloader middleware methods may also return a deferred.
- process_request(request, spider)¶
This method is called for each request that goes through the download middleware.
process_request() should either: return None, return a Response object, return a Request object, or raise IgnoreRequest.
If it returns None, Scrapy will continue processing this request, executing all other middlewares until, finally, the appropriate downloader handler is called and the request performed (and its response downloaded).
If it returns a Response object, Scrapy won’t bother calling any other process_request() or process_exception() methods, or the appropriate download function; it’ll return that response. The process_response() methods of installed middleware are always called on every response.
If it returns a Request object, Scrapy will stop calling process_request() methods and reschedule the returned request. Once the newly returned request is performed, the appropriate middleware chain will be called on the downloaded response.
If it raises an IgnoreRequest exception, the process_exception() methods of installed downloader middleware will be called. If none of them handle the exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).
- Parameters
request (Request object) – the request being processed
spider (Spider object) – the spider for which this request is intended
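For example, a minimal middleware that only implements process_request() could attach a custom header and return None so the request continues through the remaining middlewares; the class and header names below are made up for illustration:
class CustomHeaderDownloaderMiddleware:
    # Illustrative sketch, not a built-in middleware.

    def process_request(self, request, spider):
        # Add a header unless the request already carries one.
        request.headers.setdefault("X-Example-Header", "1")
        # Returning None lets Scrapy keep processing the request through
        # the remaining middlewares and, finally, the downloader.
        return None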
- process_response(request, response, spider)¶
process_response() should either: return a Response object, return a Request object or raise an IgnoreRequest exception.
If it returns a Response (it could be the same given response, or a brand-new one), that response will continue to be processed with the process_response() of the next middleware in the chain.
If it returns a Request object, the middleware chain is halted and the returned request is rescheduled to be downloaded in the future. This is the same behavior as if a request is returned from process_request().
If it raises an IgnoreRequest exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).
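As a hedged sketch of process_response(), the middleware below re-requests pages that came back with a 503 status and passes every other response through unchanged; the status code and class name are only illustrative:
class RefreshOnServerBusyMiddleware:
    # Illustrative sketch, not a built-in middleware.

    def process_response(self, request, response, spider):
        if response.status == 503:
            # Returning a Request halts the response chain and reschedules it.
            return request.replace(dont_filter=True)
        # Returning the response hands it to the next middleware in the chain.
        return response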
- process_exception(request, exception, spider)¶
Scrapy calls process_exception() when a download handler or a process_request() (from a downloader middleware) raises an exception (including an IgnoreRequest exception).
process_exception() should return: either None, a Response object, or a Request object.
If it returns None, Scrapy will continue processing this exception, executing any other process_exception() methods of installed middleware, until no middleware is left and the default exception handling kicks in.
If it returns a Response object, the process_response() method chain of installed middleware is started, and Scrapy won’t bother calling any other process_exception() methods of middleware.
If it returns a Request object, the returned request is rescheduled to be downloaded in the future. This stops the execution of process_exception() methods of the middleware the same as returning a response would.
- Parameters
request (Request object) – the request that generated the exception
exception (an Exception object) – the raised exception
spider (Spider object) – the spider for which this request is intended
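The following sketch (an assumption, not built-in behavior) retries a timed-out request once, using a made-up retried_once meta flag, and otherwise returns None so the remaining middlewares can handle the exception:
from twisted.internet.error import TimeoutError


class RetryOnTimeoutMiddleware:
    # Illustrative sketch, not a built-in middleware.

    def process_exception(self, request, exception, spider):
        if isinstance(exception, TimeoutError) and not request.meta.get("retried_once"):
            # Returning a Request reschedules it and stops further
            # process_exception() calls for this exception.
            return request.replace(
                dont_filter=True,
                meta={**request.meta, "retried_once": True},
            )
        # Returning None lets other middlewares process the exception.
        return None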
- from_crawler(cls, crawler)¶
If present, this classmethod is called to create a middleware instance from a
Crawler. It must return a new instance of the middleware. The Crawler object provides access to all Scrapy core components, like settings and signals; it is a way for the middleware to access them and hook its functionality into Scrapy.
- Parameters
crawler (Crawler object) – the crawler that uses this middleware
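A common pattern, shown here as a minimal sketch, is to read a setting in from_crawler() and pass it to the constructor; the setting name MYPROJECT_EXTRA_HEADER is a hypothetical example, not a Scrapy setting:
class SettingsAwareDownloaderMiddleware:
    # Illustrative sketch, not a built-in middleware.

    def __init__(self, extra_header):
        self.extra_header = extra_header

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings exposes the project settings.
        return cls(extra_header=crawler.settings.get("MYPROJECT_EXTRA_HEADER"))

    def process_request(self, request, spider):
        if self.extra_header:
            request.headers.setdefault("X-Extra", self.extra_header)
        return None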
Built-in downloader middleware reference¶
This page describes all downloader middleware components that come with Scrapy. For information on how to use them and how to write your own downloader middleware, see the downloader middleware usage guide.
For a list of the components enabled by default (and their orders) see the
DOWNLOADER_MIDDLEWARES_BASE setting.
DefaultHeadersMiddleware¶
- class scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware[source]¶
This middleware sets all default request headers specified in the
DEFAULT_REQUEST_HEADERS setting.
DownloadTimeoutMiddleware¶
- class scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware[source]¶
This middleware sets the download timeout for requests specified in the
DOWNLOAD_TIMEOUT setting or the download_timeout spider attribute.
Note
You can also set the download timeout per request using the
download_timeout Request.meta key; this is supported
even when DownloadTimeoutMiddleware is disabled.
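For instance (the URL below is a placeholder), a 30-second timeout for a single request could be set like this:
import scrapy


class TimeoutExampleSpider(scrapy.Spider):
    name = "timeout_example"

    def start_requests(self):
        # Applies only to this request, regardless of DOWNLOAD_TIMEOUT.
        yield scrapy.Request(
            "https://example.com/slow-page",
            meta={"download_timeout": 30},
        )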
HttpAuthMiddleware¶
- class scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware[source]¶
This middleware authenticates all requests generated from certain spiders using Basic access authentication (aka. HTTP auth).
To enable HTTP authentication for a spider, set the
http_user and http_pass spider attributes to the authentication data, and the http_auth_domain spider attribute to the domain which requires this authentication (its subdomains will also be handled in the same way). You can set http_auth_domain to None to enable the authentication for all requests, but then you risk leaking your authentication credentials to unrelated domains.
Warning
In previous Scrapy versions HttpAuthMiddleware sent the authentication data with all requests, which is a security problem if the spider makes requests to several different domains. Currently, if the
http_auth_domain attribute is not set, the middleware will use the domain of the first request, which will work for some spiders but not for others. In the future the middleware will produce an error instead.
Example:
from scrapy.spiders import CrawlSpider


class SomeIntranetSiteSpider(CrawlSpider):
    http_user = "someuser"
    http_pass = "somepass"
    http_auth_domain = "intranet.example.com"
    name = "intranet.example.com"

    # .. rest of the spider code omitted ...
HttpCacheMiddleware¶
- class scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware[source]¶
This middleware provides a low-level cache for all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.
Scrapy ships with the following HTTP cache storage backends:
Filesystem storage backend (default)
DBM storage backend
You can change the HTTP cache storage backend with the
HTTPCACHE_STORAGE setting. Or you can also implement your own storage backend.
Scrapy ships with two HTTP cache policies:
Dummy policy (default)
RFC2616 policy
You can change the HTTP cache policy with the
HTTPCACHE_POLICY setting. Or you can also implement your own policy.
You can also avoid caching a response on every policy using the
dont_cache meta key set to True.
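As a minimal settings.py sketch for enabling the cache with the defaults described below (every value except HTTPCACHE_ENABLED matches its default and could be omitted):
# settings.py
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0  # 0 means cached responses never expire
HTTPCACHE_DIR = "httpcache"
HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"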
Dummy policy (default)¶
- class scrapy.extensions.httpcache.DummyPolicy[source]¶
This policy has no awareness of any HTTP Cache-Control directives. Every request and its corresponding response are cached. When the same request is seen again, the response is returned without transferring anything from the Internet.
The Dummy policy is useful for testing spiders faster (without having to wait for downloads every time) and for trying your spider offline, when an Internet connection is not available. The goal is to be able to “replay” a spider run exactly as it ran before.
RFC2616 policy¶
- class scrapy.extensions.httpcache.RFC2616Policy[source]¶
This policy provides a RFC2616 compliant HTTP cache, i.e. with HTTP Cache-Control awareness, aimed at production and used in continuous runs to avoid downloading unmodified data (to save bandwidth and speed up crawls).
What is implemented:
Do not attempt to store responses/requests with no-store cache-control directive set
Do not serve responses from cache if no-cache cache-control directive is set even for fresh responses
Compute freshness lifetime from max-age cache-control directive
Compute freshness lifetime from Expires response header
Compute freshness lifetime from Last-Modified response header (heuristic used by Firefox)
Compute current age from Age response header
Compute current age from Date header
Revalidate stale responses based on Last-Modified response header
Revalidate stale responses based on ETag response header
Set Date header for any received response missing it
Support max-stale cache-control directive in requests
This allows spiders to be configured with the full RFC2616 cache policy, but avoid revalidation on a request-by-request basis, while remaining conformant with the HTTP spec.
Example:
Add Cache-Control: max-stale=600 to Request headers to accept responses that have exceeded their expiration time by no more than 600 seconds.
See also: RFC2616, 14.9.3
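In code, such a request could look like this (the URL is a placeholder):
import scrapy

# Accept cached responses that are at most 600 seconds past expiration.
request = scrapy.Request(
    "https://example.com/page",
    headers={"Cache-Control": "max-stale=600"},
)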
What is missing:
Pragma: no-cache support https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1
Vary header support https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6
Invalidation after updates or deletes https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10
… probably others ..
Filesystem storage backend (default)¶
- class scrapy.extensions.httpcache.FilesystemCacheStorage[source]¶
File system storage backend is available for the HTTP cache middleware.
Each request/response pair is stored in a different directory containing the following files:
request_body - the plain request body
request_headers - the request headers (in raw HTTP format)
response_body - the plain response body
response_headers - the response headers (in raw HTTP format)
meta - some metadata of this cache resource in Python repr() format (grep-friendly format)
pickled_meta - the same metadata as in meta but pickled for more efficient deserialization
The directory name is made from the request fingerprint (see
scrapy.utils.request.fingerprint), and one level of subdirectories is used to avoid creating too many files into the same directory (which is inefficient in many file systems). An example directory could be:
/path/to/cache/dir/example.com/72/72811f648e718090f041317756c03adb0ada46c7
DBM storage backend¶
- class scrapy.extensions.httpcache.DbmCacheStorage[source]¶
A DBM storage backend is also available for the HTTP cache middleware.
By default, it uses the
dbm module, but you can change it with the HTTPCACHE_DBM_MODULE setting.
Writing your own storage backend¶
You can implement a cache storage backend by creating a Python class that defines the methods described below.
- class scrapy.extensions.httpcache.CacheStorage¶
- open_spider(spider)¶
This method gets called after a spider has been opened for crawling. It handles the
open_spider signal.
- Parameters
spider (Spider object) – the spider which has been opened
- close_spider(spider)¶
This method gets called after a spider has been closed. It handles the
close_spider signal.
- Parameters
spider (Spider object) – the spider which has been closed
- retrieve_response(spider, request)¶
Return response if present in cache, or
None otherwise.
- Parameters
spider (Spider object) – the spider which generated the request
request (Request object) – the request to find cached response for
- store_response(spider, request, response)¶
Store the given response in the cache.
In order to use your storage backend, set:
HTTPCACHE_STORAGE to the Python import path of your custom storage class.
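As a rough sketch of the interface above, the in-memory backend below keeps responses in a dict for the duration of a run; the constructor taking a settings object mirrors the built-in backends and is an assumption, and this is not a production-ready backend:
from scrapy.utils.request import fingerprint


class InMemoryCacheStorage:
    # Illustrative sketch of a cache storage backend.

    def __init__(self, settings):
        self._cache = {}

    def open_spider(self, spider):
        pass

    def close_spider(self, spider):
        self._cache.clear()

    def retrieve_response(self, spider, request):
        # Return the cached response, or None on a cache miss.
        return self._cache.get(fingerprint(request))

    def store_response(self, spider, request, response):
        self._cache[fingerprint(request)] = response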
HTTPCache middleware settings¶
The HttpCacheMiddleware can be configured through the following
settings:
HTTPCACHE_ENABLED¶
Default: False
Whether the HTTP cache will be enabled.
HTTPCACHE_EXPIRATION_SECS¶
Default: 0
Expiration time for cached requests, in seconds.
Cached requests older than this time will be re-downloaded. If zero, cached requests will never expire.
HTTPCACHE_DIR¶
Default: 'httpcache'
The directory to use for storing the (low-level) HTTP cache. If empty, the HTTP cache will be disabled. If a relative path is given, it is taken relative to the project data dir. For more info see: Default structure of Scrapy projects.
HTTPCACHE_IGNORE_HTTP_CODES¶
Default: []
Don’t cache responses with these HTTP codes.
HTTPCACHE_IGNORE_MISSING¶
Default: False
If enabled, requests not found in the cache will be ignored instead of downloaded.
HTTPCACHE_IGNORE_SCHEMES¶
Default: ['file']
Don’t cache responses with these URI schemes.
HTTPCACHE_STORAGE¶
Default: 'scrapy.extensions.httpcache.FilesystemCacheStorage'
The class which implements the cache storage backend.
HTTPCACHE_DBM_MODULE¶
Default: 'dbm'
The database module to use in the DBM storage backend. This setting is specific to the DBM backend.
HTTPCACHE_POLICY¶
Default: 'scrapy.extensions.httpcache.DummyPolicy'
The class which implements the cache policy.
HTTPCACHE_GZIP¶
Default: False
If enabled, will compress all cached data with gzip. This setting is specific to the Filesystem backend.
HTTPCACHE_ALWAYS_STORE¶
Default: False
If enabled, will cache pages unconditionally.
A spider may wish to have all responses available in the cache, for
future use with Cache-Control: max-stale, for instance. The
DummyPolicy caches all responses but never revalidates them, and
sometimes a more nuanced policy is desirable.
This setting still respects Cache-Control: no-store directives in responses.
If you don’t want that, filter no-store out of the Cache-Control headers in
responses you feed to the cache middleware.
HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS¶
Default: []
List of Cache-Control directives in responses to be ignored.
Sites often set “no-store”, “no-cache”, “must-revalidate”, etc., but get upset at the traffic a spider can generate if it actually respects those directives. This setting allows you to selectively ignore Cache-Control directives that are known to be unimportant for the sites being crawled.
We assume that the spider will not issue Cache-Control directives in requests unless it actually needs them, so directives in requests are not filtered.
HttpCompressionMiddleware¶
- class scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware[source]¶
This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites.
This middleware also supports decoding brotli-compressed as well as zstd-compressed responses, provided that brotli or zstandard is installed, respectively.
HttpCompressionMiddleware Settings¶
COMPRESSION_ENABLED¶
Default: True
Whether the Compression middleware will be enabled.
HttpProxyMiddleware¶
- class scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware[source]¶
This middleware sets the HTTP proxy to use for requests, by setting the
proxy meta value for Request objects.
Like the Python standard library module urllib.request, it obeys the following environment variables:
http_proxy
https_proxy
no_proxy
You can also set the meta key
proxy per request, to a value like http://some_proxy_server:port or http://username:password@some_proxy_server:port. Keep in mind this value will take precedence over the http_proxy / https_proxy environment variables, and it will also ignore the no_proxy environment variable.
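For example (the proxy address and URL are placeholders):
import scrapy


class ProxyExampleSpider(scrapy.Spider):
    name = "proxy_example"

    def start_requests(self):
        # Per-request proxy; overrides http_proxy/https_proxy and ignores no_proxy.
        yield scrapy.Request(
            "https://example.com",
            meta={"proxy": "http://username:password@some_proxy_server:8080"},
        )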
RedirectMiddleware¶
- class scrapy.downloadermiddlewares.redirect.RedirectMiddleware[source]¶
This middleware handles redirection of requests based on response status.
The urls which the request goes through (while being redirected) can be found
in the redirect_urls Request.meta key.
The reason behind each redirect in redirect_urls can be found in the
redirect_reasons Request.meta key. For
example: [301, 302, 307, 'meta refresh'].
The format of a reason depends on the middleware that handled the corresponding
redirect. For example, RedirectMiddleware indicates the triggering
response status code as an integer, while MetaRefreshMiddleware
always uses the 'meta refresh' string as reason.
The RedirectMiddleware can be configured through the following
settings (see the settings documentation for more info):
REDIRECT_ENABLED
REDIRECT_MAX_TIMES
If Request.meta has dont_redirect
key set to True, the request will be ignored by this middleware.
If you want to handle some redirect status codes in your spider, you can
specify these in the handle_httpstatus_list spider attribute.
For example, if you want the redirect middleware to ignore 301 and 302 responses (and pass them through to your spider) you can do this:
class MySpider(CrawlSpider):
    handle_httpstatus_list = [301, 302]
The handle_httpstatus_list key of Request.meta can also be used to specify which response codes to
allow on a per-request basis. You can also set the meta key
handle_httpstatus_all to True if you want to allow any response code
for a request.
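As a short sketch of the per-request variants (URLs are placeholders):
import scrapy


class RedirectHandlingSpider(scrapy.Spider):
    name = "redirect_handling_example"

    def start_requests(self):
        # Let the callback see 301/302 responses for this request only.
        yield scrapy.Request(
            "https://example.com/maybe-redirected",
            meta={"handle_httpstatus_list": [301, 302]},
        )
        # Allow any response code through for this request.
        yield scrapy.Request(
            "https://example.com/other",
            meta={"handle_httpstatus_all": True},
        )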
RedirectMiddleware settings¶
REDIRECT_ENABLED¶
Default: True
Whether the Redirect middleware will be enabled.
REDIRECT_MAX_TIMES¶
Default: 20
The maximum number of redirections that will be followed for a single request. After this maximum, the request’s response is returned as is.
MetaRefreshMiddleware¶
- class scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware[source]¶
This middleware handles redirection of requests based on the meta-refresh HTML tag.
The MetaRefreshMiddleware can be configured through the following
settings (see the settings documentation for more info):
METAREFRESH_ENABLED
METAREFRESH_MAXDELAY
This middleware obeys the REDIRECT_MAX_TIMES setting and the dont_redirect,
redirect_urls and redirect_reasons request meta keys, as described
for RedirectMiddleware.
MetaRefreshMiddleware settings¶
METAREFRESH_ENABLED¶
Default: True
Whether the Meta Refresh middleware will be enabled.
METAREFRESH_MAXDELAY¶
Default: 100
The maximum meta-refresh delay (in seconds) to follow the redirection. Some sites use meta-refresh for redirecting to a session expired page, so we restrict automatic redirection to the maximum delay.
RetryMiddleware¶
- class scrapy.downloadermiddlewares.retry.RetryMiddleware[source]¶
A middleware to retry failed requests that are potentially caused by temporary problems such as a connection timeout or HTTP 500 error.
Failed pages are collected during the scraping process and rescheduled at the end, once the spider has finished crawling all regular (non-failed) pages.
The RetryMiddleware can be configured through the following
settings (see the settings documentation for more info):
RETRY_ENABLED
RETRY_TIMES
RETRY_HTTP_CODES
RETRY_PRIORITY_ADJUST
If Request.meta has dont_retry key
set to True, the request will be ignored by this middleware.
To retry requests from a spider callback, you can use the
get_retry_request() function:
- scrapy.downloadermiddlewares.retry.get_retry_request(request: ~scrapy.http.request.Request, *, spider: ~scrapy.spiders.Spider, reason: ~typing.Union[str, Exception] = 'unspecified', max_retry_times: ~typing.Optional[int] = None, priority_adjust: ~typing.Optional[int] = None, logger: ~logging.Logger = <Logger scrapy.downloadermiddlewares.retry (WARNING)>, stats_base_key: str = 'retry')[source]¶
Returns a new Request object to retry the specified request, or None if retries of the specified request have been exhausted.
For example, in a Spider callback, you could use it as follows:
def parse(self, response):
    if not response.text:
        new_request_or_none = get_retry_request(
            response.request,
            spider=self,
            reason='empty',
        )
        return new_request_or_none
spider is the Spider instance which is asking for the retry request. It is used to access the settings and stats, and to provide extra logging context (see logging.debug()).
reason is a string or an Exception object that indicates the reason why the request needs to be retried. It is used to name retry stats.
max_retry_times is a number that determines the maximum number of times that request can be retried. If not specified or None, the number is read from the max_retry_times meta key of the request. If the max_retry_times meta key is not defined or None, the number is read from the RETRY_TIMES setting.
priority_adjust is a number that determines how the priority of the new request changes in relation to request. If not specified, the number is read from the RETRY_PRIORITY_ADJUST setting.
logger is the logging.Logger object to be used when logging messages
stats_base_key is a string to be used as the base key for the retry-related job stats
RetryMiddleware Settings¶
RETRY_ENABLED¶
Default: True
Whether the Retry middleware will be enabled.
RETRY_TIMES¶
Default: 2
Maximum number of times to retry, in addition to the first download.
The maximum number of retries can also be specified per request using the
max_retry_times attribute of Request.meta.
When set, the max_retry_times meta key takes
precedence over the RETRY_TIMES setting.
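For example (the URL is a placeholder):
import scrapy


class RetryExampleSpider(scrapy.Spider):
    name = "retry_example"

    def start_requests(self):
        # Retry this request up to 5 times, regardless of RETRY_TIMES.
        yield scrapy.Request(
            "https://example.com/flaky-endpoint",
            meta={"max_retry_times": 5},
        )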
RETRY_HTTP_CODES¶
Default: [500, 502, 503, 504, 522, 524, 408, 429]
Which HTTP response codes to retry. Other errors (DNS lookup issues, connections lost, etc) are always retried.
In some cases you may want to add 400 to RETRY_HTTP_CODES because
it is a common code used to indicate server overload. It is not included by
default because HTTP specs say so.
RETRY_PRIORITY_ADJUST¶
Default: -1
Adjust retry request priority relative to original request:
a positive priority adjust means higher priority.
a negative priority adjust (default) means lower priority.
RobotsTxtMiddleware¶
- class scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware[source]¶
This middleware filters out requests forbidden by the robots.txt exclusion standard.
To make sure Scrapy respects robots.txt, make sure the middleware is enabled and the
ROBOTSTXT_OBEY setting is enabled.
The ROBOTSTXT_USER_AGENT setting can be used to specify the user agent string to use for matching in the robots.txt file. If it is None, the User-Agent header you are sending with the request or the USER_AGENT setting (in that order) will be used for determining the user agent to use in the robots.txt file.
This middleware has to be combined with a robots.txt parser.
Scrapy ships with support for the following robots.txt parsers:
Protego (default)
RobotFileParser
Reppy
Robotexclusionrulesparser
You can change the robots.txt parser with the
ROBOTSTXT_PARSER setting. Or you can also implement support for a new parser.
If Request.meta has
dont_obey_robotstxt key set to True
the request will be ignored by this middleware even if
ROBOTSTXT_OBEY is enabled.
Parsers vary in several aspects:
Language of implementation
Supported specification
Support for wildcard matching
Usage of length based rule: in particular for the
Allow and Disallow directives, where the most specific rule based on the length of the path trumps the less specific (shorter) rule
Performance comparison of different parsers is available at the following link.
Protego parser¶
Based on Protego:
implemented in Python
is compliant with Google’s Robots.txt Specification
supports wildcard matching
uses the length based rule
Scrapy uses this parser by default.
RobotFileParser¶
Based on RobotFileParser:
is Python’s built-in robots.txt parser
is compliant with Martijn Koster’s 1996 draft specification
lacks support for wildcard matching
doesn’t use the length based rule
It is faster than Protego and backward-compatible with versions of Scrapy before 1.8.0.
In order to use this parser, set:
ROBOTSTXT_PARSER to scrapy.robotstxt.PythonRobotParser
Reppy parser¶
Based on Reppy:
is a Python wrapper around Robots Exclusion Protocol Parser for C++
is compliant with Martijn Koster’s 1996 draft specification
supports wildcard matching
uses the length based rule
Native implementation, provides better speed than Protego.
In order to use this parser:
Install Reppy by running
pip install reppy
Warning
Upstream issue #122 prevents reppy usage in Python 3.9+.
Set the
ROBOTSTXT_PARSER setting to scrapy.robotstxt.ReppyRobotParser
Robotexclusionrulesparser¶
Based on Robotexclusionrulesparser:
implemented in Python
is compliant with Martijn Koster’s 1996 draft specification
supports wildcard matching
doesn’t use the length based rule
In order to use this parser:
Install Robotexclusionrulesparser by running
pip install robotexclusionrulesparser
Set the
ROBOTSTXT_PARSER setting to scrapy.robotstxt.RerpRobotParser
Implementing support for a new parser¶
You can implement support for a new robots.txt parser by subclassing
the abstract base class RobotParser and
implementing the methods described below.
- class scrapy.robotstxt.RobotParser[source]¶
- abstract allowed(url, user_agent)[source]¶
Return True if user_agent is allowed to crawl url, otherwise return False.
- abstract classmethod from_crawler(crawler, robotstxt_body)[source]¶
Parse the content of a robots.txt file as bytes. This must be a class method. It must return a new instance of the parser backend.
- Parameters
crawler (Crawler instance) – crawler which made the request
robotstxt_body (bytes) – content of a robots.txt file.
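As a minimal sketch, a parser backend could wrap Python’s built-in urllib.robotparser; the decoding and error handling below are simplified assumptions:
from urllib import robotparser

from scrapy.robotstxt import RobotParser


class StdlibRobotParser(RobotParser):
    # Illustrative sketch of a robots.txt parser backend.

    def __init__(self, robotstxt_body):
        self._parser = robotparser.RobotFileParser()
        lines = robotstxt_body.decode("utf-8", errors="ignore").splitlines()
        self._parser.parse(lines)

    @classmethod
    def from_crawler(cls, crawler, robotstxt_body):
        return cls(robotstxt_body)

    def allowed(self, url, user_agent):
        return self._parser.can_fetch(user_agent, url)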
DownloaderStats¶
- class scrapy.downloadermiddlewares.stats.DownloaderStats[source]¶
Middleware that stores stats of all requests, responses and exceptions that pass through it.
To use this middleware you must enable the
DOWNLOADER_STATS setting.
UserAgentMiddleware¶
AjaxCrawlMiddleware¶
- class scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware[source]¶
Middleware that finds ‘AJAX crawlable’ page variants based on the meta-fragment HTML tag. See https://developers.google.com/search/docs/ajax-crawling/docs/getting-started for more info.
Note
Scrapy finds ‘AJAX crawlable’ pages for URLs like
'http://example.com/!#foo=bar' even without this middleware. AjaxCrawlMiddleware is necessary when the URL doesn’t contain '!#'. This is often the case for ‘index’ or ‘main’ website pages.
AjaxCrawlMiddleware Settings¶
AJAXCRAWL_ENABLED¶
Default: False
Whether the AjaxCrawlMiddleware will be enabled. You may want to enable it for broad crawls.
HttpProxyMiddleware settings¶
HTTPPROXY_ENABLED¶
Default: True
Whether or not to enable the HttpProxyMiddleware.
HTTPPROXY_AUTH_ENCODING¶
Default: "latin-1"
The default encoding for proxy authentication on HttpProxyMiddleware.