Core API¶
This section documents the Scrapy core API, and it’s intended for developers of extensions and middlewares.
Crawler API¶
The main entry point to the Scrapy API is the Crawler
object, passed to extensions through the from_crawler class method. This
object provides access to all Scrapy core components, and it’s the only way for
extensions to access them and hook their functionality into Scrapy.
The Extension Manager is responsible for loading and keeping track of installed extensions, and it’s configured through the EXTENSIONS setting, which contains a dictionary of all available extensions and their orders, similar to how you configure the downloader middlewares.
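For instance, a project’s settings.py might enable a custom extension and disable a built-in one; the myproject path below is hypothetical:

```python
# settings.py sketch; "myproject.extensions.SpiderOpenedCounter" is a hypothetical path
EXTENSIONS = {
    "myproject.extensions.SpiderOpenedCounter": 500,  # the integer sets relative order
    "scrapy.extensions.telnet.TelnetConsole": None,   # None disables an extension
}
```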
- class scrapy.crawler.Crawler(spidercls, settings)[source]¶
The Crawler object must be instantiated with a scrapy.Spider subclass and a scrapy.settings.Settings object.
- request_fingerprinter¶
The request fingerprint builder of this crawler.
This is used by extensions and middlewares to build short, unique identifiers for requests. See Request fingerprints.
- settings¶
The settings manager of this crawler.
This is used by extensions & middlewares to access the Scrapy settings of this crawler.
For an introduction on Scrapy settings see Settings.
For the API see the Settings class.
- signals¶
The signals manager of this crawler.
This is used by extensions & middlewares to hook themselves into Scrapy functionality.
For an introduction on signals see Signals.
For the API see the SignalManager class.
- stats¶
The stats collector of this crawler.
This is used by extensions & middlewares to record stats of their behaviour, or to access stats collected by other extensions.
For an introduction on stats collection see Stats Collection.
For the API see the StatsCollector class.
- extensions¶
The extension manager that keeps track of enabled extensions.
Most extensions won’t need to access this attribute.
For an introduction to extensions and a list of the extensions available in Scrapy, see Extensions.
- engine¶
The execution engine, which coordinates the core crawling logic between the scheduler, downloader and spiders.
Some extensions may want to access the Scrapy engine to inspect or modify the downloader and scheduler behaviour, although this is an advanced use and this API is not yet stable.
- spider¶
Spider currently being crawled. This is an instance of the spider class provided while constructing the crawler, and it is created with the arguments given to the crawl() method.
- class scrapy.crawler.CrawlerRunner(settings=None)[source]¶
This is a convenient helper class that keeps track of, manages and runs crawlers inside an already set up reactor.
The CrawlerRunner object must be instantiated with a Settings object.
This class shouldn’t be needed (since Scrapy is responsible for using it accordingly) unless writing scripts that manually handle the crawling process. See Run Scrapy from a script for an example.
- crawl(crawler_or_spidercls, *args, **kwargs)[source]¶
Run a crawler with the provided arguments.
It will call the given Crawler’s crawl() method, while keeping track of it so it can be stopped later.
If crawler_or_spidercls isn’t a Crawler instance, this method will try to create one using this parameter as the spider class given to it.
Returns a deferred that is fired when the crawling is finished.
- create_crawler(crawler_or_spidercls)[source]¶
Return a Crawler object.
If crawler_or_spidercls is a Crawler, it is returned as-is.
If crawler_or_spidercls is a Spider subclass, a new Crawler is constructed for it.
If crawler_or_spidercls is a string, this function finds a spider with this name in a Scrapy project (using spider loader), then creates a Crawler instance for it.
- class scrapy.crawler.CrawlerProcess(settings=None, install_root_handler=True)[source]¶
Bases: CrawlerRunner
A class to run multiple scrapy crawlers in a process simultaneously.
This class extends CrawlerRunner by adding support for starting a reactor and handling shutdown signals, like the keyboard interrupt command Ctrl-C. It also configures top-level logging.
This utility should be a better fit than CrawlerRunner if you aren’t running another reactor within your application.
The CrawlerProcess object must be instantiated with a Settings object.
- Parameters
install_root_handler – whether to install root logging handler (default: True)
This class shouldn’t be needed (since Scrapy is responsible for using it accordingly) unless writing scripts that manually handle the crawling process. See Run Scrapy from a script for an example.
- crawl(crawler_or_spidercls, *args, **kwargs)¶
Run a crawler with the provided arguments.
It will call the given Crawler’s crawl() method, while keeping track of it so it can be stopped later.
If crawler_or_spidercls isn’t a Crawler instance, this method will try to create one using this parameter as the spider class given to it.
Returns a deferred that is fired when the crawling is finished.
- create_crawler(crawler_or_spidercls)¶
Return a Crawler object.
If crawler_or_spidercls is a Crawler, it is returned as-is.
If crawler_or_spidercls is a Spider subclass, a new Crawler is constructed for it.
If crawler_or_spidercls is a string, this function finds a spider with this name in a Scrapy project (using spider loader), then creates a Crawler instance for it.
- start(stop_after_crawl=True, install_signal_handlers=True)[source]¶
This method starts a reactor, adjusts its pool size to REACTOR_THREADPOOL_MAXSIZE, and installs a DNS cache based on DNSCACHE_ENABLED and DNSCACHE_SIZE.
If stop_after_crawl is True, the reactor will be stopped after all crawlers have finished, using join().
- stop()¶
Simultaneously stops all the crawling jobs that are taking place.
Returns a deferred that is fired when they all have ended.
Settings API¶
- scrapy.settings.SETTINGS_PRIORITIES¶
Dictionary that sets the key name and priority level of the default settings priorities used in Scrapy.
Each item defines a settings entry point, giving it a code name for identification and an integer priority. Greater priorities take precedence over lesser ones when setting and retrieving values in the Settings class.

SETTINGS_PRIORITIES = {
    "default": 0,
    "command": 10,
    "project": 20,
    "spider": 30,
    "cmdline": 40,
}

For a detailed explanation of each settings source, see: Settings.
- scrapy.settings.get_settings_priority(priority)[source]¶
Small helper function that looks up a given string priority in the SETTINGS_PRIORITIES dictionary and returns its numerical value, or directly returns a given numerical priority.
- class scrapy.settings.Settings(values=None, priority='project')[source]¶
Bases: BaseSettings
This object stores Scrapy settings for the configuration of internal components, and can be used for any further customization.
It is a direct subclass and supports all methods of BaseSettings. Additionally, after instantiation of this class, the new object will have the global default settings described in the Built-in settings reference already populated.
- class scrapy.settings.BaseSettings(values=None, priority='project')[source]¶
Instances of this class behave like dictionaries, but store priorities along with their (key, value) pairs, and can be frozen (i.e. marked immutable).
Key-value entries can be passed on initialization with the values argument, and they take the priority level (unless values is already an instance of BaseSettings, in which case the existing priority levels are kept). If the priority argument is a string, the priority name will be looked up in SETTINGS_PRIORITIES. Otherwise, a specific integer should be provided.
Once the object is created, new settings can be loaded or updated with the set() method, and can be accessed with the square bracket notation of dictionaries, or with the get() method of the instance and its value conversion variants. When requesting a stored key, the value with the highest priority will be retrieved.
- copy()[source]¶
Make a deep copy of current settings.
This method returns a new instance of the Settings class, populated with the same values and their priorities.
Modifications to the new object won’t be reflected in the original settings.
- copy_to_dict()[source]¶
Make a copy of current settings and convert to a dict.
This method returns a new dict populated with the same values and their priorities as the current settings.
Modifications to the returned dict won’t be reflected on the original settings.
This method can be useful, for example, for printing settings in the Scrapy shell.
- freeze()[source]¶
Disable further changes to the current settings.
After calling this method, the present state of the settings will become immutable. Trying to change values through the set() method and its variants won’t be possible: a TypeError will be raised.
- getbool(name, default=False)[source]¶
Get a setting value as a boolean.
1, '1', True and 'True' return True, while 0, '0', False, 'False' and None return False.
For example, settings populated through environment variables set to '0' will return False when using this method.
- getdict(name, default=None)[source]¶
Get a setting value as a dictionary. If the setting's original type is a dictionary, a copy of it will be returned. If it is a string it will be evaluated as a JSON dictionary. If it is a BaseSettings instance itself, it will be converted to a dictionary, containing all its current settings values as they would be returned by get(), losing all information about priority and mutability.
- getdictorlist(name, default=None)[source]¶
Get a setting value as either a dict or a list.
If the setting is already a dict or a list, a copy of it will be returned.
If it is a string it will be evaluated as JSON, or as a comma-separated list of strings as a fallback.
For example, settings populated from the command line will return:
{'key1': 'value1', 'key2': 'value2'} if set to '{"key1": "value1", "key2": "value2"}'
['one', 'two'] if set to '["one", "two"]' or 'one,two'
- Parameters
name (str) – the setting name
default (any) – the value to return if no setting is found
- getlist(name, default=None)[source]¶
Get a setting value as a list. If the setting's original type is a list, a copy of it will be returned. If it’s a string it will be split by “,”.
For example, settings populated through environment variables set to 'one,two' will return a list ['one', 'two'] when using this method.
- getpriority(name)[source]¶
Return the current numerical priority value of a setting, or None if the given name does not exist.
- Parameters
name (str) – the setting name
- getwithbase(name)[source]¶
Get a composition of a dictionary-like setting and its _BASE counterpart.
- Parameters
name (str) – name of the dictionary-like setting
- maxpriority()[source]¶
Return the numerical value of the highest priority present throughout all settings, or the numerical value for default from SETTINGS_PRIORITIES if there are no settings stored.
- set(name, value, priority='project')[source]¶
Store a key/value attribute with a given priority.
Settings should be populated before configuring the Crawler object (through the configure() method), otherwise they won’t have any effect.
- Parameters
name (str) – the setting name
value (object) – the value to associate with the setting
priority (str or int) – the priority of the setting. Should be a key of SETTINGS_PRIORITIES or an integer
- setmodule(module, priority='project')[source]¶
Store settings from a module with a given priority.
This is a helper function that calls set() for every globally declared uppercase variable of module with the provided priority.
- Parameters
module (types.ModuleType or str) – the module or the path of the module
priority (str or int) – the priority of the settings. Should be a key of SETTINGS_PRIORITIES or an integer
- update(values, priority='project')[source]¶
Store key/value pairs with a given priority.
This is a helper function that calls set() for every item of values with the provided priority.
If values is a string, it is assumed to be JSON-encoded and parsed into a dict with json.loads() first. If it is a BaseSettings instance, the per-key priorities will be used and the priority parameter ignored. This allows inserting/updating settings with different priorities with a single command.
- Parameters
values (dict or string or BaseSettings) – the settings names and values
priority (str or int) – the priority of the settings. Should be a key of SETTINGS_PRIORITIES or an integer
SpiderLoader API¶
- class scrapy.spiderloader.SpiderLoader[source]¶
This class is in charge of retrieving and handling the spider classes defined across the project.
Custom spider loaders can be employed by specifying their path in the SPIDER_LOADER_CLASS project setting. They must fully implement the scrapy.interfaces.ISpiderLoader interface to guarantee an errorless execution.
- from_settings(settings)[source]¶
This class method is used by Scrapy to create an instance of the class. It’s called with the current project settings, and it loads the spiders found recursively in the modules of the SPIDER_MODULES setting.
- Parameters
settings (Settings instance) – project settings
Signals API¶
- class scrapy.signalmanager.SignalManager(sender: Any = _Anonymous)[source]¶
- connect(receiver: Any, signal: Any, **kwargs: Any) → None[source]¶
Connect a receiver function to a signal.
The signal can be any object, although Scrapy comes with some predefined signals that are documented in the Signals section.
- Parameters
receiver (collections.abc.Callable) – the function to be connected
signal (object) – the signal to connect to
- disconnect(receiver: Any, signal: Any, **kwargs: Any) → None[source]¶
Disconnect a receiver function from a signal. This has the opposite effect of the connect() method, and the arguments are the same.
- disconnect_all(signal: Any, **kwargs: Any) → None[source]¶
Disconnect all receivers from the given signal.
- Parameters
signal (object) – the signal to disconnect from
- send_catch_log(signal: Any, **kwargs: Any) → List[Tuple[Any, Any]][source]¶
Send a signal, catch exceptions and log them.
The keyword arguments are passed to the signal handlers (connected through the connect() method).
- send_catch_log_deferred(signal: Any, **kwargs: Any) → Deferred[source]¶
Like send_catch_log() but supports returning Deferred objects from signal handlers.
Returns a Deferred that gets fired once all signal handler deferreds have fired.
The keyword arguments are passed to the signal handlers (connected through the connect() method).
Stats Collector API¶
There are several Stats Collectors available under the scrapy.statscollectors module, and they all implement the Stats Collector API defined by the StatsCollector class (which they all inherit from).
- class scrapy.statscollectors.StatsCollector[source]¶
- get_value(key, default=None)[source]¶
Return the value for the given stats key or default if it doesn’t exist.
- inc_value(key, count=1, start=0)[source]¶
Increment the value of the given stats key by the given count, using the given start value if the key is not yet set.
- max_value(key, value)[source]¶
Set the given value for the given key only if the current value for the same key is lower than value. If there is no current value for the given key, the value is always set.
- min_value(key, value)[source]¶
Set the given value for the given key only if the current value for the same key is greater than value. If there is no current value for the given key, the value is always set.
The following methods are not part of the stats collection API but are instead used when implementing custom stats collectors: