SQLite#
SQLite is a fast and lightweight SQL database engine that stores data either in memory or in a single file on disk.
Despite its simplicity, SQLite is a powerful tool. For example, it’s the primary storage system for a number of common applications including Dropbox, Firefox, and Chrome. It’s well suited for caching, and requires no extra configuration or dependencies, which is why it’s the default backend for requests-cache.
Cache Files#
See Cache Files for general info on specifying cache paths.
If you specify a name without an extension, the default extension .sqlite will be used.
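That naming rule can be illustrated with a short sketch. This is a hypothetical helper written for illustration, not the library's actual get_cache_path implementation:

```python
from pathlib import Path

def resolve_cache_path(db_path: str, default_ext: str = ".sqlite") -> Path:
    """Simplified sketch: append the default .sqlite extension when the
    given cache name has none; leave explicit extensions untouched."""
    path = Path(db_path).expanduser()
    if not path.suffix:
        path = path.with_suffix(default_ext)
    return path
```

So `resolve_cache_path('http_cache')` yields `http_cache.sqlite`, while a name that already has an extension, like `my_cache.db`, is left as-is.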
In-Memory Caching#
SQLite also supports in-memory databases.
You can enable this (in “shared” memory mode) with the use_memory option:
>>> session = CachedSession('http_cache', use_memory=True)
Or specify a memory URI with additional options:
>>> session = CachedSession('file:memdb1?mode=memory&cache=shared')
Or just :memory:, if you are only using the cache from a single thread:
>>> session = CachedSession(':memory:')
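Under the hood, shared-mode in-memory caching relies on SQLite's shared-cache memory URIs. A minimal sketch using only the standard library's sqlite3 module (not requests-cache itself) shows two connections sharing one in-memory database:

```python
import sqlite3

# Two connections to the same shared-cache memory URI see the same data,
# as long as at least one connection stays open.
uri = "file:memdb1?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True)
writer.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
writer.execute("INSERT INTO kv VALUES ('cached', 'response')")
writer.commit()

# A second connection to the same URI reads what the first one wrote
reader = sqlite3.connect(uri, uri=True)
value = reader.execute("SELECT value FROM kv WHERE key = 'cached'").fetchone()[0]
```

With a plain `:memory:` database, by contrast, each connection gets its own private database, which is why it's only suitable for single-threaded use.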
Performance#
When working with average-sized HTTP responses (< 1MB) and using a modern SSD for file storage, you can expect speeds of around:
Write: 2-8ms
Read: 0.2-0.6ms
Of course, this will vary based on hardware specs, response size, and other factors.
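These numbers are easy to sanity-check on your own hardware. Here is a rough micro-benchmark sketch that times raw SQLite writes and reads with the standard library (not requests-cache itself, and using an in-memory database, so file-backed numbers will differ):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (key TEXT PRIMARY KEY, value BLOB)")
payload = b"x" * 100_000  # ~100 KB stand-in for a serialized response body

# Time a single write (insert + commit)
start = time.perf_counter()
conn.execute("INSERT INTO responses VALUES (?, ?)", ("k", payload))
conn.commit()
write_ms = (time.perf_counter() - start) * 1000

# Time a single read by primary key
start = time.perf_counter()
row = conn.execute("SELECT value FROM responses WHERE key = ?", ("k",)).fetchone()
read_ms = (time.perf_counter() - start) * 1000
```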
Concurrency#
SQLite supports concurrent access, so it is safe to use from a multi-threaded and/or multi-process
application. It supports unlimited concurrent reads. Writes, however, are queued and run in serial,
so if you need to make large volumes of concurrent requests, you may want to consider a different
backend that’s specifically made for that kind of workload, like RedisCache.
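The write-serialization behavior can be sketched with the standard library: each thread opens its own connection (similar in spirit to the thread-local connections the backend uses; the helper name here is illustrative), and SQLite's busy timeout makes queued writers wait for the lock rather than fail:

```python
import os
import sqlite3
import tempfile
from concurrent.futures import ThreadPoolExecutor

db_path = os.path.join(tempfile.mkdtemp(), "http_cache.sqlite")
with sqlite3.connect(db_path) as conn:
    conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

def cache_write(i: int) -> None:
    # One connection per thread; timeout= makes a writer wait for the
    # database lock instead of raising immediately when another write
    # is in progress.
    with sqlite3.connect(db_path, timeout=30) as conn:
        conn.execute("INSERT INTO kv VALUES (?, ?)", (str(i), "response"))

with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(cache_write, range(20)))

with sqlite3.connect(db_path) as conn:
    count = conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
```

All 20 writes land, but they execute one at a time; reads face no such bottleneck.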
Connection Options#
The SQLite backend accepts any keyword arguments for sqlite3.connect(). These can be passed via CachedSession:
>>> session = CachedSession('http_cache', timeout=30)
Or via SQLiteCache:
>>> backend = SQLiteCache('http_cache', timeout=30)
>>> session = CachedSession(backend=backend)
API Reference#
- SQLiteCache — SQLite cache backend.
- SQLiteDict — A dictionary-like interface for SQLite.
- SQLitePickleDict — Same as SQLiteDict, but serializes values before saving.
- requests_cache.backends.sqlite.DbCache#
- requests_cache.backends.sqlite.DbDict#
- requests_cache.backends.sqlite.DbPickleDict#
- class requests_cache.backends.sqlite.SQLiteCache(db_path='http_cache', **kwargs)[source]#
Bases:
requests_cache.backends.base.BaseCache
SQLite cache backend.
- Parameters
use_cache_dir – Store database in a user cache directory (e.g., ~/.cache/http_cache.sqlite)
use_temp – Store database in a temp directory (e.g., /tmp/http_cache.sqlite)
use_memory – Store database in memory instead of in a file
fast_save – Significantly increases cache write performance, but with the possibility of data loss. See pragma: synchronous for details.
kwargs – Additional keyword arguments for sqlite3.connect()
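For reference, the pragma that fast_save is documented to control can be set directly on any SQLite connection. A standard-library sketch of the trade-off (illustrative; not necessarily the exact statement the backend executes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# synchronous=OFF skips waiting for data to reach stable storage on each
# commit: writes get noticeably faster, but a crash at the wrong moment
# can lose recently written data.
conn.execute("PRAGMA synchronous = OFF")
mode = conn.execute("PRAGMA synchronous").fetchone()[0]  # 0 means OFF
```

For a cache this is usually an acceptable trade, since lost entries can simply be re-fetched.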
- bulk_delete(keys)[source]#
Remove multiple responses and their associated redirects from the cache, with additional cleanup
- clear()[source]#
Clear the cache. If this fails due to a corrupted cache or other I/O error, this will attempt to delete the cache file and re-initialize.
- create_key(request=None, **kwargs)#
Create a normalized cache key from a request object
- Parameters
request (Union[PreparedRequest, CachedRequest, None]) –
- delete(key)#
Delete a response or redirect from the cache, as well any associated redirect history
- Parameters
key (str) –
- delete_url(url, method='GET', **kwargs)#
Delete a cached response for the specified request
- delete_urls(urls, method='GET', **kwargs)#
Delete all cached responses for the specified requests
- get_response(key, default=None)#
Retrieve a response from the cache, if it exists
- Parameters
key (str) – Cache key for the response
default – Value to return if key is not in the cache
- has_url(url, method='GET', **kwargs)#
Returns True if the specified request is cached
- keys(check_expiry=False)#
Get all cache keys for redirects and valid responses combined
- remove_expired_responses(expire_after=None)#
Remove expired and invalid responses from the cache, optionally with revalidation
- response_count(check_expiry=False)#
Get the number of responses in the cache, excluding invalid (unusable) responses. Can also optionally exclude expired responses.
- save_response(response, cache_key=None, expires=None)#
Save a response to the cache
- values(check_expiry=False)#
Get all valid response objects from the cache
- class requests_cache.backends.sqlite.SQLiteDict(db_path, table_name='http_cache', fast_save=False, use_cache_dir=False, use_memory=False, use_temp=False, **kwargs)[source]#
Bases:
requests_cache.backends.base.BaseStorage
A dictionary-like interface for SQLite
- bulk_commit()[source]#
Context manager used to speed up insertion of a large number of records
Example
>>> d1 = SQLiteDict('test')
>>> with d1.bulk_commit():
...     for i in range(1000):
...         d1[i] = i * 2
- bulk_delete(keys=None, values=None)[source]#
Delete multiple keys from the cache, without raising errors for any missing keys. Also supports deleting by value.
- get(k[, d]) → D[k] if k in D, else d. d defaults to None.#
- items() → a set-like object providing a view on D’s items#
- keys() → a set-like object providing a view on D’s keys#
- pop(k[, d]) → v, remove specified key and return the corresponding value.#
If key is not found, d is returned if given, otherwise KeyError is raised.
- popitem() → (k, v), remove and return some (key, value) pair#
as a 2-tuple; but raise KeyError if D is empty.
- setdefault(k[, d]) → D.get(k, d), also set D[k] = d if k not in D#
- update([E, ]**F) → None. Update D from mapping/iterable E and F.#
If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.
- values() → an object providing a view on D’s values#
- class requests_cache.backends.sqlite.SQLitePickleDict(db_path, table_name='http_cache', fast_save=False, use_cache_dir=False, use_memory=False, use_temp=False, **kwargs)[source]#
Bases:
requests_cache.backends.sqlite.SQLiteDict
Same as SQLiteDict, but serializes values before saving
- bulk_commit()#
Context manager used to speed up insertion of a large number of records
Example
>>> d1 = SQLiteDict('test')
>>> with d1.bulk_commit():
...     for i in range(1000):
...         d1[i] = i * 2
- bulk_delete(keys=None, values=None)#
Delete multiple keys from the cache, without raising errors for any missing keys. Also supports deleting by value.
- clear() → None. Remove all items from D.#
- close()#
Close any active connections
- connection(commit=False)#
Get a thread-local database connection
- get(k[, d]) → D[k] if k in D, else d. d defaults to None.#
- init_db()#
Initialize the database, if it hasn’t already been
- items() → a set-like object providing a view on D’s items#
- keys() → a set-like object providing a view on D’s keys#
- pop(k[, d]) → v, remove specified key and return the corresponding value.#
If key is not found, d is returned if given, otherwise KeyError is raised.
- popitem() → (k, v), remove and return some (key, value) pair#
as a 2-tuple; but raise KeyError if D is empty.
- setdefault(k[, d]) → D.get(k, d), also set D[k] = d if k not in D#
- update([E, ]**F) → None. Update D from mapping/iterable E and F.#
If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.
- vacuum()#
- values() an object providing a view on D’s values #
- requests_cache.backends.sqlite.chunkify(iterable, max_size=999)[source]#
Split an iterable into chunks of a max size
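An illustrative reimplementation of this helper (a sketch, not the library's exact source); the 999 default matches SQLite's traditional limit on the number of bound variables per statement:

```python
from itertools import islice

def chunkify(iterable, max_size=999):
    # Yield successive lists of at most max_size items, so bulk operations
    # like bulk_delete() stay under SQLite's per-statement variable limit.
    iterator = iter(iterable)
    while chunk := list(islice(iterator, max_size)):
        yield chunk
```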
- requests_cache.backends.sqlite.get_cache_path(db_path, use_cache_dir=False, use_temp=False)[source]#
Get a resolved cache path
- requests_cache.backends.sqlite.sqlite_template(timeout=5.0, detect_types=0, isolation_level=None, check_same_thread=True, factory=None, cached_statements=100, uri=False)[source]#
Template function to get an accurate signature for the builtin sqlite3.connect()