With Python#

The Python programming language#

Python can be easy to pick up whether you are a first-time programmer or experienced with other languages.

Step-by-step#

Quickstart#

>>> from trafilatura import fetch_url, extract
>>> url = 'https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/'
>>> downloaded = fetch_url(url)
>>> downloaded is None  # assuming the download was successful
False
>>> result = extract(downloaded)
>>> print(result)
# newlines preserved, TXT output ...

The only required argument is the input document (here a downloaded HTML file); all other arguments are optional.

Note

For a hands-on tutorial see also the Python Notebook Trafilatura Overview.

Formats#

The default output format is TXT (bare text), without metadata.

The following formats are available: bare text, text with Markdown formatting, CSV, JSON, XML, and XML following the guidelines of the Text Encoding Initiative (TEI).

Hint

Combining TXT, CSV and JSON formats with certain structural elements (e.g. formatting or links) triggers output in TXT+Markdown format.

The variables from the example above can be used further:

# newlines preserved, TXT output
>>> extract(downloaded)
# TXT/Markdown output
>>> extract(downloaded, include_links=True)
# some formatting preserved in basic XML structure
>>> extract(downloaded, output_format='xml')
# source URL provided for inclusion in metadata
>>> extract(downloaded, output_format='xml', url=url)
# links preserved in XML
>>> extract(downloaded, output_format='xml', include_links=True)

Choice of HTML elements#

Several elements can be included or discarded:

  • Text elements: comments, tables

  • Structural elements: formatting, images, links

Their inclusion can be activated or deactivated using parameters passed to the extract() function:

# no comments in output
>>> result = extract(downloaded, include_comments=False)
# skip tables examination
>>> result = extract(downloaded, include_tables=False)
# output with links
>>> result = extract(downloaded, include_links=True)
# and so on...

Note

Including extra elements works best with conversion to XML formats (output_format="xml") or with bare_extraction(). Both allow for direct display and manipulation of the elements. Certain elements are only visible in the output if the chosen format supports them (e.g. images with XML).

include_formatting=True

Keep structural elements related to formatting (<b>/<strong>, <i>/<em>, etc.)

include_links=True

Keep link targets (in href="...")

include_images=True

Keep track of images along with their targets (<img> attributes: alt, src, title)

include_tables=True

Extract text from HTML <table> elements

Only include_tables is activated by default.
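
For instance, since images are only visible in the output if the format can carry them, it makes sense to combine them with XML output (a minimal sketch reusing the parameters above):

# images only appear in the output if the chosen format supports them
>>> result = extract(downloaded, output_format='xml', include_images=True)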

Hint

If the output is buggy, removing a constraint (e.g. formatting) can greatly improve the result.

Optimizing for precision and recall#

The parameters favor_precision and favor_recall can be passed to the extract() and bare_extraction() functions:

>>> result = extract(downloaded, url, favor_precision=True)

They slightly affect the processing and the volume of textual output: favor_precision triggers a more selective extraction (fewer but more central elements), while favor_recall makes the extraction more opportunistic (more elements are taken into account).
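
The recall-oriented counterpart works the same way (same variables as above):

# more opportunistic extraction, yielding more text
>>> result = extract(downloaded, url, favor_recall=True)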

Language identification#

The target language can also be set using 2-letter codes (ISO 639-1). There will be no output if the detected language of the result does not match. No such filtering takes place if the language identification component has not been installed (see the installation instructions above) or if the target language is not available.

>>> result = extract(downloaded, url, target_language='de')

Note

Additional components are required: pip install trafilatura[all]

Optimizing for speed#

Execution speed not only depends on the platform and on supplementary packages (trafilatura[all], htmldate[speed]), but also on the extraction strategy.

The available fallbacks make extraction more precise but also slower. They can be bypassed in fast mode, which should make extraction about twice as fast:

# skip algorithms used as fallback
>>> result = extract(downloaded, no_fallback=True)

The following combination can lead to shorter processing times:

>>> result = extract(downloaded, include_comments=False, include_tables=False, no_fallback=True)

Extraction settings#

Text extraction#

Text extraction can be parametrized by providing a custom configuration file (a variant of settings.cfg) through the config parameter of bare_extraction() or extract(), which overrides the standard settings:

# load the required functions
>>> from trafilatura import extract
>>> from trafilatura.settings import use_config
# load the new settings by providing a file name
>>> newconfig = use_config("myfile.cfg")
# use with a previously downloaded document
>>> extract(downloaded, config=newconfig)
# provide a file name directly (can be slower)
>>> extract(downloaded, settingsfile="myfile.cfg")

Useful adjustments include download parameters, minimal extraction length, or de-duplication settings. A timeout exit during extraction can be turned off if malicious data are not an issue or if you run into an error like signal only works in main thread.
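
As an illustration, a custom file could look as follows; the option names below stem from the packaged settings.cfg, but check the file shipped with your version before relying on them:

# myfile.cfg (illustrative values)
[DEFAULT]
# discard documents whose extracted text is shorter than this many characters
MIN_EXTRACTED_SIZE = 250
# set to 0 to disable the signal-based timeout and avoid the
# "signal only works in main thread" error
EXTRACTION_TIMEOUT = 0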

Output Python objects#

The extraction can be customized using a series of parameters; see the core functions page for more.

The function bare_extraction can be used to bypass output conversion: it returns Python variables for metadata (as a dictionary) as well as main text and comments (both as LXML objects).

>>> from trafilatura import bare_extraction
>>> bare_extraction(downloaded)
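
A minimal sketch of working with the result, assuming a dictionary is returned (key names can vary between versions):

>>> doc = bare_extraction(downloaded)
# metadata fields are plain dictionary entries, e.g. the page title
>>> doc['title']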

Date extraction#

As part of metadata extraction, dates are handled by an external module: htmldate. By default, the focus is on original dates and the extraction replicates htmldate's fast/no_fallback option.

Custom parameters can be passed through the extraction function or through the extract_metadata function in trafilatura.metadata, most notably:

  • extensive_search (boolean), to activate pattern-based opportunistic text search,

  • original_date (boolean), to look for the original publication date,

  • outputformat (string), to provide a custom datetime format,

  • max_date (string), to set the latest acceptable date manually (YYYY-MM-DD format).

>>> from trafilatura import extract
# pass the new parameters as dict, with a previously downloaded document
>>> extract(downloaded, output_format="xml", date_extraction_params={"extensive_search": True, "max_date": "2018-07-01"})
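
The remaining options work the same way; for instance, the following sketch (with illustrative values) targets the original publication date and a custom output format:

# parameters are passed through to htmldate
>>> extract(downloaded, date_extraction_params={"original_date": True, "outputformat": "%d %B %Y"})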

Passing URLs#

Even if the page to process has already been downloaded, it can still be useful to pass the URL as an argument. See the following example, drawn from a previously reported bug:

>>> url = "https://www.thecanary.co/feature/2021/05/19/another-by-election-headache-is-incoming-for-keir-starmer"
>>> downloaded = fetch_url(url)
>>> bare_extraction(downloaded, with_metadata=True)
# content discarded since necessary metadata couldn't be extracted
>>> url = "https://www.thecanary.co/feature/2021/05/19/another-by-election-headache-is-incoming-for-keir-starmer"
>>> bare_extraction(downloaded, with_metadata=True, url=url)
# date found in URL, extraction successful

Customization#

Settings file#

The standard settings file can be modified. It currently contains variables related to text extraction.

>>> from trafilatura.settings import use_config
>>> myconfig = use_config('path/to/myfile')
>>> extract(downloaded, config=myconfig)

User agent settings can also be specified in a custom settings.cfg file. You can then apply the changes by parsing the file beforehand and passing it via the config argument.
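
A hypothetical excerpt of such a file (the value below is illustrative; see the packaged settings.cfg for the exact expected format):

# custom settings.cfg
[DEFAULT]
USER_AGENTS = Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0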

Raw HTTP response objects#

By setting the optional decode argument to False, the fetch_url() function returns a urllib3 response object, which can be passed straight to the extraction.

This can be useful to get the final redirection URL with response.geturl() and then pass it directly as the URL argument to the extraction function:

>>> from trafilatura import fetch_url, bare_extraction
>>> response = fetch_url(url, decode=False)
>>> bare_extraction(response, url=response.geturl()) # here is the redirection URL

LXML objects#

The input can also consist of a previously parsed tree (i.e. an lxml.html object), which is then handled seamlessly:

>>> from lxml import html
>>> mytree = html.fromstring('<html><body><article><p>Here is the main text. It has to be long enough in order to bypass the safety checks. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p></article></body></html>')
>>> extract(mytree)
'Here is the main text. It has to be long enough in order to bypass the safety checks. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\n'

Package settings#

For further configuration (if the settings.cfg file is not enough) you can edit package-wide variables contained in the settings.py file:

  1. Clone the repository

  2. Edit settings.py

  3. Reinstall the package locally: pip install --no-deps -U . in the home directory of the cloned repository

Beware that these package-wide variables can greatly alter the functioning of the package!