rdflib.plugins.sparql package

Subpackages

Submodules

rdflib.plugins.sparql.aggregates module

class rdflib.plugins.sparql.aggregates.Accumulator(aggregation)[source]

Bases: object

abstract base class for different aggregation functions

Parameters:

aggregation (CompValue) –

__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.aggregates', '__doc__': 'abstract base class for different aggregation functions', '__init__': <function Accumulator.__init__>, 'dont_care': <function Accumulator.dont_care>, 'use_row': <function Accumulator.use_row>, 'set_value': <function Accumulator.set_value>, '__dict__': <attribute '__dict__' of 'Accumulator' objects>, '__weakref__': <attribute '__weakref__' of 'Accumulator' objects>, '__annotations__': {'get_value': 'Callable[[], Optional[Literal]]', 'update': "Callable[[FrozenBindings, 'Aggregator'], None]", 'seen': 'Set[Any]'}})
__init__(aggregation)[source]
Parameters:

aggregation (CompValue) –

__module__ = 'rdflib.plugins.sparql.aggregates'
__weakref__

list of weak references to the object (if defined)

dont_care(row)[source]

skips distinct test

Parameters:

row (FrozenBindings) –

Return type:

bool

set_value(bindings)[source]

sets final value in bindings

Parameters:

bindings (MutableMapping[Variable, Identifier]) –

Return type:

None

use_row(row)[source]

tests distinct with set

Parameters:

row (FrozenBindings) –

Return type:

bool

class rdflib.plugins.sparql.aggregates.Aggregator(aggregations)[source]

Bases: object

combines different Accumulator objects

Parameters:

aggregations (List[CompValue]) –

__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.aggregates', '__doc__': 'combines different Accumulator objects', 'accumulator_classes': {'Aggregate_Count': <class 'rdflib.plugins.sparql.aggregates.Counter'>, 'Aggregate_Sample': <class 'rdflib.plugins.sparql.aggregates.Sample'>, 'Aggregate_Sum': <class 'rdflib.plugins.sparql.aggregates.Sum'>, 'Aggregate_Avg': <class 'rdflib.plugins.sparql.aggregates.Average'>, 'Aggregate_Min': <class 'rdflib.plugins.sparql.aggregates.Minimum'>, 'Aggregate_Max': <class 'rdflib.plugins.sparql.aggregates.Maximum'>, 'Aggregate_GroupConcat': <class 'rdflib.plugins.sparql.aggregates.GroupConcat'>}, '__init__': <function Aggregator.__init__>, 'update': <function Aggregator.update>, 'get_bindings': <function Aggregator.get_bindings>, '__dict__': <attribute '__dict__' of 'Aggregator' objects>, '__weakref__': <attribute '__weakref__' of 'Aggregator' objects>, '__annotations__': {'bindings': 'Dict[Variable, Identifier]', 'accumulators': 'Dict[str, Accumulator]'}})
__init__(aggregations)[source]
Parameters:

aggregations (List[CompValue]) –

__module__ = 'rdflib.plugins.sparql.aggregates'
__weakref__

list of weak references to the object (if defined)

accumulator_classes = {'Aggregate_Avg': <class 'rdflib.plugins.sparql.aggregates.Average'>, 'Aggregate_Count': <class 'rdflib.plugins.sparql.aggregates.Counter'>, 'Aggregate_GroupConcat': <class 'rdflib.plugins.sparql.aggregates.GroupConcat'>, 'Aggregate_Max': <class 'rdflib.plugins.sparql.aggregates.Maximum'>, 'Aggregate_Min': <class 'rdflib.plugins.sparql.aggregates.Minimum'>, 'Aggregate_Sample': <class 'rdflib.plugins.sparql.aggregates.Sample'>, 'Aggregate_Sum': <class 'rdflib.plugins.sparql.aggregates.Sum'>}
get_bindings()[source]

calculate and set last values

Return type:

Mapping[Variable, Identifier]

update(row)[source]

update all own accumulators

Parameters:

row (FrozenBindings) –

Return type:

None

class rdflib.plugins.sparql.aggregates.Average(aggregation)[source]

Bases: Accumulator

Parameters:

aggregation (CompValue) –

__annotations__ = {}
__init__(aggregation)[source]
Parameters:

aggregation (CompValue) –

__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
Return type:

Literal

update(row, aggregator)[source]
Parameters:
Return type:

None

class rdflib.plugins.sparql.aggregates.Counter(aggregation)[source]

Bases: Accumulator

Parameters:

aggregation (CompValue) –

__annotations__ = {}
__init__(aggregation)[source]
Parameters:

aggregation (CompValue) –

__module__ = 'rdflib.plugins.sparql.aggregates'
eval_full_row(row)[source]
Parameters:

row (FrozenBindings) –

Return type:

FrozenBindings

eval_row(row)[source]
Parameters:

row (FrozenBindings) –

Return type:

Identifier

get_value()[source]
Return type:

Literal

update(row, aggregator)[source]
Parameters:
Return type:

None

use_row(row)[source]

tests distinct with set

Parameters:

row (FrozenBindings) –

Return type:

bool

class rdflib.plugins.sparql.aggregates.Extremum(aggregation)[source]

Bases: Accumulator

abstract base class for Minimum and Maximum

Parameters:

aggregation (CompValue) –

__annotations__ = {}
__init__(aggregation)[source]
Parameters:

aggregation (CompValue) –

__module__ = 'rdflib.plugins.sparql.aggregates'
set_value(bindings)[source]

sets final value in bindings

Parameters:

bindings (MutableMapping[Variable, Identifier]) –

Return type:

None

update(row, aggregator)[source]
Parameters:
Return type:

None

class rdflib.plugins.sparql.aggregates.GroupConcat(aggregation)[source]

Bases: Accumulator

Parameters:

aggregation (CompValue) –

__annotations__ = {'value': 'List[Literal]'}
__init__(aggregation)[source]
Parameters:

aggregation (CompValue) –

__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
Return type:

Literal

update(row, aggregator)[source]
Parameters:
Return type:

None

value: List[Literal]
class rdflib.plugins.sparql.aggregates.Maximum(aggregation)[source]

Bases: Extremum

Parameters:

aggregation (CompValue) –

__annotations__ = {}
__module__ = 'rdflib.plugins.sparql.aggregates'
compare(val1, val2)[source]
Parameters:
Return type:

TypeVar(_ValueT, Variable, BNode, URIRef, Literal)

class rdflib.plugins.sparql.aggregates.Minimum(aggregation)[source]

Bases: Extremum

Parameters:

aggregation (CompValue) –

__annotations__ = {}
__module__ = 'rdflib.plugins.sparql.aggregates'
compare(val1, val2)[source]
Parameters:
Return type:

TypeVar(_ValueT, Variable, BNode, URIRef, Literal)

class rdflib.plugins.sparql.aggregates.Sample(aggregation)[source]

Bases: Accumulator

takes the first eligible value

__annotations__ = {}
__init__(aggregation)[source]
__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
Return type:

None

update(row, aggregator)[source]
Parameters:
Return type:

None

class rdflib.plugins.sparql.aggregates.Sum(aggregation)[source]

Bases: Accumulator

Parameters:

aggregation (CompValue) –

__annotations__ = {}
__init__(aggregation)[source]
Parameters:

aggregation (CompValue) –

__module__ = 'rdflib.plugins.sparql.aggregates'
get_value()[source]
Return type:

Literal

update(row, aggregator)[source]
Parameters:
Return type:

None

rdflib.plugins.sparql.aggregates.type_safe_numbers(*args)[source]
Parameters:

args (Union[Decimal, float, int]) –

Return type:

Iterable[Union[float, int]]

rdflib.plugins.sparql.algebra module

rdflib.plugins.sparql.algebra.BGP(triples=None)[source]
Parameters:

triples (Optional[List[Tuple[Identifier, Identifier, Identifier]]]) –

Return type:

CompValue

exception rdflib.plugins.sparql.algebra.ExpressionNotCoveredException[source]

Bases: Exception

__module__ = 'rdflib.plugins.sparql.algebra'
__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.algebra.Extend(p, expr, var)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Filter(expr, p)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Graph(term, graph)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Group(p, expr=None)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Join(p1, p2)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.LeftJoin(p1, p2, expr)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Minus(p1, p2)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.OrderBy(p, expr)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Project(p, PV)[source]
Parameters:
Return type:

CompValue

exception rdflib.plugins.sparql.algebra.StopTraversal(rv)[source]

Bases: Exception

Parameters:

rv (bool) –

__init__(rv)[source]
Parameters:

rv (bool) –

__module__ = 'rdflib.plugins.sparql.algebra'
__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.algebra.ToMultiSet(p)[source]
Parameters:

p (Union[List[Dict[Variable, str]], CompValue]) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.Union(p1, p2)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.Values(res)[source]
Parameters:

res (List[Dict[Variable, str]]) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.analyse(n, children)[source]

Some things can be lazily joined. This propagates whether they can be up the tree and sets lazy flags for all joins

Parameters:
  • n (Any) –

  • children (Any) –

Return type:

bool

rdflib.plugins.sparql.algebra.collectAndRemoveFilters(parts)[source]

FILTER expressions apply to the whole group graph pattern in which they appear.

http://www.w3.org/TR/sparql11-query/#sparqlCollectFilters

Parameters:

parts (List[CompValue]) –

Return type:

Optional[Expr]

rdflib.plugins.sparql.algebra.pprintAlgebra(q)[source]
Return type:

None

rdflib.plugins.sparql.algebra.reorderTriples(l_)[source]

Reorder triple patterns so that we execute the ones with the most bindings first

Parameters:

l_ (Iterable[Tuple[Identifier, Identifier, Identifier]]) –

Return type:

List[Tuple[Identifier, Identifier, Identifier]]

rdflib.plugins.sparql.algebra.simplify(n)[source]

Remove joins to empty BGPs

Parameters:

n (Any) –

Return type:

Optional[CompValue]

rdflib.plugins.sparql.algebra.translate(q)[source]

http://www.w3.org/TR/sparql11-query/#convertSolMod

Parameters:

q (CompValue) –

Return type:

Tuple[Optional[CompValue], List[Variable]]

rdflib.plugins.sparql.algebra.translateAggregates(q, M)[source]
Parameters:
Return type:

Tuple[CompValue, List[Tuple[Variable, Variable]]]

rdflib.plugins.sparql.algebra.translateAlgebra(query_algebra)[source]

Translates a SPARQL 1.1 algebra tree into the corresponding query string.

Parameters:

query_algebra (Query) – An algebra returned by translateQuery.

Return type:

str

Returns:

The query form generated from the SPARQL 1.1 algebra tree for SELECT queries.

rdflib.plugins.sparql.algebra.translateExists(e)[source]

Translate the graph pattern used by EXISTS and NOT EXISTS.

http://www.w3.org/TR/sparql11-query/#sparqlCollectFilters

Parameters:

e (Union[Expr, Literal, Variable, URIRef]) –

Return type:

Union[Expr, Literal, Variable, URIRef]

rdflib.plugins.sparql.algebra.translateGraphGraphPattern(graphPattern)[source]
Parameters:

graphPattern (CompValue) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.translateGroupGraphPattern(graphPattern)[source]

http://www.w3.org/TR/sparql11-query/#convertGraphPattern

Parameters:

graphPattern (CompValue) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.translateGroupOrUnionGraphPattern(graphPattern)[source]
Parameters:

graphPattern (CompValue) –

Return type:

Optional[CompValue]

rdflib.plugins.sparql.algebra.translateInlineData(graphPattern)[source]
Parameters:

graphPattern (CompValue) –

Return type:

CompValue

rdflib.plugins.sparql.algebra.translatePName(p, prologue)[source]

Expand prefixed/relative URIs

Parameters:
Return type:

Optional[Identifier]

rdflib.plugins.sparql.algebra.translatePath(p)[source]

Translate PropertyPath expressions

Parameters:

p (Union[CompValue, URIRef]) –

Return type:

Optional[Path]

rdflib.plugins.sparql.algebra.translatePrologue(p, base, initNs=None, prologue=None)[source]
Parameters:
Return type:

Prologue

rdflib.plugins.sparql.algebra.translateQuads(quads)[source]
Parameters:

quads (CompValue) –

Return type:

Tuple[List[Tuple[Identifier, Identifier, Identifier]], DefaultDict[str, List[Tuple[Identifier, Identifier, Identifier]]]]

rdflib.plugins.sparql.algebra.translateQuery(q, base=None, initNs=None)[source]

Translate a query-parsetree to a SPARQL Algebra Expression

Return a rdflib.plugins.sparql.sparql.Query object

Parameters:
Return type:

Query

rdflib.plugins.sparql.algebra.translateUpdate(q, base=None, initNs=None)[source]

Returns a list of SPARQL Update Algebra expressions

Parameters:
Return type:

Update

rdflib.plugins.sparql.algebra.translateUpdate1(u, prologue)[source]
Parameters:
Return type:

CompValue

rdflib.plugins.sparql.algebra.translateValues(v)[source]
Parameters:

v (CompValue) –

Return type:

Union[List[Dict[Variable, str]], CompValue]

rdflib.plugins.sparql.algebra.traverse(tree, visitPre=<function <lambda>>, visitPost=<function <lambda>>, complete=None)[source]

Traverse tree, visiting each node with the given visit functions. A visit function may raise StopTraversal to stop the traversal. If complete is not None, it is returned on complete traversal; otherwise the transformed tree is returned.

Parameters:
Return type:

Any

rdflib.plugins.sparql.algebra.triples(l)[source]
Parameters:

l (Union[List[List[Identifier]], List[Tuple[Identifier, Identifier, Identifier]]]) –

Return type:

List[Tuple[Identifier, Identifier, Identifier]]

rdflib.plugins.sparql.datatypes module

rdflib.plugins.sparql.datatypes.type_promotion(t1, t2)[source]
Parameters:
Return type:

URIRef

rdflib.plugins.sparql.evaluate module

rdflib.plugins.sparql.evaluate.evalAggregateJoin(ctx, agg)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalAskQuery(ctx, query)[source]
Parameters:
Return type:

Mapping[str, Union[str, bool]]

rdflib.plugins.sparql.evaluate.evalBGP(ctx, bgp)[source]

A basic graph pattern

Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalConstructQuery(ctx, query)[source]
Parameters:
Return type:

Mapping[str, Union[str, Graph]]

rdflib.plugins.sparql.evaluate.evalDescribeQuery(ctx, query)[source]
Parameters:

ctx (QueryContext) –

Return type:

Dict[str, Union[str, Graph]]

rdflib.plugins.sparql.evaluate.evalDistinct(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalExtend(ctx, extend)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalFilter(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalGraph(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalGroup(ctx, group)[source]

http://www.w3.org/TR/sparql11-query/#defn_algGroup

Parameters:
rdflib.plugins.sparql.evaluate.evalJoin(ctx, join)[source]
Parameters:
Return type:

Generator[FrozenDict, None, None]

rdflib.plugins.sparql.evaluate.evalLazyJoin(ctx, join)[source]

A lazy join pushes the variables bound in the first part into the second part, doing the join implicitly and, ideally, evaluating far fewer triples

Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalLeftJoin(ctx, join)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalMinus(ctx, minus)[source]
Parameters:
Return type:

Generator[FrozenDict, None, None]

rdflib.plugins.sparql.evaluate.evalMultiset(ctx, part)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalOrderBy(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalPart(ctx, part)[source]
Parameters:
Return type:

Any

rdflib.plugins.sparql.evaluate.evalProject(ctx, project)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalQuery(graph, query, initBindings=None, base=None)[source]

Caution

This method can access indirectly requested network endpoints, for example, query processing will attempt to access network endpoints specified in SERVICE directives.

When processing untrusted or potentially malicious queries, measures should be taken to restrict network and file access.

For information on available security measures, see the RDFLib Security Considerations documentation.

Parameters:
Return type:

Mapping[Any, Any]

rdflib.plugins.sparql.evaluate.evalReduced(ctx, part)[source]

apply REDUCED to result

REDUCED is not as strict as DISTINCT, but if the incoming rows were sorted it should produce the same result with limited extra memory and time per incoming row.

Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evaluate.evalSelectQuery(ctx, query)[source]
Parameters:
Return type:

Mapping[str, Union[str, List[Variable], Iterable[FrozenDict]]]

rdflib.plugins.sparql.evaluate.evalServiceQuery(ctx, part)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalSlice(ctx, slice)[source]
Parameters:
rdflib.plugins.sparql.evaluate.evalUnion(ctx, union)[source]
Parameters:
Return type:

Iterable[FrozenBindings]

rdflib.plugins.sparql.evaluate.evalValues(ctx, part)[source]
Parameters:
Return type:

Generator[FrozenBindings, None, None]

rdflib.plugins.sparql.evalutils module

rdflib.plugins.sparql.operators module

rdflib.plugins.sparql.operators.AdditiveExpression(e, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_ABS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-abs

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_BNODE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-bnode

Return type:

BNode

rdflib.plugins.sparql.operators.Builtin_BOUND(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-bound

Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_CEIL(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-ceil

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_COALESCE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-coalesce

Parameters:

expr (Expr) –

rdflib.plugins.sparql.operators.Builtin_CONCAT(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-concat

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_CONTAINS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strcontains

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_DATATYPE(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Optional[str]

rdflib.plugins.sparql.operators.Builtin_DAY(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_ENCODE_FOR_URI(expr, ctx)[source]
Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_EXISTS(e, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_FLOOR(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-floor

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_HOURS(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_IF(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-if

Parameters:

expr (Expr) –

rdflib.plugins.sparql.operators.Builtin_IRI(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-iri

Parameters:
Return type:

URIRef

rdflib.plugins.sparql.operators.Builtin_LANG(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-lang

Returns the language tag of ltrl, if it has one. It returns “” if ltrl has no language tag. Note that the RDF data model does not include literals with an empty language tag.

Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_LANGMATCHES(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-langMatches

Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_LCASE(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_MD5(expr, ctx)[source]
Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_MINUTES(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_MONTH(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_NOW(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-now

Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_RAND(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#idp2133952

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_REGEX(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-regex

Invokes the XPath fn:matches function to match text against a regular expression pattern. The regular expression language is defined in XQuery 1.0 and XPath 2.0 Functions and Operators, section 7.6.1 Regular Expression Syntax.

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_REPLACE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-replace

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_ROUND(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-round

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_SECONDS(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-seconds

Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_SHA1(expr, ctx)[source]
Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_SHA256(expr, ctx)[source]
Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_SHA384(expr, ctx)[source]
Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_SHA512(expr, ctx)[source]
Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STR(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STRAFTER(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strafter

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STRBEFORE(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strbefore

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STRDT(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strdt

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STRENDS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strends

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STRLANG(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strlang

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STRLEN(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STRSTARTS(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-strstarts

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_STRUUID(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-struuid

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_SUBSTR(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-substr

Parameters:

expr (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_TIMEZONE(e, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-timezone

Return type:

Literal

Returns:

the timezone part of arg as an xsd:dayTimeDuration.

Raises:

an error if there is no timezone.

Parameters:

e (Expr) –

rdflib.plugins.sparql.operators.Builtin_TZ(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_UCASE(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_UUID(expr, ctx)[source]

http://www.w3.org/TR/sparql11-query/#func-uuid

Parameters:

expr (Expr) –

Return type:

URIRef

rdflib.plugins.sparql.operators.Builtin_YEAR(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_isBLANK(expr, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_isIRI(expr, ctx)[source]
Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_isLITERAL(expr, ctx)[source]
Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_isNUMERIC(expr, ctx)[source]
Return type:

Literal

rdflib.plugins.sparql.operators.Builtin_sameTerm(e, ctx)[source]
Parameters:

e (Expr) –

Return type:

Literal

rdflib.plugins.sparql.operators.ConditionalAndExpression(e, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.ConditionalOrExpression(e, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.EBV(rt)[source]

Effective Boolean Value (EBV)

  • If the argument is a typed literal with a datatype of xsd:boolean, the EBV is the value of that argument.

  • If the argument is a plain literal or a typed literal with a datatype of xsd:string, the EBV is false if the operand value has zero length; otherwise the EBV is true.

  • If the argument is a numeric type or a typed literal with a datatype derived from a numeric type, the EBV is false if the operand value is NaN or is numerically equal to zero; otherwise the EBV is true.

  • All other arguments, including unbound arguments, produce a type error.

Parameters:

rt (Union[Identifier, SPARQLError, Expr]) –

Return type:

bool

rdflib.plugins.sparql.operators.Function(e, ctx)[source]

Custom functions and casts

Parameters:
Return type:

Node

rdflib.plugins.sparql.operators.MultiplicativeExpression(e, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.RelationalExpression(e, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.UnaryMinus(expr, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.UnaryNot(expr, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.UnaryPlus(expr, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.and_(*args)[source]
Parameters:

args (Expr) –

Return type:

Expr

rdflib.plugins.sparql.operators.calculateDuration(obj1, obj2)[source]

returns the duration Literal between two datetimes

Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.calculateFinalDateTime(obj1, dt1, obj2, dt2, operation)[source]

Calculates the final dateTime/date/time resulting from addition/subtraction of a duration/dayTimeDuration/yearMonthDuration

Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.custom_function(uri, override=False, raw=False)[source]

Decorator version of register_custom_function().

Parameters:
Return type:

Callable[[Callable[[Expr, FrozenBindings], Node]], Callable[[Expr, FrozenBindings], Node]]

rdflib.plugins.sparql.operators.date(e)[source]
Parameters:

e (Literal) –

Return type:

date

rdflib.plugins.sparql.operators.dateTimeObjects(expr)[source]

return a dateTime/date/time/duration/dayTimeDuration/yearMonthDuration Python object from a literal

Parameters:

expr (Literal) –

Return type:

Any

rdflib.plugins.sparql.operators.datetime(e)[source]
Parameters:

e (Literal) –

Return type:

datetime

rdflib.plugins.sparql.operators.default_cast(e, ctx)[source]
Parameters:
Return type:

Literal

rdflib.plugins.sparql.operators.isCompatibleDateTimeDatatype(obj1, dt1, obj2, dt2)[source]

Returns a boolean indicating if first object is compatible with operation(+/-) over second object.

Parameters:
Return type:

bool

rdflib.plugins.sparql.operators.literal(s)[source]
Parameters:

s (Literal) –

Return type:

Literal

rdflib.plugins.sparql.operators.not_(arg)[source]
Return type:

Expr

rdflib.plugins.sparql.operators.numeric(expr)[source]

return a number from a literal, or raise a TypeError

http://www.w3.org/TR/xpath20/#promotion

Parameters:

expr (Literal) –

Return type:

Any

rdflib.plugins.sparql.operators.register_custom_function(uri, func, override=False, raw=False)[source]

Register a custom SPARQL function.

By default, the function will be passed the RDF terms in the argument list. If raw is True, the function will be passed an Expression and a Context.

The function must return an RDF term, or raise a SPARQLError.

Parameters:
Return type:

None

rdflib.plugins.sparql.operators.simplify(expr)[source]
Parameters:

expr (Any) –

Return type:

Any

rdflib.plugins.sparql.operators.string(s)[source]

Make sure the passed thing is a string literal, i.e. a plain literal, an xsd:string literal, or a lang-tagged literal

Parameters:

s (Literal) –

Return type:

Literal

rdflib.plugins.sparql.operators.unregister_custom_function(uri, func=None)[source]

The ‘func’ argument is included for compatibility with existing code. A previous implementation checked that the function associated with the given uri was actually ‘func’, but this is not necessary as the uri should uniquely identify the function.

Parameters:
Return type:

None

rdflib.plugins.sparql.parser module

SPARQL 1.1 Parser

based on pyparsing

rdflib.plugins.sparql.parser.expandBNodeTriples(terms)[source]

expand [ ?p ?o ] syntax for implicit bnodes

Parameters:

terms (ParseResults) –

Return type:

List[Any]

rdflib.plugins.sparql.parser.expandCollection(terms)[source]

expand ( 1 2 3 ) notation for collections

Parameters:

terms (ParseResults) –

Return type:

List[List[Any]]

rdflib.plugins.sparql.parser.expandTriples(terms)[source]

Expand ; and , syntax for repeat predicates, subjects

Parameters:

terms (ParseResults) –

Return type:

List[Any]

rdflib.plugins.sparql.parser.expandUnicodeEscapes(q)[source]

The syntax of the SPARQL Query Language is expressed over code points in Unicode [UNICODE]. The encoding is always UTF-8 [RFC3629]. Unicode code points may also be expressed using a \uXXXX (U+0 to U+FFFF) or \UXXXXXXXX syntax (for U+10000 onwards), where X is a hexadecimal digit [0-9A-F]

Parameters:

q (str) –

Return type:

str

rdflib.plugins.sparql.parser.neg(literal)[source]
Parameters:

literal (Literal) –

Return type:

Literal

rdflib.plugins.sparql.parser.parseQuery(q)[source]
Parameters:

q (Union[str, bytes, TextIO, BinaryIO]) –

Return type:

ParseResults

rdflib.plugins.sparql.parser.parseUpdate(q)[source]
Parameters:

q (Union[str, bytes, TextIO, BinaryIO]) –

Return type:

CompValue

rdflib.plugins.sparql.parser.setDataType(terms)[source]
Parameters:

terms (Tuple[Any, Optional[str]]) –

Return type:

Literal

rdflib.plugins.sparql.parser.setLanguage(terms)[source]
Parameters:

terms (Tuple[Any, Optional[str]]) –

Return type:

Literal

rdflib.plugins.sparql.parserutils module

class rdflib.plugins.sparql.parserutils.Comp(name, expr)[source]

Bases: TokenConverter

A pyparsing token for grouping things together with a label. Any sub-tokens that are not Params will be ignored.

Returns CompValue / Expr objects, depending on whether evalFn is set.

Parameters:
  • name (str) –

  • expr (ParserElement) –

__abstractmethods__ = frozenset({})
__annotations__ = {'DEFAULT_WHITE_CHARS': 'str', '_defaultName': 'typing.Optional[str]', '_literalStringClass': 'type', 'customName': 'str', 'evalfn': 'Optional[Callable[[Any, Any], Any]]', 'failAction': 'typing.Optional[ParseFailAction]', 'ignoreExprs': "List['ParserElement']", 'parseAction': 'List[ParseAction]', 'recursion_memos': "typing.Dict[Tuple[int, 'Forward', bool], Tuple[int, Union[ParseResults, Exception]]]", 'resultsName': 'str', 'suppress_warnings_': 'List[Diagnostics]', 'verbose_stacktrace': 'bool'}
__init__(name, expr)[source]
Parameters:
  • name (str) –

  • expr (ParserElement) –

__module__ = 'rdflib.plugins.sparql.parserutils'
__slotnames__ = []
postParse(instring, loc, tokenList)[source]
Parameters:
  • instring (str) –

  • loc (int) –

  • tokenList (ParseResults) –

Return type:

Union[Expr, CompValue]

setEvalFn(evalfn)[source]
Parameters:

evalfn (Callable[[Any, Any], Any]) –

Return type:

Comp

class rdflib.plugins.sparql.parserutils.CompValue(name, **values)[source]

Bases: OrderedDict

The result of parsing a Comp. Any included Params are available as dict keys or as attributes.

Parameters:

name (str) –

__getattr__(a)[source]
Parameters:

a (str) –

Return type:

Any

__getitem__(a)[source]

x.__getitem__(y) <==> x[y]

__init__(name, **values)[source]
Parameters:

name (str) –

__module__ = 'rdflib.plugins.sparql.parserutils'
__repr__()[source]

Return repr(self).

Return type:

str

__str__()[source]

Return str(self).

Return type:

str

clone()[source]
Return type:

CompValue

get(a, variables=False, errors=False)[source]

Return the value for key if key is in the dictionary, else default.

Parameters:
  • variables (bool) –

  • errors (bool) –

class rdflib.plugins.sparql.parserutils.Expr(name, evalfn=None, **values)[source]

Bases: CompValue

A CompValue that is evaluatable

Parameters:
__annotations__ = {}
__init__(name, evalfn=None, **values)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.parserutils'
eval(ctx={})[source]
Parameters:

ctx (Any) –

Return type:

Union[SPARQLError, Any]

class rdflib.plugins.sparql.parserutils.Param(name, expr, isList=False)[source]

Bases: TokenConverter

A pyparsing token for labelling a part of the parse tree. If isList is true, repeated occurrences of a ParamList have their values merged into a list.

Parameters:
__abstractmethods__ = frozenset({})
__annotations__ = {'DEFAULT_WHITE_CHARS': 'str', '_defaultName': 'typing.Optional[str]', '_literalStringClass': 'type', 'customName': 'str', 'failAction': 'typing.Optional[ParseFailAction]', 'ignoreExprs': "List['ParserElement']", 'parseAction': 'List[ParseAction]', 'recursion_memos': "typing.Dict[Tuple[int, 'Forward', bool], Tuple[int, Union[ParseResults, Exception]]]", 'resultsName': 'str', 'suppress_warnings_': 'List[Diagnostics]', 'verbose_stacktrace': 'bool'}
__init__(name, expr, isList=False)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.parserutils'
__slotnames__ = []
postParse2(tokenList)[source]
Parameters:

tokenList (Union[List[Any], ParseResults]) –

Return type:

ParamValue

class rdflib.plugins.sparql.parserutils.ParamList(name, expr)[source]

Bases: Param

A shortcut for a Param with isList=True

Parameters:

name (str) –

__abstractmethods__ = frozenset({})
__annotations__ = {'DEFAULT_WHITE_CHARS': 'str', '_defaultName': 'typing.Optional[str]', '_literalStringClass': 'type', 'customName': 'str', 'failAction': 'typing.Optional[ParseFailAction]', 'ignoreExprs': "List['ParserElement']", 'parseAction': 'List[ParseAction]', 'recursion_memos': "typing.Dict[Tuple[int, 'Forward', bool], Tuple[int, Union[ParseResults, Exception]]]", 'resultsName': 'str', 'suppress_warnings_': 'List[Diagnostics]', 'verbose_stacktrace': 'bool'}
__init__(name, expr)[source]
Parameters:

name (str) –

__module__ = 'rdflib.plugins.sparql.parserutils'
class rdflib.plugins.sparql.parserutils.ParamValue(name, tokenList, isList)[source]

Bases: object

The result of parsing a Param. This just keeps the name/value; all the cleverness is in the CompValue.

Parameters:
__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.parserutils', '__doc__': '\n    The result of parsing a Param\n    This just keeps the name/value\n    All cleverness is in the CompValue\n    ', '__init__': <function ParamValue.__init__>, '__str__': <function ParamValue.__str__>, '__dict__': <attribute '__dict__' of 'ParamValue' objects>, '__weakref__': <attribute '__weakref__' of 'ParamValue' objects>, '__annotations__': {}})
__init__(name, tokenList, isList)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.parserutils'
__str__()[source]

Return str(self).

Return type:

str

__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.parserutils.prettify_parsetree(t, indent='', depth=0)[source]
Parameters:
  • t (ParseResults) –

  • indent (str) –

  • depth (int) –

Return type:

str

rdflib.plugins.sparql.parserutils.value(ctx, val, variables=False, errors=False)[source]

Utility function for evaluating a value within a query context.

Variables will be looked up in the context. Normally an unbound variable is an error; set variables=True to return unbound variables instead.

Normally an error value raises the error; set errors=True to return the error instead.

Parameters:
Return type:

Any

rdflib.plugins.sparql.processor module

Code for tying the SPARQL engine into RDFLib

These should be automatically registered with RDFLib

class rdflib.plugins.sparql.processor.SPARQLProcessor(graph)[source]

Bases: Processor

__annotations__ = {}
__init__(graph)[source]
__module__ = 'rdflib.plugins.sparql.processor'
query(strOrQuery, initBindings=None, initNs=None, base=None, DEBUG=False)[source]

Evaluate a query with the given initial bindings, and initial namespaces. The given base is used to resolve relative URIs in the query and will be overridden by any BASE given in the query.

Caution

This method can indirectly access network endpoints; for example, query processing will attempt to access network endpoints specified in SERVICE directives.

When processing untrusted or potentially malicious queries, measures should be taken to restrict network and file access.

For information on available security measures, see the RDFLib Security Considerations documentation.

Parameters:
Return type:

Mapping[str, Any]

class rdflib.plugins.sparql.processor.SPARQLResult(res)[source]

Bases: Result

Parameters:

res (Mapping[str, Any]) –

__annotations__ = {}
__init__(res)[source]
Parameters:

res (Mapping[str, Any]) –

__module__ = 'rdflib.plugins.sparql.processor'
class rdflib.plugins.sparql.processor.SPARQLUpdateProcessor(graph)[source]

Bases: UpdateProcessor

__annotations__ = {}
__init__(graph)[source]
__module__ = 'rdflib.plugins.sparql.processor'
update(strOrQuery, initBindings=None, initNs=None)[source]

Caution

This method can indirectly access network endpoints; for example, query processing will attempt to access network endpoints specified in SERVICE directives.

When processing untrusted or potentially malicious queries, measures should be taken to restrict network and file access.

For information on available security measures, see the RDFLib Security Considerations documentation.

Parameters:
Return type:

None

rdflib.plugins.sparql.processor.prepareQuery(queryString, initNs=None, base=None)[source]

Parse and translate a SPARQL Query

Parameters:
Return type:

Query

rdflib.plugins.sparql.processor.prepareUpdate(updateString, initNs=None, base=None)[source]

Parse and translate a SPARQL Update

Parameters:
Return type:

Update

rdflib.plugins.sparql.processor.processUpdate(graph, updateString, initBindings=None, initNs=None, base=None)[source]

Process a SPARQL Update request. Returns nothing on success; raises an exception on error.

Parameters:
Return type:

None

rdflib.plugins.sparql.sparql module

exception rdflib.plugins.sparql.sparql.AlreadyBound[source]

Bases: SPARQLError

Raised when trying to bind a variable that is already bound.

__init__()[source]
__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.Bindings(outer=None, d=[])[source]

Bases: MutableMapping

A single level of a stack of variable-value bindings. Each dict keeps a reference to the dict below it; any failed lookup is propagated back.

In Python 3.3+ this could be a collections.ChainMap

Parameters:

outer (Optional[Bindings]) –

__abstractmethods__ = frozenset({})
__contains__(key)[source]
Parameters:

key (Any) –

Return type:

bool

__delitem__(key)[source]
Parameters:

key (str) –

Return type:

None

__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.sparql', '__doc__': '\n\n    A single level of a stack of variable-value bindings.\n    Each dict keeps a reference to the dict below it,\n    any failed lookup is propegated back\n\n    In python 3.3 this could be a collections.ChainMap\n    ', '__init__': <function Bindings.__init__>, '__getitem__': <function Bindings.__getitem__>, '__contains__': <function Bindings.__contains__>, '__setitem__': <function Bindings.__setitem__>, '__delitem__': <function Bindings.__delitem__>, '__len__': <function Bindings.__len__>, '__iter__': <function Bindings.__iter__>, '__str__': <function Bindings.__str__>, '__repr__': <function Bindings.__repr__>, '__dict__': <attribute '__dict__' of 'Bindings' objects>, '__weakref__': <attribute '__weakref__' of 'Bindings' objects>, '__abstractmethods__': frozenset(), '_abc_impl': <_abc._abc_data object>, '__annotations__': {'_d': 'Dict[str, str]'}})
__getitem__(key)[source]
Parameters:

key (str) –

Return type:

str

__init__(outer=None, d=[])[source]
Parameters:

outer (Optional[Bindings]) –

__iter__()[source]
Return type:

Generator[str, None, None]

__len__()[source]
Return type:

int

__module__ = 'rdflib.plugins.sparql.sparql'
__repr__()[source]

Return repr(self).

Return type:

str

__setitem__(key, value)[source]
Parameters:
  • key (str) –

  • value (Any) –

Return type:

None

__str__()[source]

Return str(self).

Return type:

str

__weakref__

list of weak references to the object (if defined)

class rdflib.plugins.sparql.sparql.FrozenBindings(ctx, *args, **kwargs)[source]

Bases: FrozenDict

Parameters:

ctx (QueryContext) –

__abstractmethods__ = frozenset({})
__annotations__ = {'_d': 'Dict[Identifier, Identifier]', '_hash': 'Optional[int]'}
__getitem__(key)[source]
Parameters:

key (Union[Identifier, str]) –

Return type:

Identifier

__init__(ctx, *args, **kwargs)[source]
Parameters:

ctx (QueryContext) –

__module__ = 'rdflib.plugins.sparql.sparql'
property bnodes: Mapping[Identifier, BNode]
forget(before, _except=None)[source]

Return a frozen dict containing only the bindings made in self since before

Parameters:
Return type:

FrozenBindings

merge(other)[source]
Parameters:

other (Mapping[Identifier, Identifier]) –

Return type:

FrozenBindings

property now: datetime
project(vars)[source]
Parameters:

vars (Container[Variable]) –

Return type:

FrozenBindings

property prologue: Prologue | None
remember(these)[source]

Return a frozen dict containing only the bindings in these

Return type:

FrozenBindings

class rdflib.plugins.sparql.sparql.FrozenDict(*args, **kwargs)[source]

Bases: Mapping

An immutable hashable dict

Taken from http://stackoverflow.com/a/2704866/81121

Parameters:
  • args (Any) –

  • kwargs (Any) –

__abstractmethods__ = frozenset({})
__annotations__ = {'_d': 'Dict[Identifier, Identifier]', '_hash': 'Optional[int]'}
__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.sparql', '__doc__': '\n    An immutable hashable dict\n\n    Taken from http://stackoverflow.com/a/2704866/81121\n\n    ', '__init__': <function FrozenDict.__init__>, '__iter__': <function FrozenDict.__iter__>, '__len__': <function FrozenDict.__len__>, '__getitem__': <function FrozenDict.__getitem__>, '__hash__': <function FrozenDict.__hash__>, 'project': <function FrozenDict.project>, 'disjointDomain': <function FrozenDict.disjointDomain>, 'compatible': <function FrozenDict.compatible>, 'merge': <function FrozenDict.merge>, '__str__': <function FrozenDict.__str__>, '__repr__': <function FrozenDict.__repr__>, '__dict__': <attribute '__dict__' of 'FrozenDict' objects>, '__weakref__': <attribute '__weakref__' of 'FrozenDict' objects>, '__abstractmethods__': frozenset(), '_abc_impl': <_abc._abc_data object>, '__annotations__': {'_d': 'Dict[Identifier, Identifier]', '_hash': 'Optional[int]'}})
__getitem__(key)[source]
Parameters:

key (Identifier) –

Return type:

Identifier

__hash__()[source]

Return hash(self).

Return type:

int

__init__(*args, **kwargs)[source]
Parameters:
  • args (Any) –

  • kwargs (Any) –

__iter__()[source]
__len__()[source]
Return type:

int

__module__ = 'rdflib.plugins.sparql.sparql'
__repr__()[source]

Return repr(self).

Return type:

str

__str__()[source]

Return str(self).

Return type:

str

__weakref__

list of weak references to the object (if defined)

compatible(other)[source]
Parameters:

other (Mapping[Identifier, Identifier]) –

Return type:

bool

disjointDomain(other)[source]
Parameters:

other (Mapping[Identifier, Identifier]) –

Return type:

bool

merge(other)[source]
Parameters:

other (Mapping[Identifier, Identifier]) –

Return type:

FrozenDict

project(vars)[source]
Parameters:

vars (Container[Variable]) –

Return type:

FrozenDict

exception rdflib.plugins.sparql.sparql.NotBoundError(msg=None)[source]

Bases: SPARQLError

Parameters:

msg (Optional[str]) –

__annotations__ = {}
__init__(msg=None)[source]
Parameters:

msg (Optional[str]) –

__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.Prologue[source]

Bases: object

A class for holding prefix bindings and base URI information

__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.sparql', '__doc__': '\n    A class for holding prefixing bindings and base URI information\n    ', '__init__': <function Prologue.__init__>, 'resolvePName': <function Prologue.resolvePName>, 'bind': <function Prologue.bind>, 'absolutize': <function Prologue.absolutize>, '__dict__': <attribute '__dict__' of 'Prologue' objects>, '__weakref__': <attribute '__weakref__' of 'Prologue' objects>, '__annotations__': {'base': 'Optional[str]'}})
__init__()[source]
__module__ = 'rdflib.plugins.sparql.sparql'
__weakref__

list of weak references to the object (if defined)

absolutize(iri)[source]

Apply BASE / PREFIXes to URIs (and to datatypes in Literals)

TODO: Move resolving URIs to pre-processing

Parameters:

iri (Union[CompValue, str, None]) –

Return type:

Union[CompValue, str, None]

bind(prefix, uri)[source]
Parameters:
Return type:

None

resolvePName(prefix, localname)[source]
Parameters:
Return type:

URIRef

class rdflib.plugins.sparql.sparql.Query(prologue, algebra)[source]

Bases: object

A parsed and translated query

Parameters:
__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.sparql', '__doc__': '\n    A parsed and translated query\n    ', '__init__': <function Query.__init__>, '__dict__': <attribute '__dict__' of 'Query' objects>, '__weakref__': <attribute '__weakref__' of 'Query' objects>, '__annotations__': {'_original_args': 'Tuple[str, Mapping[str, str], Optional[str]]'}})
__init__(prologue, algebra)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.sparql'
__weakref__

list of weak references to the object (if defined)

class rdflib.plugins.sparql.sparql.QueryContext(graph=None, bindings=None, initBindings=None)[source]

Bases: object

Query context - passed along when evaluating the query

Parameters:
__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.sparql', '__doc__': '\n    Query context - passed along when evaluating the query\n    ', '__init__': <function QueryContext.__init__>, 'now': <property object>, 'clone': <function QueryContext.clone>, 'dataset': <property object>, 'load': <function QueryContext.load>, '__getitem__': <function QueryContext.__getitem__>, 'get': <function QueryContext.get>, 'solution': <function QueryContext.solution>, '__setitem__': <function QueryContext.__setitem__>, 'pushGraph': <function QueryContext.pushGraph>, 'push': <function QueryContext.push>, 'clean': <function QueryContext.clean>, 'thaw': <function QueryContext.thaw>, '__dict__': <attribute '__dict__' of 'QueryContext' objects>, '__weakref__': <attribute '__weakref__' of 'QueryContext' objects>, '__annotations__': {'graph': 'Optional[Graph]', '_dataset': 'Optional[ConjunctiveGraph]', 'prologue': 'Optional[Prologue]', '_now': 'Optional[datetime.datetime]', 'bnodes': 't.MutableMapping[Identifier, BNode]'}})
__getitem__(key)[source]
Parameters:

key (Union[str, Path]) –

Return type:

Union[str, Path, None]

__init__(graph=None, bindings=None, initBindings=None)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.sparql'
__setitem__(key, value)[source]
Parameters:
  • key (str) –

  • value (str) –

Return type:

None

__weakref__

list of weak references to the object (if defined)

clean()[source]
Return type:

QueryContext

clone(bindings=None)[source]
Parameters:

bindings (Union[Bindings, FrozenBindings, List[Any], None]) –

Return type:

QueryContext

property dataset: ConjunctiveGraph

The current dataset

get(key, default=None)[source]
Parameters:
Return type:

Any

load(source, default=False, **kwargs)[source]

Load data from the source into the query context's dataset.

Parameters:
  • source (URIRef) – The source to load from.

  • default (bool) – If True, triples from the source will be added to the default graph, otherwise it will be loaded into a graph with source URI as its name.

  • kwargs (Any) – Keyword arguments to pass to rdflib.graph.Graph.parse().

Return type:

None

property now: datetime
push()[source]
Return type:

QueryContext

pushGraph(graph)[source]
Parameters:

graph (Optional[Graph]) –

Return type:

QueryContext

solution(vars=None)[source]

Return a static copy of the current variable bindings as a dict

Parameters:

vars (Optional[Iterable[Variable]]) –

Return type:

FrozenBindings

thaw(frozenbindings)[source]

Create a new read/write query context from the given solution

Parameters:

frozenbindings (FrozenBindings) –

Return type:

QueryContext

exception rdflib.plugins.sparql.sparql.SPARQLError(msg=None)[source]

Bases: Exception

Parameters:

msg (Optional[str]) –

__annotations__ = {}
__init__(msg=None)[source]
Parameters:

msg (Optional[str]) –

__module__ = 'rdflib.plugins.sparql.sparql'
__weakref__

list of weak references to the object (if defined)

exception rdflib.plugins.sparql.sparql.SPARQLTypeError(msg)[source]

Bases: SPARQLError

Parameters:

msg (Optional[str]) –

__annotations__ = {}
__init__(msg)[source]
Parameters:

msg (Optional[str]) –

__module__ = 'rdflib.plugins.sparql.sparql'
class rdflib.plugins.sparql.sparql.Update(prologue, algebra)[source]

Bases: object

A parsed and translated update

Parameters:
__dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.sparql', '__doc__': '\n    A parsed and translated update\n    ', '__init__': <function Update.__init__>, '__dict__': <attribute '__dict__' of 'Update' objects>, '__weakref__': <attribute '__weakref__' of 'Update' objects>, '__annotations__': {'_original_args': 'Tuple[str, Mapping[str, str], Optional[str]]'}})
__init__(prologue, algebra)[source]
Parameters:
__module__ = 'rdflib.plugins.sparql.sparql'
__weakref__

list of weak references to the object (if defined)

rdflib.plugins.sparql.update module

Code for carrying out Update Operations

rdflib.plugins.sparql.update.evalAdd(ctx, u)[source]

add all triples from src to dst

http://www.w3.org/TR/sparql11-update/#add

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalClear(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#clear

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalCopy(ctx, u)[source]

remove all triples from dst, then add all triples from src to dst

http://www.w3.org/TR/sparql11-update/#copy

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalCreate(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#create

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalDeleteData(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#deleteData

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalDeleteWhere(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#deleteWhere

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalDrop(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#drop

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalInsertData(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#insertData

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalLoad(ctx, u)[source]

http://www.w3.org/TR/sparql11-update/#load

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalModify(ctx, u)[source]
Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalMove(ctx, u)[source]

remove all triples from dst, add all triples from src to dst, then remove all triples from src

http://www.w3.org/TR/sparql11-update/#move

Parameters:
Return type:

None

rdflib.plugins.sparql.update.evalUpdate(graph, update, initBindings=None)[source]

http://www.w3.org/TR/sparql11-update/#updateLanguage

‘A request is a sequence of operations […] Implementations MUST ensure that operations of a single request are executed in a fashion that guarantees the same effects as executing them in lexical order.

Operations all result either in success or failure.

If multiple operations are present in a single request, then a result of failure from any operation MUST abort the sequence of operations, causing the subsequent operations to be ignored.’

This will return None on success and raise Exceptions on error

Caution

This method can indirectly access network endpoints; for example, query processing will attempt to access network endpoints specified in SERVICE directives.

When processing untrusted or potentially malicious queries, measures should be taken to restrict network and file access.

For information on available security measures, see the RDFLib Security Considerations documentation.

Parameters:
Return type:

None

Module contents

SPARQL implementation for RDFLib

New in version 4.0.

rdflib.plugins.sparql.CUSTOM_EVALS = {}

Custom evaluation functions

These must be functions taking (ctx, part) that raise NotImplementedError if they cannot handle a certain part

rdflib.plugins.sparql.prepareQuery(queryString, initNs=None, base=None)[source]

Parse and translate a SPARQL Query

Parameters:
Return type:

Query

rdflib.plugins.sparql.prepareUpdate(updateString, initNs=None, base=None)[source]

Parse and translate a SPARQL Update

Parameters:
Return type:

Update

rdflib.plugins.sparql.processUpdate(graph, updateString, initBindings=None, initNs=None, base=None)[source]

Process a SPARQL Update request. Returns nothing on success; raises an exception on error.

Parameters:
Return type:

None