rdflib.plugins.sparql package¶
Subpackages¶
- rdflib.plugins.sparql.results package
- Submodules
- rdflib.plugins.sparql.results.csvresults module
- rdflib.plugins.sparql.results.graph module
- rdflib.plugins.sparql.results.jsonresults module
- rdflib.plugins.sparql.results.rdfresults module
- rdflib.plugins.sparql.results.tsvresults module
- rdflib.plugins.sparql.results.txtresults module
- rdflib.plugins.sparql.results.xmlresults module
SPARQLXMLWriter
SPARQLXMLWriter.__dict__
SPARQLXMLWriter.__init__()
SPARQLXMLWriter.__module__
SPARQLXMLWriter.__weakref__
SPARQLXMLWriter.close()
SPARQLXMLWriter.write_ask()
SPARQLXMLWriter.write_binding()
SPARQLXMLWriter.write_end_result()
SPARQLXMLWriter.write_header()
SPARQLXMLWriter.write_results_header()
SPARQLXMLWriter.write_start_result()
XMLResult
XMLResultParser
XMLResultSerializer
log
parseTerm()
- Module contents
Submodules¶
rdflib.plugins.sparql.aggregates module¶
- class rdflib.plugins.sparql.aggregates.Accumulator(aggregation)[source]¶
Bases: object
Abstract base class for different aggregation functions.
- Parameters: aggregation (CompValue)
- dont_care(row)[source]¶
Skips the distinct test.
- Parameters: row (FrozenBindings)
- set_value(bindings)[source]¶
Sets the final value in bindings.
- Parameters: bindings (MutableMapping[Variable, Identifier])
- use_row(row)[source]¶
Tests distinctness using a set of seen values.
- Parameters: row (FrozenBindings)
- class rdflib.plugins.sparql.aggregates.Aggregator(aggregations)[source]¶
Bases: object
Combines different Accumulator objects.
- accumulator_classes = {'Aggregate_Avg': <class 'rdflib.plugins.sparql.aggregates.Average'>, 'Aggregate_Count': <class 'rdflib.plugins.sparql.aggregates.Counter'>, 'Aggregate_GroupConcat': <class 'rdflib.plugins.sparql.aggregates.GroupConcat'>, 'Aggregate_Max': <class 'rdflib.plugins.sparql.aggregates.Maximum'>, 'Aggregate_Min': <class 'rdflib.plugins.sparql.aggregates.Minimum'>, 'Aggregate_Sample': <class 'rdflib.plugins.sparql.aggregates.Sample'>, 'Aggregate_Sum': <class 'rdflib.plugins.sparql.aggregates.Sum'>}¶
- update(row)[source]¶
Updates all of this aggregator's accumulators with the row.
- Parameters: row (FrozenBindings)
- class rdflib.plugins.sparql.aggregates.Average(aggregation)[source]¶
Bases: Accumulator
- Parameters: aggregation (CompValue)
- update(row, aggregator)[source]¶
- Parameters: row (FrozenBindings), aggregator (Aggregator)
- class rdflib.plugins.sparql.aggregates.Counter(aggregation)[source]¶
Bases: Accumulator
- Parameters: aggregation (CompValue)
- eval_full_row(row)[source]¶
- Parameters: row (FrozenBindings)
- eval_row(row)[source]¶
- Parameters: row (FrozenBindings)
- update(row, aggregator)[source]¶
- Parameters: row (FrozenBindings), aggregator (Aggregator)
- use_row(row)[source]¶
Tests distinctness using a set of seen values.
- Parameters: row (FrozenBindings)
- class rdflib.plugins.sparql.aggregates.Extremum(aggregation)[source]¶
Bases: Accumulator
Abstract base class for Minimum and Maximum.
- Parameters: aggregation (CompValue)
- set_value(bindings)[source]¶
Sets the final value in bindings.
- Parameters: bindings (MutableMapping[Variable, Identifier])
- update(row, aggregator)[source]¶
- Parameters: row (FrozenBindings), aggregator (Aggregator)
- class rdflib.plugins.sparql.aggregates.GroupConcat(aggregation)[source]¶
Bases: Accumulator
- Parameters: aggregation (CompValue)
- update(row, aggregator)[source]¶
- Parameters: row (FrozenBindings), aggregator (Aggregator)
- class rdflib.plugins.sparql.aggregates.Maximum(aggregation)[source]¶
Bases: Extremum
- Parameters: aggregation (CompValue)
- class rdflib.plugins.sparql.aggregates.Minimum(aggregation)[source]¶
Bases: Extremum
- Parameters: aggregation (CompValue)
- class rdflib.plugins.sparql.aggregates.Sample(aggregation)[source]¶
Bases: Accumulator
Takes the first eligible value.
- update(row, aggregator)[source]¶
- Parameters: row (FrozenBindings), aggregator (Aggregator)
- class rdflib.plugins.sparql.aggregates.Sum(aggregation)[source]¶
Bases: Accumulator
- Parameters: aggregation (CompValue)
- update(row, aggregator)[source]¶
- Parameters: row (FrozenBindings), aggregator (Aggregator)
rdflib.plugins.sparql.algebra module¶
- rdflib.plugins.sparql.algebra.BGP(triples=None)[source]¶
- Parameters: triples (Optional[List[Tuple[Identifier, Identifier, Identifier]]])
- exception rdflib.plugins.sparql.algebra.ExpressionNotCoveredException[source]¶
Bases: Exception
- rdflib.plugins.sparql.algebra.Graph(term, graph)[source]¶
- Parameters: term (Identifier), graph (CompValue)
- exception rdflib.plugins.sparql.algebra.StopTraversal(rv)[source]¶
Bases: Exception
- Parameters: rv (bool)
- rdflib.plugins.sparql.algebra.analyse(n, children)[source]¶
Some things can be lazily joined. This propagates that information up the tree and sets lazy flags for all joins.
- rdflib.plugins.sparql.algebra.collectAndRemoveFilters(parts)[source]¶
FILTER expressions apply to the whole group graph pattern in which they appear.
- rdflib.plugins.sparql.algebra.reorderTriples(l_)[source]¶
Reorder triple patterns so that we execute the ones with the most bindings first.
- Parameters: l_ (Iterable[Tuple[Identifier, Identifier, Identifier]])
- rdflib.plugins.sparql.algebra.translateAlgebra(query_algebra)[source]¶
Translates a SPARQL 1.1 algebra tree into the corresponding query string.
- rdflib.plugins.sparql.algebra.translateExists(e)[source]¶
Translate the graph pattern used by EXISTS and NOT EXISTS. See http://www.w3.org/TR/sparql11-query/#sparqlCollectFilters
- rdflib.plugins.sparql.algebra.translateQuads(quads)[source]¶
- Parameters: quads (CompValue)
- Return type: Tuple[List[Tuple[Identifier, Identifier, Identifier]], DefaultDict[str, List[Tuple[Identifier, Identifier, Identifier]]]]
- rdflib.plugins.sparql.algebra.translateQuery(q, base=None, initNs=None)[source]¶
Translate a query parse-tree to a SPARQL Algebra Expression.
Returns a rdflib.plugins.sparql.sparql.Query object.
- rdflib.plugins.sparql.algebra.translateUpdate(q, base=None, initNs=None)[source]¶
Returns a list of SPARQL Update Algebra expressions
- rdflib.plugins.sparql.algebra.traverse(tree, visitPre=<function <lambda>>, visitPost=<function <lambda>>, complete=None)[source]¶
Traverse tree, visiting each node with the visit functions. A visit function may raise StopTraversal to stop the traversal. If complete is not None, it is returned on complete traversal; otherwise the transformed tree is returned.
- rdflib.plugins.sparql.algebra.triples(l)[source]¶
- Parameters: l (Union[List[List[Identifier]], List[Tuple[Identifier, Identifier, Identifier]]])
rdflib.plugins.sparql.datatypes module¶
rdflib.plugins.sparql.evaluate module¶
- rdflib.plugins.sparql.evaluate.evalAggregateJoin(ctx, agg)[source]¶
- Parameters: ctx (QueryContext), agg (CompValue)
- rdflib.plugins.sparql.evaluate.evalBGP(ctx, bgp)[source]¶
A basic graph pattern
- Parameters: ctx (QueryContext), bgp (List[Tuple[Identifier, Identifier, Identifier]])
- rdflib.plugins.sparql.evaluate.evalDistinct(ctx, part)[source]¶
- Parameters: ctx (QueryContext), part (CompValue)
- rdflib.plugins.sparql.evaluate.evalExtend(ctx, extend)[source]¶
- Parameters: ctx (QueryContext), extend (CompValue)
- rdflib.plugins.sparql.evaluate.evalFilter(ctx, part)[source]¶
- Parameters: ctx (QueryContext), part (CompValue)
- rdflib.plugins.sparql.evaluate.evalGraph(ctx, part)[source]¶
- Parameters: ctx (QueryContext), part (CompValue)
- rdflib.plugins.sparql.evaluate.evalGroup(ctx, group)[source]¶
http://www.w3.org/TR/sparql11-query/#defn_algGroup
- Parameters: ctx (QueryContext), group (CompValue)
- rdflib.plugins.sparql.evaluate.evalJoin(ctx, join)[source]¶
- Parameters: ctx (QueryContext), join (CompValue)
- rdflib.plugins.sparql.evaluate.evalLazyJoin(ctx, join)[source]¶
A lazy join pushes the variables bound in the first part into the second part, doing the join implicitly and hopefully evaluating far fewer triples.
- Parameters: ctx (QueryContext), join (CompValue)
- rdflib.plugins.sparql.evaluate.evalLeftJoin(ctx, join)[source]¶
- Parameters: ctx (QueryContext), join (CompValue)
- rdflib.plugins.sparql.evaluate.evalMinus(ctx, minus)[source]¶
- Parameters: ctx (QueryContext), minus (CompValue)
- rdflib.plugins.sparql.evaluate.evalMultiset(ctx, part)[source]¶
- Parameters: ctx (QueryContext), part (CompValue)
- rdflib.plugins.sparql.evaluate.evalOrderBy(ctx, part)[source]¶
- Parameters: ctx (QueryContext), part (CompValue)
- rdflib.plugins.sparql.evaluate.evalPart(ctx, part)[source]¶
- Parameters: ctx (QueryContext), part (CompValue)
- rdflib.plugins.sparql.evaluate.evalProject(ctx, project)[source]¶
- Parameters: ctx (QueryContext), project (CompValue)
- rdflib.plugins.sparql.evaluate.evalQuery(graph, query, initBindings=None, base=None)[source]¶
Caution
This method can access remote network endpoints; for example, query processing will attempt to contact endpoints named in SERVICE directives. When processing untrusted or potentially malicious queries, take measures to restrict network and file access.
For information on available security measures, see the RDFLib Security Considerations documentation.
- rdflib.plugins.sparql.evaluate.evalReduced(ctx, part)[source]¶
Apply REDUCED to the result.
REDUCED is not as strict as DISTINCT, but if the incoming rows were sorted it should produce the same result with limited extra memory and time per incoming row.
- Parameters: ctx (QueryContext), part (CompValue)
- rdflib.plugins.sparql.evaluate.evalSelectQuery(ctx, query)[source]¶
- Parameters: ctx (QueryContext), query (CompValue)
- Return type: Mapping[str, Union[str, List[Variable], Iterable[FrozenDict]]]
- rdflib.plugins.sparql.evaluate.evalServiceQuery(ctx, part)[source]¶
- Parameters: ctx (QueryContext), part (CompValue)
- rdflib.plugins.sparql.evaluate.evalSlice(ctx, slice)[source]¶
- Parameters: ctx (QueryContext), slice (CompValue)
- rdflib.plugins.sparql.evaluate.evalUnion(ctx, union)[source]¶
- Parameters: ctx (QueryContext), union (CompValue)
- rdflib.plugins.sparql.evaluate.evalValues(ctx, part)[source]¶
- Parameters: ctx (QueryContext), part (CompValue)
rdflib.plugins.sparql.evalutils module¶
rdflib.plugins.sparql.operators module¶
- rdflib.plugins.sparql.operators.AdditiveExpression(e, ctx)[source]¶
- Parameters: e (Expr), ctx (Union[QueryContext, FrozenBindings])
- rdflib.plugins.sparql.operators.Builtin_BNODE(expr, ctx)[source]¶
http://www.w3.org/TR/sparql11-query/#func-bnode
- rdflib.plugins.sparql.operators.Builtin_COALESCE(expr, ctx)[source]¶
http://www.w3.org/TR/sparql11-query/#func-coalesce
- Parameters: expr (Expr)
- rdflib.plugins.sparql.operators.Builtin_EXISTS(e, ctx)[source]¶
- Parameters: e (Expr), ctx (FrozenBindings)
- rdflib.plugins.sparql.operators.Builtin_IF(expr, ctx)[source]¶
http://www.w3.org/TR/sparql11-query/#func-if
- Parameters: expr (Expr)
- rdflib.plugins.sparql.operators.Builtin_IRI(expr, ctx)[source]¶
http://www.w3.org/TR/sparql11-query/#func-iri
- Parameters: expr (Expr), ctx (FrozenBindings)
- rdflib.plugins.sparql.operators.Builtin_LANG(e, ctx)[source]¶
http://www.w3.org/TR/sparql11-query/#func-lang
Returns the language tag of ltrl, if it has one. It returns “” if ltrl has no language tag. Note that the RDF data model does not include literals with an empty language tag.
- rdflib.plugins.sparql.operators.Builtin_REGEX(expr, ctx)[source]¶
http://www.w3.org/TR/sparql11-query/#func-regex
Invokes the XPath fn:matches function to match text against a regular expression pattern. The regular expression language is defined in XQuery 1.0 and XPath 2.0 Functions and Operators, section 7.6.1 Regular Expression Syntax.
- rdflib.plugins.sparql.operators.Builtin_STRUUID(expr, ctx)[source]¶
http://www.w3.org/TR/sparql11-query/#func-struuid
- rdflib.plugins.sparql.operators.Builtin_isBLANK(expr, ctx)[source]¶
- Parameters: expr (Expr), ctx (FrozenBindings)
- rdflib.plugins.sparql.operators.ConditionalAndExpression(e, ctx)[source]¶
- Parameters: e (Expr), ctx (Union[QueryContext, FrozenBindings])
- rdflib.plugins.sparql.operators.ConditionalOrExpression(e, ctx)[source]¶
- Parameters: e (Expr), ctx (Union[QueryContext, FrozenBindings])
- rdflib.plugins.sparql.operators.EBV(rt)[source]¶
Effective Boolean Value (EBV)
If the argument is a typed literal with a datatype of xsd:boolean, the EBV is the value of that argument.
If the argument is a plain literal or a typed literal with a datatype of xsd:string, the EBV is false if the operand value has zero length; otherwise the EBV is true.
If the argument is a numeric type or a typed literal with a datatype derived from a numeric type, the EBV is false if the operand value is NaN or is numerically equal to zero; otherwise the EBV is true.
All other arguments, including unbound arguments, produce a type error.
- Parameters: rt (Union[Identifier, SPARQLError, Expr])
- rdflib.plugins.sparql.operators.Function(e, ctx)[source]¶
Custom functions and casts
- Parameters: e (Expr), ctx (FrozenBindings)
- rdflib.plugins.sparql.operators.MultiplicativeExpression(e, ctx)[source]¶
- Parameters: e (Expr), ctx (Union[QueryContext, FrozenBindings])
- rdflib.plugins.sparql.operators.RelationalExpression(e, ctx)[source]¶
- Parameters: e (Expr), ctx (Union[QueryContext, FrozenBindings])
- rdflib.plugins.sparql.operators.UnaryMinus(expr, ctx)[source]¶
- Parameters: expr (Expr), ctx (FrozenBindings)
- rdflib.plugins.sparql.operators.UnaryNot(expr, ctx)[source]¶
- Parameters: expr (Expr), ctx (FrozenBindings)
- rdflib.plugins.sparql.operators.UnaryPlus(expr, ctx)[source]¶
- Parameters: expr (Expr), ctx (FrozenBindings)
- rdflib.plugins.sparql.operators.calculateDuration(obj1, obj2)[source]¶
Returns the duration Literal between two datetime objects.
- rdflib.plugins.sparql.operators.calculateFinalDateTime(obj1, dt1, obj2, dt2, operation)[source]¶
Calculates the final dateTime/date/time resulting from addition or subtraction of a duration/dayTimeDuration/yearMonthDuration.
- rdflib.plugins.sparql.operators.custom_function(uri, override=False, raw=False)[source]¶
Decorator version of register_custom_function().
- rdflib.plugins.sparql.operators.dateTimeObjects(expr)[source]¶
Returns a datetime/date/time/duration/dayTimeDuration/yearMonthDuration Python object from a literal.
- rdflib.plugins.sparql.operators.default_cast(e, ctx)[source]¶
- Parameters: e (Expr), ctx (FrozenBindings)
- rdflib.plugins.sparql.operators.isCompatibleDateTimeDatatype(obj1, dt1, obj2, dt2)[source]¶
Returns a boolean indicating whether the first object is compatible with the operation (+/-) over the second object.
- rdflib.plugins.sparql.operators.numeric(expr)[source]¶
Returns a number from a literal, or raises TypeError. See http://www.w3.org/TR/xpath20/#promotion
- rdflib.plugins.sparql.operators.register_custom_function(uri, func, override=False, raw=False)[source]¶
Register a custom SPARQL function.
By default, the function will be passed the RDF terms in the argument list. If raw is True, the function will be passed an Expression and a Context.
The function must return an RDF term, or raise a SPARQLError.
- rdflib.plugins.sparql.operators.string(s)[source]¶
Make sure the passed value is a string literal, i.e. a plain literal, an xsd:string literal, or a language-tagged literal.
- rdflib.plugins.sparql.operators.unregister_custom_function(uri, func=None)[source]¶
The func argument is included for compatibility with existing code. A previous implementation checked that the function associated with the given URI was actually func, but this is unnecessary, as the URI should uniquely identify the function.
rdflib.plugins.sparql.parser module¶
SPARQL 1.1 Parser
based on pyparsing
- rdflib.plugins.sparql.parser.expandBNodeTriples(terms)[source]¶
expand [ ?p ?o ] syntax for implicit bnodes
- rdflib.plugins.sparql.parser.expandCollection(terms)[source]¶
expand ( 1 2 3 ) notation for collections
- rdflib.plugins.sparql.parser.expandTriples(terms)[source]¶
Expand ; and , syntax for repeat predicates, subjects
- rdflib.plugins.sparql.parser.expandUnicodeEscapes(q)[source]¶
The syntax of the SPARQL Query Language is expressed over code points in Unicode [UNICODE]. The encoding is always UTF-8 [RFC3629]. Unicode code points may also be expressed using an uXXXX (U+0 to U+FFFF) or UXXXXXXXX syntax (for U+10000 onwards) where X is a hexadecimal digit [0-9A-F]
rdflib.plugins.sparql.parserutils module¶
- class rdflib.plugins.sparql.parserutils.Comp(name, expr)[source]¶
Bases: TokenConverter
A pyparsing token for grouping things together with a label. Any sub-tokens that are not Params will be ignored.
Returns CompValue / Expr objects, depending on whether evalfn is set.
- Parameters: name (str), expr (ParserElement)
- class rdflib.plugins.sparql.parserutils.CompValue(name, **values)[source]¶
Bases: OrderedDict
The result of parsing a Comp. Any included Params are available as dict keys or as attributes.
- Parameters: name (str)
- class rdflib.plugins.sparql.parserutils.Expr(name, evalfn=None, **values)[source]¶
Bases: CompValue
A CompValue that is evaluatable.
- class rdflib.plugins.sparql.parserutils.Param(name, expr, isList=False)[source]¶
Bases: TokenConverter
A pyparsing token for labelling a part of the parse-tree. If isList is true, repeated occurrences of a ParamList have their values merged into a list.
- class rdflib.plugins.sparql.parserutils.ParamList(name, expr)[source]¶
Bases: Param
A shortcut for a Param with isList=True.
- Parameters: name (str)
- class rdflib.plugins.sparql.parserutils.ParamValue(name, tokenList, isList)[source]¶
Bases: object
The result of parsing a Param. This just keeps the name/value; all the cleverness is in the CompValue.
- rdflib.plugins.sparql.parserutils.value(ctx, val, variables=False, errors=False)[source]¶
Utility function for evaluating something.
Variables will be looked up in the context. Normally an unbound variable is an error; set variables=True to return unbound variables. Normally an error is raised; set errors=True to return the error instead.
- Parameters: ctx (FrozenBindings), val (Any), variables (bool), errors (bool)
rdflib.plugins.sparql.processor module¶
Code for tying the SPARQL engine into RDFLib.
These should be automatically registered with RDFLib.
- class rdflib.plugins.sparql.processor.SPARQLProcessor(graph)[source]¶
Bases: Processor
- query(strOrQuery, initBindings=None, initNs=None, base=None, DEBUG=False)[source]¶
Evaluate a query with the given initial bindings, and initial namespaces. The given base is used to resolve relative URIs in the query and will be overridden by any BASE given in the query.
Caution
This method can access remote network endpoints; for example, query processing will attempt to contact endpoints named in SERVICE directives. When processing untrusted or potentially malicious queries, take measures to restrict network and file access.
For information on available security measures, see the RDFLib Security Considerations documentation.
- class rdflib.plugins.sparql.processor.SPARQLResult(res)[source]¶
Bases: Result
- class rdflib.plugins.sparql.processor.SPARQLUpdateProcessor(graph)[source]¶
Bases: UpdateProcessor
- update(strOrQuery, initBindings=None, initNs=None)[source]¶
Caution
This method can access remote network endpoints; for example, update processing will attempt to contact endpoints named in SERVICE directives. When processing untrusted or potentially malicious updates, take measures to restrict network and file access.
For information on available security measures, see the RDFLib Security Considerations documentation.
- rdflib.plugins.sparql.processor.prepareQuery(queryString, initNs=None, base=None)[source]¶
Parse and translate a SPARQL Query
- rdflib.plugins.sparql.processor.prepareUpdate(updateString, initNs=None, base=None)[source]¶
Parse and translate a SPARQL Update
rdflib.plugins.sparql.sparql module¶
- exception rdflib.plugins.sparql.sparql.AlreadyBound[source]¶
Bases: SPARQLError
Raised when trying to bind a variable that is already bound.
- class rdflib.plugins.sparql.sparql.Bindings(outer=None, d=[])[source]¶
Bases: MutableMapping
A single level of a stack of variable-value bindings. Each dict keeps a reference to the dict below it; any failed lookup is propagated back.
In Python 3.3+ this could be a collections.ChainMap.
- class rdflib.plugins.sparql.sparql.FrozenBindings(ctx, *args, **kwargs)[source]¶
Bases: FrozenDict
- Parameters: ctx (QueryContext)
- __getitem__(key)[source]¶
- Parameters: key (Union[Identifier, str])
- __init__(ctx, *args, **kwargs)[source]¶
- Parameters: ctx (QueryContext)
- property bnodes: Mapping[Identifier, BNode]¶
- forget(before, _except=None)[source]¶
Return a frozen dict containing only the bindings made in self since before.
- Parameters: before (QueryContext)
- merge(other)[source]¶
- Parameters: other (Mapping[Identifier, Identifier])
- class rdflib.plugins.sparql.sparql.FrozenDict(*args, **kwargs)[source]¶
Bases: Mapping
An immutable, hashable dict.
Taken from http://stackoverflow.com/a/2704866/81121
- __getitem__(key)[source]¶
- Parameters: key (Identifier)
- compatible(other)[source]¶
- Parameters: other (Mapping[Identifier, Identifier])
- disjointDomain(other)[source]¶
- Parameters: other (Mapping[Identifier, Identifier])
- merge(other)[source]¶
- Parameters: other (Mapping[Identifier, Identifier])
- exception rdflib.plugins.sparql.sparql.NotBoundError(msg=None)[source]¶
Bases: SPARQLError
- class rdflib.plugins.sparql.sparql.Prologue[source]¶
Bases: object
A class for holding prefix bindings and base URI information.
- class rdflib.plugins.sparql.sparql.Query(prologue, algebra)[source]¶
Bases: object
A parsed and translated query.
- class rdflib.plugins.sparql.sparql.QueryContext(graph=None, bindings=None, initBindings=None)[source]¶
Bases: object
Query context, passed along when evaluating the query.
- __dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.sparql', '__doc__': '\n Query context - passed along when evaluating the query\n ', '__init__': <function QueryContext.__init__>, 'now': <property object>, 'clone': <function QueryContext.clone>, 'dataset': <property object>, 'load': <function QueryContext.load>, '__getitem__': <function QueryContext.__getitem__>, 'get': <function QueryContext.get>, 'solution': <function QueryContext.solution>, '__setitem__': <function QueryContext.__setitem__>, 'pushGraph': <function QueryContext.pushGraph>, 'push': <function QueryContext.push>, 'clean': <function QueryContext.clean>, 'thaw': <function QueryContext.thaw>, '__dict__': <attribute '__dict__' of 'QueryContext' objects>, '__weakref__': <attribute '__weakref__' of 'QueryContext' objects>, '__annotations__': {'graph': 'Optional[Graph]', '_dataset': 'Optional[ConjunctiveGraph]', 'prologue': 'Optional[Prologue]', '_now': 'Optional[datetime.datetime]', 'bnodes': 't.MutableMapping[Identifier, BNode]'}})¶
- __module__ = 'rdflib.plugins.sparql.sparql'¶
- __weakref__¶
list of weak references to the object (if defined)
- clone(bindings=None)[source]¶
- property dataset: ConjunctiveGraph¶
current dataset
- load(source, default=False, **kwargs)[source]¶
Load data from the source into the query context's dataset.
- Parameters:
source (URIRef) – The source to load from.
default (bool) – If True, triples from the source will be added to the default graph; otherwise they will be loaded into a graph with the source URI as its name.
kwargs (Any) – Keyword arguments to pass to rdflib.graph.Graph.parse().
- Return type:
- solution(vars=None)[source]¶
Return a static copy of the current variable bindings as dict
- Parameters:
- Return type:
- thaw(frozenbindings)[source]¶
Create a new read/write query context from the given solution
- Parameters:
frozenbindings (FrozenBindings) –
- Return type:
- exception rdflib.plugins.sparql.sparql.SPARQLError(msg=None)[source]¶
Bases:
Exception
- __annotations__ = {}¶
- __module__ = 'rdflib.plugins.sparql.sparql'¶
- __weakref__¶
list of weak references to the object (if defined)
- exception rdflib.plugins.sparql.sparql.SPARQLTypeError(msg)[source]¶
Bases:
SPARQLError
- __annotations__ = {}¶
- __module__ = 'rdflib.plugins.sparql.sparql'¶
- class rdflib.plugins.sparql.sparql.Update(prologue, algebra)[source]¶
Bases:
object
A parsed and translated update
- __dict__ = mappingproxy({'__module__': 'rdflib.plugins.sparql.sparql', '__doc__': '\n A parsed and translated update\n ', '__init__': <function Update.__init__>, '__dict__': <attribute '__dict__' of 'Update' objects>, '__weakref__': <attribute '__weakref__' of 'Update' objects>, '__annotations__': {'_original_args': 'Tuple[str, Mapping[str, str], Optional[str]]'}})¶
- __module__ = 'rdflib.plugins.sparql.sparql'¶
- __weakref__¶
list of weak references to the object (if defined)
rdflib.plugins.sparql.update module¶
Code for carrying out Update Operations
- rdflib.plugins.sparql.update.evalAdd(ctx, u)[source]¶
add all triples from src to dst
http://www.w3.org/TR/sparql11-update/#add
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalClear(ctx, u)[source]¶
http://www.w3.org/TR/sparql11-update/#clear
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalCopy(ctx, u)[source]¶
remove all triples from dst, then add all triples from src to dst
http://www.w3.org/TR/sparql11-update/#copy
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalCreate(ctx, u)[source]¶
http://www.w3.org/TR/sparql11-update/#create
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalDeleteData(ctx, u)[source]¶
http://www.w3.org/TR/sparql11-update/#deleteData
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalDeleteWhere(ctx, u)[source]¶
http://www.w3.org/TR/sparql11-update/#deleteWhere
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalDrop(ctx, u)[source]¶
http://www.w3.org/TR/sparql11-update/#drop
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalInsertData(ctx, u)[source]¶
http://www.w3.org/TR/sparql11-update/#insertData
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalLoad(ctx, u)[source]¶
http://www.w3.org/TR/sparql11-update/#load
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalModify(ctx, u)[source]¶
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalMove(ctx, u)[source]¶
remove all triples from dst, add all triples from src to dst, then remove all triples from src
http://www.w3.org/TR/sparql11-update/#move
- Parameters:
ctx (QueryContext) –
u (CompValue) –
- Return type:
- rdflib.plugins.sparql.update.evalUpdate(graph, update, initBindings=None)[source]¶
http://www.w3.org/TR/sparql11-update/#updateLanguage
‘A request is a sequence of operations […] Implementations MUST ensure that operations of a single request are executed in a fashion that guarantees the same effects as executing them in lexical order.
Operations all result either in success or failure.
If multiple operations are present in a single request, then a result of failure from any operation MUST abort the sequence of operations, causing the subsequent operations to be ignored.’
This will return None on success and raise an exception on error.
Caution
This method can access indirectly requested network endpoints; for example, query processing will attempt to access network endpoints specified in SERVICE directives. When processing untrusted or potentially malicious queries, measures should be taken to restrict network and file access.
For information on available security measures, see the RDFLib Security Considerations documentation.
Module contents¶
SPARQL implementation for RDFLib
New in version 4.0.
- rdflib.plugins.sparql.CUSTOM_EVALS = {}¶
Custom evaluation functions
These must be functions taking (ctx, part) that raise NotImplementedError if they cannot handle a certain part.
- rdflib.plugins.sparql.prepareQuery(queryString, initNs=None, base=None)[source]¶
Parse and translate a SPARQL Query
- rdflib.plugins.sparql.prepareUpdate(updateString, initNs=None, base=None)[source]¶
Parse and translate a SPARQL Update