A Python toolkit for orchestrating WordLift imports: fetch URLs from sitemaps, Google Sheets, or explicit lists, filter out already imported pages, enqueue search console jobs, push RDF graphs, and call the WordLift APIs to import web pages.
- URL sources: XML sitemaps (with optional regex filtering), Google Sheets (`url` column), or Python lists.
- Change detection: skips URLs that are already imported unless `OVERWRITE` is enabled; re-imports when `lastmod` is newer (see the sketch after this list).
- Web page imports: sends URLs to WordLift with embedding requests, output types, retry logic, and pluggable callbacks.
- Search Console refresh: triggers analytics imports when top queries are stale.
- Graph templates: renders `.ttl.liquid` templates under `data/templates` with account data and uploads the resulting RDF graphs.
- Extensible: override protocols via `WORDLIFT_OVERRIDE_DIR` without changing the library code.
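A minimal sketch of the change-detection rule described above, written as a standalone helper. The helper and variable names are illustrative, not SDK APIs; only the `OVERWRITE` flag and the `lastmod` semantics come from the list above.

```python
from datetime import datetime

def should_import(lastmod: datetime | None,
                  imported_lastmod: datetime | None,
                  overwrite: bool) -> bool:
    # Never imported before: always import.
    if imported_lastmod is None:
        return True
    # OVERWRITE forces a re-import even when nothing changed.
    if overwrite:
        return True
    # Otherwise re-import only when the sitemap lastmod is newer.
    return lastmod is not None and lastmod > imported_lastmod

# Previously imported on 2024-01-01, sitemap reports a newer modification date.
print(should_import(datetime(2024, 6, 1), datetime(2024, 1, 1), overwrite=False))  # True
```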
```bash
pip install wordlift-sdk
# or
poetry add wordlift-sdk
```

Requires Python 3.10–3.14.
Settings are read in order: `config/default.py` (or a custom path you pass to `ConfigurationProvider.create`), environment variables, then (when available) Google Colab userdata.
Common options:
- `WORDLIFT_KEY` (required): WordLift API key.
- `API_URL`: WordLift API base URL, defaults to `https://api.wordlift.io`.
- `SITEMAP_URL`: XML sitemap to crawl; `SITEMAP_URL_PATTERN` is an optional regex to filter URLs (see the sketch after this list).
- `SHEETS_URL`, `SHEETS_NAME`, `SHEETS_SERVICE_ACCOUNT`: use a Google Sheet as the source; the service account points to a credentials file.
- `URLS`: list of URLs (e.g., `["https://example.com/a", "https://example.com/b"]`).
- `OVERWRITE`: re-import URLs even if already present (default `False`).
- `WEB_PAGE_IMPORT_WRITE_STRATEGY`: WordLift write strategy (default `createOrUpdateModel`).
- `EMBEDDING_PROPERTIES`: list of schema properties to embed.
- `WEB_PAGE_TYPES`: output schema types, defaults to `["http://schema.org/Article"]`.
- `GOOGLE_SEARCH_CONSOLE`: enable/disable the Search Console handler (default `True`).
- `CONCURRENCY`: max concurrent handlers, defaults to `min(cpu_count(), 4)`.
- `WORDLIFT_OVERRIDE_DIR`: folder containing protocol overrides (default `app/overrides`).
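For illustration, this is how a `SITEMAP_URL_PATTERN`-style regex can filter URLs pulled from a sitemap. It is not the SDK's internal implementation, and whether the SDK anchors the match with `re.match` or `re.search` is an assumption.

```python
import re

SITEMAP_URL_PATTERN = r"^https://example\.com/article/.*$"

sitemap_urls = [
    "https://example.com/article/launch-announcement",
    "https://example.com/about",
]

pattern = re.compile(SITEMAP_URL_PATTERN)
filtered = [url for url in sitemap_urls if pattern.match(url)]
print(filtered)  # ['https://example.com/article/launch-announcement']
```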
The SDK enforces SSL verification. On macOS it uses the system CA bundle when available and falls back to certifi if needed. You can override the CA bundle path explicitly in code:
```python
from pathlib import Path

from wordlift_sdk.client import ClientConfigurationFactory
from wordlift_sdk.structured_data import CreateRequest

factory = ClientConfigurationFactory(
    key="your-api-key",
    api_url="https://api.wordlift.io",
    ssl_ca_cert="/path/to/ca.pem",
)
configuration = factory.create()

request = CreateRequest(
    url="https://example.com",
    target_type="Thing",
    output_dir=Path("."),
    base_name="structured-data",
    jsonld_path=None,
    yarrml_path=None,
    api_key="your-api-key",
    base_url=None,
    ssl_ca_cert="/path/to/ca.pem",
    debug=False,
    headed=False,
    timeout_ms=30000,
    max_retries=2,
    quality_check=True,
    max_xhtml_chars=40000,
    max_text_node_chars=400,
    max_nesting_depth=2,
    verbose=True,
    validate=True,
    wait_until="networkidle",
)
```

Note: `target_type` is used for agent guidance and validation shape selection. The YARRRML materialization pipeline now preserves authored mapping semantics and does not coerce nodes to Review/Thing.
Example `config/default.py`:

```python
WORDLIFT_KEY = "your-api-key"
SITEMAP_URL = "https://example.com/sitemap.xml"
SITEMAP_URL_PATTERN = r"^https://example.com/article/.*$"
GOOGLE_SEARCH_CONSOLE = True
WEB_PAGE_TYPES = ["http://schema.org/Article"]
EMBEDDING_PROPERTIES = [
    "http://schema.org/headline",
    "http://schema.org/abstract",
    "http://schema.org/text",
]
```

Run the default import workflow:

```python
import asyncio

from wordlift_sdk import run_kg_import_workflow

if __name__ == "__main__":
    asyncio.run(run_kg_import_workflow())
```

The workflow:
- Renders and uploads RDF graphs from `data/templates/*.ttl.liquid` using account info.
- Builds the configured URL source and filters out unchanged URLs (unless `OVERWRITE`).
- Sends each URL to WordLift for import with retries and optional Search Console refresh.
You can build components yourself when you need more control:
```python
import asyncio

from wordlift_sdk.container.application_container import ApplicationContainer


async def main():
    container = ApplicationContainer()
    workflow = await container.create_kg_import_workflow()
    await workflow.run()


asyncio.run(main())
```

Override the web page import callback by placing a `web_page_import_protocol.py` with a `WebPageImportProtocol` class under `WORDLIFT_OVERRIDE_DIR` (default `app/overrides`). The callback receives a `WebPageImportResponse` and can push to `graph_queue` or `entity_patch_queue`.
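A minimal sketch of such an override. The method name `handle`, its signature, and the queue item shape are assumptions; only the file name, class name, `WebPageImportResponse`, and the two queues come from the paragraph above.

```python
# app/overrides/web_page_import_protocol.py
class WebPageImportProtocol:
    async def handle(self, response, graph_queue, entity_patch_queue):
        # `response` is a WebPageImportResponse; inspect it and enqueue follow-up work.
        if response is None:
            return
        # Hypothetical example: queue a patch for the imported entity IRI.
        await entity_patch_queue.put({"id": response.id})
```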
Add `.ttl.liquid` files under `data/templates`. Templates render with account fields available (e.g., `{{ account.dataset_uri }}`) and are uploaded before URL handling begins.
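For orientation, a template might look like the following. The prefix, subject path, and properties are made up for the example; only the `{{ account.dataset_uri }}` field comes from the paragraph above.

```
@prefix schema: <http://schema.org/> .

<{{ account.dataset_uri }}/organization> a schema:Organization ;
    schema:name "Example Organization" ;
    schema:url <https://example.com/> .
```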
SHACL validation utilities and generated Google Search Gallery shapes are included. When a feature includes both container types (for example ItemList, BreadcrumbList, QAPage, FAQPage, Quiz, ProfilePage, Product, Recipe, Course, Review) and their contained types (ListItem, Question, Answer, Comment, Offer, AggregateOffer, HowToStep, Person, Organization, Rating, AggregateRating, Review, ItemList), the generator scopes the contained constraints under the container properties to avoid enforcing them on unrelated nodes. For Product snippets, offers is scoped as Offer or AggregateOffer, matching Google requirements. The generator also captures "one of" requirements expressed in prose lists and emits sh:or constraints so any listed property satisfies the requirement. Schema.org grammar checks are intentionally permissive and accept URL/text literals for all properties.
Use `wordlift_sdk.validation.validate_jsonld_from_url` to render a URL with Playwright, extract JSON-LD fragments, and validate them against SHACL shapes.
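A sketch of calling it; the exact signature, whether the call is async, and the shape of the returned report are assumptions — only the import path comes from the line above.

```python
import asyncio

from wordlift_sdk.validation import validate_jsonld_from_url


async def main():
    # Renders the page, extracts JSON-LD fragments, validates them against SHACL shapes.
    report = await validate_jsonld_from_url("https://example.com/article/launch-announcement")
    print(report)


asyncio.run(main())
```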
Playwright is required for URL rendering. After installing dependencies, install the browser binaries:
```bash
poetry run playwright install
```

YARRRML mappings are now executed directly by morph-kgc's native YARRRML support. There is no JS transpile step via yarrrml-parser, and no temporary `mapping.ttl` conversion artifact in the materialization pipeline.
Customer-authored mappings can use runtime tokens:
- `__XHTML__`: the local XHTML source path used by materialization.
- `__URL__`: canonical page URL injection.
- `__ID__`: callback/import entity IRI injection.
`__URL__` resolution order is:

- `response.web_page.url`
- the explicit `url` argument passed to materialization

`__ID__` resolution source is `response.id`.
When a token cannot be resolved (see the sketch after this list):

- strict mode (`strict_url_token=True`): fail fast
- default non-strict mode: warn and keep `__URL__` unchanged
- `__ID__`: fail closed with an explicit error
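The rules above, summarized as a small sketch. The function names and structure are illustrative, not SDK APIs; only `strict_url_token`, `response.web_page.url`, and `response.id` come from the docs.

```python
import warnings


def resolve_url_token(response, url=None, strict_url_token=False):
    # __URL__ resolution order: response.web_page.url, then the explicit url argument.
    resolved = getattr(getattr(response, "web_page", None), "url", None) or url
    if resolved is None:
        if strict_url_token:
            raise ValueError("__URL__ could not be resolved")  # strict mode: fail fast
        warnings.warn("__URL__ unresolved; leaving the token unchanged")
        return "__URL__"
    return resolved


def resolve_id_token(response):
    # __ID__ resolves from response.id and fails closed when missing.
    entity_id = getattr(response, "id", None)
    if entity_id is None:
        raise ValueError("__ID__ could not be resolved")
    return entity_id
```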
Recommendation: use `__ID__` in subject/object IRI positions instead of temporary hardcoded page subjects such as `{{ dataset_uri }}/web-pages/page`.
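To illustrate the recommendation, a mapping fragment might place the tokens like this. The YARRRML shorthand is standard, but the source, prefixes, and predicates are made up for the example and are not an SDK contract.

```yaml
prefixes:
  schema: http://schema.org/

mappings:
  webpage:
    sources:
      - ['__XHTML__~xpath', '/html']
    s: __ID__                      # imported entity IRI, not a hardcoded page subject
    po:
      - [a, schema:WebPage]
      - [schema:url, __URL__~iri]  # canonical page URL injected at runtime
```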
Compatibility note: morph-kgc native YARRRML behavior may differ from legacy
JS parser behavior for some advanced XPath/function constructs.
The SDK now includes a profile-driven cloud mapping module under `wordlift_sdk.kg_build`.

- Public module import: `wordlift_sdk.kg_build`
- Postprocessor runner entrypoint: `python -m wordlift_sdk.kg_build.postprocessor_runner`
- URL handling parity with the legacy workflow: `WebPageImportUrlHandler` is always enabled; `SearchConsoleUrlHandler` is enabled when `GOOGLE_SEARCH_CONSOLE=True` (default).
- Postprocessor manifests are loaded from `profiles/_base/postprocessors.toml` and `profiles/<profile>/postprocessors.toml`.
- Execution is manifest-based only (hard cutover): no legacy `.py` or `*.command.toml` discovery.
```bash
poetry install --with dev
poetry run pytest
```

- Google Sheets Lookup: Utility for O(1) lookups from Google Sheets.
- Web Page Import: Configure fetch options, proxies, and JS rendering.
- Structured Data: Structured data architecture and pipeline behavior.
- Customer Project Contract: Profile repo contract and manifest-based postprocessor runtime.
- Structured Data Spec: Internal technical details for runtime placeholder resolution.
- Profile Config Spec: Profile inheritance, environment interpolation, and manifest postprocessor contract.
- Pipeline Architecture Spec: `kg_build` runtime flow and callback architecture.
- Migration Guide: Breaking changes for the structured data refactor.
- Changelog: Versioned release notes.