Python SDK Reference
The Keeper class is the main entry point of the keep-skill package. Install it with uv add keep-skill.
```python
from keep import Keeper

kp = Keeper()  # Uses ~/.keep/ by default
kp.put("my note", tags={"project": "myapp"})
results = kp.find("authentication", limit=5)
```
CRUD
put
put(content: Optional[str] = None, *, uri: Optional[str] = None, id: Optional[str] = None, summary: Optional[str] = None, tags: Optional[dict[str, str]] = None, created_at: Optional[str] = None) -> Item
Store or update a note. Provide content for inline text, or uri to fetch from a URL/file.
Smart summary behavior:
- If summary is provided, use it (skips auto-summarization)
- If content is short (≤ max_summary_length), use content verbatim
- For large content, summarization is async (truncated placeholder stored immediately, real summary generated in background)
Update behavior (when id/uri already exists):
- Summary: replaced with the user-provided summary, the content itself (if short), or a newly generated summary
- Tags: merged; existing tags are preserved, and new tags override on key collision. System tags (prefixed with _) are always managed by the system.
Args:
- content: Inline text content to store
- uri: URI to fetch content from (file:// or https://)
- id: Optional custom ID (auto-generated for inline content if None; URI-based notes use URI as ID)
- summary: User-provided summary (skips auto-summarization if given)
- tags: User-provided tags to merge with existing tags
Returns:
- The stored Item with merged tags and new summary
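The summary selection and tag-merge rules above can be sketched in plain Python. This is an illustrative model of the documented behavior, not the library's implementation; the function names and the max_summary_length value are assumptions for the example.

```python
MAX_SUMMARY_LENGTH = 200  # assumed config value, for illustration only

def choose_summary(content, user_summary=None):
    """Mirror the documented summary selection order."""
    if user_summary is not None:
        return user_summary                  # explicit summary skips auto-summarization
    if len(content) <= MAX_SUMMARY_LENGTH:
        return content                       # short content is used verbatim
    return content[:MAX_SUMMARY_LENGTH]      # truncated placeholder until the async summary lands

def merge_tags(existing, new):
    """Existing tags preserved; new tags win on key collision; system tags left alone."""
    merged = dict(existing)
    for key, value in new.items():
        if not key.startswith("_"):          # system tags are always managed by the system
            merged[key] = value
    return merged
```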
get
get(id: str) -> Optional[Item]
Retrieve a specific item by ID.
Reads from document store (canonical), falls back to vector store for legacy data.
Touches accessed_at on successful retrieval.
exists
exists(id: str) -> bool
Check if an item exists in the store.
delete
delete(id: str, *, delete_versions: bool = True) -> bool
Delete an item from both stores.
Args:
- id: Document identifier
- delete_versions: If True, also delete version history
Returns:
- True if item existed and was deleted.
count
count() -> int
Count items in the collection.
Returns count from document store if available, else vector store.
close
close() -> None
Close resources (stores, caches, queues).
Releases model locks (freeing GPU memory) before releasing file locks, ensuring the next process gets a clean GPU.
Context
get_context
get_context(id: str, *, version: int | None = None, similar_limit: int = 3, meta_limit: int = 3, include_similar: bool = True, include_meta: bool = True, include_parts: bool = True, include_versions: bool = True) -> ItemContext | None
Assemble complete display context for a single item in one call.
Returns an ItemContext containing the item itself plus similar items, meta-tag sections, version navigation (prev/next), and parts manifest. Replaces the need to call get_similar_for_display, resolve_meta, get_version_nav, and list_parts individually.
When viewing an older version (offset > 0), the similar, meta, and parts sections are empty.
Args:
- id: Document identifier
- version: Version offset (0 or None = current, 1 = previous, ...)
- similar_limit: Max similar items to include
- meta_limit: Max items per meta-tag section
- include_similar: Whether to resolve similar items
- include_meta: Whether to resolve meta-tag sections
- include_parts: Whether to include parts manifest
- include_versions: Whether to include version navigation
Returns:
- ItemContext if the item exists, None otherwise
REST equivalent: GET /v1/notes/{id}/context, GET /v1/now/context
Search
find
find(query: Optional[str] = None, *, tags: Optional[dict[str, str]] = None, similar_to: Optional[str] = None, fulltext: bool = False, limit: int = 10, since: Optional[str] = None, until: Optional[str] = None, include_self: bool = False, include_hidden: bool = False) -> list[Item]
Unified search: semantic (default), full-text, or similar-to. Exactly one of query or similar_to is required.
Scores are adjusted by recency decay (ACT-R model) - older items have reduced effective relevance unless recently accessed.
Args:
- query: Search text (semantic similarity by default)
- tags: Pre-filter results to items matching these tag key=value pairs
- similar_to: Find items similar to this note ID
- fulltext: Use text matching instead of semantic similarity (only with query)
- limit: Maximum results to return
- since: Only include items updated since (ISO duration like P3D, or date)
- until: Only include items updated before (ISO duration or date)
- include_self: Include the anchor note in similar_to results
- include_hidden: Include system notes (dot-prefix IDs)
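The since/until filters accept either an ISO 8601 duration or a date. A minimal sketch of resolving a day-based duration like P3D against the current time, using only the standard library (the function name is hypothetical, and the real parser may accept richer duration forms than days):

```python
from datetime import datetime, timedelta, timezone
import re

def resolve_since(value):
    """Turn a day duration like 'P3D' into a cutoff datetime, or parse an ISO date."""
    m = re.fullmatch(r"P(\d+)D", value)
    if m:
        # Duration: cutoff is that many days before now
        return datetime.now(timezone.utc) - timedelta(days=int(m.group(1)))
    # Otherwise treat the value as an ISO date/datetime
    return datetime.fromisoformat(value)
```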
get_similar_for_display
get_similar_for_display(id: str, *, limit: int = 3) -> list[Item]
Find similar items for frontmatter display using stored embedding.
Optimized for display: uses stored embedding (no re-embedding), filters to distinct base documents, excludes source document versions.
Args:
- id: ID of item to find similar items for
- limit: Maximum results to return
Returns:
- List of similar items, one per unique base document
Listing & Tags
list_items
list_items(*, prefix: Optional[str] = None, tags: Optional[dict[str, str]] = None, tag_keys: Optional[list[str]] = None, since: Optional[str] = None, until: Optional[str] = None, order_by: str = 'updated', include_hidden: bool = False, include_history: bool = False, limit: int = 10) -> list[Item]
List items with composable filters. All filters are AND'd together.
Args:
- prefix: ID prefix filter (e.g. ".tag/act" matches all IDs starting with ".tag/act")
- tags: Tag key=value filters (all must match)
- tag_keys: Tag key-only filters (any value, all keys must be present)
- since: Only include items updated since (ISO duration like P7D, or date)
- until: Only include items updated before (ISO duration or date)
- order_by: Sort order - "updated" (default) or "accessed"
- include_hidden: Include system notes (dot-prefix IDs)
- include_history: Include archived versions alongside current items
- limit: Maximum results to return (default 10)
Returns:
- List of Items, most recent first
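The AND semantics of the composable filters can be sketched as a plain predicate over an item dict (an illustration of the documented behavior; the field layout and function name are assumptions):

```python
def matches(item, *, prefix=None, tags=None, tag_keys=None):
    """All provided filters must hold (AND semantics), per the doc."""
    if prefix is not None and not item["id"].startswith(prefix):
        return False
    if tags is not None and any(item["tags"].get(k) != v for k, v in tags.items()):
        return False  # every key=value pair must match
    if tag_keys is not None and any(k not in item["tags"] for k in tag_keys):
        return False  # every key must be present, any value
    return True
```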
tag
tag(id: str, tags: Optional[dict[str, str]] = None) -> Optional[Item]
Update tags on an existing document without re-processing.
Does NOT re-fetch, re-embed, or re-summarize. Only updates tags.
Tag behavior:
- Provided tags are merged with existing user tags
- Empty string value ("") deletes that tag
- System tags (_prefixed) cannot be modified via this method
Args:
- id: Document identifier
- tags: Tags to add/update/delete (empty string = delete)
Returns:
- Updated Item if found, None if document doesn't exist
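The tag edit rules (merge, empty string deletes, system tags protected) can be sketched as a pure function over tag dicts. This models the documented behavior only; the helper name is invented for the example:

```python
def apply_tag_edits(existing, edits):
    """Merge edits into existing user tags per the documented rules."""
    result = dict(existing)
    for key, value in edits.items():
        if key.startswith("_"):
            continue                  # system tags cannot be modified via tag()
        if value == "":
            result.pop(key, None)     # empty string deletes the tag
        else:
            result[key] = value       # otherwise add or update
    return result
```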
tag_part
tag_part(id: str, part_num: int, tags: dict[str, str]) -> Optional[PartInfo]
Edit tags on a structural part. Parts are otherwise immutable.
Args:
- id: Document identifier
- part_num: Part number (1-indexed)
- tags: Tags to add/update/delete (empty string = delete)
Returns:
- Updated PartInfo if found, None if document or part doesn't exist
list_tags
list_tags(key: Optional[str] = None) -> list[str]
List distinct tag keys or values.
Args:
- key: If provided, list distinct values for this key. If None, list distinct tag keys.
Returns:
- Sorted list of distinct keys or values
Versions
get_version
get_version(id: str, offset: int = 0) -> Optional[Item]
Get a specific version of a document by offset.
Offset semantics:
- 0 = current version
- 1 = previous version
- 2 = two versions ago
Args:
- id: Document identifier
- offset: Version offset (0=current, 1=previous, etc.)
Returns:
- Item if found, None if version doesn't exist
get_version_nav
get_version_nav(id: str, current_version: Optional[int] = None, limit: int = 3) -> dict[str, list[VersionInfo]]
Get version navigation info (prev/next) for display.
Args:
- id: Document identifier
- current_version: The version being viewed (None = current/live version)
- limit: Max previous versions to return when viewing current
Returns:
- Dict with 'prev' and optionally 'next' lists of VersionInfo.
get_version_offset
get_version_offset(item: Item) -> int
Get version offset (0=current, 1=previous, ...) for an item.
Converts the internal version number (1=oldest, 2=next...) to the user-visible offset format (0=current, 1=previous, 2=two-ago...).
Args:
- item: Item to get version offset for
Returns:
- Version offset (0 for current version)
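The conversion between internal version numbers and user-visible offsets is a simple subtraction against the latest version. A sketch, assuming the latest internal version number is available (function name illustrative):

```python
def version_offset(item_version, latest_version):
    """Convert internal numbering (1 = oldest) to user-facing offsets (0 = current)."""
    return latest_version - item_version
```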
list_versions
list_versions(id: str, limit: int = 10) -> list[VersionInfo]
List version history for a document.
Returns versions in reverse chronological order (newest archived first). Does not include the current version.
Args:
- id: Document identifier
- limit: Maximum versions to return
Returns:
- List of VersionInfo, newest archived first
revert
revert(id: str) -> Optional[Item]
Revert to the previous version, or delete if no versions exist.
Returns the restored item, or None if the item was fully deleted.
Meta
resolve_meta
resolve_meta(item_id: str, *, limit_per_doc: int = 3) -> dict[str, list[Item]]
Resolve all .meta/* docs against an item's tags.
Meta-tags define tag-based queries that surface contextually relevant items — open commitments, past learnings, decisions to revisit. Results are ranked by similarity to the current item + recency decay, so the most relevant matches surface first.
Args:
- item_id: ID of the item whose tags provide context
- limit_per_doc: Max results per meta-tag
Returns:
- Dict of {meta_name: [matching Items]}. Empty results omitted.
resolve_inline_meta
resolve_inline_meta(item_id: str, queries: list[dict[str, str]], context_keys: list[str] | None = None, prereq_keys: list[str] | None = None, *, limit: int = 3) -> list[Item]
Resolve an inline meta query against an item's tags.
Like resolve_meta() but with ad-hoc queries instead of persistent .meta/* documents. Queries use the same tag-based syntax.
Args:
- item_id: ID of the item whose tags provide context
- queries: List of tag-match dicts, each {key: value} for AND queries; multiple dicts are OR (union)
- context_keys: Tag keys to expand from the current item's tags
- prereq_keys: Tag keys the current item must have (or return empty)
- limit: Max results
Returns:
- List of matching Items, ranked by similarity + recency.
Now (Intentions)
get_now
get_now() -> Item
Get the current working intentions.
A singleton document representing what you're currently working on. If it doesn't exist, creates one with default content and tags from the bundled system now.md file.
Returns:
- The current intentions Item (never None - auto-creates if missing)
set_now
set_now(content: str, *, tags: Optional[dict[str, str]] = None) -> Item
Set the current working intentions.
Updates the singleton intentions with new content. Uses put() internally with the fixed NOWDOC_ID.
Args:
- content: New content for the current intentions
- tags: Optional additional tags to apply
Returns:
- The updated intentions Item
move
move(name: str, *, source_id: str = 'now', tags: Optional[dict[str, str]] = None, only_current: bool = False) -> Item
Move versions from a source document into a named item.
Moves matching versions (filtered by tags if provided) from source_id to a named item. If the target already exists, extracted versions are appended to its history. The source retains non-matching versions; if fully emptied and source is 'now', it resets to default.
Args:
- name: ID for the target item (created if new, extended if exists)
- source_id: Document to extract from (default: now)
- tags: If provided, only extract versions whose tags contain all specified key=value pairs. If None, extract all.
- only_current: If True, only extract the current (tip) version, not any archived history.
Returns:
- The moved Item.
Raises:
- ValueError: If name is empty, source doesn't exist, or no versions match the filter.
Analysis
analyze
analyze(id: str, *, force: bool = False) -> dict
Analyze a document into searchable structural parts.
Uses the configured analyzer (e.g. sliding-window) to decompose the document into thematic parts, each independently searchable. Skips if parts already exist unless force=True.
Args:
- id: Document identifier
- force: Re-analyze even if parts already exist
Returns:
- Dict with: parts_created (int), skipped (bool)
Continuations
Stateful multi-step memory interactions. See the Continuations guide for the full schema, decision support signals, and worked examples.
continue_flow
continue_flow(payload: dict) -> dict
Run one continuation tick. Start a new flow by providing goal, profile, frame_request, or steps. Resume an existing flow with flow_id and state_version.
Args:
- payload: Dict with flow fields. See Continuations for the full input schema.
Returns:
- Dict with: flow_id, state_version, status (done | waiting_work | paused | failed), frame (evidence and decision signals), requests (pending work), next (recommended action), errors
Example:
```python
# Simple query
result = kp.continue_flow({
    "goal": "query",
    "frame_request": {
        "seed": {"mode": "query", "value": "authentication"},
        "pipeline": [{"op": "slice", "args": {"limit": 5}}],
    },
})

# Auto-refining loop
result = kp.continue_flow({
    "goal": "query",
    "profile": "query.auto",
    "params": {"text": "auth patterns"},
    "frame_request": {"seed": {"mode": "query", "value": "auth patterns"}},
})
while result.get("next", {}).get("recommended") == "continue":
    result = kp.continue_flow({
        "flow_id": result["flow_id"],
        "state_version": result["state_version"],
    })
```
Preview — this interface may change.
continue_run_work
continue_run_work(flow_id: str, work_id: str) -> dict
Execute a pending work item within a continuation flow. The runtime runs the work locally using the configured providers and returns a work-result dict suitable for feeding back via feedback.work_results.
Args:
- flow_id: The flow containing the work item
- work_id: The work item to execute
Returns:
- Dict with: work_id, status, outputs, quality
Data
export_iter
export_iter(*, include_system: bool = True) -> Iterator[dict]
Stream-export the store. Yields a header dict first, then one self-contained dict per document (versions and parts inline).
Args:
- include_system: Include system documents (dot-prefix IDs)
export_data
export_data(*, include_system: bool = True) -> dict
Export the entire store as a single dict. Convenience wrapper around export_iter() that collects everything into memory.
import_data
import_data(data: dict, *, mode: str = 'merge') -> dict
Import documents from an export dict.
Args:
- data: Export dict (from export_data or JSON file)
- mode: "merge" (skip existing IDs) or "replace" (clear store first)
Returns:
- Dict with: imported, skipped, versions, parts, queued
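The merge-versus-replace decision can be sketched over document IDs alone. This is an illustrative model of the documented modes, not the importer itself (function name assumed):

```python
def plan_import(store_ids, incoming_ids, mode="merge"):
    """Decide which incoming documents are imported vs skipped."""
    if mode == "replace":
        # Store is cleared first, so everything incoming is imported
        return {"imported": list(incoming_ids), "skipped": []}
    existing = set(store_ids)
    return {
        "imported": [i for i in incoming_ids if i not in existing],
        "skipped": [i for i in incoming_ids if i in existing],  # merge skips existing IDs
    }
```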
System
list_collections
list_collections() -> list[str]
List all collections in the store.
list_system_documents
list_system_documents() -> list[Item]
List all system documents.
System documents are identified by the category=system tag. These are preloaded on init and provide foundational content.
Returns:
- List of system document Items
reindex
reindex() -> dict
Rebuild search index with current embedding provider (foreground).
Re-embeds all items from the document store into the current vector collection. Use as an explicit backstop when background reindex didn't complete.
Returns:
- Dict with stats: total, indexed, failed
reconcile
reconcile(fix: bool = False) -> dict
Check and optionally fix consistency between document store and vector store.
Detects:
- Documents in document store missing from vector store (not searchable)
- Documents in vector store missing from document store (orphaned embeddings)
Args:
- fix: If True, re-index documents missing from vector store
Returns:
- Dict with 'missing_from_index', 'orphaned_in_index', 'fixed' counts
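The two inconsistency classes are set differences over document IDs. A minimal sketch of the check (illustrative; the real method also re-indexes when fix=True):

```python
def diff_stores(doc_ids, vector_ids):
    """Find documents that are not searchable vs orphaned embeddings."""
    docs, vecs = set(doc_ids), set(vector_ids)
    return {
        "missing_from_index": sorted(docs - vecs),  # in document store, not in vector store
        "orphaned_in_index": sorted(vecs - docs),   # in vector store, not in document store
    }
```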
reset_system_documents
reset_system_documents() -> dict
Force reload all system documents from bundled content.
This overwrites any user modifications to system documents. Use with caution - primarily for recovery or testing.
Returns:
- Dict with stats: reset count
process_pending
process_pending(limit: int = 10) -> dict
Process pending summaries queued by lazy put.
Generates real summaries for items that were indexed with truncated placeholders. Updates the stored items in place.
When items have user tags (non-system tags), context is gathered from similar items with matching tags to produce contextual summaries.
Items that fail MAX_SUMMARY_ATTEMPTS times are removed from the queue (the truncated placeholder remains in the store).
Args:
- limit: Maximum number of items to process in this batch
Returns:
- Dict with: processed (int), failed (int), abandoned (int), errors (list)
pending_count
pending_count() -> int
Get count of pending summaries awaiting processing.
pending_stats
pending_stats() -> dict
Get pending summary queue statistics.
Returns dict with: pending, collections, max_attempts, oldest, queue_path
embedding_cache_stats
embedding_cache_stats() -> dict
Get embedding cache statistics.
Returns dict with: entries, hits, misses, hit_rate, cache_path
Returns {"loaded": False} if embedding provider hasn't been loaded yet.