Alpha Release - Context Router is currently in active development. Features and APIs may change.
What is Context Router?
Context Router is a universal data access layer that enables AI agents to interact with diverse data sources through a unified interface. It transforms fragmented organizational data across databases, SaaS tools, and APIs into an accessible, governed data fabric optimized for agent workflows.
Designed specifically for voice agents and real-time applications, Context Router delivers sub-100ms data access with intelligent caching and query optimization.
Core Problem
AI agents need access to organizational data, but that data is scattered across:
Data Systems
- Relational databases (PostgreSQL, MySQL, Snowflake)
- NoSQL stores (MongoDB, DynamoDB, Redis)
- Vector databases (Pinecone, Weaviate, Qdrant)
- Knowledge graphs (Neo4j)
SaaS Tools & APIs
- Communication platforms (Slack, Gmail, Microsoft Teams)
- CRM systems (Salesforce, HubSpot)
- Project management (Jira, Asana, Linear)
- Document stores (Google Drive, Notion, Confluence)
Each system has its own query language, authentication model, rate limits, and access patterns. Context Router provides a unified interface while handling the complexity of routing, translation, caching, and tool calling.
Key Features
Natural Language Queries
Developers interact with data using natural language; Context Router translates each query to the appropriate backend language internally.
from vantedge import VantEdgeClient
client = VantEdgeClient(
    context_router_url="http://localhost:8000",
    api_key="your_api_key"
)
# Natural language queries - no SQL required
result = client.context_router.query("Show me all pending orders for building 123")
result = client.context_router.query("What's the work center capacity at plant 5?")
result = client.context_router.query("Find recent emails about the Q4 roadmap")
Intelligent Query Planning
Context Router uses LLM-powered query planning to:
- Understand natural language intent
- Determine which data sources to query
- Generate optimized backend queries (SQL, API calls, etc.)
- Merge results from multiple sources when needed
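The plan itself stays internal to Context Router, but conceptually each natural language query resolves to a structured plan. Below is a minimal sketch of what such a plan might contain; QueryPlan and its fields are a hypothetical illustration, not part of the SDK:
from dataclasses import dataclass

# Hypothetical illustration: the planner's output is internal to Context
# Router, and these names are not part of the public API.
@dataclass
class QueryPlan:
    intent: str                      # parsed intent of the user's question
    sources: list[str]               # data sources the planner selected
    backend_queries: dict[str, str]  # generated query per source
    merge_strategy: str = "union"    # how multi-source results are combined

plan = QueryPlan(
    intent="pending orders for building 123",
    sources=["orders_db"],
    backend_queries={
        "orders_db": "SELECT * FROM orders WHERE facility_id = 123 AND status = 'pending'"
    },
)
print(plan.backend_queries["orders_db"])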
Semantic Caching
Multi-tier caching system with semantic understanding:
- Similar queries share cached results (e.g., “Show recent tickets” and “Display latest support requests”)
- Configurable TTL and similarity thresholds
- Sub-10ms response time for cached queries
- 70-90% cache hit rate for common queries
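As an illustration, two semantically similar queries should resolve to the same cache entry. The query call and the cache_hit and latency_ms result fields are documented under Basic Usage below; the example queries and timings here are illustrative, assuming a client initialized as in the snippet above:
# First query misses the cache and is answered by the backend
first = client.context_router.query("Show recent tickets")
print(first.cache_hit, first.latency_ms)    # e.g. False, ~80ms

# A semantically similar query can be served from the cache
second = client.context_router.query("Display latest support requests")
print(second.cache_hit, second.latency_ms)  # e.g. True, <10ms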
Multi-Source Data Access
Query across heterogeneous data sources through a single interface:
- Databases: PostgreSQL, MySQL, and more
- SaaS Tools: Gmail, Slack, and other integrations
- APIs: REST endpoints and custom connectors
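A single query can fan out to several of these backends, and the sources_used field on the result (see Basic Usage below) reports which were consulted. A brief sketch, with the query wording and connector names illustrative:
# One natural language query, potentially several backends
result = client.context_router.query(
    "Summarize open Jira issues discussed in this week's Slack threads"
)
print(result.sources_used)  # e.g. ['jira', 'slack'], depending on configured connectors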
Data Synchronization
Sync data from external sources to local storage for faster querying:
- Schedule automated syncs (e.g., sync last 30 days of emails daily)
- Query synced data with sub-millisecond latency
- Manage sync jobs through the Management API
Python Client SDK
The VantEdge Python Client provides a comprehensive SDK for interacting with Context Router, including querying data and managing configuration.
Installation
pip install vantedge-client
Basic Usage
from vantedge import VantEdgeClient
# Initialize the client
client = VantEdgeClient(
    context_router_url="http://localhost:8000",
    api_key="your_api_key",
    timeout=30,
    max_retries=3
)
# Natural language query
result = client.context_router.query(
    query="Show me all pending orders for building 123",
    user_id="agent_001"
)
print(f"Found {result.count} results")
print(f"Sources used: {result.sources_used}")
print(f"Latency: {result.latency_ms}ms")
print(f"Cache hit: {result.cache_hit}")
# Access individual results
for item in result.data:
    print(item.data)
# Health check
health = client.context_router.health_check()
# Get available data sources
sources = client.context_router.get_sources()
# Close when done
client.close()
Management API
The Management API enables remote configuration of Context Router components including connectors, cache, LLM settings, and data synchronization.
Connector Management
Manage data source connectors at runtime.
from vantedge import VantEdgeClient, PostgresConfig
client = VantEdgeClient(
    context_router_url="http://localhost:8000",
    api_key="your_api_key"
)
# List all connectors
connectors = client.management.list_connectors()
for conn in connectors:
    print(f"{conn.name}: {conn.status} ({conn.type})")
# Get specific connector details
connector = client.management.get_connector("orders")
# Create a new PostgreSQL connector
config = PostgresConfig(
    host="db.example.com",
    port=5432,
    database="orders_db",
    user="readonly_user",
    password="secure_password",
    description="Orders database"
)
new_connector = client.management.create_postgres_connector("orders", config)
# Update connector configuration
updated = client.management.update_connector(
    name="orders",
    timeout=60
)
# Get connector schema (tables and columns)
schema = client.management.get_connector_schema("orders")
# Delete a connector
client.management.delete_connector("old_db")
Cache Management
Configure and manage the semantic caching layer.
# Get current cache settings
settings = client.management.get_cache_settings()
print(f"TTL: {settings.ttl_seconds}s, Threshold: {settings.semantic_threshold}")
# Update cache settings
new_settings = client.management.update_cache_settings(
    ttl_seconds=600,        # 10 minute TTL
    semantic_threshold=0.9  # Higher similarity required
)
# Get cache statistics
stats = client.management.get_cache_stats()
print(f"Cached queries: {stats.total_keys}")
print(f"Memory used: {stats.memory_used_bytes} bytes")
# List cached entries
entries = client.management.list_cache_entries(limit=50, offset=0)
for entry in entries['entries']:
    print(f"Hash: {entry.query_hash}, TTL remaining: {entry.ttl_remaining}s")
# Clear entire cache
client.management.invalidate_cache()
# Clear cache by pattern
client.management.invalidate_cache(pattern="orders")
# Clear cache for specific source
client.management.invalidate_cache(source="gmail")
LLM Configuration
Configure the LLM provider used for query planning.
from vantedge import LLMProvider
# Get current LLM settings
llm_settings = client.management.get_llm_settings()
print(f"Provider: {llm_settings.provider}")
print(f"Model: {llm_settings.model}")
# Switch to Groq (faster inference)
new_settings = client.management.update_llm_settings(
    provider=LLMProvider.GROQ,
    model="llama-3.3-70b-versatile",
    temperature=0.1
)
# Switch to Anthropic
client.management.update_llm_settings(
    provider=LLMProvider.ANTHROPIC,
    model="claude-sonnet-4-20250514"
)
# Test LLM connectivity
result = client.management.test_llm(test_query="What tables exist?")
if result.success:
    print(f"LLM responding in {result.latency_ms}ms")
else:
    print(f"LLM error: {result.error}")
Data Sync
Synchronize data from external sources (e.g., Gmail) to local PostgreSQL for faster querying.
from vantedge import SyncConfig

# List all sync jobs
jobs = client.management.list_sync_jobs()
# Create a sync job for Gmail
sync_config = {
    "days_back": 30,
    "filters": {
        "labels": ["INBOX", "SENT"],
        "from_address": "user@example.com",
        "subject_contains": "invoice"
    },
    "include_attachments": False,
    "max_items": 500
}
schedule = {
    "enabled": True,
    "cron": "0 2 * * *"  # Daily at 2am
}
job = client.management.create_sync_job(
    name="daily-email-sync",
    source_connector="gmail",
    target_connector="postgres",
    sync_config=sync_config,
    schedule=schedule
)
print(f"Created sync job: {job.job_id}")
# Get sync job details
job_info = client.management.get_sync_job(job.job_id)
# Update sync job
updated_job = client.management.update_sync_job(
    job_id=job.job_id,
    sync_config=SyncConfig(days_back=7),
    enabled=True
)
# Run sync manually
execution = client.management.run_sync_job(
    job_id=job.job_id,
    days_back=7,      # Override config
    force_full=False  # Incremental sync
)
print(f"Sync started: {execution.execution_id}")
# Check sync status
status = client.management.get_sync_status(job.job_id)
print(f"Status: {status.status}")
print(f"Progress: {status.progress_pct}%")
print(f"Items synced: {status.items_synced}")
# Get sync history
history = client.management.get_sync_history(job.job_id, limit=10)
for run in history['executions']:
    print(f"{run.started_at}: {run.status} - {run.items_synced} items")
# Cancel running sync
if status.status == "running":
    client.management.cancel_sync_job(job.job_id)
# Get synced data statistics
stats = client.management.get_synced_data_stats(job.job_id)
print(f"Total items: {stats.total_items}")
print(f"Date range: {stats.date_range_start} to {stats.date_range_end}")
# Purge old synced data
result = client.management.purge_synced_data(
    job_id=job.job_id,
    older_than_days=90,
    confirm=True  # Required to execute
)
print(f"Purged {result['deleted_count']} old records")
# Delete sync job
client.management.delete_sync_job(job.job_id)
Planner Hints
Inject domain-specific knowledge into the query planner to guide how it interprets natural language queries.
# List all hints
hints = client.management.list_hints()
# Create hints to guide the planner
client.management.create_hint(
    name="building-terminology",
    content="When users ask about 'buildings', query the 'facilities' table using facility_id",
    category="terminology",
    priority=10
)
client.management.create_hint(
    name="order-status",
    content="Order status values are: 'pending', 'processing', 'shipped', 'delivered', 'cancelled'",
    category="business_rules",
    priority=5
)
client.management.create_hint(
    name="date-format",
    content="Dates in the database are stored in UTC. Convert user timezone references accordingly.",
    category="schema",
    priority=8
)
# Update a hint
client.management.update_hint(
    hint_id="abc123",
    content="Updated hint content",
    enabled=True
)
# Preview how hints appear in the system prompt
preview = client.management.preview_hints_prompt()
print(f"Active hints: {preview['enabled_hint_count']}")
print(preview['prompt_section'])
# Delete a hint
client.management.delete_hint("abc123")
# Clear all hints
client.management.clear_hints()
Context Router enables AI agents to access organizational data with sub-100ms latency while maintaining security, governance, and compliance.