⚠️ IMPORTANT: All features are experimental and under active development. Use at your own risk; customization to your workflow is required. © 2026 GLG, a.s.
13. Real-World Use-Case Scenarios
Scenario A: Solo Agent with Memory (Community)
Setup: 1 AI assistant, personal use, no team.
pip install uaml-memory
uaml --accept-eula && uaml init
from uaml.facade import UAML
uaml = UAML()
# Daily workflow:
# 1. User asks about a past decision
quick = uaml.search("database migration decision", limit=3)
if quick:
    # Found it — include in response
    context = "\n".join([r.content for r in quick])
else:
    # Not stored yet — answer and store for next time
    uaml.learn(
        "Decided to use PostgreSQL over MySQL for the new project. "
        "Reasons: better JSON support, PostGIS for geo queries.",
        topic="decisions", confidence=0.95
    )
Cost: Free. ~1,000 memories. No API, no dashboard.
Scenario B: Developer Agent with Focus Engine (Starter)
Setup: 1 AI coding assistant, needs smart recall and web dashboard.
uaml = UAML()
uaml.apply_preset("standard")
# During code review — recall relevant architecture decisions:
context = uaml.recall(
    "authentication middleware patterns",
    budget_tokens=800
)
# Focus Engine returns only the relevant entries within the 800-token budget,
# instead of dumping 50K tokens of all code memories into the prompt
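To make the budget idea concrete, here is a toy illustration of budget-constrained selection. This is not the real Focus Engine (which also ranks by relevance and applies topic filters); it only shows the "keep entries until the token budget is spent" part, with a naive whitespace token estimator as a stand-in:

```python
# Toy sketch: keep entries (assumed pre-sorted by relevance) until the
# token budget is exhausted. The token estimator is a deliberate
# simplification — real systems count model tokens, not words.
def trim_to_budget(entries, budget_tokens, est=lambda t: len(t.split())):
    picked, used = [], 0
    for text in entries:
        cost = est(text)
        if used + cost > budget_tokens:
            break  # budget spent — stop, never truncate mid-entry
        picked.append(text)
        used += cost
    return picked

trim_to_budget(["alpha beta", "gamma delta epsilon", "zeta"], budget_tokens=4)
# → ["alpha beta"]  (second entry would push usage to 5 tokens)
```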
# Store code review findings:
uaml.learn(
    "Code review: auth middleware lacks rate limiting. "
    "Added TODO to implement token bucket on /api/login.",
    topic="code-review", confidence=0.9
)
# Dashboard at http://localhost:8780 — visual filter tuning
Cost: €8/mo. 10K memories. REST API + dashboard + Focus Engine.
Scenario C: Power User with MCP Integration (Professional)
Setup: 1 AI agent connected via MCP, full data management.
# MCP bridge handles everything automatically:
# Agent calls memory_focus_recall → gets budget-constrained context
# Agent calls memory_store → learns from conversations
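The two tool calls above might look roughly like this on the wire. The tool names (`memory_focus_recall`, `memory_store`) come from the text; the JSON-RPC-style envelope and argument names are assumptions for illustration, not the bridge's actual schema:

```python
# Hypothetical shape of the MCP tool-call payloads described above.
# Only the tool names are from the docs; everything else is assumed.

def build_focus_recall_call(query: str, budget_tokens: int) -> dict:
    """Payload an agent might send for memory_focus_recall."""
    return {
        "method": "tools/call",
        "params": {
            "name": "memory_focus_recall",
            "arguments": {"query": query, "budget_tokens": budget_tokens},
        },
    }

def build_store_call(content: str, topic: str) -> dict:
    """Payload an agent might send for memory_store."""
    return {
        "method": "tools/call",
        "params": {
            "name": "memory_store",
            "arguments": {"content": content, "topic": topic},
        },
    }

call = build_focus_recall_call("authentication middleware patterns", 800)
```

In practice the MCP client library builds these envelopes for you; the point is that the agent never talks to the store directly — every read goes through the budgeted recall tool.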
# Export for backup:
uaml.export("weekly-backup.jsonl", format="jsonl")
# Named config presets for different tasks:
# "coding" preset — strict, code-focused
# "research" preset — broad, exploratory
# Switch presets based on the current task:
import requests

if task_type == "coding":
    requests.post("http://localhost:8780/api/v1/saved-configs/load",
                  json={"name": "coding", "filter_type": "output"})
else:
    requests.post("http://localhost:8780/api/v1/saved-configs/load",
                  json={"name": "research", "filter_type": "output"})
# Rules changelog tracks every config change:
# GET /api/v1/rules-log — who changed what, when, why
Cost: €29/mo. 100K memories. MCP + export/import + saved configs + audit.
Scenario D: AI Team with Coordination (Team)
Setup: 5 agents — coordinator, 2 coders, researcher, marketing. Shared knowledge.
# === MORNING: Coordinator plans the day ===
coord.claim(agent="leader", scope="daily-plan", reason="Morning planning")
# Recall yesterday's status:
yesterday = uaml.recall("what was completed yesterday?", budget_tokens=1000)
# Assign tasks:
for task in today_tasks:
    requests.post(f"{COORD}/events", json={
        "event_type": "ASSIGN",
        "agent_id": "leader",
        "target_agent": task["assignee"],
        "scope": task["scope"],
        "reason": task["description"]
    })
coord.release(agent="leader", scope="daily-plan")
# === DAYTIME: Agents work independently ===
# Each agent has Focus Engine configured for their role:
# - Coders see code + architecture (deny: marketing, sales)
# - Researcher sees research + analysis (deny: code details)
# - Marketing sees content + SEO (deny: infrastructure)
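The role filters above can be sketched as allow/deny topic lists. The field names (`allow`, `deny`) and the lookup function are assumptions for illustration, not the actual Focus Engine configuration schema:

```python
# Sketch of per-role topic filters, mirroring the three roles above.
# Field names are assumptions, not the real Focus Engine schema.
ROLE_FILTERS = {
    "coder":      {"allow": ["code", "architecture"],  "deny": ["marketing", "sales"]},
    "researcher": {"allow": ["research", "analysis"],  "deny": ["code"]},
    "marketing":  {"allow": ["content", "seo"],        "deny": ["infrastructure"]},
}

def visible(role: str, topic: str) -> bool:
    """A memory is recalled only if its topic is allowed and not denied."""
    f = ROLE_FILTERS[role]
    return topic in f["allow"] and topic not in f["deny"]
```

The deny list is what keeps each agent's context lean: a coder recalling "deployment" memories never pays tokens for marketing copy, and vice versa.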
# Coder works on feature:
coord.claim(agent="coder-1", scope="src/api/v2/*", reason="New recall endpoint")
# ... codes, tests, commits ...
uaml.learn("Implemented /v2/recall with pagination support. 15 tests pass.",
           topic="code", confidence=0.95)
coord.release(agent="coder-1", scope="src/api/v2/*")
# === CREATIVE TASK: Website copy ===
# Leader broadcasts proposal request:
for agent in ["coder-1", "researcher", "marketing"]:
    requests.post(f"{COORD}/events", json={
        "event_type": "ASSIGN",
        "target_agent": agent,
        "scope": "proposal/landing-page",
        "reason": "Propose new landing page approach from your perspective"
    })
# Each agent submits proposal to shared memory:
uaml.learn("Proposal: Technical landing page with live API demo...",
           topic="proposal/landing-page", source_type="proposal")
# Leader collects all proposals:
proposals = uaml.search("proposal/landing-page", topic="proposal/landing-page")
# Summarizes for owner → owner picks best approach
# === EMERGENCY: Owner says "STOP" ===
# Bridge detects "STOP" in chat → HALT sent to all agents
# All agents pause within 30 seconds
# Owner discusses new direction → "GO" resumes work
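Assuming the coordinator's HALT broadcast reuses the event shape shown for ASSIGN above (which the docs do not confirm — the payload below is a guess for illustration), the bridge's reaction to "STOP" might be sketched as:

```python
# Hypothetical HALT broadcast, mirroring the ASSIGN event shape above.
# The event_type "HALT" is mentioned in the docs; the other fields
# (agent_id, wildcard scope) are assumptions.
def halt_event(reason: str) -> dict:
    return {
        "event_type": "HALT",
        "agent_id": "bridge",
        "scope": "*",          # all scopes — every agent must pause
        "reason": reason,
    }

event = halt_event("Owner said STOP in chat")
# e.g. requests.post(f"{COORD}/events", json=event)
```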
# === END OF DAY: Sync ===
sync.export_delta(since="2026-03-16T00:00:00")
# Delta synced to all agents via Federation
Cost: €190/mo. 500K memories. Federation + RBAC + coordination + GDPR.
Scenario E: Regulated Environment (Enterprise)
Setup: Financial institution, 20+ agents, strict compliance.
# Post-quantum encryption for all stored data:
# ML-KEM-768 (NIST FIPS 203) — resistant to quantum attacks
config = {
    "encryption": {
        "enabled": True,
        "algorithm": "ML-KEM-768",
        "key_rotation": "quarterly"
    }
}
# GDPR compliance — right to be forgotten:
from uaml.compliance.auditor import ComplianceAuditor
auditor = ComplianceAuditor(store)
# Customer requests data deletion:
auditor.gdpr_erasure(customer_id="C-12345")
# → All memories referencing customer removed
# → Audit trail preserved (legally required)
# → Compliance report generated
# Full audit trail:
report = auditor.full_audit()
# Score: 94.2% | Critical findings: 0 | Recommendations: 3
# On-premise deployment — no cloud dependency:
# All data on-site, no external API calls
# Custom SLA: 99.9% uptime, 4-hour response time
Cost: Custom. Unlimited. PQC + compliance + SLA + on-prem.