Getting Started with Loom

Build a fully featured interactive chat agent in 7 steps. Each step adds one layer.

Prerequisites

  • Python 3.12 or later
  • An OpenAI-compatible LLM endpoint (Ollama, vLLM, Groq, OpenAI, etc.)

Installation

pip install "git+https://github.com/NinoCoelho/loom.git"

For Anthropic support:

pip install "loom[anthropic] @ git+https://github.com/NinoCoelho/loom.git"

For search support:

pip install "loom[search] @ git+https://github.com/NinoCoelho/loom.git"

For web scraping support:

pip install "loom[scrape] @ git+https://github.com/NinoCoelho/loom.git"

Step 1 — Your first agent

This is the entire agentic loop. An Agent takes a provider, a tool registry, and a config, and runs conversations with run_turn().

import asyncio
from loom.loop import Agent, AgentConfig
from loom.llm.openai_compat import OpenAICompatibleProvider
from loom.tools.registry import ToolRegistry
from loom.types import ChatMessage, Role

async def main():
    provider = OpenAICompatibleProvider(
        base_url="http://localhost:11434/v1",  # or any OpenAI-compatible endpoint
        default_model="llama3",
    )

    agent = Agent(
        provider=provider,
        tool_registry=ToolRegistry(),
        config=AgentConfig(system_preamble="You are a helpful assistant."),
    )

    messages = [ChatMessage(role=Role.USER, content="Hello!")]
    turn = await agent.run_turn(messages)
    print(turn.reply)

asyncio.run(main())

turn.reply is the assistant's text. turn.iterations, turn.tool_calls, turn.input_tokens, and turn.output_tokens give you telemetry.
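If you want to log per-turn cost, those fields are all you need. A quick sketch (using a stand-in TurnStats dataclass with the same field names, so the snippet runs without Loom installed):

```python
from dataclasses import dataclass

@dataclass
class TurnStats:
    """Stand-in mirroring the turn fields named above (for illustration only)."""
    reply: str
    iterations: int
    tool_calls: int
    input_tokens: int
    output_tokens: int

def summarize(turn: TurnStats) -> str:
    # One log line per turn: round-trips, tool activity, and token spend.
    total = turn.input_tokens + turn.output_tokens
    return (f"{turn.iterations} iteration(s), {turn.tool_calls} tool call(s), "
            f"{total} tokens ({turn.input_tokens} in / {turn.output_tokens} out)")

line = summarize(TurnStats("Hi!", 1, 0, 12, 5))
print(line)
```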


Step 2 — Add a tool

Subclass ToolHandler, describe the tool with a ToolSpec, and implement invoke(). Register it with the ToolRegistry. The agent will call it automatically when the LLM decides to.

from loom.tools.base import ToolHandler, ToolResult
from loom.tools.registry import ToolRegistry
from loom.types import ToolSpec

class WeatherTool(ToolHandler):
    @property
    def tool(self) -> ToolSpec:
        return ToolSpec(
            name="get_weather",
            description="Get current weather for a city",
            parameters={
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        )

    async def invoke(self, args: dict) -> ToolResult:
        city = args["city"]
        return ToolResult(text=f"Weather in {city}: 72°F, sunny")

tools = ToolRegistry()
tools.register(WeatherTool())

agent = Agent(provider=provider, tool_registry=tools, config=config)  # provider and config as in Step 1

Loom handles the full tool call loop: it dispatches your handler, appends the result to the conversation, and keeps iterating until the LLM stops calling tools.
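Conceptually, that loop looks something like the sketch below (illustrative only, not Loom's actual implementation; the scripted fake_llm stands in for a real provider):

```python
import asyncio

# Scripted "LLM" turns: first it requests a tool call, then it answers.
fake_llm = [
    {"tool_call": {"name": "get_weather", "args": {"city": "Lisbon"}}},
    {"reply": "It's sunny in Lisbon."},
]

async def get_weather(args: dict) -> str:
    return f"Weather in {args['city']}: 72°F, sunny"

handlers = {"get_weather": get_weather}

async def run_turn(messages: list[dict]) -> str:
    for step in fake_llm:  # each step = one LLM round-trip
        if "tool_call" in step:
            call = step["tool_call"]
            result = await handlers[call["name"]](call["args"])
            # Feed the tool result back into the conversation and loop again.
            messages.append({"role": "tool", "content": result})
        else:
            return step["reply"]  # no tool call: the turn is done
    raise RuntimeError("LLM never produced a final reply")

reply = asyncio.run(run_turn([{"role": "user", "content": "Weather in Lisbon?"}]))
print(reply)
```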


Step 3 — Add memory

MemoryToolHandler gives your agent a persistent, searchable memory across sessions. Point it at a directory and register it as a tool — the agent will store and recall memories on its own.

from pathlib import Path
from loom.tools.memory import MemoryToolHandler

memory_dir = Path.home() / ".myapp" / "memory"
tools.register(MemoryToolHandler(memory_dir))

Memory uses hybrid BM25 + salience + recency ranking. The agent decides when to save and recall — you just give it the tool.
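To make "hybrid ranking" concrete, here is a rough sketch of how such a blend can be computed (the weights, half-life, and formula are illustrative assumptions, not Loom's actual scoring):

```python
def hybrid_score(bm25: float, salience: float, age_days: float,
                 w_text: float = 0.6, w_sal: float = 0.25, w_rec: float = 0.15,
                 half_life_days: float = 30.0) -> float:
    """Blend lexical relevance, importance, and freshness into one score."""
    recency = 0.5 ** (age_days / half_life_days)  # exponential freshness decay
    return w_text * bm25 + w_sal * salience + w_rec * recency

# Same text match and salience; only the age differs.
fresh = hybrid_score(bm25=0.8, salience=0.5, age_days=1)
stale = hybrid_score(bm25=0.8, salience=0.5, age_days=365)
print(f"fresh={fresh:.3f}  stale={stale:.3f}")
```

Recency acts as a tie-breaker: a year-old memory with the same text match ranks below a fresh one, but a strong BM25 hit still dominates.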


Step 4 — Add skills

Skills are Markdown files with YAML frontmatter. The agent can activate them by name to load reusable instructions mid-conversation — without burning them into the system prompt permanently.

~/.myapp/skills/
  summarize.md
  code-review.md

summarize.md:

---
name: summarize
description: How to summarize text clearly and concisely
---

1. Identify the key points — no more than five.
2. Write one sentence per point, plain language.
3. End with a single-sentence takeaway.

Register the directory and pass the registry to the agent:

from loom.skills.registry import SkillRegistry

skills_dir = Path.home() / ".myapp" / "skills"
skills_dir.mkdir(parents=True, exist_ok=True)

skill_registry = SkillRegistry(skills_dir)
skill_registry.scan()

agent = Agent(
    provider=provider,
    tool_registry=tools,
    skill_registry=skill_registry,
    config=config,
)

When the agent activates a skill, its body is injected into the conversation at that point. Skills can also be created, edited, and deleted by the agent itself via SkillManager.
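The file format is easy to picture in code. A minimal sketch of frontmatter splitting (illustrative; it handles only simple key: value pairs and is not Loom's actual loader):

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split '---'-delimited frontmatter from the Markdown body."""
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_skill("""---
name: summarize
description: How to summarize text clearly and concisely
---
1. Identify the key points.
""")
print(meta["name"], "->", body)
```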


Step 5 — Add human-in-the-loop

AskUserTool lets the agent pause and ask the user a question. TerminalTool lets it run shell commands with approval. Both require you to provide the handler — you own the UI.

from loom.tools.hitl import AskUserTool, TerminalTool

async def ask_user_handler(kind: str, message: str, choices: list[str] | None) -> str:
    print(f"\n? {message}")
    if kind == "confirm":
        return input("[y/n] > ").strip()
    elif kind == "choice" and choices:
        for i, c in enumerate(choices, 1):
            print(f"  {i}. {c}")
        idx = int(input("Choice > ").strip()) - 1
        return choices[idx]
    return input("> ").strip()

ask_user = AskUserTool(handler=ask_user_handler)
tools.register(ask_user)
tools.register(TerminalTool(ask_user))  # TerminalTool uses AskUserTool for approvals
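The approval flow can be sketched as a command runner that consults the handler first (a conceptual illustration, not Loom's TerminalTool; approve_all is a stand-in handler that always says yes):

```python
import asyncio
import subprocess

async def approve_all(kind: str, message: str, choices=None) -> str:
    print(f"? {message}")  # a real handler would prompt the user here
    return "y"

async def run_command(cmd: str, ask) -> str:
    # Gate execution on the handler's answer before touching the shell.
    answer = await ask("confirm", f"Run `{cmd}`?")
    if answer.lower() not in ("y", "yes"):
        return "(denied by user)"
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return proc.stdout.strip()

output = asyncio.run(run_command("echo hello", approve_all))
print(output)
```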


Step 6 — Add credentials

Store secrets, resolve them into transport-ready headers, and gate usage with a policy — all without the agent ever touching the secret bytes.

Store a secret:

from pathlib import Path
from loom.store.secrets import SecretStore

store = SecretStore(path=Path.home() / ".myapp" / "secrets.db")
await store.put("my-api", {"type": "api_key", "value": "sk-..."})

The store writes an encrypted file at the path you choose (Fernet encryption at rest; the key is auto-generated at secrets.db/../keys/secrets.key). Override the key with the LOOM_SECRET_KEY environment variable.

Resolve headers automatically via CredentialResolver + HttpCallTool:

from loom.auth.appliers import ApiKeyHeaderApplier
from loom.auth.resolver import CredentialResolver
from loom.tools.http import HttpCallTool

resolver = CredentialResolver(store)
resolver.register(ApiKeyHeaderApplier(header_name="Authorization"), transport="http")

async def auth_hook(req: dict) -> dict:
    headers = await resolver.resolve_for("my-api", "http")
    return {**req, "headers": {**req.get("headers", {}), **headers}}

tools.register(HttpCallTool(pre_request_hook=auth_hook))

The hook runs before every HTTP request. The agent calls http_call with a URL and method; the hook injects the header. The agent never sees the key value.
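The merge inside auth_hook is plain dict layering, with resolved auth headers winning on key conflicts. A standalone illustration (no Loom required; the request shape here is an assumption for the example):

```python
def inject(req: dict, auth_headers: dict) -> dict:
    # Layer resolved auth headers over whatever headers the request already has.
    return {**req, "headers": {**req.get("headers", {}), **auth_headers}}

req = {"method": "GET", "url": "https://api.example.com/v1",
       "headers": {"Accept": "application/json"}}
out = inject(req, {"Authorization": "Bearer <resolved>"})
print(out["headers"])
```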

Add a policy (optional — AUTONOMOUS by default):

from loom.auth.enforcer import PolicyEnforcer
from loom.auth.policies import CredentialPolicy, PolicyMode
from loom.auth.policy_store import PolicyStore

policy_store = PolicyStore(path=Path.home() / ".myapp" / "policies.json")
await policy_store.put(CredentialPolicy(scope="my-api", mode=PolicyMode.NOTIFY_BEFORE))

enforcer = PolicyEnforcer(policy_store=policy_store, hitl=hitl_broker)  # hitl_broker: your app's HITL prompt broker
resolver = CredentialResolver(store, enforcer=enforcer)

NOTIFY_BEFORE blocks the request and fires a HITL prompt before releasing the secret. Other modes: AUTONOMOUS (no gate), NOTIFY_AFTER (fire-and-log), TIME_BOXED (allowed inside a datetime window), ONE_SHOT (single use, then auto-revoked).


Step 7 — Put it all together

The previous steps each added one capability. Here's what a complete agent looks like when you wire everything into a single interactive chat loop — the same pattern behind the full TUI example.

import asyncio
from pathlib import Path

from loom.loop import Agent, AgentConfig
from loom.llm.openai_compat import OpenAICompatibleProvider
from loom.skills.registry import SkillRegistry
from loom.tools.hitl import AskUserTool, TerminalTool
from loom.tools.memory import MemoryToolHandler
from loom.tools.registry import ToolRegistry
from loom.types import ChatMessage, Role

APP_DIR = Path.home() / ".myapp"

async def ask_user_handler(kind: str, message: str, choices: list[str] | None) -> str:
    print(f"\n? {message}")
    if kind == "confirm":
        return input("[y/n] > ").strip()
    elif kind == "choice" and choices:
        for i, c in enumerate(choices, 1):
            print(f"  {i}. {c}")
        return choices[int(input("Choice > ").strip()) - 1]
    return input("> ").strip()

async def main():
    provider = OpenAICompatibleProvider(
        base_url="http://localhost:11434/v1",
        default_model="llama3",
    )

    tools = ToolRegistry()
    tools.register(MemoryToolHandler(APP_DIR / "memory"))
    ask_user = AskUserTool(handler=ask_user_handler)
    tools.register(ask_user)
    tools.register(TerminalTool(ask_user))

    skills_dir = APP_DIR / "skills"
    skills_dir.mkdir(parents=True, exist_ok=True)
    skill_registry = SkillRegistry(skills_dir)
    skill_registry.scan()

    agent = Agent(
        provider=provider,
        tool_registry=tools,
        skill_registry=skill_registry,
        config=AgentConfig(system_preamble="You are a helpful assistant."),
    )

    history: list[ChatMessage] = []
    print("Type 'quit' to exit.\n")

    while True:
        user_input = input("You> ").strip()
        if not user_input or user_input.lower() in ("quit", "exit", "q"):
            break

        history.append(ChatMessage(role=Role.USER, content=user_input))
        turn = await agent.run_turn(history)
        history.append(ChatMessage(role=Role.ASSISTANT, content=turn.reply))

        print(f"\nAssistant: {turn.reply}\n")

asyncio.run(main())

That's a fully working persistent agent with memory, skills, and human-in-the-loop, in under 70 lines. Wire in the credential pieces from Step 6 the same way when you need them.


What's Next?

The steps above cover the most common patterns. Loom has more:

See [ARCHITECTURE.md](ARCHITECTURE.md) for detailed design documentation and [docs/API.md](docs/API.md) for the complete API reference.


| What | Where |
| --- | --- |
| Full TUI with rich formatting and history | [examples/tui](examples/tui) |
| Anthropic Claude provider | loom.llm.anthropic |
| Multi-agent runtime with delegation | loom.runtime |
| FastAPI server with SSE streaming | loom.server |
| Agent Communication Protocol (WebSocket) | loom.acp |
| MCP client (external tool servers) | loom.mcp |
| Multi-provider registry with model routing | loom.llm.registry, loom.routing |
| Agent home (identity files, vault, sessions) | loom.home |
| Credentials — typed secrets (8 types), 8 appliers (HTTP/SSH/AWS/JWT), resolver, 5 HITL policy modes, OS keychain backend | loom.auth, loom.store.secrets, loom.store.keychain |
| SSH tool — run commands on remote hosts; auth via credential pipeline | loom.tools.ssh (loom[ssh]) |
| Recurring tasks — cron/interval-scheduled drivers that detect events and trigger agent runs | loom.heartbeat |
| GraphRAG — knowledge-graph-augmented retrieval with vector search, entity extraction, and context injection | loom.store.graphrag (loom[graphrag]) |
| Web search — multi-provider web search (DDGS, Brave, Tavily, Google) with concurrent/fallback strategies | loom.search, loom.tools.search (loom[search]) |
| Web scrape — Scrapling-based page scraper with cascade fetching (HTTP→dynamic→stealthy), cookie auth, format conversion, CSS/XPath extraction | loom.scrape, loom.tools.scrape (loom[scrape]) |
| Cookie store — domain-keyed cookie persistence (Netscape format) for scrape auth retry | loom.store.cookies |