Agents Module

The agents module implements the agentic logic pillar. All agents implement the BaseAgent ABC with a run() method. Agents handle queries by coordinating tool calls, memory retrieval, and inference engine interactions. The module also includes the OpenClaw infrastructure for interoperating with external agent frameworks via HTTP or subprocess transport.

Abstract Base Classes and Context

BaseAgent

BaseAgent(engine: InferenceEngine, model: str, *, bus: Optional[EventBus] = None, temperature: float = 0.7, max_tokens: int = 1024)

Bases: ABC

Base class for all agent implementations.

Subclasses must be registered via @AgentRegistry.register("name") to become discoverable.

Provides concrete helper methods that eliminate boilerplate in subclasses:

  • _emit_turn_start / _emit_turn_end -- emit turn start/end events on the bus
  • _build_messages -- conversation + system prompt assembly
  • _generate -- delegates to engine with stored defaults
  • _max_turns_result -- standard max-turns-exceeded result
  • _strip_think_tags -- remove <think> blocks
Source code in src/openjarvis/agents/_stubs.py
def __init__(
    self,
    engine: InferenceEngine,
    model: str,
    *,
    bus: Optional[EventBus] = None,
    temperature: float = 0.7,
    max_tokens: int = 1024,
) -> None:
    self._engine = engine
    self._model = model
    self._bus = bus
    self._temperature = temperature
    self._max_tokens = max_tokens

Functions

run abstractmethod

run(input: str, context: Optional[AgentContext] = None, **kwargs: Any) -> AgentResult

Execute the agent on input and return an AgentResult.

Source code in src/openjarvis/agents/_stubs.py
@abstractmethod
def run(
    self,
    input: str,
    context: Optional[AgentContext] = None,
    **kwargs: Any,
) -> AgentResult:
    """Execute the agent on *input* and return an ``AgentResult``."""

ToolUsingAgent

ToolUsingAgent(engine: InferenceEngine, model: str, *, tools: Optional[List[BaseTool]] = None, bus: Optional[EventBus] = None, max_turns: int = 10, temperature: float = 0.7, max_tokens: int = 1024)

Bases: BaseAgent

Intermediate base for agents that accept and use tools.

Sets accepts_tools = True for CLI/SDK introspection, and initialises a ToolExecutor from the provided tools.

Source code in src/openjarvis/agents/_stubs.py
def __init__(
    self,
    engine: InferenceEngine,
    model: str,
    *,
    tools: Optional[List["BaseTool"]] = None,  # noqa: F821
    bus: Optional[EventBus] = None,
    max_turns: int = 10,
    temperature: float = 0.7,
    max_tokens: int = 1024,
) -> None:
    super().__init__(
        engine, model, bus=bus,
        temperature=temperature, max_tokens=max_tokens,
    )
    from openjarvis.tools._stubs import ToolExecutor

    self._tools = tools or []
    self._executor = ToolExecutor(self._tools, bus=bus)
    self._max_turns = max_turns

AgentContext dataclass

AgentContext(conversation: Conversation = Conversation(), tools: List[str] = list(), memory_results: List[Any] = list(), metadata: Dict[str, Any] = dict())

Runtime context handed to an agent on each invocation.

AgentResult dataclass

AgentResult(content: str, tool_results: List[ToolResult] = list(), turns: int = 0, metadata: Dict[str, Any] = dict())

Result returned after an agent completes a run.
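
For a concrete sense of the two dataclasses, the snippet below builds a context and reads the fields of the result. The agent instance and import path are assumptions; field names follow the signatures above.

from openjarvis.agents import AgentContext  # assumed import path

context = AgentContext(
    tools=["calculator"],          # names of tools the agent may use
    metadata={"session": "demo"},  # free-form per-invocation data
)
result = agent.run("What is 2 + 2?", context=context)  # `agent` defined elsewhere
print(result.content)       # final answer text
print(result.turns)         # number of agent-loop iterations
print(result.tool_results)  # ToolResult objects, if any tools ran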


Agent Implementations

SimpleAgent

SimpleAgent(engine: InferenceEngine, model: str, *, bus: Optional[EventBus] = None, temperature: float = 0.7, max_tokens: int = 1024)

Bases: BaseAgent

Single-turn agent: query -> model -> response. No tool calling.

Source code in src/openjarvis/agents/_stubs.py
def __init__(
    self,
    engine: InferenceEngine,
    model: str,
    *,
    bus: Optional[EventBus] = None,
    temperature: float = 0.7,
    max_tokens: int = 1024,
) -> None:
    self._engine = engine
    self._model = model
    self._bus = bus
    self._temperature = temperature
    self._max_tokens = max_tokens

Functions

run

run(input: str, context: Optional[AgentContext] = None, **kwargs: Any) -> AgentResult

Single-turn: build messages, call engine, return result.

Source code in src/openjarvis/agents/simple.py
def run(
    self,
    input: str,
    context: Optional[AgentContext] = None,
    **kwargs: Any,
) -> AgentResult:
    """Single-turn: build messages, call engine, return result."""
    self._emit_turn_start(input)

    messages = self._build_messages(input, context)
    result = self._generate(messages)
    content = result.get("content", "")

    self._emit_turn_end(content_length=len(content))

    return AgentResult(content=content, turns=1)
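
A hypothetical usage sketch, assuming an InferenceEngine instance and an illustrative model name:

from openjarvis.agents import SimpleAgent  # assumed import path

engine = ...  # any InferenceEngine implementation (setup not shown)
agent = SimpleAgent(engine, "my-model", temperature=0.2)
result = agent.run("Summarise the agents module in one sentence.")
print(result.content)  # single-turn response; result.turns == 1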

OrchestratorAgent

OrchestratorAgent(engine: InferenceEngine, model: str, *, tools: Optional[List[BaseTool]] = None, bus: Optional[EventBus] = None, max_turns: int = 10, temperature: float = 0.7, max_tokens: int = 1024, mode: str = 'function_calling', system_prompt: Optional[str] = None)

Bases: ToolUsingAgent

Multi-turn agent that routes between tools and the LLM.

Implements a tool-calling loop:

  1. Send messages with tool definitions to the engine.
  2. If the response contains tool_calls, execute them and loop.
  3. If no tool_calls, return the final answer.
  4. Stop after max_turns iterations.

In structured mode the agent instead uses a THOUGHT: / TOOL: / INPUT: / FINAL_ANSWER: text protocol identical to the format used by the orchestrator SFT/GRPO training pipelines.
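
As a hypothetical illustration of the structured protocol (the tool name, input, and the way observations are fed back are assumptions, not verified behaviour):

THOUGHT: I need the current weather before answering.
TOOL: weather
INPUT: {"city": "London"}
... tool output is appended to the conversation as an observation ...
THOUGHT: I now have enough information.
FINAL_ANSWER: It is currently raining in London.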

Source code in src/openjarvis/agents/orchestrator.py
def __init__(
    self,
    engine: InferenceEngine,
    model: str,
    *,
    tools: Optional[List[BaseTool]] = None,
    bus: Optional[EventBus] = None,
    max_turns: int = 10,
    temperature: float = 0.7,
    max_tokens: int = 1024,
    mode: str = "function_calling",
    system_prompt: Optional[str] = None,
) -> None:
    super().__init__(
        engine, model, tools=tools, bus=bus,
        max_turns=max_turns, temperature=temperature,
        max_tokens=max_tokens,
    )
    self._mode = mode
    self._system_prompt = system_prompt
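
A hypothetical construction sketch; the engine, tools, and model name are placeholders:

from openjarvis.agents import OrchestratorAgent  # assumed import path

engine = ...   # any InferenceEngine implementation
tools = [...]  # BaseTool instances, e.g. search or calculator tools
agent = OrchestratorAgent(
    engine,
    "my-model",
    tools=tools,
    max_turns=5,
    mode="function_calling",  # default; a structured text mode also exists
)
result = agent.run("Look up the release date and summarise it.")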

NativeReActAgent

NativeReActAgent(engine: InferenceEngine, model: str, *, tools: Optional[List[BaseTool]] = None, bus: Optional[EventBus] = None, max_turns: int = 10, temperature: float = 0.7, max_tokens: int = 1024)

Bases: ToolUsingAgent

ReAct agent: Thought -> Action -> Observation loop.

Source code in src/openjarvis/agents/native_react.py
def __init__(
    self,
    engine: InferenceEngine,
    model: str,
    *,
    tools: Optional[List[BaseTool]] = None,
    bus: Optional[EventBus] = None,
    max_turns: int = 10,
    temperature: float = 0.7,
    max_tokens: int = 1024,
) -> None:
    super().__init__(
        engine, model, tools=tools, bus=bus,
        max_turns=max_turns, temperature=temperature,
        max_tokens=max_tokens,
    )
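
A hypothetical trace of the loop (marker spellings and the tool-invocation syntax are illustrative; the implementation's exact format is not shown here):

Thought: I should check the weather tool first.
Action: weather({"city": "Paris"})
Observation: {"temp_c": 18, "conditions": "clear"}
Thought: I have what I need to answer.
... final answer returned as AgentResult.content ...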

NativeOpenHandsAgent

NativeOpenHandsAgent(engine: InferenceEngine, model: str, *, tools: Optional[List[BaseTool]] = None, bus: Optional[EventBus] = None, max_turns: int = 3, temperature: float = 0.7, max_tokens: int = 2048)

Bases: ToolUsingAgent

Native CodeAct agent -- generates and executes Python code.

Source code in src/openjarvis/agents/native_openhands.py
def __init__(
    self,
    engine: InferenceEngine,
    model: str,
    *,
    tools: Optional[List[BaseTool]] = None,
    bus: Optional[EventBus] = None,
    max_turns: int = 3,
    temperature: float = 0.7,
    max_tokens: int = 2048,
) -> None:
    super().__init__(
        engine, model, tools=tools, bus=bus,
        max_turns=max_turns, temperature=temperature,
        max_tokens=max_tokens,
    )

RLMAgent

RLMAgent(engine: InferenceEngine, model: str, *, tools: Optional[List[BaseTool]] = None, bus: Optional[EventBus] = None, max_turns: int = 10, temperature: float = 0.7, max_tokens: int = 2048, sub_model: Optional[str] = None, sub_temperature: float = 0.3, sub_max_tokens: int = 1024, max_output_chars: int = 10000, system_prompt: Optional[str] = None)

Bases: ToolUsingAgent

Recursive Language Model agent using a persistent REPL.

The agent generates Python code that runs in a sandboxed REPL with access to llm_query() / llm_batch() for recursive sub-LM calls. Context is stored as a REPL variable rather than injected directly into the prompt, enabling processing of arbitrarily long inputs through recursive decomposition.

Source code in src/openjarvis/agents/rlm.py
def __init__(
    self,
    engine: InferenceEngine,
    model: str,
    *,
    tools: Optional[List[BaseTool]] = None,
    bus: Optional[EventBus] = None,
    max_turns: int = 10,
    temperature: float = 0.7,
    max_tokens: int = 2048,
    sub_model: Optional[str] = None,
    sub_temperature: float = 0.3,
    sub_max_tokens: int = 1024,
    max_output_chars: int = 10000,
    system_prompt: Optional[str] = None,
) -> None:
    super().__init__(
        engine, model, tools=tools, bus=bus,
        max_turns=max_turns, temperature=temperature,
        max_tokens=max_tokens,
    )
    # Override executor: RLM only creates one if tools are provided
    if not self._tools:
        self._executor = None  # type: ignore[assignment]
    self._sub_model = sub_model or model
    self._sub_temperature = sub_temperature
    self._sub_max_tokens = sub_max_tokens
    self._max_output_chars = max_output_chars
    self._custom_system_prompt = system_prompt
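
A sketch of the kind of code the agent might generate inside its REPL. Only llm_query(), llm_batch(), and the context variable are taken from the description above; the chunk size, prompts, and llm_batch signature are assumptions.

# Hypothetical REPL turn generated by the agent.
# `context` is the pre-loaded REPL variable holding the long input.
chunks = [context[i:i + 4000] for i in range(0, len(context), 4000)]
summaries = llm_batch([f"Summarise:\n{c}" for c in chunks])  # recursive sub-LM calls
answer = llm_query("Combine into one answer:\n" + "\n".join(summaries))
print(answer)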

OpenHandsAgent

OpenHandsAgent(engine: InferenceEngine, model: str, *, bus: Optional[EventBus] = None, temperature: float = 0.7, max_tokens: int = 1024, workspace: Optional[str] = None, api_key: Optional[str] = None)

Bases: BaseAgent

Agent that wraps the real openhands-sdk package.

This is a thin adapter that delegates to the openhands-sdk library for AI-driven software development tasks. Requires openhands-sdk to be installed.

Source code in src/openjarvis/agents/openhands.py
def __init__(
    self,
    engine: InferenceEngine,
    model: str,
    *,
    bus: Optional[EventBus] = None,
    temperature: float = 0.7,
    max_tokens: int = 1024,
    workspace: Optional[str] = None,
    api_key: Optional[str] = None,
) -> None:
    super().__init__(
        engine, model, bus=bus,
        temperature=temperature, max_tokens=max_tokens,
    )
    self._workspace = workspace or os.getcwd()
    self._api_key = api_key or os.environ.get("LLM_API_KEY", "")
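
A hypothetical construction sketch based on the signature above; the engine, model name, and task are placeholders:

from openjarvis.agents import OpenHandsAgent  # assumed import path

engine = ...  # any InferenceEngine implementation
agent = OpenHandsAgent(
    engine,
    "my-model",
    workspace="/path/to/project",  # defaults to os.getcwd()
    api_key=None,                  # falls back to the LLM_API_KEY env var
)
result = agent.run("Add unit tests for utils.py")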

OpenClaw Infrastructure

The OpenClaw protocol, transport, and plugin modules (openclaw_protocol.py, openclaw_transport.py, openclaw_plugin.py, openclaw.py) are part of the OpenClaw agent infrastructure and require the openjarvis[openclaw] extra. See the architecture documentation for protocol and transport details.
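
The extra can be installed with pip using the package name documented above:

pip install "openjarvis[openclaw]"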