
Chat & AI Agents

Execute Large Language Model (LLM) requests and build AI agents that can use tools to perform actions.

Capabilities

| Feature | Description |
|---|---|
| Basic Chat | Simple LLM conversations with OpenAI, Anthropic, Gemini, and more |
| Tool Calling | AI agents that execute DAG workflows as tools during conversations |

Quick Start

Simple Chat

```yaml
steps:
  - type: chat
    llm:
      provider: openai
      model: gpt-4o
    messages:
      - role: user
        content: "What is 2+2?"
    output: ANSWER
```
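Because the step captures its reply in `ANSWER`, a later step can reuse it through `${VAR}`-style variable substitution. A minimal sketch (the follow-up `report` step and its wording are hypothetical):

```yaml
steps:
  - type: chat
    llm:
      provider: openai
      model: gpt-4o
    messages:
      - role: user
        content: "What is 2+2?"
    output: ANSWER

  - name: report  # hypothetical step reusing the captured output
    command: echo "The model said ${ANSWER}"
```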

AI Agent with Tools

```yaml
# Main DAG that uses tools
steps:
  - type: chat
    llm:
      provider: anthropic
      model: claude-sonnet-4-20250514
      tools:
        - calculator
    messages:
      - role: user
        content: "What is 15 times 23?"

---
# Tool DAG definition
name: calculator
description: "Perform basic arithmetic operations"
defaultParams: "operation a b"

steps:
  - name: calculate
    script: |
      case "$1" in
        multiply) echo $(($2 * $3)) ;;
        *) echo "Unknown operation" ;;
      esac
    output: RESULT
```
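The tool receives its `defaultParams` (`operation a b`) as the positional parameters `$1`, `$2`, `$3`. You can sanity-check the script body in a plain shell, outside any DAG; the `calculate` wrapper function here exists only for local testing:

```shell
# Local sanity check of the calculator tool's script body.
calculate() {
  case "$1" in
    multiply) echo $(($2 * $3)) ;;
    *) echo "Unknown operation" ;;
  esac
}

calculate multiply 15 23   # prints 345
calculate divide 10 2      # prints "Unknown operation"
```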

Supported Providers

| Provider | Environment Variable | Models |
|---|---|---|
| openai | OPENAI_API_KEY | GPT-3.5, GPT-4, GPT-4o |
| anthropic | ANTHROPIC_API_KEY | Claude 3, Claude 4, Claude Sonnet |
| gemini | GOOGLE_API_KEY | Gemini 1.5, Gemini 2 |
| openrouter | OPENROUTER_API_KEY | 100+ models via OpenRouter |
| local | (none) | Ollama, vLLM, LM Studio (OpenAI-compatible) |

Key Features

  • Multi-Provider Support - Switch between OpenAI, Anthropic, Gemini, and local models
  • Conversation History - Automatic message inheritance between dependent steps
  • Extended Thinking - Enable reasoning mode for complex tasks (Anthropic, OpenAI, Gemini)
  • Secret Masking - Automatic masking of sensitive values before sending to LLM
  • Tool Calling - Build AI agents that execute workflows as function calls
  • Variable Substitution - Use ${VAR} in messages for dynamic content
  • DAG-Level Defaults - Share LLM configuration across multiple steps
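For example, variable substitution lets a prompt pick up runtime values. A sketch assuming a DAG-level `params` declaration (the `TOPIC` parameter and its default are hypothetical):

```yaml
params: TOPIC=databases

steps:
  - type: chat
    llm:
      provider: openai
      model: gpt-4o
    messages:
      - role: user
        content: "Give a one-paragraph overview of ${TOPIC}."
```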

Configuration

Basic LLM Config

| Field | Type | Default | Description |
|---|---|---|---|
| provider | string | required | LLM provider (openai, anthropic, gemini, etc.) |
| model | string | required | Model identifier |
| temperature | float | provider default | Response randomness (0.0-2.0) |
| maxTokens | int | provider default | Maximum tokens to generate |
| stream | bool | true | Stream response tokens |
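Putting the optional fields together, a step might pin down generation behavior like this (the specific values are illustrative):

```yaml
steps:
  - type: chat
    llm:
      provider: openai
      model: gpt-4o
      temperature: 0.2   # low randomness for more repeatable answers
      maxTokens: 512     # cap the length of the generated reply
      stream: false      # wait for the full response instead of streaming tokens
    messages:
      - role: user
        content: "Summarize the release notes in one sentence."
```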

Tool Calling Config

| Field | Type | Default | Description |
|---|---|---|---|
| tools | string[] | [] | DAG names to expose as callable tools |
| maxToolIterations | int | 10 | Maximum tool calling loops |
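A sketch combining both fields; placing `maxToolIterations` alongside `tools` under `llm` is an assumption based on the tables above:

```yaml
steps:
  - type: chat
    llm:
      provider: anthropic
      model: claude-sonnet-4-20250514
      tools:
        - calculator
      maxToolIterations: 5   # stop after at most 5 tool-call round trips
    messages:
      - role: user
        content: "What is 15 times 23?"
```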

Examples

Multi-turn Conversation

```yaml
type: graph

steps:
  - name: setup
    type: chat
    llm:
      provider: openai
      model: gpt-4o
      system: "You are a math tutor."
    messages:
      - role: user
        content: "What is 2+2?"

  - name: followup
    depends: [setup]
    type: chat
    llm:
      provider: openai
      model: gpt-4o
    messages:
      - role: user
        content: "Now multiply that by 3."
```

The `followup` step automatically inherits conversation history from `setup`.

DAG-Level Configuration

```yaml
# Share LLM config across all chat steps
llm:
  provider: anthropic
  model: claude-sonnet-4-20250514
  temperature: 0.7

steps:
  - type: chat
    messages:
      - role: user
        content: "Explain quantum computing"

  - type: chat
    messages:
      - role: user
        content: "Now explain it to a 5-year-old"
```

Both steps inherit the DAG-level LLM configuration.


Released under the MIT License.