Workflow System¶
Kubiya's workflow system allows you to create complex processes by chaining both tools and teammates (AI agents) together. These workflows can then be exposed as LLM tools themselves, creating powerful automation capabilities.
Workflow Fundamentals¶
A workflow in Kubiya is a directed structure that orchestrates:
- Docker-based tools
- Teammates (self-contained AI agents)
- Other workflows
- Conditional logic and branching
Each workflow has:

- A unique identifier
- A description
- A set of tools and/or teammates
- Optional configuration and parameters
- Exposable interfaces for LLM consumption
Simple Linear Workflows¶
At its simplest, a workflow can be a linear sequence of tools:
```python
from kubiya_sdk import Workflow, tool

@tool(image="python:3.12-slim", requirements=["requests"])
def fetch_data(url: str) -> dict:
    """Fetch data from an API"""
    import requests
    response = requests.get(url)
    return response.json()

@tool(image="python:3.12-slim", requirements=["pandas"])
def analyze_data(data: dict) -> dict:
    """Analyze the fetched data"""
    import pandas as pd
    # Analysis code
    return {"result": "Analyzed data"}

# Create a simple workflow
simple_workflow = Workflow(
    id="data-analysis",
    description="Fetch and analyze data",
    tools=[fetch_data, analyze_data]
)

# Execute the workflow
result = simple_workflow.execute({"url": "https://api.example.com/data"})
```
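Conceptually, a linear workflow threads each tool's output into the next tool's input. The following pure-Python sketch illustrates that chaining semantics with stand-in functions; it does not use the Kubiya runtime, and the `fetch`/`analyze` bodies are made up for illustration:

```python
# Minimal sketch of linear workflow semantics: each step's output
# becomes the next step's input. The step functions are stand-ins,
# not the Docker-backed Kubiya tools.
def run_linear(steps, initial_input):
    data = initial_input
    for step in steps:
        data = step(data)  # feed the previous output forward
    return data

def fetch(inputs):
    # stand-in for fetch_data: pretend we called the URL
    return {"records": [1, 2, 3], "source": inputs["url"]}

def analyze(data):
    # stand-in for analyze_data
    return {"result": f"analyzed {len(data['records'])} records"}

result = run_linear([fetch, analyze], {"url": "https://api.example.com/data"})
print(result)  # {'result': 'analyzed 3 records'}
```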
DAG Workflows with Teammates¶
For complex processes, Kubiya excels at Directed Acyclic Graph (DAG) workflows that can include both tools and teammates:
```python
from kubiya_sdk.workflows.workflow import Workflow, WorkflowNode
from kubiya_sdk.workflows.node_types import NodeType
from kubiya_sdk import Teammate, tool

@tool(image="python:3.12-slim", requirements=["requests"])
def fetch_incident_data(incident_id: str) -> dict:
    """Fetch incident data from the service desk API"""
    # Implementation to fetch data
    return {"id": incident_id, "description": "Server outage", "severity": "high"}

# Create a DevOps engineer teammate
devops_engineer = Teammate(
    id="devops-engineer",
    description="AI agent that analyzes and resolves infrastructure issues",
    tools=[...],  # Tools available to this teammate
    llm_config={
        "provider": "openai",
        "model": "gpt-4-turbo"
    }
)

# Create a support analyst teammate
support_analyst = Teammate(
    id="support-analyst",
    description="AI agent that manages customer communications",
    tools=[...],  # Tools available to this teammate
    llm_config={
        "provider": "anthropic",
        "model": "claude-3-sonnet"
    }
)

@tool(image="python:3.12-slim", requirements=["requests"])
def update_incident(incident_id: str, resolution: str, status: str) -> dict:
    """Update the incident in the service desk system"""
    # Implementation to update incident
    return {"id": incident_id, "status": status, "updated_at": "2023-01-01T12:00:00Z"}

# Create a complex workflow with both tools and teammates
incident_workflow = Workflow(
    id="incident-management",
    name="Incident Management Process",
    description="End-to-end incident management workflow",
    nodes=[
        WorkflowNode(
            id="fetch_data",
            name="Fetch Incident Data",
            node_type=NodeType.TOOL,
            tool_name="fetch_incident_data"
        ),
        WorkflowNode(
            id="analyze_incident",
            name="Analyze Incident",
            node_type=NodeType.TEAMMATE,  # Use an AI agent (teammate)
            teammate_id="devops-engineer",
            depends_on=["fetch_data"]
        ),
        WorkflowNode(
            id="prepare_communication",
            name="Prepare Customer Communication",
            node_type=NodeType.TEAMMATE,  # Another AI agent
            teammate_id="support-analyst",
            depends_on=["analyze_incident"]
        ),
        WorkflowNode(
            id="update_system",
            name="Update Incident System",
            node_type=NodeType.TOOL,
            tool_name="update_incident",
            depends_on=["analyze_incident", "prepare_communication"]
        )
    ]
)
```
This workflow:

1. Fetches incident data using a tool
2. Passes the data to the DevOps Engineer teammate (AI agent) for analysis
3. Passes the analysis results to the Support Analyst teammate to prepare customer communications
4. Updates the incident in the system once both AI agents have completed their tasks
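The `depends_on` edges fully determine a valid execution order. A minimal sketch of how a scheduler could derive that order with Kahn's topological-sort algorithm follows; the node ids mirror the workflow above, but the scheduling logic is illustrative, not Kubiya's actual engine:

```python
# Derive a valid execution order for a DAG of nodes with depends_on
# edges (Kahn's algorithm). Illustrative only; not Kubiya internals.
from collections import deque

def execution_order(nodes):
    deps = {n["id"]: set(n.get("depends_on", [])) for n in nodes}
    ready = deque(nid for nid, d in deps.items() if not d)
    order = []
    while ready:
        nid = ready.popleft()
        order.append(nid)
        for other, d in deps.items():
            if nid in d:
                d.discard(nid)  # this dependency is now satisfied
                if not d and other not in order and other not in ready:
                    ready.append(other)
    if len(order) != len(deps):
        raise ValueError("cycle detected: not a valid DAG")
    return order

nodes = [
    {"id": "fetch_data"},
    {"id": "analyze_incident", "depends_on": ["fetch_data"]},
    {"id": "prepare_communication", "depends_on": ["analyze_incident"]},
    {"id": "update_system", "depends_on": ["analyze_incident", "prepare_communication"]},
]
print(execution_order(nodes))
# ['fetch_data', 'analyze_incident', 'prepare_communication', 'update_system']
```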
Exposing Workflows as LLM Tools¶
A key feature of Kubiya workflows is that they can be exposed as LLM tools themselves:
```python
from kubiya_sdk import expose_as_tool, Workflow

# Create a workflow (as shown above)
incident_workflow = Workflow(
    id="incident-management",
    name="Incident Management Process",
    description="End-to-end incident management workflow",
    nodes=[...]
)

# Expose the entire workflow as a tool for LLMs to use
expose_as_tool(
    workflow=incident_workflow,
    name="manage_incident",
    description="Manages the entire incident resolution process from start to finish",
    input_schema={
        "incident_id": {"type": "string", "description": "ID of the incident to manage"}
    },
    output_schema={
        "status": {"type": "string", "description": "Final status of the incident"},
        "resolution_time": {"type": "string", "description": "Time taken to resolve the incident"}
    }
)
```
This allows LLMs to invoke complex workflows as a single tool, abstracting away the complexity of the underlying process.
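To make the abstraction concrete, here is a hypothetical sketch of the tool descriptor an LLM could receive for an exposed workflow. The exact wire format Kubiya emits is not documented here; this assumes an OpenAI-function-calling-style shape, and `workflow_to_tool_spec` is an illustrative helper, not an SDK function:

```python
import json

# Hypothetical: compile an exposed workflow's name, description, and
# input schema into an LLM tool descriptor. The shape assumed here is
# OpenAI-function-calling style; Kubiya's actual format may differ.
def workflow_to_tool_spec(name, description, input_schema):
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": input_schema,
            "required": list(input_schema),
        },
    }

spec = workflow_to_tool_spec(
    "manage_incident",
    "Manages the entire incident resolution process from start to finish",
    {"incident_id": {"type": "string", "description": "ID of the incident to manage"}},
)
print(json.dumps(spec, indent=2))
```

From the model's perspective, the whole multi-node workflow is now a single callable with one string argument.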
Advanced Workflow Features¶
Workflow Parameters¶
Define explicit parameters for your workflows:
```python
from kubiya_sdk.workflows.models import WorkflowParameter, WorkflowParameterSet, ParameterType

# Define workflow parameters
params = WorkflowParameterSet(parameters=[
    WorkflowParameter(
        name="ESCALATION_THRESHOLD",
        type=ParameterType.INTEGER,
        description="Minutes before escalating an incident",
        default=30
    ),
    WorkflowParameter(
        name="AUTO_RESOLVE",
        type=ParameterType.BOOLEAN,
        description="Whether to attempt automatic resolution",
        default=True
    )
])

# Create workflow with parameters
workflow = Workflow(
    id="parameterized_workflow",
    description="Workflow with parameters",
    parameters=params,
    # nodes and other configuration...
)
```
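Declared parameters give the runtime a place to merge defaults with caller-supplied values. A small sketch of that resolution step, under the assumption that unknown names should be rejected (Kubiya's actual merge behavior may differ):

```python
# Sketch: merge declared parameter defaults with runtime overrides.
# Illustrative of the concept, not Kubiya's internal implementation.
def resolve_parameters(declared, overrides):
    resolved = {p["name"]: p.get("default") for p in declared}
    for name, value in overrides.items():
        if name not in resolved:
            raise KeyError(f"unknown parameter: {name}")
        resolved[name] = value
    return resolved

declared = [
    {"name": "ESCALATION_THRESHOLD", "default": 30},
    {"name": "AUTO_RESOLVE", "default": True},
]
print(resolve_parameters(declared, {"ESCALATION_THRESHOLD": 15}))
# {'ESCALATION_THRESHOLD': 15, 'AUTO_RESOLVE': True}
```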
Conditional Execution¶
Control flow based on dynamic conditions:
```python
from kubiya_sdk.workflows.workflow import WorkflowNode
from kubiya_sdk.workflows.node_types import NodeType
from kubiya_sdk.workflows.models import Precondition

# Node that only executes under certain conditions
escalation_node = WorkflowNode(
    id="escalate_incident",
    name="Escalate Incident",
    node_type=NodeType.TEAMMATE,
    teammate_id="senior-engineer",
    precondition=Precondition(
        condition="${RESOLUTION_TIME} > ${ESCALATION_THRESHOLD}",
        description="Only escalate if resolution time exceeds threshold"
    )
)
```
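The `${VAR}` syntax implies substitution of workflow values into the condition before evaluation. A rough sketch of that idea for simple numeric comparisons follows; the real engine's condition grammar is not specified here, so treat this purely as an illustration:

```python
import re

# Sketch: substitute ${VAR} placeholders from a context dict, then
# evaluate the resulting comparison. Handles only simple numeric
# expressions; the real precondition grammar may differ.
def evaluate_condition(condition, context):
    def substitute(match):
        return repr(context[match.group(1)])
    expr = re.sub(r"\$\{(\w+)\}", substitute, condition)
    # eval on a substituted, builtins-free expression; fine for a sketch
    return bool(eval(expr, {"__builtins__": {}}, {}))

ctx = {"RESOLUTION_TIME": 45, "ESCALATION_THRESHOLD": 30}
print(evaluate_condition("${RESOLUTION_TIME} > ${ESCALATION_THRESHOLD}", ctx))  # True
```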
Retry Policies¶
Make workflows resilient with retry capabilities:
```python
from kubiya_sdk.workflows.workflow import WorkflowNode
from kubiya_sdk.workflows.node_types import NodeType
from kubiya_sdk.workflows.models import RetryPolicy

# Node with retry policy for unstable operations
api_node = WorkflowNode(
    id="external_api_call",
    name="Call External API",
    node_type=NodeType.TOOL,
    tool_name="api_tool",
    retry_policy=RetryPolicy(
        max_attempts=3,
        backoff_factor=2.0,
        initial_delay_seconds=5
    )
)
```
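A `backoff_factor` of 2.0 implies exponential backoff: each retry waits twice as long as the one before. The delay schedule this policy presumably produces can be computed directly; the formula is an assumption about how `RetryPolicy` interprets its fields, so check the SDK for the authoritative behavior:

```python
# Sketch of the delay schedule implied by the retry policy above:
# delay before retry n = initial_delay_seconds * backoff_factor ** (n - 1).
# Three attempts mean two inter-attempt delays.
def retry_delays(max_attempts, backoff_factor, initial_delay_seconds):
    return [
        initial_delay_seconds * backoff_factor ** attempt
        for attempt in range(max_attempts - 1)  # no delay after the final try
    ]

print(retry_delays(max_attempts=3, backoff_factor=2.0, initial_delay_seconds=5))
# [5.0, 10.0]
```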
Dynamic Workflow Creation¶
Workflows can be created dynamically based on runtime conditions:
```python
def create_workflow_for_incident(incident_type, severity):
    """Create a customized workflow based on incident type and severity"""
    # Base tools for all workflows
    base_tools = [fetch_incident_data, update_incident]

    # Add specialized tools based on incident type
    if incident_type == "security":
        base_tools.append(security_analysis_tool)
        teammate = security_expert_teammate
    elif incident_type == "performance":
        base_tools.append(performance_analysis_tool)
        teammate = performance_engineer_teammate
    else:
        base_tools.append(general_analysis_tool)
        teammate = general_support_teammate

    # Adjust workflow configuration based on severity
    if severity == "high":
        sla_minutes = 30
        auto_escalate = True
    else:
        sla_minutes = 120
        auto_escalate = False

    # Create and return the customized workflow
    return Workflow(
        id=f"{incident_type}-incident-workflow",
        name=f"{incident_type.capitalize()} Incident Workflow",
        description=f"Workflow for handling {severity} severity {incident_type} incidents",
        tools=base_tools,
        teammates=[teammate],
        config={
            "sla_minutes": sla_minutes,
            "auto_escalate": auto_escalate
        }
    )
```
Best Practices for Workflows¶
- Design for Composability: Create workflows that can be composed of smaller, reusable workflows
- Leverage Teammates: Use AI agents (teammates) for complex reasoning and decision-making tasks
- Handle Errors Gracefully: Implement proper error handling and retry policies
- Balance Autonomy and Control: Give teammates enough autonomy while maintaining control over critical processes
- Document Thoroughly: Add clear descriptions to workflows, nodes, and parameters
- Expose as Tools: Consider exposing complex workflows as simple LLM tools
- Monitor Execution: Implement logging and monitoring for production workflows
Next Steps¶
Now that you understand workflows, you can:
- Learn about Tool Building to create tools for your workflows
- Explore Teammates to create AI agents for your workflows
- See how to run workflows on different infrastructure