LLM Integration¶
Kubiya enables you to integrate Large Language Models (LLMs) into your tools and workflows, and provides a standard interface for LLM applications to consume tools through the Model Context Protocol (MCP).
LLM Integration Approaches¶
There are two primary ways to integrate LLMs with Kubiya:
- Using LLMs in Tools: Build tools that leverage LLMs for specific tasks
- Exposing Tools to LLMs: Make tools available to external LLM applications via MCP
Using LLMs in Tools¶
You can create tools that use LLMs to perform tasks:
from kubiya_sdk import tool

@tool(
    image="python:3.12-slim",
    requirements=["openai"]
)
def summarize_text(text: str, max_words: int = 100) -> str:
    """Summarize text using an LLM"""
    # Import inside the tool so the dependency resolves in the tool's container
    import openai

    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Summarize the following text in {max_words} words or less:"},
            {"role": "user", "content": text}
        ],
        temperature=0.3,
        max_tokens=max_words * 2  # rough buffer: English averages ~1.3 tokens per word
    )
    return response.choices[0].message.content
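If decorated tools remain directly callable as plain functions (an assumption here; in production the tool runs in its python:3.12-slim container), you can smoke-test it locally with OPENAI_API_KEY set and the openai package installed:

summary = summarize_text(
    "Kubiya runs each tool in its own container image with the "
    "declared requirements installed, so tools stay reproducible.",
    max_words=50
)
print(summary)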
Using Different LLM Providers¶
Kubiya is provider-agnostic and works with various LLM providers:
from kubiya_sdk import tool

@tool(
    image="python:3.12-slim",
    requirements=["anthropic"]
)
def analyze_sentiment(text: str) -> dict:
    """Analyze the sentiment of text using Claude"""
    import json

    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=100,
        messages=[
            {
                "role": "user",
                "content": f"""Analyze the sentiment of the following text.
Return only a JSON object with:
- sentiment: 'positive', 'negative', or 'neutral'
- score: a number between -1 (negative) and 1 (positive)
- explanation: brief explanation of the analysis

Text: {text}"""
            }
        ],
        temperature=0
    )
    # Parse the JSON response, surfacing a clear error if the model
    # returns anything other than valid JSON
    try:
        return json.loads(response.content[0].text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model did not return valid JSON: {exc}") from exc
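An illustrative call and the shape of result it should produce; the score and explanation below are made-up values, and real output will vary by model:

result = analyze_sentiment("The release went smoothly and users love the new UI.")
# Illustrative output:
# {"sentiment": "positive", "score": 0.8,
#  "explanation": "Enthusiastic language about a successful release."}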
Teammates with LLM Capabilities¶
Teammates are AI agents that can use both tools and LLMs:
from kubiya_sdk import Teammate, tool

@tool(image="python:3.12-slim", requirements=["pandas"])
def analyze_data(data: list) -> dict:
    """Analyze data and generate statistics"""
    import pandas as pd

    df = pd.DataFrame(data)
    return {
        "mean": df.mean().to_dict(),
        "median": df.median().to_dict(),
        "std": df.std().to_dict()
    }
# Create a teammate with LLM capabilities
data_analyst = Teammate(
    id="data-analyst",
    description="Analyzes data and provides insights",
    tools=[analyze_data],
    llm_config={
        "provider": "openai",
        "model": "gpt-4-turbo",
        "temperature": 0.2,
        "system_prompt": """You are a data analyst expert. Your job is to analyze data and provide insights.
You have access to tools that can help you analyze data. Use them effectively to answer questions."""
    }
)

# The teammate can now use both the tools and LLM capabilities
result = data_analyst.run(
    "Analyze this data and explain what patterns you see",
    context={"data": [{"x": 1, "y": 2}, {"x": 3, "y": 4}, {"x": 5, "y": 6}]}
)
Model Context Protocol (MCP)¶
MCP is a standard protocol that allows LLM applications to discover and invoke Kubiya tools. This enables you to build tools once and use them with any MCP-compatible LLM application.
Exposing Tools via MCP¶
You can expose your tools through MCP:
from kubiya_sdk import tool
from kubiya_sdk.mcp import expose_via_mcp

@tool(image="python:3.12-slim", requirements=["requests"])
def get_weather(city: str) -> dict:
    """Get current weather for a city"""
    import os

    import requests

    # Read the key from the environment (variable name is illustrative)
    # rather than hardcoding it in the tool
    api_key = os.environ["WEATHER_API_KEY"]
    url = f"https://api.weatherapi.com/v1/current.json?key={api_key}&q={city}"
    response = requests.get(url)
    response.raise_for_status()  # fail loudly on HTTP errors
    data = response.json()
    return {
        "temperature": data["current"]["temp_c"],
        "condition": data["current"]["condition"]["text"],
        "humidity": data["current"]["humidity"],
        "wind_speed": data["current"]["wind_kph"]
    }

# Expose the tool via MCP
expose_via_mcp(get_weather, endpoint="/api/tools/weather")
MCP in Teammates¶
Teammates automatically expose their tools via MCP:
from kubiya_sdk import Teammate, tool

@tool(image="python:3.12-slim")
def search_knowledge_base(query: str) -> list:
    """Search the knowledge base for information"""
    # Search implementation
    return [{"title": "Article 1", "content": "..."}, {"title": "Article 2", "content": "..."}]

@tool(image="python:3.12-slim")
def create_ticket(title: str, description: str, priority: str) -> str:
    """Create a support ticket"""
    # Ticket creation implementation
    return "TICKET-123"

# Create a support agent teammate
support_agent = Teammate(
    id="support-agent",
    name="Support Assistant",
    description="Helps customers with support requests",
    tools=[search_knowledge_base, create_ticket]
)

# This teammate's tools are automatically available via MCP
MCP API Structure¶
MCP provides a standard API for tool discovery and invocation:
GET /api/tools # List all available tools
GET /api/tools/{tool_id} # Get tool details and schema
POST /api/tools/{tool_id} # Invoke a tool with parameters
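For illustration, here is how a plain HTTP client could walk these endpoints. The base URL is an assumption (it depends on where your Kubiya instance runs), and so is the exact response shape:

import requests

BASE = "https://your-kubiya-instance.example.com"  # assumed base URL

# List all available tools
tools = requests.get(f"{BASE}/api/tools").json()

# Fetch the schema for one tool (tool id assumed to match the example above)
schema = requests.get(f"{BASE}/api/tools/get_weather").json()

# Invoke the tool with parameters matching its schema
result = requests.post(f"{BASE}/api/tools/get_weather", json={"city": "London"}).json()
print(result)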
LLM-Enhanced Workflows¶
You can build workflows that combine traditional tools with LLM capabilities:
from kubiya_sdk import Workflow, tool, WorkflowNode
from kubiya_sdk.workflows.node_types import NodeType

@tool(image="python:3.12-slim", requirements=["requests"])
def fetch_data(url: str) -> dict:
    """Fetch data from an API"""
    import requests

    response = requests.get(url)
    return response.json()

@tool(image="python:3.12-slim", requirements=["openai"])
def generate_insights(data: dict) -> str:
    """Generate insights from data using GPT-4"""
    import openai

    client = openai.OpenAI()
    prompt = f"""Analyze the following data and provide 3-5 key insights:
{data}
Format your response as a bulleted list."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a data analyst expert."},
            {"role": "user", "content": prompt}
        ],
        temperature=0.5
    )
    return response.choices[0].message.content
# Create a workflow that combines data fetching with LLM analysis
insights_workflow = Workflow(
    id="data-insights",
    name="Data Insights Generator",
    description="Fetch data and generate insights using LLM",
    nodes=[
        WorkflowNode(
            id="fetch",
            name="Fetch Data",
            node_type=NodeType.TOOL,
            tool_name="fetch_data"
        ),
        WorkflowNode(
            id="analyze",
            name="Generate Insights",
            node_type=NodeType.TOOL,
            tool_name="generate_insights",
            depends_on=["fetch"]
        )
    ]
)
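Running the workflow would then supply the input the fetch node needs. The run method and input convention shown here are assumptions based on the node definitions above, so check the Workflow System docs for the exact API:

# Hypothetical invocation: "fetch" receives the url, and its JSON output
# flows into "analyze" via the depends_on edge declared above
run_result = insights_workflow.run(inputs={"url": "https://api.example.com/metrics"})
print(run_result)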
Best Practices for LLM Integration¶
- Choose the Right Model: Select appropriate models based on task complexity and cost
- Manage Tokens Efficiently: Optimize prompts to minimize token usage
- Implement Caching: Cache LLM responses for identical inputs
- Use Structured Output: Request specific formats (like JSON) for easier parsing
- Handle Errors Gracefully: Account for API errors and quota limits
- Implement Retries: Add retry logic for transient failures such as rate limits and timeouts (see the sketch after this list)
- Secure API Keys: Store LLM API credentials securely
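As a sketch of the retry practice, this helper wraps an OpenAI chat call in exponential backoff. The retriable exception types come from the openai package; the retry count and backoff schedule are illustrative choices, not Kubiya requirements:

import time

import openai

def chat_with_retries(client: openai.OpenAI, messages: list, retries: int = 3) -> str:
    """Call the chat API, backing off exponentially on transient failures."""
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(model="gpt-4", messages=messages)
            return response.choices[0].message.content
        except (openai.RateLimitError, openai.APIConnectionError, openai.APITimeoutError):
            if attempt == retries - 1:
                raise  # out of retries; let the caller handle the failure
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...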
Next Steps¶
Now that you understand LLM integration in Kubiya, you can:
- Learn about Tool Building to create tools that use LLMs
- Explore the Workflow System to build workflows that combine LLMs with other tools
- Understand Teammates to create AI agents that use both tools and LLMs