Basic Tool Example

This example demonstrates how to create a simple Docker-based tool using the Kubiya SDK.

Creating a Basic Tool

Here's a simple example of a tool that processes text:

Python
from kubiya_sdk import kubiya

@kubiya.tool(
    name="text-processor",
    description="Process text with various operations",
    image="python:3.12-slim"
)
def process_text(text: str, operation: str = "uppercase") -> str:
    """
    Process text with various operations

    Args:
        text: The input text to process
        operation: The operation to perform (uppercase, lowercase, capitalize)

    Returns:
        The processed text
    """
    if operation == "uppercase":
        return text.upper()
    elif operation == "lowercase":
        return text.lower()
    elif operation == "capitalize":
        return text.capitalize()
    else:
        return text

# Use the tool
result = process_text("Hello, World!", "uppercase")
print(result)  # Output: "HELLO, WORLD!"

This example demonstrates:

  - Creating a tool with the @kubiya.tool decorator
  - Specifying the Docker image to use (python:3.12-slim)
  - Defining parameters with type hints and default values
  - Adding documentation with docstrings
  - Using the tool with direct function calls

How It Works

When you call process_text, the Kubiya SDK:

  1. Pulls the specified Docker image if it's not already available locally
  2. Creates a container from this image
  3. Copies your function code into the container
  4. Executes the function inside the container with the provided arguments
  5. Returns the result to your application
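The decorator side of this flow can be sketched in plain Python. This is a simplified, hypothetical illustration of how a tool decorator might capture metadata and wrap the function; it is not the actual Kubiya SDK internals, which perform the Docker steps listed above instead of calling the function locally.

```python
import functools

def tool(name, description, image, requirements=None):
    """Hypothetical sketch: record tool metadata and wrap the function.
    The real SDK would dispatch execution to a container instead."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Real SDK: pull `image`, start a container, copy the code in,
            # run it there. Sketch: just call the function locally.
            return func(*args, **kwargs)
        # Metadata the runtime would need to build the container
        wrapper.tool_metadata = {
            "name": name,
            "description": description,
            "image": image,
            "requirements": requirements or [],
        }
        return wrapper
    return decorator

@tool(name="echo", description="Echo text back", image="python:3.12-slim")
def echo(text: str) -> str:
    return text

print(echo("hi"))                   # "hi"
print(echo.tool_metadata["image"])  # "python:3.12-slim"
```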

Adding More Functionality

Let's enhance the tool to handle more text operations:

Python
from kubiya_sdk import kubiya

@kubiya.tool(
    name="advanced-text-processor",
    description="Process text with various operations",
    image="python:3.12-slim"
)
def process_text(
    text: str, 
    operation: str = "uppercase",
    prefix: str = "",
    suffix: str = ""
) -> dict:
    """
    Process text with various operations

    Args:
        text: The input text to process
        operation: The operation to perform (uppercase, lowercase, capitalize, reverse, count)
        prefix: Optional prefix to add to the result
        suffix: Optional suffix to add to the result

    Returns:
        A dictionary with the processed text and metadata
    """
    # Process the text based on the operation
    if operation == "uppercase":
        processed = text.upper()
    elif operation == "lowercase":
        processed = text.lower()
    elif operation == "capitalize":
        processed = text.capitalize()
    elif operation == "reverse":
        processed = text[::-1]
    elif operation == "count":
        return {
            "original": text,
            "character_count": len(text),
            "word_count": len(text.split()),
            "operation": operation
        }
    else:
        processed = text

    # Add prefix and suffix
    result = f"{prefix}{processed}{suffix}"

    return {
        "original": text,
        "processed": result,
        "operation": operation,
        "length": len(result)
    }

# Use the tool
result = process_text(
    text="Hello, World!",
    operation="uppercase",
    prefix="** ",
    suffix=" **"
)
print(result)
# Output: {'original': 'Hello, World!', 'processed': '** HELLO, WORLD! **', 'operation': 'uppercase', 'length': 19}

This enhanced version:

  - Accepts additional parameters for customization
  - Returns a structured result with metadata
  - Supports more operations, including text analysis
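Note that the count operation returns early with a different result shape than the other operations. A standalone copy of that branch's logic (without the decorator, so it runs anywhere) behaves like this:

```python
def count_text(text: str) -> dict:
    # Mirrors the "count" branch of the tool above
    return {
        "original": text,
        "character_count": len(text),
        "word_count": len(text.split()),
        "operation": "count",
    }

stats = count_text("Hello, World!")
print(stats["character_count"])  # 13
print(stats["word_count"])       # 2
```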

Using Additional Python Packages

You can specify Python packages to install in the container:

Python
from kubiya_sdk import kubiya

@kubiya.tool(
    name="text-analyzer",
    description="Analyze text using natural language processing",
    image="python:3.12-slim",
    requirements=["nltk", "textblob"]
)
def analyze_text(text: str) -> dict:
    """
    Analyze text using NLP techniques

    Args:
        text: The text to analyze

    Returns:
        Analysis results including sentiment and key phrases
    """
    import nltk
    from textblob import TextBlob

    # Download required NLTK data (noun phrase extraction also needs the brown corpus)
    nltk.download('punkt', quiet=True)
    nltk.download('brown', quiet=True)

    # Create a TextBlob object
    blob = TextBlob(text)

    # Get sentiment
    sentiment = blob.sentiment

    # Get noun phrases
    noun_phrases = list(blob.noun_phrases)

    # Get word frequencies
    word_freq = {}
    for word in blob.words:
        if len(word) > 3:  # Skip short words
            word = word.lower()
            word_freq[word] = word_freq.get(word, 0) + 1

    # Sort word frequencies
    sorted_words = sorted(word_freq.items(), key=lambda x: x[1], reverse=True)
    top_words = sorted_words[:5]

    return {
        "sentiment": {
            "polarity": sentiment.polarity,
            "subjectivity": sentiment.subjectivity
        },
        "noun_phrases": noun_phrases,
        "top_words": dict(top_words)
    }

# Use the tool
result = analyze_text("The Kubiya SDK is amazing! It makes building Docker-based tools incredibly easy and efficient.")
print(result)
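The word-frequency step at the end of analyze_text does not actually depend on TextBlob; the same logic works on any plain token list. A standalone version (hypothetical helper name, no NLP dependencies):

```python
def top_words(words, n=5, min_len=4):
    # Count words of at least min_len characters, case-insensitively
    freq = {}
    for word in words:
        if len(word) >= min_len:
            w = word.lower()
            freq[w] = freq.get(w, 0) + 1
    # Sort by count, descending, and keep the top n
    return dict(sorted(freq.items(), key=lambda x: x[1], reverse=True)[:n])

tokens = "the tool runs the tool inside a container".split()
print(top_words(tokens))
# {'tool': 2, 'runs': 1, 'inside': 1, 'container': 1}
```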

Next Steps