Infrastructure Integration
Kubiya tools and workflows can run on a range of infrastructure, from local Docker environments to Kubernetes clusters. This flexibility lets you develop locally and deploy to production without code changes.
Supported Infrastructure
Kubiya supports the following infrastructure options:
- Local Docker: Run tools on your local machine using Docker (see the sketch after this list)
- Kubernetes: Deploy tools to Kubernetes clusters
- Cloud Providers: AWS, GCP, and Azure integration
- Serverless: Functions-as-a-Service platforms
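The local Docker option follows the same calling pattern as the remote options covered below. A minimal sketch, assuming execute_tool (imported later on this page) takes the tool name and an arguments dictionary the same way execute_tool_in_kubernetes does:
Python
from kubiya_sdk import tool
from kubiya_sdk.execution import execute_tool

@tool(image="python:3.12-slim")
def greet(name: str) -> str:
    """Return a greeting"""
    return f"Hello, {name}!"

# Run the tool in a local Docker container.
# Assumption: execute_tool mirrors execute_tool_in_kubernetes and accepts
# the tool name plus an arguments dict; no cluster settings are needed locally.
result = execute_tool("greet", {"name": "User"})
print(result)  # Hello, User!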
Running Tools on Kubernetes
To run tools on Kubernetes, use the execution helpers from kubiya_sdk.execution:
Python
from kubiya_sdk import tool
from kubiya_sdk.execution import execute_tool_in_kubernetes

@tool(image="python:3.12-slim")
def hello_world(name: str) -> str:
    """Say hello to someone"""
    return f"Hello, {name}!"

# Execute on Kubernetes
result = execute_tool_in_kubernetes(
    "hello_world",
    {"name": "User"},
    namespace="kubiya-tools",
    service_account="kubiya-runner"
)
print(result)  # Hello, User!
You can specify additional Kubernetes options:
Python
from kubiya_sdk import tool
from kubiya_sdk.execution import execute_tool_in_kubernetes

# Execute on Kubernetes with advanced options.
# "data_processor" is assumed to be a registered tool and large_dataset
# to be defined earlier in your code.
result = execute_tool_in_kubernetes(
    "data_processor",
    {"data": large_dataset},
    namespace="kubiya-processing",
    service_account="data-processor",
    resources={
        "requests": {
            "memory": "512Mi",
            "cpu": "500m"
        },
        "limits": {
            "memory": "1Gi",
            "cpu": "1000m"
        }
    },
    node_selector={
        "kubernetes.io/role": "worker"
    },
    tolerations=[
        {
            "key": "dedicated",
            "operator": "Equal",
            "value": "processing",
            "effect": "NoSchedule"
        }
    ]
)
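The resources, node_selector, and tolerations values follow the usual Kubernetes pod spec conventions. When several calls share the same placement settings, you can collect them once and reuse them; a small sketch using the parameters shown above (the "report_builder" tool name is hypothetical):
Python
from kubiya_sdk.execution import execute_tool_in_kubernetes

# Shared Kubernetes settings, reused across calls.
k8s_defaults = {
    "namespace": "kubiya-processing",
    "service_account": "data-processor",
    "node_selector": {"kubernetes.io/role": "worker"},
}

# Both calls reuse the same placement settings via keyword expansion.
stats = execute_tool_in_kubernetes("data_processor", {"data": [1, 2, 3]}, **k8s_defaults)
report = execute_tool_in_kubernetes("report_builder", {"stats": stats}, **k8s_defaults)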
Serverless Execution
Kubiya can run tools on serverless platforms like AWS Lambda:
Python
from kubiya_sdk import tool
from kubiya_sdk.execution import execute_tool_serverless

@tool(image="python:3.12-slim")
def process_data(data: dict) -> dict:
    """Process input data"""
    return {"processed": True, "input": data}

# Execute as a serverless function
result = execute_tool_serverless(
    "process_data",
    {"data": {"key": "value"}},
    provider="aws",
    memory=1024,
    timeout=30
)
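As in the Kubernetes example above, the call returns the tool's own return value; here result would be {"processed": True, "input": {"key": "value"}}.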
Multi-Environment Execution
You can write code that adapts to different execution environments:
Python
from kubiya_sdk import tool
from kubiya_sdk.execution import execute_tool_in_kubernetes, execute_tool

@tool(image="python:3.12-slim")
def analyze_data(data: list) -> dict:
    """Analyze data and return statistics"""
    import statistics
    return {
        "mean": statistics.mean(data),
        "median": statistics.median(data),
        "stdev": statistics.stdev(data) if len(data) > 1 else 0
    }

def process_dataset(dataset, environment="local"):
    """Process a dataset in different environments"""
    if environment == "kubernetes":
        # Run on Kubernetes for large datasets
        return execute_tool_in_kubernetes(
            "analyze_data",
            {"data": dataset},
            namespace="data-processing"
        )
    else:
        # Run locally for small datasets
        return analyze_data(dataset)
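A caller can then pick the environment based on workload size while the tool itself stays unchanged. A minimal sketch; the 10,000-item threshold is purely illustrative:
Python
# Route to Kubernetes only when the dataset is large enough to justify it.
def analyze(dataset):
    environment = "kubernetes" if len(dataset) > 10_000 else "local"
    return process_dataset(dataset, environment=environment)

stats = analyze([1.5, 2.0, 2.5, 3.0])  # small list, runs locally
print(stats["mean"], stats["median"])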
Cloud Provider Integration
Kubiya integrates with major cloud providers:
Python
from kubiya_sdk import tool
from kubiya_sdk.execution import execute_tool_in_cloud

@tool(image="python:3.12-slim")
def batch_process(files: list) -> dict:
    """Process multiple files"""
    # Processing code
    return {"processed_count": len(files)}

# Execute on cloud provider
result = execute_tool_in_cloud(
    "batch_process",
    {"files": ["file1.csv", "file2.csv"]},
    provider="aws",
    service="ecs",
    task_definition="kubiya-processor",
    cluster="processing-cluster"
)
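Here provider and service select the target runtime (Amazon ECS in this example), while task_definition and cluster are ECS-specific settings; other providers from the list above would use their own service-specific options.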
Infrastructure as Code
Kubiya's infrastructure configuration can be defined as code and version-controlled:
Python
from kubiya_sdk import tool
from kubiya_sdk.execution import KubernetesExecutionConfig, DockerExecutionConfig

# Define execution configurations
kubernetes_config = KubernetesExecutionConfig(
    namespace="kubiya-tools",
    service_account="tool-runner",
    resources={
        "requests": {
            "memory": "256Mi",
            "cpu": "100m"
        },
        "limits": {
            "memory": "512Mi",
            "cpu": "200m"
        }
    }
)

docker_config = DockerExecutionConfig(
    network="kubiya-network",
    memory="512m",
    cpu_count=1
)

# Choose configuration based on environment
def get_execution_config(environment):
    if environment == "production":
        return kubernetes_config
    else:
        return docker_config
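Checking these configuration objects into version control lets each environment select its settings at deploy time. A minimal sketch, assuming a KUBIYA_ENV environment variable (a hypothetical name used only for illustration):
Python
import os

# Select the execution configuration from an environment variable.
# KUBIYA_ENV is a hypothetical variable name used for illustration.
environment = os.environ.get("KUBIYA_ENV", "development")
config = get_execution_config(environment)
print(f"Using {type(config).__name__} for the {environment} environment")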
Next Steps
- Learn about CI/CD integration
- Explore Kubernetes deployment
- Set up serverless execution
- Understand scaling and performance