Use Cases & Examples
Real-world examples of how teams use Knify to scale their agentic workloads.
Code Generation
Automated Feature Development
Scenario: Generate boilerplate code for new features
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Create a REST API endpoint for user authentication with JWT tokens. Include: 1) Route definition, 2) Controller logic, 3) Middleware, 4) Tests, 5) API documentation",
"model": "sonnet-4.5",
"workspace_path": "/workspaces/my-api"
}
}'
Output:
- routes/auth.js - Route definitions
- controllers/authController.js - Business logic
- middleware/authMiddleware.js - JWT validation
- tests/auth.test.js - Unit tests
- docs/auth-api.md - Documentation
Database Schema Migration
Scenario: Generate migration scripts from schema changes
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Compare schema.sql with migrations/ directory. Generate migration scripts for: 1) New tables, 2) Column changes, 3) Index additions, 4) Rollback scripts",
"workspace_path": "/workspaces/database"
}
}'
Code Analysis
Security Audit
Scenario: Automated security vulnerability scanning
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Scan the entire codebase for security issues: 1) SQL injection vulnerabilities, 2) XSS attack vectors, 3) Insecure dependencies, 4) Hardcoded secrets, 5) Unsafe cryptography. Generate a detailed report with severity ratings.",
"model": "opus-4.1",
"workspace_path": "/workspaces/production-app"
}
}'
Output: security-audit-report.md with findings and remediation steps
Performance Profiling
Scenario: Identify performance bottlenecks
# Step 1: Profile
JOB_ID=$(curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Analyze code for performance issues: 1) N+1 queries, 2) Memory leaks, 3) CPU-intensive operations, 4) Inefficient algorithms",
"workspace_path": "/workspaces/slow-app"
}
}' | jq -r '.job_id')
# Step 2: Generate optimizations
curl -X POST https://api.knify.io/jobs/$JOB_ID/continue \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Based on your analysis, implement optimizations for the top 5 performance issues"
}
}'
# Step 3: Benchmark
curl -X POST https://api.knify.io/jobs/$JOB_ID/continue \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Run benchmarks comparing original vs optimized code. Generate performance report."
}
}'
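The three-step chain above can be scripted instead of run by hand. A minimal sketch: `build_continue_request` is a hypothetical helper (not part of any Knify SDK) that only constructs the URL and payload for each follow-up step, so sending, auth, and waiting between steps stay in your own code:

```python
# Sketch of the profile -> optimize -> benchmark chain above.
# build_continue_request is a hypothetical helper, not part of the Knify SDK.
BASE_URL = "https://api.knify.io"

def build_continue_request(job_id: str, prompt: str) -> tuple:
    """Return the (url, payload) pair for a follow-up step on an existing job."""
    url = f"{BASE_URL}/jobs/{job_id}/continue"
    payload = {"spec": {"job_type": "cursor_task", "prompt": prompt}}
    return url, payload

# Follow-up prompts, in the order shown in steps 2 and 3 above.
follow_ups = [
    "Based on your analysis, implement optimizations for the top 5 performance issues",
    "Run benchmarks comparing original vs optimized code. Generate performance report.",
]

# "job_123" stands in for the job_id captured in step 1.
requests_to_send = [build_continue_request("job_123", p) for p in follow_ups]
```

In practice you would POST each pair with your HTTP client and wait for the job to finish before sending the next step, since each prompt builds on the previous result.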
Testing & QA
Automated Test Generation
Scenario: Generate comprehensive test suites
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Generate test suite for UserService class: 1) Unit tests for all methods, 2) Edge cases, 3) Error handling, 4) Mock external dependencies, 5) Aim for 90%+ coverage",
"workspace_path": "/workspaces/backend"
}
}'
E2E User Flow Testing
Scenario: Automated browser testing with Playwright
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "fh_e2e_tools",
"prompt": "Test complete user signup flow: 1) Navigate to signup, 2) Fill form, 3) Verify email, 4) Complete onboarding, 5) Check dashboard access. Take screenshots at each step.",
"workspace_path": "/workspaces/e2e-tests"
}
}'
Output:
- Test results with pass/fail status
- Screenshots at each step
- Performance metrics (load times)
- Console logs and network requests
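If you post-process these results, a small summarizer is handy. The result shape below is a hypothetical example for illustration, not the documented Knify output format; adapt the field names to what your job actually returns:

```python
# Summarize E2E step results. The step dict shape here is a hypothetical
# example, not the documented Knify output format.
def summarize_e2e_results(steps: list) -> dict:
    """Count passing steps and collect the names of failures."""
    failed = [s["name"] for s in steps if s["status"] != "pass"]
    return {
        "total": len(steps),
        "passed": len(steps) - len(failed),
        "failed_steps": failed,
    }

# Illustrative data matching the signup flow above.
sample = [
    {"name": "navigate_to_signup", "status": "pass"},
    {"name": "fill_form", "status": "pass"},
    {"name": "verify_email", "status": "fail"},
]
summary = summarize_e2e_results(sample)
```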
Documentation
API Documentation Generation
Scenario: Generate OpenAPI specs from code
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Scan routes/ directory and generate OpenAPI 3.0 specification. Include: 1) All endpoints, 2) Request/response schemas, 3) Authentication requirements, 4) Examples, 5) Error responses",
"workspace_path": "/workspaces/api-server"
}
}'
Code Documentation
Scenario: Generate comprehensive code documentation
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Document the entire codebase: 1) Add JSDoc comments to all functions, 2) Create README for each module, 3) Generate architecture diagram, 4) Document data flows, 5) Add usage examples",
"model": "sonnet-4.5",
"workspace_path": "/workspaces/undocumented-project"
}
}'
DevOps & Infrastructure
Infrastructure as Code
Scenario: Generate Terraform/CloudFormation configs
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Create Terraform configuration for: 1) VPC with public/private subnets, 2) ECS cluster for containers, 3) RDS PostgreSQL database, 4) ElastiCache Redis, 5) Application Load Balancer, 6) CloudWatch monitoring. Follow AWS best practices.",
"workspace_path": "/workspaces/infrastructure"
}
}'
CI/CD Pipeline Setup
Scenario: Generate GitHub Actions workflows
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Create GitHub Actions workflow: 1) Run tests on PR, 2) Build Docker image, 3) Push to ECR, 4) Deploy to staging on main merge, 5) Manual production deploy with approval, 6) Rollback capability",
"workspace_path": "/workspaces/ci-cd"
}
}'
Data Processing
ETL Pipeline
Scenario: Extract, transform, load data
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Create ETL pipeline to: 1) Extract data from CSV files in data/ directory, 2) Clean and validate data, 3) Transform to match target schema, 4) Load to PostgreSQL database, 5) Generate data quality report",
"workspace_path": "/workspaces/data-pipeline"
}
}'
Data Analysis & Reporting
Scenario: Analyze data and generate insights
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Analyze user_events.csv: 1) Calculate key metrics (DAU, retention, churn), 2) Identify usage patterns, 3) Segment users by behavior, 4) Generate visualizations, 5) Create executive summary report",
"workspace_path": "/workspaces/analytics"
}
}'
Migration Projects
Framework Migration
Scenario: Migrate from one framework to another
# Step 1: Analyze current code
JOB_ID=$(curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Analyze the Express.js application and create migration plan to Fastify. Document: 1) Routes to migrate, 2) Middleware equivalents, 3) Breaking changes, 4) Migration strategy",
"workspace_path": "/workspaces/legacy-api"
}
}' | jq -r '.job_id')
# Step 2: Migrate core functionality
curl -X POST https://api.knify.io/jobs/$JOB_ID/continue \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Migrate the authentication routes and middleware to Fastify"
}
}'
# Step 3: Migrate remaining routes
curl -X POST https://api.knify.io/jobs/$JOB_ID/continue \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Migrate all remaining routes to Fastify"
}
}'
# Step 4: Update tests
curl -X POST https://api.knify.io/jobs/$JOB_ID/continue \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Update all tests to work with Fastify"
}
}'
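The four-step migration above follows a common pattern: one analysis job, then a series of continue calls that each build on the last. A sketch of driving that sequence from a prompt list; `send_step` is a placeholder for the HTTP call shown in the curl examples, and here it only records what would be sent so the flow can be checked offline:

```python
# Drive the migration follow-up steps sequentially against one job.
# send_step is a placeholder for the HTTP POST shown in the curl examples;
# here it just records what would be sent, so the flow is testable offline.
sent = []

def send_step(job_id: str, prompt: str) -> None:
    sent.append((
        f"https://api.knify.io/jobs/{job_id}/continue",
        {"spec": {"job_type": "cursor_task", "prompt": prompt}},
    ))

# Steps 2-4 from the migration above; "job_456" stands in for the
# job_id captured in step 1.
steps = [
    "Migrate the authentication routes and middleware to Fastify",
    "Migrate all remaining routes to Fastify",
    "Update all tests to work with Fastify",
]
for step in steps:
    send_step("job_456", step)  # in practice, wait for each step to finish first
```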
Database Migration
Scenario: Migrate from MongoDB to PostgreSQL
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Create migration from MongoDB to PostgreSQL: 1) Design relational schema, 2) Write data migration scripts, 3) Update queries to use SQL, 4) Add database indexes, 5) Create rollback plan",
"workspace_path": "/workspaces/db-migration"
}
}'
Customer Support Automation
Ticket Analysis
Scenario: Analyze support tickets for patterns
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Analyze support_tickets.json: 1) Categorize tickets by type, 2) Identify most common issues, 3) Calculate resolution times, 4) Find correlation with app versions, 5) Generate insights report",
"workspace_path": "/workspaces/support-analysis"
}
}'
Multi-Tenant SaaS
Tenant Provisioning
Scenario: Automate new tenant setup
curl -X POST https://api.knify.io/jobs \
-H "Authorization: Bearer $KNIFY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"job_type": "cursor_task",
"prompt": "Provision new tenant: 1) Create database schema, 2) Set up S3 bucket with proper permissions, 3) Configure CDN, 4) Create admin user, 5) Send welcome email, 6) Log provisioning details",
"workspace_path": "/workspaces/tenant-provisioning",
"metadata": {
"tenant_id": "tenant_abc123",
"plan": "enterprise"
}
}
}'
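When provisioning runs per tenant, it helps to generate the job body from tenant parameters rather than hand-editing JSON. A sketch mirroring the curl example above; the prompt text and metadata keys are taken from that example, and the tenant values are illustrative:

```python
# Build a provisioning job body per tenant. The prompt and metadata keys
# mirror the curl example above; tenant values are illustrative.
def provisioning_body(tenant_id: str, plan: str) -> dict:
    return {
        "spec": {
            "job_type": "cursor_task",
            "prompt": (
                "Provision new tenant: 1) Create database schema, "
                "2) Set up S3 bucket with proper permissions, 3) Configure CDN, "
                "4) Create admin user, 5) Send welcome email, "
                "6) Log provisioning details"
            ),
            "workspace_path": "/workspaces/tenant-provisioning",
            "metadata": {"tenant_id": tenant_id, "plan": plan},
        }
    }

body = provisioning_body("tenant_abc123", "enterprise")
```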
Build Your Own
Template for Custom Use Case
import requests

def create_knify_job(
    api_key: str,
    job_type: str,
    prompt: str,
    workspace_path: str = None,
    model: str = "sonnet-4.5",
    metadata: dict = None
):
    """
    Generic function to create Knify jobs.
    Customize for your specific use case.
    """
    spec = {
        "job_type": job_type,
        "prompt": prompt,
        "model": model
    }
    # Optional fields are only included when provided
    if workspace_path:
        spec["workspace_path"] = workspace_path
    if metadata:
        spec["metadata"] = metadata
    response = requests.post(
        "https://api.knify.io/jobs",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        },
        json={"spec": spec}
    )
    response.raise_for_status()  # fail fast on HTTP errors
    return response.json()
# Example usage
result = create_knify_job(
api_key="your_api_key",
job_type="cursor_task",
prompt="Your custom prompt here",
workspace_path="/workspaces/your-workspace",
metadata={"user_id": "user_123"}
)
print(f"Job created: {result['job_id']}")
Best Practices for Production
1. Use Job Metadata
Tag jobs for tracking and analytics:
{
"spec": {
"job_type": "cursor_task",
"prompt": "...",
"metadata": {
"customer_id": "cust_123",
"feature": "code-review",
"priority": "high",
"billing_code": "proj-456"
}
}
}
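A small helper keeps tagging consistent across call sites. A minimal sketch: the key names follow the example above, and `with_metadata` is a hypothetical convenience function, not part of any Knify SDK. It returns a new spec rather than mutating the input:

```python
# Attach standard tracking tags to any job spec without mutating the original.
# with_metadata is a hypothetical helper; key names follow the example above.
def with_metadata(spec: dict, **tags) -> dict:
    tagged = dict(spec)
    tagged["metadata"] = {**spec.get("metadata", {}), **tags}
    return tagged

spec = {"job_type": "cursor_task", "prompt": "..."}
tagged = with_metadata(spec, customer_id="cust_123", priority="high")
```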
2. Implement Timeouts
Set reasonable timeouts for job operations:
import time
def wait_for_job_completion(job_id, timeout=300):
start_time = time.time()
while time.time() - start_time < timeout:
job = get_job_status(job_id)
if job['status'] in ['completed', 'failed']:
return job
time.sleep(5)
raise TimeoutError(f"Job {job_id} did not complete within {timeout}s")
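Fixed five-second polling is fine for short jobs; for longer-running ones, an exponential backoff cuts down on API calls. A sketch of the delay schedule, capped so long jobs are still checked regularly (the parameter defaults are illustrative, not Knify-prescribed values):

```python
# Polling delays that double each attempt, capped at `cap` seconds.
# Defaults are illustrative; tune base/cap to your job durations.
def backoff_intervals(base: float = 2.0, cap: float = 30.0, n: int = 6) -> list:
    """First n polling delays: base, base*2, base*4, ... capped at cap."""
    return [min(base * (2 ** i), cap) for i in range(n)]
```

Use these in place of the fixed `time.sleep(5)` inside `wait_for_job_completion`, cycling on the capped value once the schedule is exhausted.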
3. Monitor Job Performance
Track key metrics:
metrics = {
"jobs_created": 0,
"jobs_completed": 0,
"jobs_failed": 0,
"avg_execution_time": 0,
"total_cost": 0
}
# Update after each job
def track_job(job):
    metrics["jobs_created"] += 1
    if job['status'] == 'completed':
        metrics["jobs_completed"] += 1
        # Only completed jobs have both timestamps; computing this for
        # failed jobs would raise (and divide by zero before any completion)
        execution_time = (
            job['completed_at'] - job['started_at']
        ).total_seconds()
        # Incremental running average over completed jobs
        metrics["avg_execution_time"] = (
            metrics["avg_execution_time"] * (metrics["jobs_completed"] - 1) +
            execution_time
        ) / metrics["jobs_completed"]
    elif job['status'] == 'failed':
        metrics["jobs_failed"] += 1
4. Handle Failures Gracefully
Implement proper error handling:
job_id = None
try:
    result = create_knify_job(...)
    job_id = result['job_id']
    job = wait_for_job_completion(job_id)
    if job['status'] == 'completed':
        artifacts = get_job_artifacts(job_id)
        process_results(artifacts)
    else:
        log_error(f"Job failed: {job['error']}")
        notify_team(job)
except TimeoutError as e:
    log_error(f"Job timeout: {e}")
    if job_id:  # creation may have failed before a job_id existed
        cleanup_job(job_id)
except Exception as e:
    log_error(f"Unexpected error: {e}")
    raise
Get Inspired
Browse more examples and templates:
Advanced Usage: Learn advanced patterns and techniques.