AI Workflow Orchestrator — Build Multi-LLM Workflows with Spring Boot
A lightweight, Spring Boot–friendly plugin framework to orchestrate complex AI workflows — integrating multiple LLMs, local tools (`@AiTool`), and MCP services with zero boilerplate.
Introduction
Artificial Intelligence is no longer limited to single-model interactions. Real-world apps often orchestrate multiple LLMs, integrate MCP services, and combine them with custom tools — all in one flow. The challenge is putting this together without heavy frameworks, glue code, or slow startup.
AI Workflow Orchestrator solves this with a lightweight, Spring Boot–friendly approach:
multi-LLM support, `@AiTool` local tools, MCP clients, dynamic planning, schema transformations, and both CLI/Web execution — with zero boilerplate.
- Features
- Project Structure
- Quick Start
- Local LLM Setup with Ollama
- API Usage
- Architecture
- Workflow Types & Step Types
- Plug & Play Workflow Creation
- Real-World Plug-and-Play Examples
- Web UI: Document Summarizer
- Advanced Configuration
- CLI Mode
- Logging
- Timeouts
- Environment Variables
- Troubleshooting
- License
- Conclusion & What’s Next
Features
- Multi-LLM Support: OpenAI, Anthropic, and Ollama via custom HTTP clients
- ⚙️ Dynamic Workflow Planning: AI-driven step orchestration
- MCP Protocol Support: HTTP, WebSocket & Async clients
- Tool Integration: Register & invoke `@AiTool` methods
- Schema Transformation: JSON schema-based conversions
- CLI & Web Modes: Dual execution interfaces
- Intelligent Transformations: LLM-powered data conversions
- Lightweight: No external AI framework dependencies
- ⚡ Performance: Direct HTTP API calls with Java HttpClient
Project Structure
SpringAIOrchestrator/
├── ai-orchestrator-bom/ # Bill of Materials & Core Framework
│ └── ai-orchestrator-core/ # Core orchestration engine
├── examples/ # Example workflow implementations
│ ├── ai-travel-workflow/ # Travel planning workflow example
│ ├── ai-resume_builder-workflow/ # Resume building workflow example
│ └── ai-document_summarizer-workflow/ # Summarizing + classification
├── LICENSE
├── README.md
└── USAGE.md
Quick Start
✅ Prerequisites
- ☕ Java 21+
- Maven 3.8+
- API keys (OpenAI / Anthropic)
- Local Ollama installation (optional but recommended)
⚙️ Code Installation
# 1) Clone
git clone <repository-url>
cd SpringAIOrchestrator/spring-ai-orchestrator
# 2) Build the framework
cd ai-orchestrator-bom
mvn clean install
# 3) Env Vars
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export OLLAMA_BASE_URL="http://localhost:11434"
export SERVER_PORT=8282
export CLI_MODE=false
export APP_NAME="ai-orchestrator"
export PLANNER_FALLBACK=true
export PLANNER_MAX_ITERATIONS=5
export PLANNER_TARGET_RANK=10
# 4) Run an example
cd ../examples/ai-travel-workflow
mvn spring-boot:run
# or
cd ../ai-resume_builder-workflow
mvn spring-boot:run
Local LLM Setup with Ollama
The framework can run fully offline using Ollama, letting you test workflows locally without API keys or cloud costs.
Install Ollama
# macOS
brew install ollama
# Linux
curl -fsSL https://ollama.com/install.sh | sh
Windows: Download the installer from ollama.com/download.
Using Ollama for Local (Free) Models
- Ensure the Ollama server is running at `http://localhost:11434`
- Verify by opening `http://localhost:11434` in a browser
- Ollama models are quantized by default (e.g., q4); choose specific tags if needed
Common Pull Commands & Config
| Goal | Ollama CLI | models.yml |
|---|---|---|
| Pull default (q4) | `ollama pull llama2` | `modelName: "llama2"` |
| Pull specific quantization (e.g., 3-bit) | `ollama pull llama2:7b-q3_K_L` | `modelName: "llama2:7b-q3_K_L"` |
| Pull latest | `ollama pull llama3` | `modelName: "llama3"` |
| Coder model | `ollama pull qwen2.5-coder:14b-instruct-q3_K_L` | `modelName: "qwen2.5-coder:14b-instruct-q3_K_L"` |
| Lightweight | `ollama pull mistral` | `modelName: "mistral"` |
Local Models Config Example
models:
- alias: "local"
provider: "ollama"
modelName: "llama2"
baseUrl: "http://localhost:11434"
enabled: true
Using Ollama Cloud Models
# Sign in
ollama signin
# Pull a cloud-offloaded model
ollama pull qwen3-coder:480b-cloud
models:
- alias: "default"
provider: "ollama"
modelName: "qwen3-coder:480b-cloud"
baseUrl: "http://localhost:11434"
enabled: true
- alias: "transformer"
provider: "ollama"
modelName: "qwen3-coder:480b-cloud"
baseUrl: "http://localhost:11434"
enabled: true
API Usage
Run Travel Workflow
curl -X POST "http://localhost:8282/api/workflows/run/travel-planner" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d 'prompt=Plan a 5-day trip to Tokyo in March with a budget of $3000'
curl -X POST "http://localhost:8282/api/workflows/run/travel-planner" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "prompt=Business trip to London for 3 days, need flights and hotel recommendations"
Run Resume Workflow
curl -X POST "http://localhost:8282/api/workflows/run/resume-builder" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "prompt=Create resume for Senior Java Developer with 5 years Spring Boot experience"
curl -X POST "http://localhost:8282/api/workflows/run/resume-builder" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "prompt=Build resume for Data Scientist role with Python and ML expertise"
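The same endpoints can be called from Java. Below is a minimal sketch using the JDK's built-in `HttpClient` (the class and method names here are illustrative, not part of the framework); note that the prompt must be URL-encoded for the `application/x-www-form-urlencoded` content type:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class WorkflowClient {

    // Build an application/x-www-form-urlencoded body for the "prompt" parameter
    static String formBody(String prompt) {
        return "prompt=" + URLEncoder.encode(prompt, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String body = formBody("Plan a 5-day trip to Tokyo in March with a budget of $3000");
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8282/api/workflows/run/travel-planner"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
        // would execute the call against a running orchestrator instance.
        System.out.println(request.uri() + " " + body);
    }
}
```

URL-encoding matters here: characters such as `$` and spaces in the prompt would otherwise corrupt the form body.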
List Workflows
curl -X GET "http://localhost:8282/api/workflows/list"
{
"travel-planner": "travel-planner - AI-powered travel planning with weather, flights, and cost analysis",
"resume-builder": "AI Resume Builder - Create a professional resume from user inputs and job descriptions"
}
Architecture
- Planner LLM – Generates and ranks intelligent execution plans
- Transformer LLM – Handles format conversions between steps
- Custom HTTP Clients – Direct API integration without external AI frameworks
- Zero Heavy Dependencies – No LangChain4j or similar
┌─────────────────────────────────────────────────────────────────────────┐
│ AI Orchestrator Framework │
└─────────────────────────────────────────────────────────────────────────┘
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ User Input │ │ REST API │ │ CLI Mode │
└──────┬───────┘ └──────┬───────┘ └──────┬───────┘
│ │ │
└─────────────────┼─────────────────┘
│
┌─────────▼─────────┐
│ Workflow Controller│
└─────────┬─────────┘
│
┌──────────▼──────────┐
│ Dynamic Engine (AI) │
└──────────┬──────────┘
│
┌─────────────▼─────────────┐
│   Planner + Transformer   │
└─────────────┬─────────────┘
│
┌─────────────▼─────────────┐
│ Step Execution Engine │
└────────────────────────────┘
Workflow Types & Step Types
Workflow Types
- Sequential – Predefined step order
- Dynamic – AI-planned execution order
Step Types
- LLM – Language model interactions
- Tool – Custom `@AiTool` methods
- MCP – External service calls (HTTP, WebSocket, Async)
⚡ Plug & Play Workflow Creation
- Add Dependency
<dependency>
<groupId>codetumblr.net.ai.orchestrator</groupId>
<artifactId>ai-orchestrator-core</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
- Create Application
@SpringBootApplication
@ComponentScan({"codetumblr.net.ai.orchestrator.core", "your.package"})
public class MyWorkflowApplication {
public static void main(String[] args) {
SpringApplication.run(MyWorkflowApplication.class, args);
}
}
- Define Models (`models.yml`)
models:
- alias: "default"
provider: "openai"
modelName: "gpt-4"
apiKey: "${OPENAI_API_KEY}"
- Create Workflow (`workflows/my-workflow.yaml`)
id: "document-analyzer"
executionMode: "dynamic"
steps:
- id: "extract_content"
type: "tool"
parameters:
tool: "document_extractor"
- id: "analyze_sentiment"
type: "llm"
parameters:
model: "default"
prompt: "Analyze sentiment: {input}"
- Add Tools
@Component
public class DocumentTools {
@AiTool(name = "document_extractor")
public String extractText(String filePath) {
return "Extracted content...";
}
}
- Run
mvn spring-boot:run
Real-World Plug-and-Play Workflow Examples
- Travel Planner – Multi-day itineraries with weather, flights, and cost analysis
- Resume Builder – Tailored resumes from job descriptions/skills
- Document Summarizer – Extract, summarize, classify, and store metadata (PDF, DOCX, etc.) using a dynamic workflow
Sample Workflow Implementations
Below are some practical examples showing how the AI Workflow Orchestrator can be extended with custom tools, MCP services, and dynamic orchestration — all without additional boilerplate.
Example 1: Document Analyzer Workflow
This workflow demonstrates how to build an AI-powered document analyzer that extracts text, summarizes content, classifies it into categories, and stores metadata — all in a single orchestration pipeline.
id: "document-analyzer"
name: "AI Document Analyzer & Classifier"
description: "Extract, summarize, classify, and persist document insights."
version: "1.0"
executionMode: "dynamic"
outputMode: "all"
steps:
- id: "extract_text"
type: "mcp"
description: "Extract text from documents with OCR fallback."
parameters:
client: "doc-processor"
service: "extract_text"
- id: "summarize_content"
type: "llm"
description: "Generate a human-readable summary."
parameters:
model: "default"
promptTemplate: "prompts/document/summarize.txt"
- id: "suggest_categories"
type: "tool"
description: "Suggest relevant categories based on content."
parameters:
tool: "suggest_categories"
- id: "classify_document"
type: "llm"
description: "Classify into the most relevant category."
parameters:
model: "default"
promptTemplate: "prompts/document/classify.txt"
- id: "store_metadata"
type: "tool"
description: "Persist classification and summary results."
parameters:
tool: "save_document_metadata"
---
Example 2: Local Tool - Category Suggestion
Local tools can be created with `@AiTool` to enrich workflows. Here’s a sample tool that suggests relevant categories for a given document text.
@Component
public class CategoryTool {
@AiTool(
name = "suggest_categories",
description = "Suggest relevant document categories based on content."
)
public List<String> suggestCategories(String text) {
return List.of("Finance", "Healthcare", "Legal", "Technology");
}
}
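The hard-coded list above keeps the example short; in practice the tool body can contain any Java logic. A minimal keyword-matching variant, with an illustrative keyword map that is not part of the framework:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class KeywordCategorizer {

    // Illustrative keyword-to-category mapping; tune for your domain
    private static final Map<String, String> KEYWORDS = Map.of(
            "invoice", "Finance",
            "patient", "Healthcare",
            "contract", "Legal",
            "api", "Technology"
    );

    public static List<String> suggestCategories(String text) {
        String lower = text.toLowerCase();
        List<String> categories = new ArrayList<>();
        for (var entry : KEYWORDS.entrySet()) {
            // Add each category at most once, when its keyword appears in the text
            if (lower.contains(entry.getKey()) && !categories.contains(entry.getValue())) {
                categories.add(entry.getValue());
            }
        }
        return categories;
    }

    public static void main(String[] args) {
        System.out.println(suggestCategories("Invoice attached for the API contract"));
    }
}
```

The same method body would drop straight into the `@AiTool`-annotated method shown above.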
---
Example 3: Internal MCP Service - Document Text Extraction
MCP services allow external-like service calls to be part of the same workflow. The example below shows how to implement a document extraction service with OCR fallback.
@MCPService(
name = "doc-processor",
description = "Extract text from documents and scanned PDFs."
)
public class DocumentProcessor {
@MCPMethod(
name = "extract_text",
description = "Extracts text with auto-detection and OCR fallback."
)
public Map<String, Object> extractText(String docName, String type, String base64Content) {
// Example implementation: decode base64Content, then run extraction/OCR
Map<String, Object> result = new HashMap<>();
result.put("document", docName);
result.put("text", "Extracted content goes here...");
result.put("status", "Success");
return result;
}
}
---
Why These Matter
These examples demonstrate how easily you can:
- Combine LLMs, local tools, and MCP services into one workflow.
- Extend capabilities with zero boilerplate using `@AiTool` and `@MCPService`.
- Enable both internal and external integrations within the same orchestration engine.
Upcoming Examples (Planned)
- Intelligent Chat Agent
- Financial Report Analyzer
- Code Review Assistant
UI / Web Integration — Document Summarizer
The framework supports Spring Boot web controllers for file upload and dynamic prompt generation.
How it works: Upload → Base64 encode → Build planner prompt → Review → Submit to Orchestrator → Planner routes to the initial MCP step (`extract_text`).
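The "Base64 encode" step in this flow is plain JDK code. A minimal sketch of what the upload handler does before building the planner prompt (the class and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class UploadEncoder {

    // Encode raw file bytes so they can travel inside a text prompt / MCP payload
    public static String toBase64(byte[] fileBytes) {
        return Base64.getEncoder().encodeToString(fileBytes);
    }

    public static void main(String[] args) {
        byte[] bytes = "sample document text".getBytes(StandardCharsets.UTF_8);
        String encoded = toBase64(bytes);
        // The MCP extract_text step later reverses this with Base64.getDecoder()
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}
```

Base64 keeps binary formats such as PDF and DOCX intact inside text-based payloads, at the cost of roughly 33% size overhead.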
Run the Web Example
cd examples/ai-document_summarizer-workflow
mvn spring-boot:run
Open: http://localhost:8383/
Key Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Upload form |
| `/generate-prompt` | POST | Base64-encodes the file and builds the planner prompt |
| `/submit-prompt` | POST | Submits to `/api/workflows/run/document-summarizer` |
⚙️ Advanced Configuration Options
application.yml
server:
port: ${SERVER_PORT:8282}
spring:
application:
name: ${APP_NAME:ai-orchestrator-document-summarizer}
...
ai:
orchestrator:
workflows:
path: ${WORKFLOWS_PATH:classpath*:workflows/*-workflow.yaml}
prompt:
path: ${PLANNER_PROMPT_PATH:}
transformer:
prompt:
path: ${TRANSFORMER_PROMPT_PATH:}
mcp:
clients:
path: classpath:${MCP_CLIENTS_PATH:/mcp-clients.yml}
mcp-clients.yml
# ============================================================
# MCP Clients Configuration for Document Summarizer Workflow
# ============================================================
# This configuration defines the MCP services used for text extraction.
# All other steps (summarization, classification, metadata storage)
# are handled by LLMs or tools within the orchestrator.
#
# Environment variables can override default values for flexibility.
# ============================================================
clients:
# ----------------------------------------------------------
# Document Processor Service (Primary MCP for Text Extraction)
# ----------------------------------------------------------
- name: "document-processor"
endpoint: "${MCP_DOCUMENT_PROCESSOR_ENDPOINT:http://localhost:8383}"
service: "extract_text"
protocol: "${MCP_DOCUMENT_PROCESSOR_PROTOCOL:http}"
timeout: "${MCP_DOCUMENT_PROCESSOR_TIMEOUT:15000}" # Default: 15s timeout
auth:
type: "${MCP_DOCUMENT_PROCESSOR_AUTH_TYPE:none}" # Supported: none, basic, oauth2, api-key, jwt
# --- Basic Auth (Optional) ---
username: "${MCP_DOCUMENT_PROCESSOR_USERNAME:}"
password: "${MCP_DOCUMENT_PROCESSOR_PASSWORD:}"
# --- API Key Auth (Optional) ---
apiKey: "${MCP_DOCUMENT_PROCESSOR_API_KEY:}"
apiKeyHeader: "${MCP_DOCUMENT_PROCESSOR_API_KEY_HEADER:X-API-Key}"
# --- OAuth2 (Optional) ---
tokenUri: "${MCP_DOCUMENT_PROCESSOR_TOKEN_URI:}"
clientId: "${MCP_DOCUMENT_PROCESSOR_CLIENT_ID:}"
clientSecret: "${MCP_DOCUMENT_PROCESSOR_CLIENT_SECRET:}"
scope: "${MCP_DOCUMENT_PROCESSOR_SCOPE:}"
# --- JWT Auth (Optional) ---
jwtIssuer: "${MCP_DOCUMENT_PROCESSOR_JWT_ISSUER:}"
jwtSubject: "${MCP_DOCUMENT_PROCESSOR_JWT_SUBJECT:}"
jwtSecret: "${MCP_DOCUMENT_PROCESSOR_JWT_SECRET:}"
enabled: "${MCP_DOCUMENT_PROCESSOR_ENABLED:true}"
models.yml
config:
defaultMaxTokens: 1000
promptTruncateLength: 50
descriptionTruncateLength: 30
autoRegisterForPlugins: true
models:
- alias: "default"
provider: "ollama"
modelName: "qwen3-coder:480b-cloud"
baseUrl: "http://localhost:11434"
temperature: 0.7
maxTokens: 2000
enabled: true
- alias: "openai"
provider: "openai"
modelName: "gpt-4o-mini"
apiKey: "${OPENAI_API_KEY}"
temperature: 0.7
maxTokens: 4000
enabled: true
- alias: "planner"
provider: "ollama"
modelName: "qwen2.5-coder:14b-instruct-q3_K_L"
baseUrl: "http://localhost:11434"
temperature: 0.3
maxTokens: 1500
enabled: true
- alias: "transformer"
provider: "ollama"
modelName: "qwen3-coder:480b-cloud"
baseUrl: "http://localhost:11434"
temperature: 0.1
maxTokens: 1000
enabled: true
CLI Mode
export CLI_MODE=true
mvn spring-boot:run
# or
CLI_MODE=true mvn spring-boot:run
# or
java -DCLI_MODE=true -jar ai-travel-workflow-1.0-SNAPSHOT.jar
Commands
# List workflows
lw
# Interactive
interactive
i
run-interactive
# Run specific
run --workflow-id travel-planner --prompt "Plan weekend trip to Paris"
run-workflow --workflow-id resume-builder --prompt "Create resume for DevOps engineer"
# Quick run
quick --number 1
q --number 2
# Help / Logs
help
h
?
log-info
logs
debug-help
# Exit
exit
quit
bye
Logging
logging:
config: classpath:log4j2.xml
⏱️ Timeouts
# Planner (configurable)
ai:
orchestrator:
planner:
max-iterations: 5
target-rank: 10
fallback-enabled: true
# MCP client timeouts (per client)
clients:
- name: "weather-services-api"
timeout: "5000"
- name: "flight-services-api"
timeout: "5000"
LLM Planning Timeout: 120s · Planning Session Timeout: 60s · LLM Client Timeouts: 5m request / 30s connect · Schema Caching: ConcurrentHashMap · HTTP Client: Java 21 HttpClient
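The client-side figures above (5 min request, 30 s connect) map directly onto the JDK `HttpClient` the framework builds on. A sketch of how such timeouts are expressed; the framework's actual builder code may differ:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class TimeoutConfig {

    public static HttpClient buildClient() {
        // Connect timeout bounds establishing the TCP connection
        return HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(30))
                .build();
    }

    public static HttpRequest buildRequest(String url) {
        // Request timeout bounds the full exchange; LLM responses can be slow
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .timeout(Duration.ofMinutes(5))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        System.out.println(buildClient().connectTimeout().orElseThrow());
        System.out.println(buildRequest("http://localhost:11434").timeout().orElseThrow());
    }
}
```

A generous request timeout with a short connect timeout fails fast on unreachable hosts while still tolerating long LLM generations.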
Environment Variables Reference
Core Application
export SERVER_PORT=8282
export CLI_MODE=false
export APP_NAME="ai-orchestrator"
LLM API Keys
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
export OLLAMA_BASE_URL="http://localhost:11434"
Planner Configuration
export PLANNER_FALLBACK=true
export PLANNER_MAX_ITERATIONS=5
export PLANNER_TARGET_RANK=10
export STEP_SEPARATOR=","
export PLANNER_PROMPT_PATH=""
export TRANSFORMER_PROMPT_PATH=""
File Paths
export WORKFLOWS_PATH="classpath*:workflows/*-workflow.yaml"
export MODELS_PATH="/models.yml"
export MCP_CLIENTS_PATH="/mcp-clients.yml"
Logging Levels
export PLANNER_LOG_LEVEL=INFO
export ENGINE_LOG_LEVEL=INFO
export WORKFLOW_LOG_LEVEL=INFO
export MCP_SERVICE_LOG_LEVEL=INFO
export SCHEMA_LOG_LEVEL=INFO
Troubleshooting
| Issue | Solution |
|---|---|
| Schema not found | Verify schema files exist in `resources/schemas/` |
| LLM client not available | Check API keys and network connectivity |
| Workflow not found | Ensure the workflow YAML is in `resources/workflows/` |
| Tool execution failed | Verify tool parameters and implementation |
| MCP service timeout | Check MCP client configuration and endpoints |
| Planning failed | Enable planner fallback: `PLANNER_FALLBACK=true` |
Health Checks
curl -X GET "http://localhost:8282/api/workflows/list"
echo $OPENAI_API_KEY
echo $ANTHROPIC_API_KEY
echo $OLLAMA_BASE_URL
Debug Steps
# Enable debug
export PLANNER_LOG_LEVEL=DEBUG
export ENGINE_LOG_LEVEL=DEBUG
export WORKFLOW_LOG_LEVEL=DEBUG
# Tail logs
tail -f logs/ai-orchestrator-**.log
# Validate YAML
cat src/main/resources/models.yml
cat src/main/resources/mcp-clients.yml
License
Licensed under the MIT License. See `LICENSE` in the repository.
Author
Ravinderjeet Singh Nagpal
Building next-generation AI orchestration systems.
Performance Features
- No External AI Frameworks — custom HTTP clients instead of LangChain4j
- Smaller JAR Size — reduced by ~15–20MB
- Faster Startup — no auto-configuration overhead
- Direct API Control — optimized request/response handling
Related Projects
- Spring Boot – Java Backend Framework
- Model Context Protocol – MCP Specification
Conclusion — Build Smarter AI Pipelines
AI Workflow Orchestrator unifies LLMs, `@AiTool` local tools, and MCP services in a single, lightweight, Spring-friendly framework.
With dynamic planning, schema transformations, and direct HTTP clients, it’s a practical foundation for modern AI apps — from document pipelines to enterprise AI agents.
⭐ Next Steps
- Try built-in workflows (Travel Planner, Resume Builder, Document Summarizer)
- Create your own workflow YAML in minutes
- Run locally with Ollama or integrate with cloud LLMs
Have ideas or feedback? Share them in the comments — I’d love to hear how you’re building AI workflows.