Prototype implementation roadmap — from database to working chatbot.
Step 1 Database Schema with RLS
Create the PostgreSQL tables: subscribers, billing, invoices, plans. Enable Row-Level Security (RLS) policies so that every query is scoped to the subscriber_id set on the session via SET app.current_subscriber_id.
CREATE TABLE billing (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    subscriber_id UUID NOT NULL REFERENCES subscribers(id),
    amount DECIMAL(10,2),
    due_date DATE,
    status VARCHAR(20)
);
ALTER TABLE billing ENABLE ROW LEVEL SECURITY;
CREATE POLICY billing_isolation ON billing
    USING (subscriber_id = current_setting('app.current_subscriber_id')::UUID);
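A sketch of the per-request session flow under this policy, using a placeholder UUID: SET LOCAL keeps the setting transaction-scoped, so pooled connections cannot leak another subscriber's context. Note that RLS does not apply to the table owner or superusers by default, so the application should connect as a non-owning role.

```sql
BEGIN;
SET LOCAL app.current_subscriber_id = '00000000-0000-0000-0000-000000000001';
SELECT amount, due_date, status FROM billing WHERE status = 'open';  -- RLS filters rows
COMMIT;
```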
Step 2 Spring Boot Project Setup
Initialize a Spring Boot 3.4 project with dependencies: spring-ai, langchain4j, spring-data-jpa, spring-security, spring-websocket, and spring-data-redis. Enable virtual threads for high concurrency.
@SpringBootApplication
public class TelecomAiBotApplication {
    public static void main(String[] args) {
        SpringApplication.run(TelecomAiBotApplication.class, args);
    }
}
// application.yml: spring.threads.virtual.enabled=true
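The one-line comment above expands to an application.yml along these lines (connection details and credentials are assumptions; spring.threads.virtual.enabled is a real Spring Boot 3.2+ property):

```yaml
spring:
  threads:
    virtual:
      enabled: true            # handle each request on a Java 21 virtual thread
  datasource:
    url: jdbc:postgresql://localhost:5432/telecom
    username: bot              # non-owner role so RLS policies apply
    password: ${DB_PASSWORD}
  data:
    redis:
      host: localhost
      port: 6379
```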
Step 3 MCP Tool Server Implementation
Build an MCP-compliant tool server using @Tool-annotated methods; each tool maps to one business operation. The server exposes tool schemas via MCP discovery and handles JSON-RPC invocations.
@Tool(name = "get_current_bill", description = "Return the subscriber's current bill")
public BillResponse getCurrentBill(
        @ToolParam(description = "Subscriber ID") UUID subscriberId) {
    rlsContextSetter.setSubscriber(subscriberId); // set RLS context before querying
    return billingRepo.findCurrentBill(subscriberId);
}
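On the wire, the MCP specification frames a tool invocation as a JSON-RPC 2.0 tools/call request; a call to the tool above might look like this (the id and subscriber UUID are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_current_bill",
    "arguments": { "subscriberId": "00000000-0000-0000-0000-000000000001" }
  }
}
```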
Step 4 AI Orchestrator with LangChain4j
Connect to vLLM-hosted Mistral model. Configure the agent with MCP tools, system prompt with telecom domain constraints, and conversation memory backed by Redis.
var agent = AiServices.builder(TelecomAssistant.class)
        .chatLanguageModel(openAiModel) // vLLM exposes an OpenAI-compatible API
        .tools(billingTools, accountTools, planTools)
        // RedisChatMemory is a custom ChatMemory implementation with a 30-minute TTL
        .chatMemoryProvider(id -> new RedisChatMemory(id, Duration.ofMinutes(30)))
        .build();
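RedisChatMemory is assumed custom code, not a LangChain4j class. A minimal in-memory stand-in, shown below, illustrates the contract it would implement: messages grouped per conversation id, with a sliding TTL after which the whole conversation expires.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the Redis-backed chat memory (illustration only).
public class TtlChatMemory {
    private record Entry(List<String> messages, Instant expiresAt) {}

    private final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final Duration ttl;

    public TtlChatMemory(Duration ttl) { this.ttl = ttl; }

    public void add(String conversationId, String message) {
        store.compute(conversationId, (id, old) -> {
            // start fresh if there is no entry or the old one has expired
            List<String> msgs = (old == null || Instant.now().isAfter(old.expiresAt()))
                    ? new ArrayList<>() : old.messages();
            msgs.add(message);
            return new Entry(msgs, Instant.now().plus(ttl)); // sliding expiry
        });
    }

    public List<String> messages(String conversationId) {
        Entry e = store.get(conversationId);
        if (e == null || Instant.now().isAfter(e.expiresAt())) return List.of();
        return List.copyOf(e.messages());
    }
}
```

In the Redis version, the same semantics fall out of a list key per conversation plus EXPIRE on each write.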
Step 5 Guardrails & Security Pipeline
Implement pre-LLM filters (input validation, topic classification, jailbreak detection) and post-LLM validators (PII redaction, response-tool consistency check, business alignment).
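The post-LLM redaction and pre-LLM jailbreak checks can be sketched as plain pattern matching; the regexes and phrase list below are illustrative placeholders, not production-grade detectors.

```java
import java.util.List;
import java.util.regex.Pattern;

// Sketch of two guardrail stages: redact obvious PII from model output,
// and flag jailbreak-style phrasing in user input before it reaches the LLM.
public class GuardrailPipeline {
    private static final Pattern PHONE = Pattern.compile("\\b\\d{3}[- ]?\\d{3}[- ]?\\d{4}\\b");
    private static final Pattern SSN = Pattern.compile("\\b\\d{3}-\\d{2}-\\d{4}\\b");
    private static final List<String> JAILBREAK_PHRASES =
            List.of("ignore previous instructions", "disregard your rules");

    // Post-LLM: replace PII patterns with placeholders before sending the reply.
    public static String redact(String reply) {
        String out = PHONE.matcher(reply).replaceAll("[PHONE]");
        return SSN.matcher(out).replaceAll("[SSN]");
    }

    // Pre-LLM: cheap keyword screen; real systems layer a classifier on top.
    public static boolean looksLikeJailbreak(String userInput) {
        String lower = userInput.toLowerCase();
        return JAILBREAK_PHRASES.stream().anyMatch(lower::contains);
    }
}
```

A flagged input would short-circuit to a canned refusal instead of invoking the model.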
Step 6 Voice Pipeline Integration
Add WebSocket endpoint for audio streaming. Integrate Faster-Whisper for real-time speech-to-text and Piper TTS for response synthesis. Share the same AI orchestrator for text and voice.
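Streaming audio over the WebSocket means agreeing on a frame size. Assuming the common Whisper input format of 16 kHz mono 16-bit PCM (an assumption, not mandated by the source), the arithmetic is simple:

```java
// Frame-size arithmetic for the audio WebSocket.
// At 16 kHz, 2 bytes/sample, mono, a 20 ms frame is 16000 * 2 * 1 * 20 / 1000 = 640 bytes.
public class AudioFraming {
    public static int bytesPerFrame(int sampleRateHz, int bytesPerSample, int channels, int frameMs) {
        return sampleRateHz * bytesPerSample * channels * frameMs / 1000;
    }
}
```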
Step 7 Authentication Flow (OTP)
Build OTP-based verification: customer provides phone number → bot sends OTP → customer confirms → JWT issued with subscriber_id → all subsequent tool calls use this verified identity.
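The issue/verify half of that flow can be sketched with only the JDK; the class name, digit count, and 5-minute window are assumptions, and JWT issuance would follow a successful verify() using a real library.

```java
import java.security.SecureRandom;
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical OTP store for step 7: one pending 6-digit code per phone number,
// valid for 5 minutes, consumed on first verification attempt.
public class OtpService {
    private record Challenge(String code, Instant expiresAt) {}

    private final SecureRandom random = new SecureRandom();
    private final Map<String, Challenge> pending = new ConcurrentHashMap<>();

    public String issue(String phoneNumber) {
        String code = String.format("%06d", random.nextInt(1_000_000));
        pending.put(phoneNumber, new Challenge(code, Instant.now().plus(Duration.ofMinutes(5))));
        return code; // in production this goes out via SMS, never back through the bot
    }

    public boolean verify(String phoneNumber, String code) {
        Challenge c = pending.remove(phoneNumber); // single use: removed on any attempt
        return c != null && c.code().equals(code) && Instant.now().isBefore(c.expiresAt());
    }
}
```

After verify() succeeds, the JWT carrying subscriber_id is minted and attached to the conversation so every tool call runs under the verified identity.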
Step 8 Docker Compose & Testing
Containerize all services. Docker Compose for local dev: PostgreSQL, Redis, vLLM/Ollama, Whisper, Piper, Spring Boot app. Integration tests verify data isolation across subscribers.
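A starting-point docker-compose.yml for that local stack might look like the sketch below; service names, images, and ports are assumptions (the Whisper and Piper services are omitted since their images vary by setup):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: telecom
      POSTGRES_PASSWORD: ${DB_PASSWORD}
  redis:
    image: redis:7
  ollama:
    image: ollama/ollama   # or a vLLM container serving the Mistral model
  app:
    build: .
    depends_on: [postgres, redis, ollama]
    ports: ["8080:8080"]
```

Integration tests can then run two sessions with different subscriber_ids against this stack and assert that neither can read the other's billing rows.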