AI Services
Gemini-powered chatbot (SSE), classification, and optional vector search
The UrbanReflex AI stack uses Gemini for generation and supports optional vector search (Pinecone) for RAG. Chatbot endpoints support both streaming (SSE) and non-streaming responses. Classification is exposed via the citizen-report API.
Chatbot (Gemini)
Endpoint: POST /ai-service/chatbot/chat
- Streaming SSE (default): emits `start`, `token`, and `end` events.
- Non-streaming: add `?stream=false`.
- Optional: `conversation_id` and `user_id` to maintain context.
Request (non-streaming example)
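A minimal sketch of a non-streaming call using Python and requests. The base URL, the `message` field name, and the exact payload layout are assumptions; `conversation_id` and `user_id` are the optional context fields noted above.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption: adjust to your deployment

# Non-streaming chat: add ?stream=false to the chatbot endpoint.
payload = {
    "message": "What are the busiest intersections right now?",  # field name assumed
    "conversation_id": "demo-conversation-1",  # optional, keeps context
    "user_id": "citizen-42",                   # optional
}
resp = requests.post(
    f"{BASE_URL}/ai-service/chatbot/chat",
    params={"stream": "false"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```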
Response (non-streaming)
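For illustration only, a plausible shape for the non-streaming JSON body; the real field names are not documented on this page, so check docs/API_REFERENCE.md.

```python
# Illustrative response shape; actual field names may differ.
example_response = {
    "response": "Generated answer from Gemini...",
    "conversation_id": "demo-conversation-1",
}
```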
Response (streaming SSE snippet)
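A sketch of consuming the default SSE stream with Python. The `event:`/`data:` framing is standard SSE and the `start`/`token`/`end` event names come from the list above, but the contents of each data payload are assumptions.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption: adjust to your deployment

with requests.post(
    f"{BASE_URL}/ai-service/chatbot/chat",
    json={"message": "Summarise today's citizen reports"},  # field name assumed
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    event = None
    for raw in resp.iter_lines(decode_unicode=True):
        if not raw:
            continue  # blank line marks the end of one SSE event
        if raw.startswith("event:"):
            event = raw.split(":", 1)[1].strip()  # start, token, or end
        elif raw.startswith("data:") and event == "token":
            print(raw.split(":", 1)[1].strip(), end="", flush=True)
```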
Chat History
- GET /ai-service/chatbot/history/{conversation_id}
- DELETE /ai-service/chatbot/history/{conversation_id} (204)
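A short sketch of the history endpoints, assuming the same local base URL as in the examples above.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption
conversation_id = "demo-conversation-1"

# Fetch the stored messages for a conversation.
history = requests.get(
    f"{BASE_URL}/ai-service/chatbot/history/{conversation_id}", timeout=10)
print(history.json())

# Delete the conversation; a successful delete returns 204 No Content.
deleted = requests.delete(
    f"{BASE_URL}/ai-service/chatbot/history/{conversation_id}", timeout=10)
assert deleted.status_code == 204
```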
Report Classification (NLP + POI priority)
Endpoint: POST /api/v1/citizen-reports/classify/{entity_id}
Response (example)
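A hedged sketch of calling the classification endpoint and reading the result; the entity id value and the `category`/`priority` response fields are illustrative assumptions, not the confirmed schema.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption
entity_id = "urn:ngsi-ld:CitizenReport:001"  # hypothetical NGSI-LD entity id

resp = requests.post(
    f"{BASE_URL}/api/v1/citizen-reports/classify/{entity_id}", timeout=30)
resp.raise_for_status()  # raises on the 404/500 cases listed below
result = resp.json()
# Illustrative fields only; see docs/API_REFERENCE.md for the real schema.
print(result.get("category"), result.get("priority"))
```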
Errors:
- 404 if report not found in Orion-LD
- 500 if NLP service unavailable
Vector Search (optional RAG)
Endpoint: POST /ai/search (if enabled)
Response (example)
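A sketch of querying the optional vector-search endpoint when it is enabled; the `query`/`top_k` request fields and the `matches` response field are assumptions.

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption

resp = requests.post(
    f"{BASE_URL}/ai/search",
    json={"query": "flooded streets near the city centre", "top_k": 5},  # assumed fields
    timeout=30,
)
resp.raise_for_status()
for match in resp.json().get("matches", []):  # assumed response shape
    print(match)
```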
Configuration
.env (backend):
If the Pinecone keys are absent, the chatbot still works without RAG. To enable vector search, set the API keys and restart the backend. See docs/API_REFERENCE.md for full API details.
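A sketch of how the backend could detect the optional Pinecone configuration at startup; the environment-variable names are assumptions, so take the real ones from the backend .env template.

```python
import os

# Assumed variable names; check the backend .env template for the real ones.
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
PINECONE_INDEX = os.getenv("PINECONE_INDEX")

RAG_ENABLED = bool(PINECONE_API_KEY and PINECONE_INDEX)
if not RAG_ENABLED:
    # Chatbot still answers with Gemini alone; vector search is skipped.
    print("Pinecone keys missing: running without RAG")
```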
