API Reference
The official Jeko AI API documentation, covering conversational integration, streaming, custom models, and production deployment.
Endpoint List
POST /api/chat
Generates conversational responses using a specified model.
Request Body:
{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
Response:
{
  "reply": "Hello, how can I assist you today?"
}
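A minimal client sketch for the request/response shape above, using only the Python standard library. The base URL is an assumption; point it at your own Jeko AI deployment.

```python
import json
from urllib import request

# Hypothetical base URL; adjust to wherever your Jeko AI instance runs.
BASE_URL = "http://localhost:3000"

def build_chat_payload(model: str, content: str) -> dict:
    """Build the /api/chat request body documented above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }

def chat(model: str, content: str) -> str:
    """POST the payload to /api/chat and return the 'reply' field."""
    body = json.dumps(build_chat_payload(model, content)).encode("utf-8")
    req = request.Request(
        f"{BASE_URL}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["reply"]
```

Any HTTP client works the same way; the only contract is the JSON body shown above and a JSON response containing `reply`.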
WebSocket Endpoint: /ws
Streams responses token by token in real time. Prefer the WebSocket endpoint when latency matters, and implement ping/pong keepalives and reconnection logic on the client.
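The reconnection logic mentioned above usually pairs with exponential backoff so a flapping server is not hammered. A small sketch of the delay schedule (the surrounding connect loop and WebSocket library are left to the client):

```python
import random

def reconnect_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter for WebSocket reconnects.

    attempt 0 -> up to 0.5 s, attempt 1 -> up to 1 s, ... capped at 30 s.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

# Sketch of the surrounding loop (WebSocket client library omitted):
#   attempt = 0
#   while True:
#       try:
#           connect to ws://<host>/ws, send periodic ping frames,
#           read token events until the socket closes
#           attempt = 0          # reset after a healthy session
#       except ConnectionError:
#           time.sleep(reconnect_delay(attempt)); attempt += 1
```

Full jitter avoids many clients reconnecting in lockstep after a server restart.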
Advanced Topics
Custom Models (Modelfile)
You can create custom Ollama models and integrate them directly with Jeko AI.
FROM llama3
SYSTEM "You are an enterprise-grade assistant..."
Integration with External Systems
- ERP or CRM
- WhatsApp Gateway
- Company Intranet
- Automation Flow
- Backend Microservices
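Most of these integrations reduce to mapping an external payload onto the `/api/chat` body. A hedged sketch for a WhatsApp-gateway webhook; the inbound field name (`text`) is an assumption, so adapt it to your gateway's actual schema:

```python
def gateway_to_chat(inbound: dict, model: str = "llama3") -> dict:
    """Map a hypothetical gateway webhook payload onto the /api/chat
    request body. The 'text' field is an assumed name, not a real
    gateway contract."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": inbound["text"]},
        ],
    }
```

The same adapter pattern applies to ERP/CRM events or intranet forms: extract the user-visible text, wrap it as a `user` message, and post it to `/api/chat`.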
Production Deployment
- Docker Compose
- Nginx or Traefik as reverse proxy
- PM2 for Node applications
- TLS termination
- Logging and monitoring
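The pieces above fit together roughly as follows. This Docker Compose fragment is an illustrative sketch only: service names, the app image, and ports are assumptions, not the project's shipped configuration.

```yaml
# Illustrative sketch; service names, images, and ports are assumptions.
services:
  jeko:
    build: .                 # the Jeko AI Node application
    environment:
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama   # persist pulled models
  proxy:
    image: nginx:alpine
    ports:
      - "443:443"            # TLS termination at the reverse proxy
    depends_on:
      - jeko
volumes:
  ollama-data:
```

Traefik can replace the Nginx service if you prefer label-based routing; PM2 applies when running the Node process outside a container.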
For production use, ensure proper authentication, rate limiting, and observability. When opening issues, include reproducible logs, the model name, and version information for faster triage.
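The rate limiting recommended above is often implemented as a token bucket in front of `/api/chat`. A minimal sketch, not a built-in Jeko AI mechanism:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows bursts up to
    `capacity`, then refills at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In practice you would keep one bucket per API key or client IP and return HTTP 429 when `allow()` is false.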