Getting Started
Jeko AI is an AI chat interface built on top of Ollama, leveraging local or remote LLMs to deliver real-time conversational responses. It provides a modern chat UI and a flexible backend that can be integrated into external applications through REST or WebSocket APIs.
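As a rough sketch of how an external application might talk to the REST API (the `/api/chat` path, the payload fields, and the `reply` response field below are assumptions for illustration, not a documented contract):

```typescript
// Hypothetical client helper for a Jeko AI REST endpoint.
// Endpoint path and field names are assumptions, not the real API.
interface ChatRequest {
  model: string;
  message: string;
}

// Serialize the payload that would be POSTed to the server.
function buildChatRequest(model: string, message: string): string {
  const req: ChatRequest = { model, message };
  return JSON.stringify(req);
}

async function sendChat(baseUrl: string, model: string, message: string): Promise<string> {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildChatRequest(model, message),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.reply; // assumed response field
}
```

The WebSocket option would follow the same request shape but stream tokens incrementally instead of returning a single response.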
Key Features
- Direct integration with Ollama
- Responsive and modern web chat interface
- Multi-model support
- Customizable system prompts
- REST and WebSocket API options
- Simple deployment and configuration
- Production-ready architecture
Prerequisites
Before installing Jeko AI, ensure the following are available:
- macOS, Linux, or Windows
- Ollama installed and running
- Node.js (if running from source)
- A modern web browser
Quick Start
```shell
# Install Ollama: https://ollama.com/download

# Pull the required model:
ollama pull llama3

# Install and run Jeko AI (see Installation below), then open the chat interface:
# http://localhost:3000
```
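Before starting the app, it can help to confirm that Ollama is reachable and that the required model has been pulled. A minimal sketch using Ollama's `/api/tags` endpoint (default port 11434):

```typescript
// List the models installed on a running Ollama server.
// Assumes Ollama's default port, 11434.
async function listOllamaModels(host = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${host}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  const data = await res.json();
  // /api/tags responds with { models: [{ name: "llama3:latest", ... }, ...] }
  return data.models.map((m: { name: string }) => m.name);
}

// Pure helper: check whether a required model (e.g. "llama3") is installed,
// tolerating the ":tag" suffix Ollama appends to model names.
function hasModel(names: string[], required: string): boolean {
  return names.some((n) => n === required || n.startsWith(required + ":"));
}
```

For example, `hasModel(await listOllamaModels(), "llama3")` should return `true` after running `ollama pull llama3`.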
Installation
```shell
# Clone the repository (private):
git clone https://github.com/rizkybor/Jeko-AI.git
cd jeko-ai
npm install
npm run dev
```
Environment
Copy `.env.example` to `.env`, then set the required API keys.
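A `.env` file might look like the following; the variable names here are illustrative, since the actual keys are defined in the repository's `.env.example`:

```env
# Illustrative example only — check .env.example for the real variable names
OLLAMA_HOST=http://localhost:11434
DEFAULT_MODEL=llama3
PORT=3000
```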
Configuration
Jeko AI supports configuration through environment variables, JSON configuration files, or UI-based settings.
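A JSON configuration file could take a shape like the one below; the field names are assumptions for illustration, and the actual schema may differ:

```json
{
  "defaultModel": "llama3",
  "systemPrompt": "You are a helpful assistant.",
  "ollamaHost": "http://localhost:11434"
}
```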
In-App Configuration
- Default model
- System prompt
- Conversation modes
All instructions and examples run in the Next.js server-side environment and require Ollama to be configured on a host reachable by the application.