Documentation v0.1

Jeko AI Assistance docs

FAQ

Frequently asked questions about the installation, configuration, and operation of Jeko AI. If your question isn't listed below, please open an issue in the repository or contact the development team.

General

What is Jeko AI?

Jeko AI is a web-based chat interface that leverages Ollama to run local or remote LLMs. Its goal is to provide a modern UI and API endpoints (REST/WebSocket) that can be integrated into external applications.

Is Ollama required?

Yes — Jeko AI is designed to communicate with Ollama as the model provider. You must install Ollama and ensure its daemon/service is running on an accessible host.

Which platforms are supported?

macOS, Linux, and Windows. Node.js is required to run from source; for production, you can build container images to suit your needs.

Installation & Setup

What are the quick installation steps?

In short:

# Install Ollama:
https://ollama.com/download

# Pull model:
ollama pull <model-name>

# Clone Jeko AI:
git clone https://github.com/rizkybor/Jeko-AI.git
cd Jeko-AI
npm install
npm run dev

What should I configure in .env?

Copy .env.example to .env and fill in the relevant variables such as the Ollama URL (if not default), internal API keys, and any required CORS or proxy settings.
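For illustration only (the variable names below are hypothetical; the authoritative list is in .env.example):

# Hypothetical names for illustration only; use the ones defined in .env.example.
OLLAMA_URL=http://localhost:11434
API_KEY=replace-with-a-strong-key
CORS_ORIGIN=http://localhost:3000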

How do I add a new model?

Install/pull the model via Ollama (e.g. ollama pull qwen2.5), then set the model name in the configuration or mark it as the default in the application settings.

Usage & API

Which endpoint generates responses?

The primary endpoint is POST /api/generate. The minimal payload contains a prompt field; model, max_tokens, and other parameters are optional.
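
As a sketch, a client call could look like the following (TypeScript, Node 18+). The host and port http://localhost:3000 and the JSON response handling are assumptions; adjust them to your deployment.

// Minimal sketch: call POST /api/generate with fetch.
// prompt is required; model and max_tokens are optional (see above).
const res = await fetch("http://localhost:3000/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "Summarize the project README in three sentences.",
    model: "qwen2.5",   // optional
    max_tokens: 256,    // optional
  }),
});
console.log(await res.json());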

How do I upload images for multimodal input?

Use POST /api/upload with FormData (file: <image-file>). The app stores a temporary URL and includes it when calling the model.
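
As a sketch (TypeScript, Node 18+): the form field name "file" follows the description above, while the host/port and the shape of the JSON response are assumptions.

// Minimal sketch: send an image as multipart form data to POST /api/upload.
import { readFile } from "node:fs/promises";

const bytes = await readFile("./photo.png");
const form = new FormData();
form.append("file", new Blob([bytes], { type: "image/png" }), "photo.png");

const res = await fetch("http://localhost:3000/api/upload", {
  method: "POST",
  body: form,
});
console.log(await res.json()); // expected to include the temporary URL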

Is there authentication for the API?

A simple mechanism (e.g. password gate) is available via /api/login. For production, integrate a proper authentication layer (OAuth, JWT, or reverse-proxy auth).
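
A sketch of the password gate, assuming a JSON body with a password field (check the actual /api/login handler for the expected shape):

// Hypothetical request shape: verify the field name against the /api/login handler.
const res = await fetch("http://localhost:3000/api/login", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ password: process.env.APP_PASSWORD }),
});
console.log(res.status); // a non-2xx status means the password was rejected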

Troubleshooting

Model not responding — is Ollama running?

Ensure the Ollama daemon is running and its endpoint is reachable from the application host (Ollama listens on port 11434 by default). Check firewall rules, confirm the API responds (for example with ollama list), and inspect the Ollama logs for errors.
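
A quick reachability check from the application host (TypeScript; Ollama's HTTP API answers a plain GET on its base URL):

// If Ollama is up, GET / on its API port returns a short status message.
try {
  const res = await fetch("http://localhost:11434/");
  console.log("Ollama reachable:", res.status, await res.text());
} catch (err) {
  console.error("Ollama not reachable:", err);
}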

Model consumes too much memory — any solutions?

Use a smaller or quantized model, run the model on a host with sufficient RAM/VRAM, or reduce the context size if your setup allows it. For production, consider dedicated model-serving infrastructure separate from the application host.

Slow responses — what should I check?

Check network latency to Ollama, CPU/GPU load on the model host, and generation parameters (e.g. max_tokens, temperature) that lead to long decoding times. Enable logging to identify bottlenecks.

Security & Privacy

Are conversations stored?

Conversations may be stored on the server for session and audit purposes, depending on your configuration. Enable encryption or disable persistence for sensitive data.

Can I use remote/cloud models?

Yes, provided Ollama or your model gateway allows connections to remote models. Ensure data in transit and credentials are protected (TLS, IP allowlists).

Development & Contribution

How can I contribute?

Fork the repository, create a feature/bugfix branch, and open a pull request. Include a change description, tests, and documentation updates. If the repo is private, contact the maintainers for contribution access.

How to run the local development environment?

Clone the repository, install dependencies, and run npm run dev. Make sure Ollama is active and environment variables are configured.

If the answers above do not resolve your issue, include logs, Ollama version, model name, and reproduction steps when opening an issue to speed up assistance.
