# AI Features
Interview Edge uses AI for two things:
- Question generation — when a job is set to Active
- Interview evaluation — when a candidate submits their interview
All AI calls are made directly from the browser to your configured AI provider; nothing is routed through an intermediary server.
## What AI does
### Question generation
When you set a job to Active, the AI reads your job title, description, and key requirements, then generates 5 tailored interview questions. Each question includes:
- Question text
- Category (Technical, Behavioural, or Situational)
- Difficulty (Easy, Medium, or Hard)
- A model answer for your reference
Click ↻ Change Question Set at any time to generate a fresh set.
### Interview evaluation
After a candidate submits, AI analysis runs automatically when you open the interview detail page. It produces:
| Output | Description |
|---|---|
| Trait scores | 0–10 scores for Communication, Problem Solving, Tech Depth, Clarity, and Experience |
| Overall score | Weighted mean displayed as a single number |
| Verdict | Hire, Maybe, or No Hire |
| Verdict reason | One-sentence justification |
| Summary | Paragraph overview of the candidate's performance |
| Per-question observations | Specific feedback on each answer |
Results are saved automatically and are not re-run on subsequent visits.
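The weighting behind the overall score is internal to the app, but as a hedged illustration of a weighted mean, equal weights over the five traits would combine like this:

```sh
# Illustrative only: five trait scores (0-10) combined with equal
# weights of 0.2 each. Interview Edge's actual weights are internal.
echo "7 8 6 9 7" | awk '{
  for (i = 1; i <= NF; i++) sum += $i * 0.2
  printf "Overall score: %.1f\n", sum
}'
# prints: Overall score: 7.4
```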
## Administrator: Using local AI (Ollama)
By default the platform uses a cloud AI provider. If your organisation requires that no data leaves your network, an administrator can configure Interview Edge to use Ollama — an open-source AI runtime that runs entirely on your own machine or server.
### Step 1 — Install Ollama

**macOS**

Download the installer from ollama.com/download, or install via Homebrew:

```sh
brew install ollama
```

Ollama installs as a background service and starts automatically on login.

**Linux**

```sh
curl -fsSL https://ollama.com/install.sh | sh
```

To start the service:

```sh
sudo systemctl start ollama
```

**Windows**

Download and run the installer from ollama.com/download. Ollama runs in the system tray and starts automatically.
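On any platform, you can confirm the install from a terminal. This sketch prints the installed version, or a note if the binary is not on your `PATH`:

```sh
# Confirm the ollama CLI is installed and report its version.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found on PATH"
fi
```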
### Step 2 — Pull the Mistral model

```sh
ollama pull mistral
```
This downloads approximately 4 GB. Other supported models:
| Model | Size | Notes |
|---|---|---|
| `mistral` | ~4 GB | Recommended — fast, great instruction following |
| `llama3` | ~4.7 GB | Strong reasoning, slightly slower |
| `gemma2` | ~5 GB | Good for technical topics |
| `phi3` | ~2.3 GB | Lighter option for limited RAM |
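To confirm a pull succeeded, run `ollama list`, or query the local API directly. This is a sketch assuming Ollama's default address, `localhost:11434`:

```sh
# List the models installed in the local Ollama instance as JSON.
curl -sf --max-time 3 http://localhost:11434/api/tags \
  || echo "Ollama is not reachable on localhost:11434 - is it running?"
```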
### Step 3 — Start Ollama with CORS enabled
The app runs in a browser, so Ollama must allow requests from the Interview Edge domain.
**macOS**

```sh
OLLAMA_ORIGINS=https://interview-edge-orpin.vercel.app ollama serve
```

To make it permanent, add this to `~/.zshrc`:

```sh
export OLLAMA_ORIGINS="https://interview-edge-orpin.vercel.app"
```

Then run `source ~/.zshrc`.

**Linux**

For a one-off session:

```sh
OLLAMA_ORIGINS="https://interview-edge-orpin.vercel.app" ollama serve
```

To make it permanent via systemd:

```sh
sudo systemctl edit ollama
```

Add and save:

```ini
[Service]
Environment="OLLAMA_ORIGINS=https://interview-edge-orpin.vercel.app"
```

Then:

```sh
sudo systemctl daemon-reload && sudo systemctl restart ollama
```

**Windows**

In PowerShell:

```powershell
$env:OLLAMA_ORIGINS="https://interview-edge-orpin.vercel.app"
ollama serve
```

To make it permanent, add `OLLAMA_ORIGINS` as a Windows user environment variable:

- Search "Edit the system environment variables" in the Start menu
- Click **Environment Variables…**, then **New** under *User variables*
- Name: `OLLAMA_ORIGINS`, Value: `https://interview-edge-orpin.vercel.app`
- Click **OK** and restart any open terminals
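Whichever platform you are on, you can sanity-check the result from a terminal. This sketch assumes Ollama's default address (`localhost:11434`) and inspects the response headers of its `/api/tags` endpoint; it prints one status line either way:

```sh
# Check that Ollama is up and allows requests from the app's origin.
ORIGIN="https://interview-edge-orpin.vercel.app"
headers=$(curl -s --max-time 3 -D - -o /dev/null \
  -H "Origin: $ORIGIN" http://localhost:11434/api/tags)
if [ -z "$headers" ]; then
  echo "Ollama is not reachable on localhost:11434"
elif echo "$headers" | grep -qi "access-control-allow-origin"; then
  echo "CORS is configured: requests from $ORIGIN are allowed"
else
  echo "Ollama responded but OLLAMA_ORIGINS is not set for $ORIGIN"
fi
```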
### Step 4 — Enable Ollama in your settings
Go to Settings → AI Provider, select Ollama, and click Save Keys.
No API key is needed. See AI Provider for the full walkthrough.
Because AI calls happen in the browser, Ollama must be running on the same machine as the browser being used to access the app. It is not required to be publicly accessible.
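For illustration, the browser's request to a local Ollama instance has roughly this shape. The `/api/generate` endpoint, `model` field, and `stream` flag are Ollama's standard API; the prompt here is an invented stand-in, not the app's actual prompt:

```sh
# Roughly the shape of a generation request the browser sends to
# local Ollama. The prompt below is illustrative only.
curl -s --max-time 5 http://localhost:11434/api/generate \
  -H "Origin: https://interview-edge-orpin.vercel.app" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "prompt": "Generate 5 interview questions for a Senior Backend Engineer.",
    "stream": false
  }' || echo "Ollama is not reachable on localhost:11434"
```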