AI Features

Interview Edge uses AI for two things:

  1. Question generation — when a job is set to Active
  2. Interview evaluation — when a candidate submits their interview

All AI calls happen directly in the browser — nothing is routed through a third-party server.

What AI does

Question generation

When you set a job to Active, the AI reads your job title, description, and key requirements, then generates 5 tailored interview questions. Each question includes:

  • Question text
  • Category (Technical, Behavioural, or Situational)
  • Difficulty (Easy, Medium, or Hard)
  • A model answer for your reference

Click ↻ Change Question Set at any time to generate a fresh set.
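The four fields above can be pictured as a single record. A hypothetical example (illustrative values, not actual Interview Edge output):

```python
# Hypothetical example of the four fields each generated question carries.
# The text and model answer are illustrative, not actual app output.
question = {
    "text": "Describe a time you had to debug a production incident under time pressure.",
    "category": "Behavioural",    # one of: Technical, Behavioural, Situational
    "difficulty": "Medium",       # one of: Easy, Medium, Hard
    "model_answer": "A strong answer covers triage, communication, and the fix.",
}

print(question["category"], question["difficulty"])
```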

Interview evaluation

After a candidate submits, AI analysis runs automatically when you open the interview detail page. It produces:

Output | Description
Trait scores | 0–10 scores for Communication, Problem Solving, Tech Depth, Clarity, and Experience
Overall score | Weighted mean displayed as a single number
Verdict | Hire, Maybe, or No Hire
Verdict reason | One-sentence justification
Summary | Paragraph overview of the candidate's performance
Per-question observations | Specific feedback on each answer

Results are saved automatically and are not re-run on subsequent visits.
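To make "weighted mean" concrete, here is a sketch of deriving the single overall score from the five trait scores. The app's real weights are internal; equal weights are assumed here purely for illustration.

```python
# Sketch: overall score as a weighted mean of the five trait scores.
# The actual Interview Edge weights are internal; equal weights are a
# placeholder assumption for this example.
trait_scores = {
    "Communication": 8,
    "Problem Solving": 7,
    "Tech Depth": 6,
    "Clarity": 9,
    "Experience": 7,
}
weights = {trait: 1 / len(trait_scores) for trait in trait_scores}  # hypothetical
overall = sum(score * weights[trait] for trait, score in trait_scores.items())
print(round(overall, 1))  # 7.4
```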


Administrator: Using local AI (Ollama)

By default the platform uses a cloud AI provider. If your organisation requires that no data leaves your network, an administrator can configure Interview Edge to use Ollama — an open-source AI runtime that runs entirely on your own machine or server.

Step 1 — Install Ollama

Download the installer from ollama.com/download, or install via Homebrew:

brew install ollama

Ollama installs as a background service and starts automatically on login.


Step 2 — Pull the Mistral model

ollama pull mistral

This downloads approximately 4 GB. Other supported models:

Model | Size | Notes
mistral | ~4 GB | Recommended — fast, great instruction following
llama3 | ~4.7 GB | Strong reasoning, slightly slower
gemma2 | ~5 GB | Good for technical topics
phi3 | ~2.3 GB | Lighter option for limited RAM
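Once a model is pulled, you can talk to it over Ollama's local HTTP API: it listens on http://localhost:11434 by default, and /api/generate is its text-generation endpoint. A minimal Python sketch that builds such a request (actually sending it requires `ollama serve` to be running with the model pulled):

```python
import json
import urllib.request

# Build (but do not yet send) a request against Ollama's local HTTP API.
# Ollama listens on http://localhost:11434 by default; /api/generate is
# its text-generation endpoint.
def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("mistral", "In one sentence, what is CORS?")
print(req.full_url)
# To actually send it: urllib.request.urlopen(req).read()
```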

Step 3 — Start Ollama with CORS enabled

The app runs in a browser, so Ollama must allow requests from the Interview Edge domain.

OLLAMA_ORIGINS=https://interview-edge-orpin.vercel.app ollama serve

Make it permanent — add to ~/.zshrc:

export OLLAMA_ORIGINS="https://interview-edge-orpin.vercel.app"

Then run: source ~/.zshrc
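What OLLAMA_ORIGINS controls is an origin allow-list: the browser attaches an Origin header to every cross-origin request, and Ollama only answers if that value matches an allowed entry. A simplified sketch of the check (Ollama's real matching also accepts multiple comma-separated entries and wildcards; this sketch does exact matching only):

```python
# Simplified illustration of the origin allow-list that OLLAMA_ORIGINS
# configures. Real Ollama also supports comma-separated lists and
# wildcards; this sketch only does exact matching.
allowed_origins = {"https://interview-edge-orpin.vercel.app"}

def origin_allowed(origin: str) -> bool:
    # The browser sends this value in the Origin request header.
    return origin in allowed_origins

print(origin_allowed("https://interview-edge-orpin.vercel.app"))  # True
print(origin_allowed("https://some-other-site.example"))          # False
```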


Step 4 — Enable Ollama in your settings

Go to Settings → AI Provider, select Ollama, and click Save Keys.

No API key is needed. See AI Provider for the full walkthrough.

note

Because AI calls happen in the browser, Ollama must be running on the same machine as the browser being used to access the app. It is not required to be publicly accessible.