How it works
A real-time AI examiner, calibrated to Year 8.
What a candidate sees and hears, how the language register is tuned, and what comes out at the end of a session. Aimed at MFL teachers and curious parents: no sales gloss, just the facts of how the product is built.
The session
What a candidate experiences.
The candidate sees the same kind of card the real examiner gives them: a one-line scenario in English ("You are in a pharmacy in France because you have a headache"), a list of six tasks they need to perform, and a microphone button. They click the button and start speaking.
The AI examiner speaks first — in role, in French (or Spanish), using the kind of register a real examiner uses with a Year 8 pupil. The candidate responds. The AI listens, replies, and works through the six tasks in order. One task on every card is unpredictable: the “Listen and choose” task at Level 1, or the “Answer the question” task at Level 2, where the candidate has to react to something they couldn’t prepare in advance.
After Part 1 the candidate moves to Part 2 (text-based task) and Part 3 (open conversation), exactly as in the real exam. The whole session lasts five minutes at L1, eight minutes at L2 — matching the published timings.
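The three-part structure and published timings above can be expressed as a small piece of session-plan data. This is an illustrative sketch only: the field names, part labels, and Python shape are assumptions, not the product's actual schema.

```python
from dataclasses import dataclass

# Illustrative sketch (names are assumptions): the three-part session
# structure and published timings described above, expressed as data.

@dataclass
class SessionPlan:
    level: str                 # "L1" or "L2"
    minutes: int               # total session length in minutes
    parts: tuple = ("role_play", "text_based", "open_conversation")

def plan_for(level: str) -> SessionPlan:
    # Published timings: five minutes at L1, eight minutes at L2.
    return SessionPlan(level=level, minutes={"L1": 5, "L2": 8}[level])
```

A caller would simply ask `plan_for("L2")` and walk the `parts` tuple in order, which mirrors how the session moves from role play to the text-based task to open conversation.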
The language calibration
Year 8 register, not GCSE register.
A common failure mode for AI tutors is pitching language above the candidate's level: perfect tense in casual asides, subjunctive in question stems, abstract vocabulary nobody has taught yet. We instrument against this explicitly: the AI examiner is given an allowlist of tenses, connectives, and vocabulary fields appropriate to Year 8 prep-school French. It will not say "c'était très bien" ("it was very good") where "très bien" ("very good") will do; it will not ask "qu'est-ce que tu as appris ?" ("what did you learn?") when "qu'est-ce que tu apprends ?" ("what are you learning?") would be age-appropriate.
At Level 1 the examiner stays mainly in the present tense, with near-future via “aller + infinitive” — and probes the candidate’s ability to use past and future appropriately, since the L1 specimen mark scheme explicitly rewards “successful attempts to refer to events in the future using the near future tense.” At Level 2 the register opens up to perfect tense, imperfect for description, and conditional politeness phrases.
Scoring & review
What the candidate (and teacher) sees afterwards.
At the end of the session the candidate sees an indicative score per part, drawn from the published ISEB mark schemes. We are explicit that these are practice scores, not exam grades: they are meant to show whether the candidate is in the right ballpark, not to predict their actual exam result.
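A per-part indicative score naturally rolls up into a session summary. The sketch below is a minimal illustration under assumed names; the part labels and mark ceilings are placeholders, since the real marks come from the published ISEB mark schemes.

```python
# Minimal sketch, assuming hypothetical part names and mark ceilings.
# Real marks follow the published ISEB mark schemes, not these numbers.

PART_MAX = {"role_play": 12, "text_based": 12, "open_conversation": 12}

def session_summary(part_scores: dict) -> dict:
    """Roll per-part indicative scores into one practice-session summary."""
    total = sum(part_scores.values())
    out_of = sum(PART_MAX[p] for p in part_scores)
    return {
        "per_part": part_scores,
        "total": total,
        "out_of": out_of,
        "percent": round(100 * total / out_of),
    }
```

The same per-part breakdown is what the teacher view would aggregate across a cohort, which is why the summary keeps `per_part` alongside the total rather than collapsing it.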
For schools, the MFL teacher sees the full transcript of every session a candidate has done — useful for spotting that someone’s struggling with role-play tasks or never attempts past tense in open conversation. The class view aggregates the same data across the cohort.
What we publish
Authored against the live spec.
Every role play, stimulus card, and open-conversation question bank is written by us, against the published 2024 specimen and recent past papers. The format is exact (six numbered tasks for role play, info-bullet stimulus card with five prescribed questions for the text-based task, examiner-led open conversation on a prescribed topic). The wording is original: we don't reproduce ISEB content verbatim, both for copyright reasons and because fresh content is what makes practice valuable.
Want a closer look?
Schools can arrange a demo. Parents and teachers with questions — hello@tete-a-tete.ai.