Measure reasoning in action — not just recall.
Meandrix is built around Encounters: decision-based assessments powered by the LogicPath Engine. Learners apply knowledge in context, and every choice contributes to the outcome and the evidence.
The problem isn’t AI. It’s evidence.
Recall-based assessment no longer provides a reliable signal of learning. The default response has been heavier surveillance, but surveillance-first integrity strategies undermine trust, accessibility, and pedagogy.
Institutions need a scalable way to capture applied reasoning with integrity intact — and artefacts that stand up to scrutiny.
The shift: from answers to decisions
Encounters are designed so learners must make decisions in context. Those decisions carry weight, shape the outcome, and generate feedback that makes the evidence explicit.
Present context — prompt judgement, not memorisation.
Weight decisions — every choice carries weight and moves the learner through outcomes via decision-weighted logic.
Generate evidence — feedback and outcome reasoning make learning visible and defensible.
Encounters, powered by the LogicPath Engine
LogicPath creates decision-weighted movement (not simple branching). Learners progress through outcomes where every decision contributes to the pathway — and the evidence.
This enables educators to assess knowledge application and reasoning in action, at scale.
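To make the distinction concrete, here is a minimal sketch of what decision-weighted movement could look like, as opposed to branch-per-choice logic. All names, weights, and thresholds are hypothetical (this is not the LogicPath API); the point is only that the outcome is a function of the whole weighted decision history, not of any single branch.

```ts
// A minimal sketch of decision-weighted movement (all names hypothetical;
// this is not the actual LogicPath API). Instead of hard branching, each
// decision contributes a weight, and the cumulative score selects the outcome.

type Decision = { id: string; weight: number };
type Outcome = { id: string; minScore: number; reasoning: string };

// Outcomes ordered from lowest to highest threshold.
const outcomes: Outcome[] = [
  { id: "needs-support", minScore: 0, reasoning: "Key risks were missed early." },
  { id: "developing",    minScore: 3, reasoning: "Sound choices, with some gaps." },
  { id: "proficient",    minScore: 6, reasoning: "Consistent, well-justified decisions." },
];

// The outcome is a function of the whole decision history,
// not of any single branch taken along the way.
function resolveOutcome(decisions: Decision[]): Outcome {
  const score = decisions.reduce((total, d) => total + d.weight, 0);
  // Choose the highest threshold the cumulative score clears.
  return [...outcomes]
    .sort((a, b) => b.minScore - a.minScore)
    .find((o) => score >= o.minScore) ?? outcomes[0];
}

// Example: three in-context decisions, one of which costs the learner.
const outcome = resolveOutcome([
  { id: "escalate-early", weight: 3 },
  { id: "check-history",  weight: 2 },
  { id: "delay-handover", weight: -1 },
]);
console.log(`${outcome.id}: ${outcome.reasoning}`); // developing: Sound choices, with some gaps.
```

Because every decision contributes to a cumulative pathway, two learners can reach different outcomes from the same scenario for reasons the engine can state explicitly.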
What gets captured
Decision patterns — how learners choose under constraints.
Outcome reasoning — why a pathway led where it did.
Decision-linked feedback — personalised guidance aligned to choices and outcomes.
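As an illustration, the evidence above might be serialised into a record along these lines. The shape is hypothetical (it is not Meandrix's actual schema); it simply mirrors the three capture points in the list.

```ts
// A hypothetical shape for the evidence an Encounter could emit
// (illustrative only; not Meandrix's actual schema).

interface DecisionRecord {
  decisionId: string;
  chosenOption: string;
  weight: number;     // this choice's contribution to the pathway
  feedback: string;   // guidance linked to this specific choice
  timestamp: string;  // ISO 8601, e.g. for analysing pacing under constraints
}

interface EncounterEvidence {
  encounterId: string;
  learnerId: string;
  decisions: DecisionRecord[]; // the decision pattern: how the learner chose
  outcomeId: string;           // where the pathway led
  outcomeReasoning: string;    // why it led there, stated in assessable terms
}
```

A record like this is also what makes the follow-up conversation described below concrete: supervisor and learner discuss the same artefact.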
Meandrix Sentinel
An optional medium-security layer designed to disincentivise misconduct without surveillance-first approaches.
Sentinel is built to make cheating more annoying than doing the assessment — while keeping the experience fast, accessible, and credible.
Integrity by design
Sentinel supports integrity signals and deterrence patterns that protect trust and pedagogy — especially where traditional proctoring is impractical or undesirable.
Operational details are best discussed in an institutional demo to align with policy, device constraints, and learner accessibility.
Designed to pair with dialogic assessment
Encounters can be paired with a short, viva-style follow-up conversation to validate learning and measure growth. The Encounter becomes shared evidence for reflective supervision: what happened, what it means, and what comes next.
Ready to explore an institutional pilot?
See how Encounters capture applied reasoning at scale — with credibility built in.