How SIBYL Works

The Problem With AI Answers

If you’ve ever used an AI tool before, you’ve probably noticed that it answers questions with confidence — even when it’s wrong. AI models can state incorrect facts, invent sources that don’t exist, and miss critical context, all while sounding completely authoritative. This isn’t a bug in one particular product. It’s a fundamental limitation of how every AI model works.

Here’s why: every AI model is a single system, trained on a particular dataset, built with a particular architecture, by a particular team. That means every AI model has blind spots — things it doesn’t know, things it misremembers, and patterns in its training data that subtly shape how it interprets questions. When you ask it something, you get one perspective from one system, with no independent check on whether that perspective is accurate.

Think of it like calling a single doctor for a medical opinion on something serious. They might be right. But you’d feel a lot better with a second opinion — especially if your health, your money, or your legal rights were on the line.

SIBYL was built on a simple premise: a single AI answering your question is not good enough for decisions that matter.

Our Solution: Multi-Model Deliberation

Rather than giving your question to one AI and returning whatever it says, SIBYL orchestrates a structured process involving multiple independent AI models. These models don’t just answer your question — they research it independently, then systematically challenge each other’s work before any answer reaches you.

Think of it less like asking a chatbot and more like convening a boardroom of expert reviewers. Each one brings different strengths. Each one is accountable to the others. And none of them can slip an unchecked response past the group.

Here is exactly how that process works, step by step.

Step 1: Independent Generation

When you submit a question to SIBYL, that question is simultaneously sent to multiple AI models on the SIBYL Boardroom™. Critically, each model works in isolation — we intentionally don’t let them see what the others are writing.

When AI models are aware of each other’s responses, they tend to converge. They reinforce each other’s assumptions, repeat each other’s errors, and produce what researchers call “groupthink.” By keeping the models separated during this first stage, SIBYL ensures that each one produces a genuinely independent answer drawn from its own reasoning and knowledge. The result is a diverse set of perspectives on your question rather than variations of the same response.
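Conceptually, this fan-out is a parallel map over isolated model calls. The sketch below is purely illustrative — SIBYL's internals are not public, and every name in it (`ask_model`, the model identifiers) is hypothetical. The key property it demonstrates is that each call receives only the question, never another model's draft.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real model API call; returns an answer string.
def ask_model(model_name: str, question: str) -> str:
    return f"{model_name}'s independent answer to: {question}"

def independent_generation(models: list[str], question: str) -> dict[str, str]:
    """Send the same question to every model in parallel.

    Each call carries only the question -- never another model's draft --
    so the responses cannot converge on each other's assumptions.
    """
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(ask_model, m, question) for m in models}
        return {m: f.result() for m, f in futures.items()}

answers = independent_generation(
    ["model-a", "model-b", "model-c"],
    "What is the capital of Australia?",
)
```

Isolation here is structural, not a policy: no model's prompt ever contains another model's output, so cross-contamination is impossible by construction.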

Step 2: Domain Classification

Before the models begin cross-examining each other, SIBYL analyzes the nature of your question and automatically classifies it into a domain — medical, legal, financial, technical, or general, among others.

This matters because different AI models have different strengths. A model that performs exceptionally well on legal analysis may not be the best choice for interpreting a nutritional facts panel or debugging a piece of software. SIBYL routes your question to the models best suited for it and weights their contributions accordingly.

You don’t need to tell SIBYL what kind of question you’re asking. It figures that out on its own and adjusts the panel to match.
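To make the routing idea concrete, here is a deliberately toy classifier. The real classification step almost certainly uses a model rather than keyword matching; the keyword lists and domain names below are illustrative assumptions, not SIBYL's actual taxonomy.

```python
# Toy keyword-based domain classifier -- illustrative only.
DOMAIN_KEYWORDS = {
    "medical": ["symptom", "diagnosis", "dosage", "medication"],
    "legal": ["contract", "lease", "liability", "statute"],
    "financial": ["investment", "mortgage", "portfolio", "tax"],
    "technical": ["debug", "compile", "api", "stack trace"],
}

def classify_domain(question: str) -> str:
    """Return the first domain whose keywords appear in the question."""
    q = question.lower()
    for domain, words in DOMAIN_KEYWORDS.items():
        if any(w in q for w in words):
            return domain
    return "general"

classify_domain("Can you help me understand this lease agreement?")  # "legal"
```

Once a domain is assigned, it can drive both model selection (which models join the panel) and weighting (how much each model's verdict counts).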

Step 3: Cross-Examination

This is the stage that makes SIBYL fundamentally different from anything else available today.

Each model’s response is not treated as a finished answer. Instead, it is broken down into its individual claims — each specific statement of fact, each piece of reasoning, each conclusion. Those individual claims are then submitted to the other models for scrutiny.

Each model acts as an examiner, evaluating the claims made by its peers across four dimensions.

When a model makes a factual error — for example, claiming that Sydney is the capital of Australia when the correct answer is Canberra — the examining models are specifically designed to catch it. Errors that might slip past a reader unfamiliar with the subject are surfaced automatically, before they ever reach you.

This cross-examination process mirrors how rigorous human institutions handle high-stakes information. Courtrooms require opposing counsel. Peer-reviewed journals require independent reviewers. Investment decisions require independent due diligence. SIBYL applies the same adversarial logic to AI-generated information.
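The claim-level review described above can be sketched as follows. Everything here is a simplified assumption — the `Claim` structure, the boolean verdicts, and the `toy_examiner` are hypothetical stand-ins for whatever richer representation SIBYL actually uses. The one property taken directly from the text is that claims are evaluated individually, and by models other than their author.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    author: str                                    # model that made the claim
    verdicts: dict = field(default_factory=dict)   # examiner -> bool

def cross_examine(claims, examiners, examine_fn):
    """Have every examiner evaluate every claim made by a *different* model."""
    for claim in claims:
        for examiner in examiners:
            if examiner == claim.author:
                continue  # models never grade their own work
            claim.verdicts[examiner] = examine_fn(examiner, claim.text)
    return claims

# Hypothetical examiner that catches the factual error from the example above.
def toy_examiner(examiner: str, claim_text: str) -> bool:
    return "Sydney is the capital" not in claim_text

claims = [
    Claim("Canberra is the capital of Australia.", author="model-a"),
    Claim("Sydney is the capital of Australia.", author="model-b"),
]
cross_examine(claims, ["model-a", "model-b", "model-c"], toy_examiner)
```

Decomposing responses into claims before review is what lets a mostly correct answer keep its good parts: one bad claim can be challenged without discarding the rest of the response.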

Step 4: Synthesis and Trust Scoring

Once the cross-examination is complete, SIBYL’s synthesis model assembles the final answer.

This is not simply a summary of what the models said. The synthesis model reviews the full body of responses and challenges, identifies the claims that survived scrutiny across multiple models, and constructs a final answer built on the strongest supported reasoning from the entire panel.

The final answer you receive includes several features you won’t find in a standard AI response:

Where the models agreed strongly, SIBYL reflects that consensus clearly and assigns a high Trust Score. Where the models disagreed or where a claim was challenged but not definitively resolved, SIBYL flags the uncertainty explicitly rather than papering over it with false confidence. You always know what is well-established versus what is contested.

The answer also indicates which models contributed which information, giving you a transparent record of how the conclusion was reached — not just what it is. These are called Veripoints™.

The result is an answer that has been independently generated, rigorously challenged, transparently scored, and honestly qualified. That is a fundamentally different standard than anything a single AI model can offer.
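One simple way to model "claims that survived scrutiny" is a majority vote among examiners. The threshold below is an illustrative assumption — SIBYL's actual synthesis logic is not public — but it captures the idea that the final answer is built only from claims the panel endorsed.

```python
def surviving_claims(claims):
    """Keep only claims endorsed by a majority of their examiners."""
    kept = []
    for claim in claims:
        verdicts = list(claim["verdicts"].values())
        if verdicts and sum(verdicts) > len(verdicts) / 2:
            kept.append(claim)
    return kept

claims = [
    {"text": "Canberra is the capital of Australia.",
     "verdicts": {"model-b": True, "model-c": True}},
    {"text": "Sydney is the capital of Australia.",
     "verdicts": {"model-a": False, "model-c": False}},
]
final = surviving_claims(claims)  # only the Canberra claim survives
```

A real synthesizer would do more than filter — it would reconcile phrasing, attribute sources, and flag contested claims rather than silently dropping them — but the filter is the load-bearing step.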

Choose Your Depth

Not every question deserves the same level of scrutiny. Asking SIBYL to help you understand a lease agreement is a different situation than asking it to fact-check a news article for casual curiosity. Using the same amount of firepower for both would be inefficient — and unnecessarily slow.

SIBYL gives you control over how deeply your question is examined through four presets. Each one represents a different configuration of generators, examiners, and synthesizers working on your behalf.

🔬 Deep

3 generators, 3 examiners, 1 synthesizer

Deep mode deploys the full panel. Three independent AI models each produce their own response to your question without seeing each other’s work. Three examiner models then cross-check every claim in every response. Finally, a synthesis model reviews the entire body of work — the original responses, the challenges, and the areas of agreement and disagreement — and produces a final, consolidated answer with full confidence scoring.

This is the appropriate setting when the stakes are high. Medical questions, legal analysis, financial decisions, academic research, any situation where being wrong has real consequences. Deep mode takes more time than the other presets, but it gives you the most thorough verification SIBYL can provide.

⚡ Fast

2 generators, 1 examiner, 1 synthesizer

Fast mode is designed for the questions you encounter every day — technical questions, general research, background information on a topic you’re exploring. Two models generate independent responses, one examiner reviews the claims across both, and a synthesizer delivers a consolidated, confidence-scored answer.

You get meaningful verification without the processing time that Deep mode requires. For the majority of questions most people ask most of the time, Fast mode hits the right balance between rigor and speed.

💡 Light

1 generator, 1 examiner, 1 synthesizer

Light mode is SIBYL’s fastest configuration. One model generates a response, one examiner independently reviews that response before it reaches you, and a synthesizer assembles the final answer.

It is worth being clear about what this means: even in Light mode, your answer has been independently examined. There is no setting in SIBYL where a single model’s output is passed to you unchecked. Light mode exists for simple, low-stakes questions where you want a quick answer you can still rely on — not for situations where you need deep scrutiny, but for the everyday moments when you just want a sanity check built in.

🛠️ Custom Boardroom

You choose the configuration

Custom mode is for users who want full control over the process. You select which AI models sit in your boardroom: which act as generators, which act as examiners, and which handles synthesis. You can mix and match based on what you know about each model's strengths, or design a panel suited to a type of question you work with regularly.

If you’ve used several AI tools and developed opinions about which ones perform best for certain tasks, Custom mode lets you put that knowledge to work.
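The three fixed presets reduce to a small configuration table — the panel sizes below are taken directly from the descriptions above, while the data structure itself is an illustrative sketch (Custom mode would replace the counts with an explicit, user-chosen model list).

```python
# Panel sizes exactly as described above; Custom mode is user-defined.
PRESETS = {
    "deep":  {"generators": 3, "examiners": 3, "synthesizers": 1},
    "fast":  {"generators": 2, "examiners": 1, "synthesizers": 1},
    "light": {"generators": 1, "examiners": 1, "synthesizers": 1},
}

def panel_size(preset: str) -> int:
    """Total number of model roles a preset deploys."""
    cfg = PRESETS[preset]
    return cfg["generators"] + cfg["examiners"] + cfg["synthesizers"]

panel_size("deep")   # 7
panel_size("light")  # 3
```

Note that every preset, including the smallest, includes at least one examiner — which is the structural guarantee behind the claim that no unexamined answer ever reaches you.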

Why This Matters

Every SIBYL answer is examined before it reaches you. That is not a feature of our premium tier — it is the baseline for every preset, including the fastest one. We made this a non-negotiable part of how SIBYL works because we believe AI is only genuinely useful when you can trust what it tells you, and trust has to be earned through process, not promised through marketing.

When SIBYL shows you a Trust Score alongside an answer, that number reflects something real. It is derived from how consistently the models agreed, how many individual claims held up under cross-examination, and how strong the supporting evidence was across the panel. A high Trust Score means the models converged strongly and the claims were well-supported. A lower score means there was meaningful disagreement or uncertainty — and SIBYL will tell you that directly rather than hiding it inside a confident-sounding answer.
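The three signals named above — agreement consistency, claim survival under cross-examination, and evidence strength — could be combined along these lines. The equal weighting and 0–100 scale are illustrative assumptions, not SIBYL's actual formula.

```python
def trust_score(agreement: float, survival_rate: float, evidence: float) -> int:
    """Combine three panel signals into a 0-100 score.

    agreement:     fraction of models that converged on the same answer
    survival_rate: fraction of individual claims that passed examination
    evidence:      strength of supporting evidence across the panel (0-1)

    The equal weighting here is an illustrative choice, not SIBYL's formula.
    """
    for v in (agreement, survival_rate, evidence):
        if not 0.0 <= v <= 1.0:
            raise ValueError("all inputs must be fractions in [0, 1]")
    return round(100 * (agreement + survival_rate + evidence) / 3)

trust_score(0.9, 1.0, 0.8)  # strong consensus -> 90
trust_score(0.4, 0.5, 0.3)  # contested answer -> 40
```

Whatever the real weighting, the important property is the one the text states: the score is computed from observable panel behavior, not self-reported model confidence.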

What SIBYL Is Not

SIBYL is a research and analysis tool. It is built to help you find accurate information, understand complex topics, pressure-test ideas, and identify what deserves further investigation. It is not a substitute for professional judgment.

If you are facing a serious medical, legal, or financial decision, you should consult a qualified professional. What SIBYL can do is help you arrive at that conversation better prepared — with a clearer understanding of the subject matter, sharper questions to ask, and a more informed sense of what you actually need to know. The final judgment should always be yours.


SIBYL is not a substitute for professional medical, legal, or financial advice. Click below to view our full disclaimers.