Ekjot Singh

Tech Analyst

I'm a Math Student. Here's What Everyone Gets Wrong About AI.

Last week, I watched a senior executive confidently present an AI solution that would "increase efficiency by 87%." The room nodded. I did the math. The claim didn't add up.

This happens more often than you'd think. As a BSc Mathematics student and Technology Research Analyst at City Innovation Hub, I spend my days in two worlds: one where we prove everything, and one where we're racing to adopt AI before understanding it. The gap between these worlds is getting dangerous.

The Problem: We're Confusing Capability with Understanding

Here's what most people get wrong: they think using AI means understanding AI. It doesn't.

You can use a calculator without understanding calculus. You can drive a car without understanding combustion engines. And yes, you can use ChatGPT without understanding neural networks. But here's the catch - when the calculator gives you a wrong answer, you might notice. When AI does, you probably won't.

In my mathematics courses, we don't just learn formulas. We learn to question them. Every theorem requires proof. Every result demands verification. But in the business world, AI outputs are treated like gospel. A model says it with 95% confidence, so it must be true, right?

Wrong.

What "95% Confidence" Actually Means

Let me explain something that makes my statistics professor cringe: that 95% confidence interval everyone loves to quote? It doesn't mean what you think it means.

It doesn't mean the AI is 95% sure. It doesn't mean there's a 95% chance it's right. It means that if you ran this same test 100 times under identical conditions, about 95 of them would produce an interval containing the true value. See the difference?

Most people don't. And that's exactly the problem.
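That coverage claim is easy to check with a quick simulation. This is a sketch with invented parameters (a true mean of 10, samples of 50 draws, the 1.96 normal-approximation cutoff), not any particular model:

```python
import random
import statistics

def ci_coverage(trials: int = 2000, n: int = 50, true_mean: float = 10.0) -> float:
    """Fraction of 95% confidence intervals that actually contain the true mean."""
    covered = 0
    for _ in range(trials):
        sample = [random.gauss(true_mean, 2.0) for _ in range(n)]
        mean = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        # 1.96 is the normal-approximation critical value for 95% coverage
        if mean - 1.96 * se <= true_mean <= mean + 1.96 * se:
            covered += 1
    return covered / trials

random.seed(0)
print(f"{ci_coverage():.1%} of intervals contained the true mean")
```

Run it and you get something close to 95% - a statement about the long-run behaviour of the procedure, not about any single interval being "95% right."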

When a business leader sees "95% accuracy" - a different number again from a confidence interval, though it gets the same treatment - they hear "almost perfect." When a mathematician sees it, they ask: "95% accurate at what? On which dataset? Under what conditions? What about the other 5%?"

These aren't pedantic questions. They're essential ones.

The Real Issue: Pattern Recognition Without Comprehension

AI is exceptionally good at finding patterns. Humans are exceptionally good at understanding why those patterns matter.

Here's a simple example from my studies: correlation does not equal causation. It's Statistics 101. Yet I've seen million-dollar AI implementations based entirely on correlated data with zero causal understanding.

Ice cream sales correlate with drowning deaths. Should we ban ice cream? Of course not. Both increase in summer. Any math student knows this. But an AI model, trained purely on data without context, might suggest reducing ice cream availability to prevent drownings.

Absurd? Yes. But I've seen corporate AI make similarly flawed recommendations, just dressed in more sophisticated language.

What My Math Background Reveals About AI

Three years of mathematical training have taught me to spot what others miss:

  • First, AI operates in defined spaces. Just as mathematical functions have domains and ranges, AI models have boundaries. They work within specific parameters. Take them outside those parameters, and they fail. But they rarely tell you they're outside their domain. They just give you an answer anyway.
  • Second, small errors compound exponentially. In mathematics, we study error propagation. A tiny rounding error in step one becomes a massive distortion by step ten. AI systems chain multiple models together. Each introduces error. By the end, you might be nowhere near the truth, but the output looks precise.
  • Third, optimisation isn't the same as correctness. AI optimises for what it's told to optimise for. If your objective function is wrong, your AI will efficiently give you the wrong answer. Garbage in, garbage out - but now at scale and speed.
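The second point is worth making concrete. Here's a toy model (the per-step fidelity figures are invented; real pipelines are messier) showing how per-step errors multiply across a chain:

```python
def end_to_end_fidelity(per_step_fidelity: float, steps: int) -> float:
    """Fraction of the signal surviving a chain of imperfect stages."""
    fidelity = 1.0
    for _ in range(steps):
        fidelity *= per_step_fidelity  # errors multiply, they don't just add
    return fidelity

print(f"99% per step, 10 steps: {end_to_end_fidelity(0.99, 10):.1%}")  # ~90.4%
print(f"95% per step, 10 steps: {end_to_end_fidelity(0.95, 10):.1%}")  # ~59.9%
```

Each stage looks nearly perfect in isolation; the chain as a whole can quietly lose a third of its reliability.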

The Question Everyone Should Ask (But Nobody Does)

Before implementing any AI solution, ask this: "Can I verify this output independently?"

If the answer is no, you're not using AI as a tool. You're outsourcing judgment to a system you don't understand.

This isn't anti-AI. I'm not a Luddite. I use AI tools daily. I'm excited about AI's potential. But as someone who studies the mathematics that makes AI possible, I'm alarmed by how little most users understand about what's happening under the hood.

What This Means for Innovation

At City Innovation Hub, we're supposed to be at the cutting edge of technology adoption. But here's what I've learned: the real innovation isn't using the latest AI tool. It's using it intelligently.

That means:

  • Understanding what the model was trained on
  • Knowing its limitations
  • Verifying outputs against reality
  • Being honest about uncertainty
  • Building human verification into AI workflows

The companies that will win the AI race aren't the ones that adopt fastest. They're the ones that adopt smartest.

The Uncomfortable Truth

We're in a strange moment. AI is powerful enough to be useful but not powerful enough to be reliable. It's smart enough to sound convincing but not smart enough to know when it's wrong.

And we're adopting it faster than we're understanding it.

As a math student, I'm trained to be sceptical. To demand proof. To question assumptions. These aren't just academic habits - they're essential survival skills in an AI-driven world.

So What Should You Do?

Here's my advice, from someone who studies the foundations while watching the applications:

Build AI literacy, not just AI adoption. Before you deploy an AI solution, make sure your team understands:

  • What problem it's actually solving
  • What data it's using
  • What assumptions it's making
  • How to verify its outputs
  • When to override it

Treat AI as a hypothesis generator, not a decision maker. Let AI suggest. Let humans decide. Especially for high-stakes decisions.

Invest in people who understand the math. Not just data scientists. Everyone who uses AI should understand basic statistics, probability, and critical thinking.

The Bottom Line

I'm a math student, not an AI expert. But that's exactly why I see what others miss. My job is to question, to prove, to verify.

Yours should be too.

The next time someone shows you an AI solution with impressive numbers, don't just nod. Ask the uncomfortable questions. Demand to see the math. Question the assumptions.

Because in a world where AI can confidently give wrong answers, the most valuable skill isn't using AI.

It's knowing when not to trust it.

What's your experience? Have you ever caught an AI system being confidently wrong? What questions do you ask before trusting AI outputs?