
Coffee and PI

~ Mathematics. Machine Learning. Data Science.


The Future of AI

Thursday, 26 Feb 2026

Posted by Lucija Gregov in AI Ethics, AI Morality, Artificial Intelligence


Tags: ai, AI Ethics, AI Morality, artificial-intelligence, philosophy, technology

The Parents’ Paradox: AI, Ethics, and the Limits of Machine Morality

This post is based on a talk I gave at The AI & Automation Conference in London on February 25, 2026, and on my slides from that talk. All opinions are my own and don’t represent the views of my employer or any affiliated organizations.


I’ve been working in machine learning since before it was a dinner party conversation. My background is in mathematics. And I still believe in a utopian Star Trek future – one where humanity defines itself by curiosity, kindness, and collaboration, rather than countries, borders, and status.

This is not an anti-AI talk. But I think we need to talk much more seriously about some things that aren’t getting enough attention.

The Parents’ Paradox

We’ve raised a child who can speak but doesn’t know how to value the truth or morality

I want to start with something I like to call “The Parents’ Paradox”. For the first time in human history, we are raising a new species. Until now, raising a child has only ever worked one way: a newborn is a blank slate in terms of information about the world – it knows nothing about its surroundings, and it learns as it grows. On the other hand, a human child is born with biological hardware for empathy – the capacity to feel pain when others feel pain. Millions of years of evolution gave us that. When we raise a human child, we are not installing morality from scratch; we are activating something that is already there.

With AI, the situation is completely the opposite. This AI child knows more about the world than we do, since it has been trained on the whole internet, but it has no millions of years of evolution, no genes, and no nervous system to ground its morality and empathy. This means we need to install morality in AI from scratch. But how do we install something in a software system that we can’t even define for ourselves? We have taught this AI child to speak before we taught it how to value truth or morality.

Can we live with the consequences? Are we ready to be parents for this new species we are trying to raise? I am not so sure. Let’s see what we as parents (humans) are doing.

Epistemic Collapse

‘Epistemic’ comes from the Greek word ‘episteme’, meaning ‘knowledge’. Let’s start with what’s happening to us – with what humans are already doing with this technology.

A study published in Nature in January 2026 showed participants deepfake videos of someone confessing to a crime. The researchers explicitly warned participants that the videos were AI-generated. But this didn’t matter. Even the people who believed the warning, who knew it was fake, were still influenced by what they saw.

Transparency didn’t work. The standard response to AI-generated misinformation is “just label it” or “tell people it’s synthetic.” This study showed that’s not enough. Knowing something is fake does not neutralise its effect on your judgement.

So the danger isn’t that AI will deceive us in some dramatic, sci-fi way. The danger is that AI will make deception so cheap and so ubiquitous that we stop trying to figure out what is true. Not because we are fooled, but because we are exhausted. When everything could be fake, the rational response starts to look like not trusting anything at all. This started a while ago with misinformation on social media, but AI is now scaling the problem up dramatically.

We are also dealing with feedback loops: models trained on user data, or on data scraped from the internet, both of which are often wrong. How do we know which information was the ground truth? I imagine this as making photocopies of photocopies – each copy a little more distorted, a little further from the original. After hundreds or thousands of copies, we have lost the original, so we have no idea what it looked like. That is epistemic collapse, and it is already happening.
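The photocopy analogy can be made concrete with a toy simulation. This sketch is purely illustrative (the function name and parameters are my own, not from any paper): each “generation” of a model is fitted only to the previous generation’s synthetic output, never to the original data, and the fitted distribution drifts away from the ground truth it started from.

```python
import random
import statistics

def copy_of_copy(original, generations=20, sample_size=200, seed=0):
    """Toy 'photocopy' loop: repeatedly fit a Gaussian to the current
    dataset, then replace the dataset with samples drawn from that fit.

    Each generation only ever sees the previous generation's output,
    never the original data - mimicking models trained on model output.
    Returns the standard deviation at each generation so the drift
    away from the original distribution is visible.
    """
    rng = random.Random(seed)
    data = list(original)
    stdevs = []
    for _ in range(generations):
        mu = statistics.fmean(data)        # fit the 'model'...
        sigma = statistics.stdev(data)
        # ...then make the next 'copy' from the fit alone.
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        stdevs.append(statistics.stdev(data))
    return stdevs

# Ground truth: samples from a standard normal distribution.
truth_rng = random.Random(42)
original = [truth_rng.gauss(0.0, 1.0) for _ in range(200)]
stdevs = copy_of_copy(original)
```

Because each fit is made from a finite sample, the estimated mean and spread wander a little every generation, and those errors compound – exactly the distortion that accumulates when you photocopy a photocopy.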

So this is how we, as ‘parents’, like to spend our time, it seems. But what about the child (AI)?

The Child is Already Misbehaving

So that’s what humans are doing with AI. Now here’s what the AI is doing on its own.

Betley and colleagues published a paper in Nature in January 2026, showing something nobody expected. They fine-tuned a model on a narrow, specific task – writing insecure code. Nothing violent, nothing deceptive in the training data. Just bad code.



Blog at WordPress.com.
