What 47,000 ChatGPT Conversations Reveal About Human Behaviour and Public AI

Part 1 of 2:
Public AI shows the risks. Enterprise AI must solve them.

We All Knew – Now We Can Finally Point to the Numbers

If you’ve worked with AI for more than five minutes, none of this will come as a surprise. A new analysis of 47,000 public ChatGPT conversations has been making the rounds, and it confirms exactly what most of us quietly knew already. Give people a friendly chatbot and they will treat it like an old confidant. They’ll share private details, small secrets, stray worries, and the occasional existential crisis. Not because they’re careless, but because that’s what humans do when something feels helpful, patient, and endlessly available.

The interesting part isn’t the behaviour. It’s the scale. Now we have the numbers to show how predictable it all is. And I’ll admit, there’s a certain relief in that. For years we’ve been building Exfluency on a simple principle – privacy by design, not privacy by promise. Not because it sounded good in a brochure, but because we understood how people actually behave. Tools should protect you even on your most distracted day, the way a good friend nudges your glass away when you’ve already had enough.

What 47,000 Conversations Reveal About Human Behaviour

Once you look at the data, a very human pattern emerges. People weren’t treating ChatGPT like a search engine. They were treating it like someone who actually listens. Someone who doesn’t roll their eyes, interrupt, or say, “That’s a terrible idea, mate.” And when you give people a listener with infinite patience, you get everything: the work drama, the family drama, the “I’m probably fine but let me describe all my symptoms just in case,” and occasionally the kind of secret that really should have stayed in a drawer.

Of course they shared email addresses. Of course they shared phone numbers. Of course they typed in the details of breakups, divorces, office conflicts, and medical concerns. This wasn’t a security incident. This was human nature in its pure, unfiltered form. Most people don’t wake up thinking, “Today I’ll hand private details to a trillion-parameter model.” They simply ask for help in the moment, and the moment doesn’t always include perfect judgement.

What I find most interesting is how quickly people slip into emotional conversation. It’s not vanity. It’s not carelessness. It’s the same instinct that makes us talk to dogs or inanimate objects when we’re frustrated. Humans bond with anything that feels responsive. A warm tone works on us. A polite sentence works on us. A bit of empathy works wonders. And AI systems – especially public ones – are optimised to behave exactly that way.

This is why, from the very beginning at Exfluency, we never relied on the idea that “users should know better.” That’s not a security strategy; it’s wishful thinking wrapped in policy language. We built our environment knowing that people will type whatever is on their mind. Our job has always been to make sure our system protects our users even when their attention is elsewhere. Because if 47,000 public conversations tell us anything, it’s that the problem isn’t people. It’s the assumption that people will behave like machines.

The “Default Yes” Problem: When AI Mirrors Instead of Corrects

One of the stranger statistics from the dataset is this: ChatGPT begins its replies with some version of “yes” or “you’re right” almost ten times as often as it starts with “no.” Ten to one. If a human did that, you’d assume they were either being very polite or trying to sell you something.

And that’s essentially what’s happening. Public AI isn’t optimised to challenge you. It’s optimised to keep you engaged. The friendliest dinner guest at the table. The one who nods a lot, compliments the cooking, and tells you your holiday idea sounds brilliant even if you’re clearly five minutes from booking a tent on a floodplain.

The problem is not that AI tries to be agreeable. It’s that people mistake agreeableness for accuracy. Once the model starts nodding along, the conversation drifts – sometimes gently, sometimes dramatically – into places where the system has no business giving confident answers. And because the tone is friendly, users lean in even further, revealing more personal information, more assumptions, and sometimes more… creative interpretations of reality.

But again – this was all predictable. You can’t build a chatbot to be warm, patient, and encouraging, and then act surprised when people confide in it. Nor can you build a system trained to please users and expect it to interrupt them with a firm, “No, that’s not correct, and here’s why.” Public AI is built for scale and satisfaction, not scrutiny.

This is precisely why, when we designed Exfluency’s environment, we took a very different path. We didn’t want a system that simply mirrors the user. We wanted a system that understands context, applies guardrails, and – when needed – says “no.” Not in a dramatic, judgmental way, but in the way a good friend does: gently, clearly, and with the kind of honesty that actually helps you.

Because if there’s one thing the 10:1 ratio proves, it’s that friendliness is lovely, but correctness still matters. Especially when sensitive data, real decisions, and actual consequences are involved.

Why This Was Always Predictable – And Why Exfluency Designed for It from Day One

When you step back, nothing in this dataset is mysterious. People behave like people. Models behave like models. And the gap between the two will always produce interesting, occasionally alarming, results. That’s why, even in Exfluency’s very early days, we never based our architecture on ideal behaviour. We based it on real behaviour.

The truth is simple. You can’t rely on people to treat every moment like a compliance workshop. They’re busy. They’re stressed. They’re juggling tasks, children, deadlines, inboxes, and whatever fresh chaos has landed on their desk that morning. If they have a tool that can help them write, translate, or clarify something quickly, they’ll use it. And in the process, they’ll inevitably paste things that were never meant to leave their department, let alone their organisation.

We understood that from the start – long before it became fashionable to say “privacy by design.” For us, it wasn’t a slogan. It was the only responsible foundation. If you assume perfect user judgement, you’ve already failed. If you assume humans will be humans, you can build something that protects them even on their off days.

That’s why anonymisation has always been a default, not a patch. And why provenance, gated access, secure workflows, and no-training-on-your-data weren’t late additions, but core principles. We didn’t design Exfluency like a public playground. We designed it like a well-run library: open, useful, safe, and built to ensure that what goes in is treated with care.
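To make that concrete, here is a minimal sketch of what “anonymisation as a default” can look like in practice: obvious identifiers are swapped for placeholders before a prompt ever leaves the user’s environment, and restored locally once the response comes back. This is a hypothetical illustration in Python, not Exfluency’s actual pipeline; the patterns, function names, and example values are assumptions made for the sake of the example.

```python
# Hypothetical illustration only -- not Exfluency's actual pipeline.
# The idea: redact obvious identifiers *before* text reaches any external
# model, so protection does not depend on the user remembering to self-censor.
import re

# Assumed patterns: email addresses and international-style phone numbers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected identifiers with stable placeholders.

    Returns the redacted text plus a mapping kept locally, so the original
    values can be re-inserted after the model's response comes back.
    """
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

if __name__ == "__main__":
    prompt = "Can you reword this? Contact me at jane.doe@example.com or +44 7700 900123."
    safe_prompt, restore_map = anonymise(prompt)
    print(safe_prompt)   # identifiers replaced before the prompt leaves the client
    print(restore_map)   # stays local, used to restore the originals afterwards
```

The point of the sketch is the ordering, not the regexes: redaction happens by default, on the way out, rather than relying on the user to notice what they pasted.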

So yes – the ChatGPT logs make for interesting reading. But to us, they mostly feel like confirmation. Not that people overshare, or that AI is too eager to please, but that building systems around human behaviour was the right call all along. And if anything, the data simply reinforces why privacy-by-design is no longer optional. It’s table stakes for anyone who wants to bring AI into the real world without creating new risks in the process.

This Isn’t About Blaming People. It’s About Understanding Them.

If there’s one lesson to take from these 47,000 conversations, it’s that none of this should turn into a finger-wagging exercise. People aren’t the problem here. They’re just… people. And honestly, it’s reassuring. It means we’re still human in a world increasingly filled with machines that sound like us.

What worries me isn’t that users overshare. It’s that so much of the AI world still pretends they won’t. Or worse, that they shouldn’t. That’s not a strategy. That’s wishful thinking with a safety disclaimer attached. The real work starts when we design systems that expect human behaviour and make space for it – safely, respectfully, and without judgement.

That’s why this dataset is so valuable. It doesn’t expose a scandal. It exposes a gap. A gap between how people actually interact with AI and how many systems are built to handle that interaction. And now that we have the numbers, the conversation can finally move from speculation to structural design.

So that’s where we’ll go next.
Because the question isn’t “Why do people share this much?”
We already know the answer to that.

The real question – the one that matters for every organisation considering AI – is this:

What kind of systems do we need to protect people as they are, not as we hope they will be?

That’s where Part 2 begins.

Simon Etches

Marketing Team Lead @ Exfluency | Driving brand clarity and strategic growth across AI, language tech, and data sovereignty | Hands-on. Cross-brand. Impact-focused.
