Can You “TRUST” ChatGPT?

If you’ve been wondering “Can I trust ChatGPT? How do I know it’s real? How do I know it’s true?”—you’re asking the question that comes up in almost every workshop, seminar, and training I do: that constant second-guessing about whether you can actually rely on what it’s giving you.

One thing I can tell you: you’re not alone.

And, to spoil the ending: there’s no silver bullet. But there are some best practices and ways you can set yourself up to succeed.

Here’s something that might surprise you: globally, 75% of people trust technology to help them, but only 50% trust AI. In the US, that number drops to just 35%. Your hesitation makes complete sense.

The real issue isn’t whether you can trust ChatGPT. It’s this: Are you asking ChatGPT to work with you or work for you? That one word makes all the difference.

The Black Box Problem (And Why Your Gut Says “Be Careful”)

Here’s what’s creating all this anxiety: ChatGPT often can’t explain how it arrives at an answer. There’s a term for this: “the black box problem.” It’s like having a confident friend who never tells you their sources; they just know the answer.

That might be fine for casual use, but for high-stakes things like financial advice, medical decisions, or business analysis, we can’t take it at face value. When the “why” is hidden, trust becomes harder to build, especially for life-changing decisions.

And listen, this isn’t just about ChatGPT. AI is everywhere now: in our phones, in online shopping, in streaming services. Every piece of software being sold is touting AI. The stakes are growing. It’s no longer just a cool tool or an added bonus. There are entire industries and economies built around this.

Some places are taking this seriously. Illinois, for instance, passed what’s called the WOPR Act, banning AI from acting as a therapist in the state. Lawmakers cited risks to vulnerable people, saying AI might sound empathetic, but it can’t truly understand human emotions. And there have been stories of AI hallucinating and giving bad advice. Violations carry a penalty of $10,000 per offense, making it clear that when it comes to sensitive areas like therapy, they only want humans to lead.

What Are We Trusting It to Do?

Can we trust ChatGPT? Sure. But my question back to you is: What are we trusting it for?

I’ll be honest with you. I have an AI therapy custom GPT, a personal counselor that I use for guidance and conversation. But I use it fully aware of its limitations. I don’t trust it implicitly. I do trust it for guidance and conversation, but I put my own mental guardrails on what’s happening there.

The shift that changes everything: Stop asking “Can I trust this?” and start asking “What am I trusting this to do?”

The Three Phases: Where You Lead, Where ChatGPT Helps

I believe most projects have three phases, and understanding this is key to building real trust:

Phase 1 – Planning and Ideation: This is where you define what you’re doing and why. The outlining stage. Understanding your goals. This is where you put your experience and expertise upfront. ChatGPT can’t read your mind or understand your business like you do.

Phase 2 – The Busy Middle: This is where we get exhausted, where we pour in all our mental energy and time, and where our bandwidth gets eaten up. Research, drafting, data crunching, first-pass creation. This is where ChatGPT really shines. It can compress this phase so you can spend more time in Phase 3.

Phase 3 – Finishing and Polishing: This is the expert phase. Where you add your experience, your insight, your expertise. Where you bring you. This is where verification happens, where you review and fact-check and edit.

Here’s the key: if you look at ChatGPT as a shortcut through the busy middle (not a replacement for all three phases, and specifically not that third one, where you add your verification, your expert insight, your value), then you’re using it right.

But when you remove the finishing and polishing because you no longer think it’s needed, that’s when we creep closer to that situation where we’re just trusting blindly.

Trust But Verify

There’s an old Reagan line from back when he was in the White House: “Trust, but verify.” That’s how to treat ChatGPT. It’s an eager intern that always returns with an answer; I’ve never had ChatGPT say “I don’t know.” But that answer isn’t always perfect, and it isn’t always right.

So it’s up to us to review, fact-check, and edit. What ChatGPT and tools like it allow you to do is get to that verification phase faster. The busy middle gets shrunk by using a tool like ChatGPT, but we can’t lose that third phase where you bring what you bring to the table.

If you’re using it by yourself, go through the process of asking ChatGPT questions, giving it background, giving it as much information as possible. Tell that amazing intern what you’re doing, why you’re doing it, what success looks like. Then ask: “Do you understand what I need? Do you have any questions? Because I want you to do your best work.”

That’s one way of keeping the quality high and keeping the AI working with you as a tool to expedite the busy middle.
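
The same briefing pattern carries over if you ever move from the chat window to an automated workflow through the API. Below is a minimal sketch using OpenAI’s Python SDK; the model name, prompt wording, and business details are my own illustrative assumptions, not a recipe from any particular setup.

```python
# A minimal sketch of the "brief the intern first" pattern, using OpenAI's
# Python SDK. Assumes the openai package is installed and OPENAI_API_KEY is
# set; the model name, prompt wording, and business details below are
# illustrative assumptions, not a prescription.
from openai import OpenAI

client = OpenAI()

# Phase 1 stays with you: what you're doing, why, and what success looks like.
briefing = (
    "You're helping me draft a quarterly customer update. Background: we're "
    "a 12-person bakery supply company announcing two new products. Success "
    "looks like a warm, 300-word email that doesn't sound salesy."
)

# Ask for clarifying questions before any drafting happens, just as you
# would with that eager intern.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; use whichever one you have access to
    messages=[
        {
            "role": "system",
            "content": "Ask clarifying questions before you draft anything.",
        },
        {
            "role": "user",
            "content": briefing
            + " Do you understand what I need? Do you have any questions?",
        },
    ],
)
print(response.choices[0].message.content)

# A second call (after answering its questions) would request the draft.
# Phase 3 (review, fact-check, edit) still belongs to you.
```

The point isn’t the code; it’s that the background and the “do you have any questions?” step happen before any output gets trusted.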

Why Governance Follows (It Doesn’t Lead)

Now, after hearing about Illinois and their WOPR Act, you might be thinking, “Great, the government will figure this out for us.” But here’s the thing about governance, which I learned from serving in government myself: I’ve been an elected city councilman and mayor in my town a couple of times.

Most governance reacts to things that are happening; it doesn’t lead them. A critical mass of people comes along, learns, has experiences, and then government gets called in to put safeguards around the activity.

I think governance is there to put safety rails around AI, but not to dictate how we do it and where we go. Governance follows what’s happening. It doesn’t lead the way.

So if you’re waiting for the government to tell you how to use ChatGPT safely, or what the rules are, you’re going to be waiting a long time. By the time they catch up, the technology will have already changed three times over.

We can’t wait for governance to solve the trust problem for us. We have to build our own frameworks for working with these tools responsibly.

The Training Gap for Companies

Here’s something that might surprise you: even now, there are far more people just getting started with ChatGPT, or who haven’t tried it at all, than we might think. In workshops I do with business owners, when I ask how many are comfortable using ChatGPT, very few hands go up. Those of us who are familiar with it take it for granted, but an awful lot of people are just being introduced to what AI even is.

The irony is that ChatGPT and tools like it are meant to make you more efficient, but to get those gains, you’ve got to learn the tool first. And like everything else, it takes longer at the beginning. So finding the time to learn, and making room for it, gets in the way of being efficient.

Inside a company, we cannot push down best practices and “thou shalt” directives without getting folks comfortable and confident first. We can’t assume everyone understands what this thing is, what it can do, what it should do, and how to use it.

Here’s how to actually move forward as a company:

  • Start with education first. Help everyone understand what AI actually is (they’ve been using it in Google and Netflix for years)
  • Create clear policies. What data can be shared, what can’t, what requires verification
  • Identify internal champions. Find the curious folks who want to explore and let them lead the way for their peers
  • Form small groups. Have these champions work with small teams rather than trying to train everyone at once
  • Give permission to experiment. Tell your team “go play around and see what you find, you can’t break it”
  • Create sharing protocols. When someone discovers something useful, have them show the team
  • Document what works. Build your own internal best practices based on real results
  • Scale gradually. Start with low-stakes work, expand as confidence builds

When we skip the communication phase and the training phase (when we take for granted that everyone learns at the same pace and level), we’re going to lose the collective impact this can have.

A Former Google Executive’s Warning

I came across an article in which former Google executive Mo Gawdat predicts what he calls “a disruption of humanity” driven by the growing pace of AI advancement. He’s worried about over-reliance on answers that can’t be explained, on things that can’t be understood.

As we go from curiosity to mainstream adoption, that rush of capitalism means we’re going to see companies pushing things that maybe aren’t fully understood, and people trusting things that aren’t fully understood.

That’s exactly why we need to be intentional about this.

The Bottom Line: Trust It to Do What?

Can we trust ChatGPT? My answer is always going to be: Trust it to do what?

Answer that question before you move forward. We can’t assume everyone has the same answer to that question. We can’t assume everyone has the same approach or gets the same value from what’s coming out of it.

If we trust but verify, and have ChatGPT work with us (not for us), we’re going to be better prepared for whatever comes next, including updates and changes to the tool itself.

The more you focus on the fundamentals (talking with it, giving it background, telling it what success looks like, treating it like an intern, asking questions), the better off you’ll be using any version of this tool, and the more likely you are to get results you can actually trust.