
field notes

How to Stress Test Your Opinions With AI

April 4, 2026 · 5 min read

ai, thinking, workflow, decision-making, field-note

Most people use AI to confirm what they already believe.

That is the wrong move.

The better move is to use AI to dismantle your own position, rebuild the opposing one from scratch, and then watch them collide. What survives that collision is worth keeping. The rest was always thinner than you thought.

Here is the method I use to level up my thinking on any topic where I have a strong opinion.

The three-chat method

You need three separate conversations. Not one. Three.

Chat 1: Build the strongest case for Side A.

Tell the AI it is preparing for a debate. It is arguing for Side A. Its job is to win. Have it research the best arguments, anticipate counterattacks, and build the most compelling case it can. Do not soften it. You want the steel man, not the straw man.

Chat 2: Build the strongest case for Side B.

Same thing, opposite direction. The AI is now arguing for Side B with the same intensity. Same instructions. Best arguments, best counterarguments, best research. Let it go all out.

Chat 3: Simulate the debate.

Take both documents into a third conversation. Tell the AI to simulate the debate between the two positions. Then ask it to be objective and evaluate: where does Side A win? Where does Side B win? Where are both sides weak? Where do they actually agree?

That third conversation is where the real learning happens.
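The three prompts above can be sketched as reusable templates. This is a minimal illustration assuming a generic chat interface; the function names and exact wording are mine, not a fixed recipe — adapt them to your topic.

```python
# Sketch of the three-chat method as prompt templates.
# Paste each prompt into a *separate* conversation, as described above.

def advocate_prompt(side: str, topic: str) -> str:
    """Prompt for Chat 1 or Chat 2: argue one side at full strength."""
    return (
        f"You are preparing for a formal debate on: {topic}.\n"
        f"You are arguing for {side}. Your job is to win.\n"
        "Build the strongest possible case: best arguments, best evidence, "
        "and rebuttals to the counterattacks you expect. "
        "Do not hedge, and do not present the other side."
    )

def debate_prompt(case_a: str, case_b: str) -> str:
    """Prompt for Chat 3: collide the two cases, then evaluate."""
    return (
        "Below are two opposing briefs. Simulate a debate between them, "
        "then evaluate objectively: where does Side A win, where does "
        "Side B win, where are both weak, and where do they agree?\n\n"
        f"--- Side A ---\n{case_a}\n\n"
        f"--- Side B ---\n{case_b}"
    )
```

The point of keeping the templates separate is the same as keeping the chats separate: each advocate prompt commits fully to one side, and only the third prompt ever sees both briefs.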

Why three chats instead of one

If you ask a single chat to "give me both sides," you get a lukewarm list. The AI hedges. It tries to be balanced from the start, which means neither side gets its best shot.

When you force the AI to commit fully to one side, it digs deeper. It finds arguments you have never considered. It anticipates objections you did not know existed. It builds a case that actually has teeth.

Then when you do the same for the other side, you end up with two documents that are each genuinely strong. The debate between them is real, not performative.

What I learned the first time I did this

I ran this on a political topic where I had a firm opinion.

The AI built a case for my side that was better than anything I could have written. It pulled data I had never seen, framed arguments more cleanly than I had framed them, and anticipated counterpoints I had not thought of.

Then it did the same thing for the other side.

That second document was uncomfortable. Not because it was wrong. Because it was good. There were arguments in there that I had no answer for. Points I had never seriously engaged with. Data that genuinely complicated my position.

When the third chat compared both sides, the result was not "your side wins" or "their side wins." It was more like: Side A is stronger on these three issues, Side B is stronger on these two, and on this one they are both making the same mistake from different angles.

That was more useful than any opinion piece I have ever read.

What this is actually good for

This is not just for politics. You can run this method on anything where you hold a position.

  • A business decision you are leaning toward
  • A disagreement with a coworker or partner
  • A strategy choice between two approaches
  • An argument you keep having with someone
  • A belief you have never seriously questioned

The pattern is always the same: force the strongest version of each side into existence, then let them fight.

The fourth move

After the debate simulation, there is one more step that matters.

Ask the AI: "Now that you understand both positions fully, what would you actually recommend? What is the most honest, well-informed path forward?"

This is where synthesis happens. The AI is no longer advocating for a side. It has the full picture. And its recommendation usually pulls the best elements from both positions while discarding the weak ones.

That recommendation is not gospel. But it is a better starting point than whatever you walked in with.

What this did to my thinking

Two things happened that I did not expect.

I got more humble. I realized that on most topics, I know two or three points really well. I have stories and experience that back those points up. But there are a dozen other points I have never engaged with. The things I was confident about were real, but they were a small piece of a bigger picture.

I got more curious. Once you see how much stronger your thinking gets when you seriously engage the other side, you start wanting to do it more. The reflex shifts from "how do I defend my position" to "what am I missing."

That shift is worth more than winning any argument.

The practical takeaway

You do not need to be neutral on everything. Having strong opinions is fine. But strong opinions held without stress testing are just habits dressed up as convictions.

AI gives you a way to pressure-test any belief in thirty minutes. You do not need to find a debate partner. You do not need to read ten books. You just need three chats and the willingness to let the other side make its best case.

The opinions that survive that process are the ones worth holding onto.

The ones that do not survive were never really yours to begin with.