Yesterday, I had the privilege of speaking at the EPO’s flagship conference, Search and Examination Matters 2026.
We had over 800 participants watching live. The energy was incredible, and it confirmed one thing: The profession is waking up. We are moving past the “hype” phase of AI and entering the “deployment” phase.
But we didn’t have time for a live Q&A, and the chat logs were flooded with variations of the same three burning questions.
I recorded a video answering them in detail here, but I also wanted to break them down in writing for you.
If you are using AI in your practice, or if you’re hesitant to start, you need to know the answers to these.
1. “Am I leaking my client’s trade secrets?”
This was the most common fear. The answer is: It depends on your architecture.
You need to understand the “Three Tiers of AI Security”:
- Tier 1: Consumer AI (The Danger Zone). If you paste client data into free versions of ChatGPT, Gemini, or DeepSeek, you are paying with your client’s secrets. These models train on your input. Do not do this. It is gambling with your license.
- Tier 2: Enterprise AI (The Middle Ground). This is where you have a paid enterprise agreement (e.g., ChatGPT Enterprise) or use a legal-tech tool that wraps these models. You typically have a contract stating they won’t train on your data. It is safer, but the data still leaves your building (usually to a US server).
- Tier 3: Sovereign/Local AI (The Gold Standard). This is my preferred setup. You run the AI entirely on your own hardware. No data leaves your network. No data touches the internet. If you want to see how to set this up, I made a tutorial on installing local AI here.
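To make the Tier 3 idea concrete, here is a minimal Python sketch of a client that only ever talks to a model running on your own machine. The endpoint and model name are assumptions (an Ollama-style local server on `localhost:11434`); adapt them to whatever local stack you actually run. The point is the guard: the code refuses to build a request to any host that is not local.

```python
import json
from urllib.parse import urlparse
from urllib.request import Request

# Assumed local endpoint (Ollama-style); adjust to your own setup.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def is_local(url: str) -> bool:
    """Return True only if the endpoint resolves to this machine."""
    host = urlparse(url).hostname
    return host in ("localhost", "127.0.0.1", "::1")

def build_request(prompt: str, url: str = LOCAL_ENDPOINT) -> Request:
    """Build the HTTP request, refusing any endpoint that would send
    client data off the local network."""
    if not is_local(url):
        raise ValueError(f"Refusing non-local endpoint: {url}")
    payload = {"model": "llama3", "prompt": prompt, "stream": False}
    return Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending it (urllib.request.urlopen(build_request("..."))) then only
# ever reaches your own hardware; a cloud URL raises an error instead.
```

A hard-coded allowlist like this is deliberately blunt. In a real firm deployment you would enforce the same property at the network layer (firewall rules, no outbound route from the AI host), not just in application code.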
Note: If you need a secure deployment plan that fits your specific firm’s IT infrastructure, I offer 1-on-1 coaching to design exactly that.
2. “Why can’t I trust AI like I trust a calculator?”
This is the most dangerous misconception in our industry.
A calculator is deterministic. It follows rigid logic rules. 2+2 is always 4. AI is probabilistic. It does not care about the truth; it cares about the statistically likely next word.
When you treat a probability machine like a calculator, you become an intellectual tourist. You scan the file, press a button, see 30 pages of fluent text, and nod along. This triggers a psychological trap called “satisficing”. Because the text looks plausible, you stop digging. You accept a “good enough” draft. But because the AI is predicting the average next word, you end up with a statistically average patent.
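The calculator-versus-AI distinction can be shown in a few lines of Python. This is a toy illustration of the principle, not how any real model works internally: a deterministic function returns the same answer every time, while a next-word sampler draws from a probability distribution, so "statistically likely" wins over "true". The vocabulary and weights below are invented for the example.

```python
import random

def calculator(a: int, b: int) -> int:
    """Deterministic: same inputs, same output, every single time."""
    return a + b

# Toy next-word distribution: weights reflect frequency in training
# text, not correctness. (Illustrative numbers only.)
NEXT_WORD = {"novel": 0.5, "inventive": 0.3, "wrong": 0.2}

def sample_next_word(rng: random.Random) -> str:
    """Probabilistic: picks a plausible word, which may still be the
    wrong one."""
    words = list(NEXT_WORD)
    weights = list(NEXT_WORD.values())
    return rng.choices(words, weights=weights, k=1)[0]

# calculator(2, 2) is always 4. Repeated calls to sample_next_word()
# return different words, including "wrong" about 20% of the time.
```

That 20% is the whole argument: the sampler is working exactly as designed when it produces the wrong word, which is why fluent output is not evidence of correct output.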
3. “What if the AI hallucinates?”
We know AI makes things up. It invents prior art; it invents case law.
The solution is simple: Never use AI to outsource fact-finding or decision-making. Use it to outsource friction.
I use AI mainly as a Scribe (to write text I have already structured) and as a Provocateur (to challenge my thinking).
For example, after I draft a claim set, I tell the AI: “Act as a strict EPO examiner. Raise three clarity objections against this claim.”
It usually finds an angle I missed. If it hallucinates an objection that makes no sense? Fine. I am the expert. I ignore it.
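The "Provocateur" prompt above is easy to turn into a reusable template so you apply it consistently to every claim set. A minimal sketch; the exact wording and the helper name are mine, so tune them to your own practice:

```python
# Hypothetical template based on the examiner prompt described above.
EXAMINER_PROMPT = """Act as a strict EPO examiner.
Raise exactly {n} clarity objections against the following claim set.
For each objection, quote the offending wording and explain why it
is unclear.

Claims:
{claims}
"""

def provocateur_prompt(claims: str, n: int = 3) -> str:
    """Fill in the template; send the result to whichever AI tier
    your security setup permits."""
    return EXAMINER_PROMPT.format(n=n, claims=claims)
```

Because you remain the decision-maker, the template only asks the AI to raise objections, never to rewrite the claims itself.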
The Takeaway
My main message at the conference was this: AI assists, humans decide.
The tools are getting sharper, which means our judgment must get even stronger. We cannot be passive validators of a machine’s statistical predictions.
If you want to move from “playing around” to mastering a professional, secure, and high-leverage AI workflow, join my Future-Proof Patent Drafting Seminar. In this 3-hour intensive, I break down my entire toolbox, from the exact prompts I use to the local privacy setups that keep me compliant.
>> Secure Your Spot in the Seminar Here (next slot: 25 February 2026)
Hope this helps!
Bastian