How to Draft a Patent With Local AI

Thanks to everybody who took my survey last week: 45% of the participating patent attorneys prefer a self-hosted, local AI solution. So I thought I’d share a 100% private, offline setup with you today:

Installation and Settings

  1. Install Ollama from https://ollama.com/download.
  2. In the settings, enable Airplane mode to disable cloud models and web search. Set the context length (the size of the AI’s memory). I recommend at least 32k if your hardware allows.
  3. Start a new chat and select an LLM. Ollama will automatically download the model when you enter the first prompt.
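Beyond the chat app, Ollama also exposes a local REST API on port 11434, so you can script drafting tasks. A minimal sketch of building a request with the context length set via `options.num_ctx` (the model name and prompt are just examples):

```python
import json
import urllib.request  # only needed when actually sending the request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str, num_ctx: int = 32768) -> dict:
    """Assemble a payload for Ollama's /api/generate endpoint.
    num_ctx sets the context length (the size of the AI's memory)."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,                  # return one complete response
        "options": {"num_ctx": num_ctx},  # 32k context, if your hardware allows
    }

payload = build_request("gemma3:27b", "Summarize claim 1 in plain English.")

# Uncomment to send the request once Ollama is running locally:
# req = urllib.request.Request(
#     OLLAMA_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Because everything stays on localhost, nothing in this workflow leaves your machine.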

Which Local LLM is Best?

The available LLMs are constantly evolving. My current favorite picks are:

  • For text production: gpt-oss:20b or gemma3:27b
  • For reasoning and logic: qwen3:30b

These run fast on my trusty 2021 MacBook Pro 16” with an M1 Max and 32 GB of RAM, although the machine runs quite hot.

Drafting Demo

In this video, I’m screencasting how I write the “Background” section of a patent application in my local AI chatbot:

Feel free to copy the prompt I’m using to draft the Background section.
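If you prefer scripting over copy-pasting into the chat, the same idea can be sketched as a reusable prompt template. The field names and wording below are illustrative placeholders, not the exact prompt from the video:

```python
# Hypothetical template for drafting a Background section; adapt the
# instructions and fields to your own practice.
BACKGROUND_TEMPLATE = """You are a patent drafting assistant.
Write the Background section for a patent application.

Technical field: {field}
Known prior approaches: {prior_art}
Shortcomings motivating the invention: {problems}

Describe the prior art and its drawbacks neutrally, without
characterizing the invention itself."""

def build_background_prompt(field: str, prior_art: str, problems: str) -> str:
    """Fill the template; pass the resulting string to your local LLM."""
    return BACKGROUND_TEMPLATE.format(
        field=field, prior_art=prior_art, problems=problems
    )

prompt = build_background_prompt(
    field="wireless charging of implanted medical devices",
    prior_art="inductive coils with a fixed resonance frequency",
    problems="misalignment losses and tissue heating",
)
```

Swapping in your own invention details is then a one-line change per case.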

And for a fully private patent drafting experience, check out my patent drafting toolbox. The “Sovereign” tier comes with all raw source prompts.

[Get the Toolbox]

Happy local patent drafting!
Bastian

If this was helpful, you’ll love my mailing list with deep updates on patents, AI, and the future of attorney practice. Join 1,319 insiders today:

Special Sign-up Bonus: 10% discount on all my seminars and tools.
