Thanks to everybody who took my survey last week. 45% of participating patent attorneys prefer a self-hosted, local AI solution. So I thought I’d share a 100% private, offline installation with you today:
Installation and Settings
- Install Ollama from https://ollama.com/download.
- In the settings, enable Airplane mode to disable cloud models and web search. Set the context length (the size of the AI's memory); I recommend at least 32k tokens if your hardware allows it.
- Start a new chat and select an LLM. Ollama will automatically download the model when you enter the first prompt.
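Besides the chat window, you can also talk to the local model programmatically: Ollama serves an HTTP API on http://localhost:11434 by default. Here is a minimal sketch of a single-turn chat request; the model name and the `num_ctx` value (the context length from the settings above) are just example choices.

```python
import json
import urllib.request

# Ollama's default local chat endpoint (assumes a stock installation)
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str, num_ctx: int = 32768) -> dict:
    """Assemble the JSON body for a single-turn chat with a local model.

    num_ctx sets the context window -- the "size of the AI's memory".
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {"num_ctx": num_ctx},
        "stream": False,  # ask for one complete response instead of chunks
    }

def chat(model: str, prompt: str) -> str:
    """Send the request to the local Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Because everything goes to localhost, nothing leaves your machine, which is the whole point of a local setup.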
Which Local LLM is Best?
The available LLMs are constantly evolving. My current favorite picks are:
- For text production: gpt-oss:20b or gemma3:27b
- For reasoning and logic: qwen3:30b
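To check which of these models are already downloaded, you can query the local server's model list. The `/api/tags` endpoint is Ollama's documented listing of installed models; the small parsing helper below is my own.

```python
import json
import urllib.request

# Ollama's default endpoint that lists locally installed models
TAGS_URL = "http://localhost:11434/api/tags"

def installed_model_names(tags_json: dict) -> list[str]:
    """Extract the model names from an /api/tags response payload."""
    return [m["name"] for m in tags_json.get("models", [])]

def list_local_models() -> list[str]:
    """Ask the local Ollama server which models are already on disk."""
    with urllib.request.urlopen(TAGS_URL) as resp:
        return installed_model_names(json.loads(resp.read()))
```

If a model you select in the chat is missing from this list, Ollama simply downloads it on first use, as noted above.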
These run fast on my trusty 2021 MacBook Pro 16” with M1 Max and 32 GB of RAM, although the machine gets quite hot.
Drafting Demo
In this video, I screencast how I write the “Background” section of a patent application in my local AI chatbot:
Feel free to copy the prompt I’m using to draft the Background section.
And for the full private patent drafting experience, check out my patent drafting toolbox. The “Sovereign” tier comes with all raw source prompts.
Happy local patent drafting!
Bastian