Google DeepMind released Gemma 4 on April 2, 2026 under Apache 2.0. It’s their fourth-generation open model family, and it runs locally with surprisingly little friction. Here are three ways to get it going, depending on what hardware you have in front of you. Option 1, on your phone: no account, no […]

Read More →

If you’ve tried running a local model through Ollama with Claude Code and been greeted by this message: “There’s an issue with the selected model (qwen3-coder:30b). It may not exist or you may not have access to it. Run /model to pick a different model.” …even though the model is clearly installed and runs fine […]

Read More →