Privacy is becoming a luxury in the AI world. If you’re tired of sending your data to the cloud every time you ask a question, running a model locally is the answer. Today, we’re looking at Qwen 3.5 9B, a powerhouse model from Alibaba, and how to get it running on your own machine using Ollama. Whether you’re a […]

Read More →
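As a rough sketch of the workflow the post above describes, the two Ollama commands below would download the model and start a local chat session. They are printed as a dry run here, and the tag `qwen3.5:9b` is an assumption, not a confirmed entry in the Ollama model library; check `ollama list` or the library for the exact tag.

```shell
# Dry-run sketch of the local-Qwen workflow. The model tag is an
# assumption -- verify it against the Ollama model library first.
MODEL="qwen3.5:9b"

# One-time download of the model weights (several GB for a 9B model):
echo "ollama pull $MODEL"

# Interactive chat session, running entirely on your own machine:
echo "ollama run $MODEL"
```

In practice you would run the printed commands directly: `ollama pull` is only needed once, after which `ollama run` opens a REPL-style chat in the terminal with no data leaving your machine.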

As a software architect who has spent years wrestling with the “Big Three” of mobile automation (Appium, Espresso, and XCTest), I’ve seen it all. I’ve seen CI pipelines turn into a sea of red because of a missing Thread.sleep(). I’ve seen talented engineers spend 40% of their sprint just maintaining brittle testing infrastructure. Mobile testing has historically […]

Read More →

DeepSeek, a powerful open-source LLM, can easily be run locally on your desktop or laptop using Ollama. I’m using an M1 MacBook Pro with 32 GB of RAM. Ollama simplifies the process of running large language models, handling dependencies and providing a consistent interface. This guide will walk you through installing DeepSeek via Ollama, making it accessible with just a […]

Read More →
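Once a DeepSeek model is installed through Ollama as the guide above describes, the Ollama server listens on `localhost:11434` and accepts completion requests over HTTP. Below is a minimal sketch of building a request body for its `/api/generate` endpoint; the model tag `deepseek-r1` is an assumption, since the actual tag depends on which DeepSeek variant and size you pull.

```python
import json

# Ollama's HTTP API accepts POST http://localhost:11434/api/generate with a
# JSON body naming the model and prompt. The tag "deepseek-r1" below is an
# assumption -- use whatever `ollama list` reports on your machine.
def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request("deepseek-r1", "Explain Ollama in one sentence.")
print(json.dumps(payload))
```

Sending this payload with any HTTP client (curl, `requests`, etc.) returns the model’s response as JSON; setting `"stream": False` asks for the whole completion in one reply instead of token-by-token chunks.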