5 days is enough
By Omer Atagun
I had a trip planned. Calendar cleared, bags half-packed. Then life did its thing and plans fell through. Instead of sulking, I pivoted. One thought kept echoing:
“Well, I guess I could build that thing I’ve been putting off...”
That thought snowballed into one of the most productive weeks I’ve had in a long time. I ended up building a GitHub App that reviews PRs with AI, extended it into a local-first personal assistant, and wired up my house to respond to presence like a proper smart system.
Here’s the log.
1. LGTM Reviewer: A GitHub App for AI-Powered Code Reviews
This idea had been sitting in my head for months: what if a bot could take the chore out of code reviews? Not just another CI check. Something that reads the code changes and leaves human-ish feedback—like a real teammate would.
So I built it. I'm not the first, of course, but I built it my way.
The stack:
- Go + Gin: Fast, simple, clean HTTP framework. Handles GitHub webhook routing.
- go-github: Official GitHub API client for Go.
- Postgres: Stores background jobs triggered by pull request events.
- Ollama + Open WebUI: Local LLM stack, no OpenAI, no cloud APIs. Total privacy. Running on an RTX 4060 with CodeLlama 7B—it’s decent, not magical.
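Since the whole stack runs locally, generating a review boils down to one HTTP call against Ollama. A minimal sketch of that call, assuming Ollama's default /api/generate endpoint on localhost:11434 (the function name and prompt are illustrative, not the app's actual code):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateReview sends a prompt to a local Ollama instance and returns
// the model's completion, using Ollama's /api/generate endpoint.
func generateReview(prompt string) (string, error) {
	body, err := json.Marshal(map[string]any{
		"model":  "codellama:7b",
		"prompt": prompt,
		"stream": false, // one JSON response instead of a token stream
	})
	if err != nil {
		return "", err
	}
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Response, nil
}

func main() {
	review, err := generateReview("Review this diff and point out bugs:\n+ fmt.Println(x)")
	if err != nil {
		panic(err)
	}
	fmt.Println(review)
}
```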
The workflow:
- GitHub sends a webhook when a PR is opened.
- The app queues it in a `jobs` table.
- A background worker picks it up and parses the diff.
- I built a custom patch parser that extracts just the meaningful code edits—no noise.
- That context is used to craft a prompt for the LLM.
- The LLM generates concise, focused review comments.
- The app posts those comments inline on the PR.
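To make the queueing step concrete, here is roughly what the webhook-to-jobs handoff looks like. A sketch, not the app's exact code; the route, the `jobs` schema, and the connection string are assumptions:

```go
package main

import (
	"database/sql"
	"io"
	"net/http"

	"github.com/gin-gonic/gin"
	_ "github.com/lib/pq" // Postgres driver (assumed choice; any driver works)
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/lgtm?sslmode=disable")
	if err != nil {
		panic(err)
	}

	r := gin.Default()
	r.POST("/webhook", func(c *gin.Context) {
		// Only pull request events become review jobs.
		if c.GetHeader("X-GitHub-Event") != "pull_request" {
			c.Status(http.StatusNoContent)
			return
		}
		payload, err := io.ReadAll(c.Request.Body)
		if err != nil {
			c.Status(http.StatusBadRequest)
			return
		}
		// Enqueue: the background worker polls for pending jobs.
		if _, err := db.Exec(
			`INSERT INTO jobs (event, payload, status) VALUES ($1, $2, 'pending')`,
			"pull_request", payload,
		); err != nil {
			c.Status(http.StatusInternalServerError)
			return
		}
		c.Status(http.StatusAccepted)
	})
	r.Run(":8080")
}
```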
The first version was overly verbose. Then too vague. After iterating on both the patch parsing and prompt structure, I got something closer to what I’d write myself.
Even now, I'm still not 100% satisfied—home-run AI reviews require better models. But for a local setup, it’s a surprisingly workable baseline.
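For a sense of what "just the meaningful code edits" means in the patch parser, here's the skeleton of the idea: walk the unified diff, keep hunk headers and added lines, drop metadata noise. The real parser keeps more context; this is only the core trick:

```go
package diffparse

import "strings"

// ExtractEdits pulls just the added lines (with their hunk headers)
// out of a unified diff, skipping metadata like index and +++ lines.
// A sketch of the idea, not the full parser.
func ExtractEdits(patch string) []string {
	var edits []string
	for _, line := range strings.Split(patch, "\n") {
		switch {
		case strings.HasPrefix(line, "@@"):
			edits = append(edits, line) // hunk header: locates the change
		case strings.HasPrefix(line, "+") && !strings.HasPrefix(line, "+++"):
			edits = append(edits, line) // an added line of code
		}
	}
	return edits
}
```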
Bonus: A UI for sanity
To keep things visible:
- React + Tailwind + Vite: Stood up in under an hour.
- Shows job queue and review results.
- Not flashy, just enough to debug and monitor the pipeline.
2. From Code Reviews to Chore Reviews: A Personal AI Assistant
Once the job system and LLM prompt pipeline were solid, a new idea clicked: Why limit this to code?
So I started reusing the same infra to manage me.
I extended the app to accept inputs from:
- Apple Calendar & Reminders (via iCloud sync tools)
- Step counts and location from the Home Assistant mobile app
- Device and sensor states from Home Assistant
- Manual jobs from iOS Shortcuts
- Bi-directional communication between Apple Shortcuts and HA, routed via Google Nest
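Because every source ultimately just enqueues a job, wiring up a new input is one endpoint. For example, an iOS Shortcut can POST JSON via "Get Contents of URL"; here's a sketch of the receiving side that plugs into the same Gin router as the reviewer (the route, payload shape, and package name are made up for illustration):

```go
package jobsapi

import (
	"database/sql"
	"net/http"

	"github.com/gin-gonic/gin"
)

// shortcutJob is the tiny payload an iOS Shortcut POSTs, e.g.
// {"source": "shortcut", "kind": "reminder", "text": "clean the kitchen"}.
type shortcutJob struct {
	Source string `json:"source"`
	Kind   string `json:"kind"`
	Text   string `json:"text"`
}

// RegisterShortcutRoute adds a manual-job endpoint to an existing router.
func RegisterShortcutRoute(r *gin.Engine, db *sql.DB) {
	r.POST("/jobs/shortcut", func(c *gin.Context) {
		var job shortcutJob
		if err := c.BindJSON(&job); err != nil {
			c.Status(http.StatusBadRequest)
			return
		}
		// Same jobs table the PR reviewer uses; only the event type differs.
		if _, err := db.Exec(
			`INSERT INTO jobs (event, payload, status) VALUES ($1, $2, 'pending')`,
			job.Kind, job.Text,
		); err != nil {
			c.Status(http.StatusInternalServerError)
			return
		}
		c.Status(http.StatusAccepted)
	})
}
```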
Suddenly the assistant had some situational awareness. It knows when I’m home. It knows which room I’m in (thanks, FP2). It knows my schedule. It could (in theory) nudge me or help me plan.
“You said you’d clean the kitchen by tonight. You’re still in the office. Want me to queue a reminder for tomorrow morning?”
That’s the dream, anyway. Reality: it doesn’t manage much yet—frankly, it can’t even manage itself. But the scaffolding is there. The `jobs` table and LLM worker model make it easy to add these use cases with minimal overhead.
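That hypothetical nudge is really just a rule over two pieces of context the assistant already has: a task's deadline and my presence. Something like this, where the Task type and the two-hour window are made up for illustration:

```go
package assistant

import "time"

// Task is a chore pulled from Reminders, with a soft deadline.
type Task struct {
	Title string
	Due   time.Time
	Done  bool
}

// ShouldNudge decides whether to queue a reminder: the task is not done,
// I'm not home to do it, and the deadline is inside the warning window.
func ShouldNudge(t Task, home bool, now time.Time) bool {
	if t.Done || home {
		return false
	}
	return now.After(t.Due.Add(-2 * time.Hour)) // assumed 2h warning window
}
```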
3. Smart Home: Real Sensors, Real Automations
Home automation is a rabbit hole I fall into every few months. This week, I went deep.
Aqara FP2: Presence, not just motion
The Aqara FP2 uses mmWave to detect presence, not just motion. That means it knows when I’m sitting still at my desk or lying on the couch—not just when I move.
That granularity changes everything.
Example: if I leave a room, the lights turn off after 30 seconds. But if I’m sitting still? They stay on. No more waving my arms like a confused NPC just to keep the lights alive.
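Inside Home Assistant this is a plain automation, but the logic is worth spelling out: only turn the lights off after presence, not motion, has been clear for 30 seconds. Sketched here as an external script against HA's REST service API, with the entity IDs and token as assumptions:

```go
package main

import (
	"bytes"
	"net/http"
	"time"
)

const (
	haURL = "http://homeassistant.local:8123"
	token = "YOUR_LONG_LIVED_TOKEN" // HA long-lived access token (placeholder)
)

// turnOffLight calls Home Assistant's REST endpoint for light.turn_off.
func turnOffLight(entity string) error {
	body := []byte(`{"entity_id": "` + entity + `"}`)
	req, err := http.NewRequest(http.MethodPost, haURL+"/api/services/light/turn_off", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

// onPresenceCleared runs when the FP2 reports the room is empty.
// It waits 30s and only acts if presence is still clear, so sitting
// still never kills the lights.
func onPresenceCleared(stillClear func() bool) {
	time.Sleep(30 * time.Second)
	if stillClear() {
		_ = turnOffLight("light.office") // assumed entity ID
	}
}

func main() {
	// Pretend the sensor just cleared and stays clear.
	onPresenceCleared(func() bool { return true })
}
```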
Home Assistant: Full Migration to RPi 4
I migrated everything to a Raspberry Pi 4 with 8 GB of RAM, running Home Assistant OS.
- Native Matter support? ✔️
- No Docker hacks? ✔️
- Easy backups and recovery? ✔️
Honestly, it’s the most user-friendly it’s ever been. Almost boring how well it works.
Elero Comfort-868 Blinds: The Final Boss
I have some lovely electric blinds that speak a proprietary 868 MHz protocol. Fancy, but dumb.
I’m working on wiring them into Home Assistant—whether that’s via USB bridge, ESPHome, or straight-up reverse engineering.
The goal: have the blinds close automatically when I leave the room and the sun is blasting through the windows. Or maybe just when it’s 3PM and I want to watch TV.
Still experimenting with serial sniffers and module options. It’s a fun little boss fight.
4. Thoughts and Takeaways
This week, one project turned into five—but everything looped back to a central theme:
What if software actually worked for me—in code and in life?
What stood out:
- Reusing architecture saved time: The background worker + job queue + LLM pipeline became the backbone for multiple ideas.
- Local AI is viable, kinda: Running models like CodeLlama and Mistral locally gave fast, private, “meh-enough” results. Not perfect, but way more usable than I expected.
- The smart home feels smart now: With mmWave sensors, local logic, and real-time context, it responds to me, not the other way around. No more shouting room names and commands like a voice butler.
Wrapping It Up
What’s next? I’m planning to shift my personal AI strategy toward RAG (Retrieval-Augmented Generation), since fine-tuning isn’t viable for a personal project at this scale. That said, long-context support still feels just out of reach—hoping better models land soon.
Also: building things in Go was so refreshing. I’ll be using it way more from here on.