
Show HN: AI gaming copilot that uses a phone camera instead of screen capture

Built this as a side project after wanting a real-time gaming companion that could call out macro mistakes / timers / map awareness while I play.

*Project Aegis* is an AI gaming companion (starting with League of Legends) that gives spoken advice in real time.

The twist: it uses a *physically air-gapped setup*.

Why? Some games (especially with strict anti-cheat like Riot Vanguard) make screen capture / memory-reading approaches risky or impractical. So instead of reading the game directly, I point a *smartphone on a tripod at my monitor* and process the video externally.

*How it works (current version):*

* Phone camera points at the game screen
* Frames are streamed over WebSockets to a local FastAPI server
* OpenCV cleans up glare and perspective distortion
* A vision model analyzes the frame for game context
* TTS speaks the advice back (macro reminders, timers, awareness prompts, etc.)

So far this is more of a *working prototype + architecture experiment* than a polished product, but it’s functional and surprisingly fun.

GitHub: https://github.com/ninja-otaku/project_aegis

I’d love feedback on:

* whether this is genuinely useful vs. just technically interesting
* what game(s) this should support next
* latency / UX expectations for a tool like this
* anti-cheat-safe ways to improve reliability without crossing lines

Happy to answer technical questions and share implementation details.