Video Fit AI
Workout tracking via on-device video analysis and Gemini video APIs.
- Built: Swift iOS camera pipeline + Vapor backend
- Stack: iOS AVFoundation, Gemini, Vapor
- Outcome: end-to-end prototype + architecture writeup
Dylan Duecaster
I build systems at the intersection of robotics, computer vision, and mobile. My work spans embedded hardware, AI pipelines, and iOS experiences, which I prototype and iterate on toward real-world use.
Currently building: a Jetson-powered robotic lamp with expressive behaviors.
Proof-oriented work with clear scope and measurable progress.
Workout tracking via on-device video analysis and Gemini video APIs.
Jetson-powered lamp with personality-driven motion behaviors.
Fast pose inference for real-time feedback on mobile.
Latest notes on systems, vision, and iterative builds.
I am an aerospace engineer by training (B.S. and M.S. from Purdue University) and currently a systems engineer at Boeing. My strength is in systems thinking, integration, and design engineering: translating high-level requirements into implementable technical solutions. I am increasingly focused on bridging traditional systems engineering with software-heavy, autonomy-driven products.
Outside of work, I build end‑to‑end systems that combine software, hardware, and applied AI. My projects emphasize real‑world constraints, integration challenges, and measurable outcomes rather than isolated demos. Key interests include robotics, autonomy, embedded systems, computer vision, and iOS as a user‑facing control layer.
I value clean architecture, thoughtful tradeoffs, and systems that are reliable and maintainable. The goal of this portfolio is evidence: clear demonstrations of how I think, what I build, and how I approach complex systems.