Video Fit AI
Workout tracking via on-device video analysis and Gemini video APIs.
- Built: Swift iOS camera pipeline + Vapor backend
- Stack: iOS AVFoundation, Gemini, Vapor
- Outcome: end-to-end prototype + architecture writeup
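As context for the stack above, here is a minimal sketch of how the on-device capture side might be wired: an AVCaptureSession delivering frames to a sample-buffer delegate, which downstream analysis (local inference, or frames batched for a Gemini video request) can consume. Class and method names are illustrative, not the project's actual code.

```swift
import AVFoundation

/// Illustrative capture pipeline: camera frames arrive as sample buffers
/// that the analysis stage (pose inference or upload batching) consumes.
final class CameraPipeline: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let frameQueue = DispatchQueue(label: "camera.frames")

    func configure() {
        session.beginConfiguration()
        session.sessionPreset = .hd1280x720

        // Back wide-angle camera as input (hypothetical choice; the front camera works the same way).
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else {
            session.commitConfiguration()
            return
        }
        session.addInput(input)

        // Uncompressed BGRA frames, delivered on a background queue; late frames are dropped.
        videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        videoOutput.alwaysDiscardsLateVideoFrames = true
        videoOutput.setSampleBufferDelegate(self, queue: frameQueue)
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

        session.commitConfiguration()
        session.startRunning()
    }

    // Called once per frame; hand the pixel buffer to whatever analysis stage follows.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // e.g. run pose inference here, or enqueue the frame for the backend.
        _ = pixelBuffer
    }
}
```

Dropping late frames and pinning delivery to a dedicated queue keeps the preview responsive while analysis runs.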
Selected work across robotics, vision, and iOS. Proof-forward, detail-rich, and iterated in public.
Focus areas
Current build
Expressive robotic lamp with Jetson inference, STM32 control loops, and a behavior state machine for responsive motion.
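As a rough illustration of the behavior layer, here is the state-machine pattern in miniature, written in Swift for consistency with the rest of this page; the real control loops run on the STM32, and the states, events, and motion profiles below are invented for illustration.

```swift
/// Simplified behavior state machine for an expressive lamp: perception
/// events drive state transitions, and each state maps to a motion profile
/// the controller executes. Not PALA's actual behavior set.
enum LampState { case idle, tracking, greeting, resting }

enum PerceptionEvent { case personDetected, personLost, waveDetected, lowActivity }

struct BehaviorMachine {
    private(set) var state: LampState = .idle

    // Pure transition function: current state + event -> next state.
    mutating func handle(_ event: PerceptionEvent) {
        switch (state, event) {
        case (.idle, .personDetected):     state = .greeting
        case (.greeting, .personDetected): state = .tracking
        case (.tracking, .waveDetected):   state = .greeting
        case (_, .personLost):             state = .idle
        case (_, .lowActivity):            state = .resting
        default:                           break
        }
    }

    // Each state corresponds to a motion profile for the control loop.
    var motionProfile: String {
        switch state {
        case .idle:     return "slow ambient sway"
        case .tracking: return "follow target with smoothed pan/tilt"
        case .greeting: return "quick nod gesture"
        case .resting:  return "dim and lower head"
        }
    }
}
```

A transition table like this keeps perception and actuation decoupled: the inference side emits events, the machine picks a state, and the motor controller only ever executes the current motion profile.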
Collaboration
Open to early-stage prototypes, field tests, and systems design partnerships that need quick iteration and clear proof points.
Each project focuses on tangible outcomes: system architecture, measurable performance, and a documented path to the next iteration.
Define interfaces and constraints early so software, hardware, and controls stay aligned through integration.
Ship working demos quickly, then pressure-test assumptions with real data, metrics, and usability feedback.
Capture architecture decisions, experiment results, and next steps so progress stays visible and repeatable.
Jetson-powered lamp with personality-driven motion behaviors. PALA: Programmable Autonomous Lamp Assistant.
Fast pose inference for real-time feedback on mobile.
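For a sense of what real-time pose feedback on device can look like, here is a minimal sketch using Apple's Vision body-pose request; whether the project uses Vision, a custom Core ML model, or another runtime is not stated above, so treat this as one plausible path with illustrative names.

```swift
import Vision
import CoreVideo
import CoreGraphics

/// One plausible on-device path for pose feedback: run Vision's body-pose
/// request per camera frame and read back normalized joint positions.
func detectPose(in pixelBuffer: CVPixelBuffer) throws -> [VNHumanBodyPoseObservation.JointName: CGPoint] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try handler.perform([request])

    guard let observation = request.results?.first else { return [:] }

    // Keep only joints the model is reasonably confident about.
    var joints: [VNHumanBodyPoseObservation.JointName: CGPoint] = [:]
    for (name, point) in try observation.recognizedPoints(.all) where point.confidence > 0.3 {
        // Vision coordinates are normalized (0...1, origin bottom-left).
        joints[name] = point.location
    }
    return joints
}
```

The normalized joint positions can then drive on-screen feedback such as rep counting or form cues without the frame ever leaving the device.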