Projects

Selected work across robotics, vision, and iOS. Proof-forward, detail-rich, and iterated in public.

Focus areas

  • Robotics
  • Autonomy
  • Computer Vision
  • iOS Systems
  • Embedded Controls

Current build

Expressive robotic lamp with Jetson inference, STM32 control loops, and a behavior state machine for responsive motion.

Collaboration

Open to early-stage prototypes, field tests, and systems design partnerships that need quick iteration and clear proof points.

How I build

Each project focuses on tangible outcomes: system architecture, measurable performance, and a documented path to the next iteration.

System-first planning

Define interfaces and constraints early so software, hardware, and controls stay aligned through integration.

Prototype → proof

Ship working demos quickly, then pressure-test assumptions with real data, metrics, and usability feedback.

Documented iteration

Capture architecture decisions, experiment results, and next steps so progress stays visible and repeatable.

Video Fit AI

Workout tracking via on-device video analysis and Gemini video APIs.

  • Built: Swift iOS camera pipeline + Vapor backend
  • Stack: iOS AVFoundation, Gemini, Vapor
  • Outcome: end-to-end prototype + architecture writeup

Expressive Robotics Lamp

Jetson-powered lamp with personality-driven motion behaviors. PALA: Programmable Autonomous Lamp Assistant.

  • Built: embedded control + motion state machine
  • Stack: Jetson, STM32, ROS tooling
  • Outcome: early demo with responsive gestures
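The motion state machine above can be sketched as a small event-driven transition table. The state names (IDLE, TRACK, GESTURE) and events are illustrative assumptions, not the lamp's actual firmware logic:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()     # lamp at rest
    TRACK = auto()    # following a detected person
    GESTURE = auto()  # playing an expressive motion

# Hypothetical transition table: (current state, event) -> next state.
TRANSITIONS = {
    (State.IDLE, "person_detected"): State.TRACK,
    (State.TRACK, "person_lost"): State.IDLE,
    (State.TRACK, "wave_detected"): State.GESTURE,
    (State.GESTURE, "gesture_done"): State.TRACK,
}

class BehaviorMachine:
    """Drives lamp motion from perception events."""

    def __init__(self):
        self.state = State.IDLE

    def handle(self, event: str) -> State:
        # Unknown (state, event) pairs leave the state unchanged,
        # so stray perception noise can't wedge the machine.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

machine = BehaviorMachine()
machine.handle("person_detected")  # IDLE -> TRACK
machine.handle("wave_detected")    # TRACK -> GESTURE
```

Keeping transitions in a flat table makes the behavior set easy to audit and extend as new gestures are added.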

Pose Estimation Pipeline

Fast pose inference for real-time feedback on mobile.

  • Built: optimized inference + data capture tooling
  • Stack: Core ML, Python tooling, iOS
  • Outcome: on-device prototype targeting real-time performance
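One way to frame a "real-time" target like this is a per-frame latency budget. The sketch below checks worst-case inference time against a 30 fps budget; the 33 ms figure and the timing helper are assumptions for illustration, not measured numbers from this pipeline:

```python
import time

FRAME_BUDGET_S = 1 / 30  # ~33 ms per frame for a 30 fps target

def meets_realtime_budget(infer, frames, budget_s=FRAME_BUDGET_S):
    """Run inference over frames; report whether the slowest frame fits the budget."""
    worst = 0.0
    for frame in frames:
        start = time.perf_counter()
        infer(frame)
        worst = max(worst, time.perf_counter() - start)
    return worst <= budget_s, worst

# Stand-in "model" for demonstration; a real pipeline would invoke
# a Core ML prediction here instead of summing a list.
ok, worst = meets_realtime_budget(lambda f: sum(f), [[1, 2, 3]] * 10)
```

Tracking worst-case rather than average latency matters for real-time feedback, since a single slow frame is what the user perceives as a stutter.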