Minimum 25W onboard compute (Jetson Orin Nano class). We scale capabilities to match your compute budget. More power enables more features, but core functionality works at 25W. We've deployed on everything from Orin Nano to AGX Xavier.
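As a purely illustrative sketch of what scaling capability to a power budget can look like, the snippet below keys a feature set to a declared wattage. The tier thresholds and feature names here are hypothetical and are not Avalon's actual configuration schema.

```python
# Hypothetical example only: tiers and feature names are illustrative,
# not Avalon's real configuration.
POWER_TIERS_W = {
    25: ["mission_brief_parsing", "voice_command"],   # Orin Nano class minimum
    40: ["mission_brief_parsing", "voice_command", "additional_onboard_models"],
}


def features_for_budget(budget_w: int) -> list[str]:
    """Return the feature set of the largest tier that fits the power budget."""
    eligible = [tier for tier in POWER_TIERS_W if tier <= budget_w]
    if not eligible:
        raise ValueError(f"{budget_w} W is below the 25 W minimum")
    return POWER_TIERS_W[max(eligible)]


print(features_for_budget(25))  # core features only
print(features_for_budget(60))  # everything available at 40 W and below
```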
No. Avalon deploys onto your existing onboard compute and interfaces with your autonomy stack via API. No hardware changes. No stack rewrites. We've integrated with PX4, ArduPilot, ROS2, and proprietary autonomy systems.
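For a sense of the API surface this kind of integration rests on, here is a minimal sketch using pymavlink, the MAVLink transport that PX4 and ArduPilot expose: read a telemetry message, then command a reposition to a waypoint. The connection string and coordinates are placeholders, and this is not Avalon code.

```python
from pymavlink import mavutil

# Connect to the flight controller's MAVLink endpoint (address is a placeholder).
conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
conn.wait_heartbeat()

# Telemetry: read the fused global position estimate.
pos = conn.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=5)
if pos:
    print(pos.lat / 1e7, pos.lon / 1e7, pos.relative_alt / 1e3)

# Platform control: reposition to a waypoint via MAV_CMD_DO_REPOSITION
# (supported by PX4; ArduPilot support depends on firmware version).
conn.mav.command_int_send(
    conn.target_system, conn.target_component,
    mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
    mavutil.mavlink.MAV_CMD_DO_REPOSITION,
    0, 0,                  # current, autocontinue (unused here)
    -1, 0, 0,              # param1: speed (-1 = default), param2: flags, param3: reserved
    float("nan"),          # param4: yaw (NaN = keep current heading)
    int(47.397742 * 1e7),  # latitude, degrees * 1e7 (example coordinates)
    int(8.545594 * 1e7),   # longitude, degrees * 1e7
    20.0,                  # altitude, metres above home
)
```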
We've integrated with proprietary stacks before. As long as your stack exposes APIs for basic platform control (waypoint commands, camera feeds, telemetry), we can integrate with it. Most of our customers run custom autonomy stacks.
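To make "basic platform control" concrete, the interface below sketches roughly the contract a proprietary stack would need to expose. It is a hypothetical illustration of the shape of the integration, not Avalon's actual SDK; the class, method, and field names are illustrative.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Telemetry:
    lat: float          # degrees
    lon: float          # degrees
    alt_m: float        # metres above home
    heading_deg: float  # degrees, 0 = north
    battery_pct: float  # 0-100


class PlatformAdapter(ABC):
    """Hypothetical adapter a proprietary autonomy stack would implement."""

    @abstractmethod
    def goto_waypoint(self, lat: float, lon: float, alt_m: float) -> None:
        """Command the platform to fly to (lat, lon) at alt_m."""

    @abstractmethod
    def telemetry(self) -> Telemetry:
        """Return the latest platform state."""

    @abstractmethod
    def camera_frames(self) -> Iterator[bytes]:
        """Yield encoded camera frames (e.g. H.264 NAL units or JPEGs)."""
```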
No. The entire agent runs onboard. Zero cloud dependency for any core function. Avalon processes mission briefs, interprets voice commands, and makes decisions entirely offline. Works through jamming and DDIL (denied, disrupted, intermittent, and limited-bandwidth) environments.
20 manufacturing partners. We're selecting partners based on platform readiness (shipping hardware, active customers), technical capability (engineering resources), strategic fit (missions that showcase Avalon), and commitment level.
Alpha partners get preferred pricing that extends into production. We'll discuss specifics after you're accepted into the program. Investment is based on platform count and customization scope.