Computer Vision Scientist (Multimodal Sensing)

About SPAICE
SPAICE is building the autonomy operating system that empowers satellites and drones to navigate and interact with the world – regardless of the environment. From GPS-denied zones on Earth to the unexplored frontiers of space, our Spatial AI delivers unprecedented levels of autonomy, resilience, and adaptability. At SPAICE, you'll work on real missions alongside leading aerospace and defense contractors, shaping the future of space and autonomous systems. If you're looking for a place where your work has a real, tangible impact – SPAICE is that place.

About the Role
Satellites that detect and avoid threats on their own. Drones that collaborate in GPS-denied fields. Spacecraft that rendezvous with tumbling targets in orbit. All of these feats rely on rich scene understanding from multiple sensors. That's where you come in.

As a Computer Vision Scientist (Multimodal Sensing), you'll lead the research and design of Spatial AI perception algorithms that fuse cameras, LiDAR, radar, event sensors, and more. Your work will unlock reliable detection, mapping, and semantic reasoning in the harshest terrestrial and orbital environments, and will flow directly into flight-critical autonomy software used by defense customers and commercial operators alike.

What you might work on
Space and defense missions. Design perception pipelines for situational awareness, collision detection and avoidance, formation flying, and terrain mapping and surveillance.
Design Spatial AI components. Create architectures that combine visual, inertial, and depth cues for robust, multi-sensor scene understanding.
Sensor fusion and neural representations to enable high-fidelity world models onboard resource-limited hardware.
Semantic understanding and visual place recognition to identify structures, landmarks, and dynamic obstacles in real time.
Camera pose estimation, monocular depth estimation, and dense 3D reconstruction, both in simulation and on-hardware testbeds.
Rapid prototyping with a team of ML and robotics engineers, followed by integration into flight processors and edge AI accelerators.
What we are looking for
PhD in Computer Vision, Robotics, or a related field (or equivalent industry research experience pushing the state of the art).
Proven expertise in multimodal perception and sensor fusion (two or more of: vision, LiDAR, radar, event cameras, IMU).
Publication or product track record in multimodal sensing, neural representations, or SLAM for robotics or autonomous navigation, in journals and conferences (e.g. CVPR, ICLR, ICRA, ICCV, NeurIPS).
Deep knowledge of semantic scene understanding and visual place recognition under extreme lighting and viewpoint changes.
Hands-on experience with camera pose estimation / SLAM and monocular depth estimation on embedded or real-time systems.
R&D leadership – comfort taking vague, high-risk concepts from ideation through prototype to mission deployment while mentoring junior engineers.
Bonus: familiarity with radiation-tolerant hardware, edge AI acceleration, or flight qualification processes.
Perks and Benefits
Competitive salary commensurate with experience and impact.
Equity options – you will join us at the ground floor and share in the upside.
Well-being perks – access to premium gyms, climbing centers, and wellness programs.
Team retreats and offsites – recent adventures include a half-marathon in Formentera and a Los Angeles retreat during Oscars weekend.