Open Roles
We move fast, and we want you to move with us! See our list of open positions below.
Ready to get started? Fill out the application form and we'll be in touch.
-
About the Role
Minerva Humanoids is developing rugged humanoid robots to do the most dangerous jobs on the planet. Our Whole-Body Controls group sits at the intersection of reinforcement learning, optimal control, and physical robotics. We train policies that learn a broad repertoire of whole-body skills and deploy them on real hardware in unstructured, high-consequence environments.
As a Research Scientist on the Whole-Body Controls team, you will lead the design and investigation of learning-based control methods for life-saving applications. This is a research role with a strong deployment mandate: the ideas you develop will be tested on physical robots, not just in papers. You will have the freedom to define your own research agenda within the team’s mission, publish your work at top venues, and shape the technical direction of a core capability.
What You’ll Work On
● Formulate and investigate novel approaches to whole-body policy learning, including hierarchical, multi-task, and compositional architectures for skill orchestration across locomotion, manipulation, and transitional behaviors.
● Develop methods to improve sim-to-real transfer for contact-rich, whole-body tasks, including system identification, domain randomization strategies, and techniques for adapting to hardware degradation and field wear.
● Design rigorous experimental protocols to benchmark policy performance across diverse scenarios, terrains, and operator-directed tasks, and build the evaluation infrastructure to support rapid iteration.
● Collaborate closely with the perception and hardware teams to co-design observation and action spaces, sensor configurations, and onboard compute pipelines that serve learned policies.
● Publish and present your research at leading venues (CoRL, RSS, ICRA, NeurIPS, ICLR) and contribute to Minerva’s presence in the research community.
● Mentor junior researchers and engineers from our academic partners, and help build a research culture grounded in scientific rigor and engineering pragmatism.
What You’ll Bring
● PhD in robotics, computer science, mechanical engineering, or a closely related field, with a dissertation focus on learning-based control, reinforcement learning for robotics, or whole-body motion planning.
● A strong publication record at venues such as CoRL, RSS, ICRA, NeurIPS, ICLR, or equivalent, demonstrating original contributions to robot learning or control.
● Hands-on experience deploying learned control policies on real legged robots (e.g., quadrupeds or humanoids), with a deep appreciation for the gap between simulation results and physical performance.
● Expertise in modern RL algorithms (PPO, SAC, model-based methods) and practical fluency with at least one major RL/simulation framework (IsaacLab, MJX, RSL-RL, or similar).
● Strong software skills in Python and PyTorch. Comfort using LLM-based coding tools to accelerate iteration, and the judgment to critically evaluate their outputs.
● Exceptional independence and scientific taste: you identify the right problems to work on, design clean experiments, and draw honest conclusions from the results.
Nice to Have
● Experience with optimization-based control methods (MPC, QP solvers, trajectory optimization) and a perspective on how they complement and constrain learned policies.
● Prior work on contact-rich manipulation, dexterous grasping, or loco-manipulation on physical hardware.
● Experience leading a small research team or co-advising graduate students.
● Familiarity with hardware-in-the-loop testing, real-time control systems, or embedded deployment of neural network policies.
Expected Compensation:
$200,000 – $300,000 annual salary + 1–2% equity + benefits
Pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The total compensation package for this position may also include other elements dependent on the position offered. Details of participation in these benefit plans will be provided if a candidate receives an offer of employment.
-
About the Role
Minerva Humanoids is developing rugged humanoid robots to do the most dangerous jobs on the planet. Our perception stack is central to understanding complex environments for safe traversal and manipulation, both autonomously and with human direction via telepresence.
As a Perception R&D Lead, you will lead the development of algorithms that allow our robots to see, think, and act in their surroundings in real time. You will also have the opportunity to grow the team, manage our cohort of research interns this summer, and shape the direction of our perception roadmap.
What You'll Do
● Develop scene segmentation pipelines to extract objects of interest from the environment, including interactable objects and collision geometry to avoid.
● Write robust algorithms for height map extraction from robot-mounted sensors in motion across unstructured terrain.
● Perform state estimation and SLAM using onboard optical, inertial, tactile, radar, and LiDAR sensors.
● Ingest, time-synchronize, and fuse heterogeneous sensor streams to build an accurate, low-latency representation of the robot's environment.
● Mentor and direct graduate researchers, collaborating with academic partners to publish research and translate results into deployed capabilities.
What You'll Bring
● M.S. or PhD in Robotics, Computer Science, or a related field (or equivalent industry experience).
● Expertise in scene segmentation, sensor fusion, and SLAM.
● Experience specifying sensors based on functional requirements and interfacing with them at the driver level.
● Deep knowledge of learning-based perception techniques (foundation models, neural implicit representations, or similar).
● Production-level proficiency in Python; comfort with C++ a plus.
Nice to Have
● Relevant publications at top venues (RSS, ICRA, CoRL, CVPR, or similar).
● Prior experience mentoring or leading a small research team.
● Familiarity with ROS 2, Isaac, or real-time perception deployment on embedded hardware.
● Experience with perception in degraded-visibility or GPS-denied environments.
Expected Compensation:
$200,000 – $300,000 annual salary + 1–2% equity + benefits
Pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The total compensation package for this position may also include other elements dependent on the position offered. Details of participation in these benefit plans will be provided if a candidate receives an offer of employment.
-
About the Role
Minerva Humanoids is developing rugged humanoid robots to do the most dangerous jobs on the planet. Our Hardware team is responsible for designing our purpose-built humanoid robot for deployment in hazardous industrial environments.
You will lead the robot's systems design and integration, co-designing subsystems, selecting and integrating sensor suites, and working across hardware and software boundaries to deliver robustness and performance from first prototype through field deployment.
What You'll Do
● Lead the design and integration of sensing (vision, force/torque, tactile, inertial) and actuation (motors, linkages, parallel effectors) systems.
● Develop characterization, calibration, and validation procedures for each major subsystem.
● Perform data analysis on hardware performance and contribute to structured validation testing campaigns.
● Collaborate with autonomy and controls teams to ensure subsystem and integrated system data meets downstream requirements.
● Debug hardware-software interactions across the sensing and actuation stacks from prototype through production integration.
● Inform mechanical and electrical design decisions with hands-on test data and failure analysis.
What You'll Bring
● Master's or PhD in Mechanical Engineering, Electrical Engineering, Mechatronics, Robotics, or a related field (or equivalent industry experience).
● Proficiency with CAD tools (e.g., Onshape, SolidWorks).
● Experience with robotic perception systems (cameras, LiDAR, IMUs, force sensors).
● Prior work integrating and validating sensors and actuators on robotic or mechatronic platforms.
● Familiarity with communication protocols (SPI, I2C, CAN, EtherCAT).
● Proficiency in Python, MATLAB, or similar tools for data analysis and visualization.
Nice to Have
● Familiarity with common test and measurement tools (DAQ systems, thermal chambers, vibration tables).
● Exposure to quality processes such as FMEA, DVP&R, or ISO standards.
● Experience designing or qualifying hardware for harsh operating environments (dust, moisture, temperature extremes).
Expected Compensation:
$200,000 – $300,000 annual salary + 1–2% equity + benefits
Pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The total compensation package for this position may also include other elements dependent on the position offered. Details of participation in these benefit plans will be provided if a candidate receives an offer of employment.
-
About the Role
Minerva Humanoids is developing rugged humanoid robots to do the most dangerous jobs on the planet. Our VR stack is central to enabling expert operators to interact with objects and environments through their robot avatars in real time, across hazardous industrial sites.
As a VR Interface Engineer, you will own and develop our VR application, which translates human intent into robot actions and relays rich environmental feedback to the user through an intuitive interface. You will work at the intersection of robotics, human-computer interaction, and real-time systems.
What You'll Do
● Develop and maintain an intuitive VR user interface in Unity, including switching and compositing video feeds, displaying kinematic target ghost overlays, identifying and highlighting objects of interest, and building an ergonomic control scheme for the app.
● Gather feedback from our expert operators and end users, and implement improvements that increase usability and task-completion speed on our humanoid robot.
● Translate human input (wrist and finger motion) into robot actions using visual or inertial sensors.
● Derive height, torso orientation (roll/pitch/yaw), and 2D pose translation intent from a VR headset and optional IMUs.
● Ingest, time-synchronize, and relay telepresence information (video, audio, tactile) back to the user with minimal latency.
What You'll Bring
● Strong experience with Unity or other VR/AR platforms.
● Ability to write production-quality code in C#, Python, or C++.
● Comfort working in a fast-moving, cross-disciplinary team alongside robotics and controls engineers.
● Clear communication skills and ability to iterate on feedback.
Nice to Have
● Experience with tactile, inertial, or vision-based sensors.
● Prior work in state estimation, visual odometry, SLAM, or 3D reconstruction.
● Understanding of robotics fundamentals: geometry, linear algebra, kinematics, dynamics.
● Familiarity with ROS, gRPC, or low-latency streaming protocols.
Expected Compensation:
$150,000 – $250,000 annual salary + 0.25–1% equity + benefits
Pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The total compensation package for this position may also include other elements dependent on the position offered. Details of participation in these benefit plans will be provided if a candidate receives an offer of employment.