The position involves conducting research in robotics and machine learning to develop general-purpose robots capable of performing dexterous tasks. The work draws on large-scale models, multi-modal sensing, and reinforcement learning, with opportunities for experimentation, collaboration, and publication.
Key Responsibilities
Develop and experiment with algorithms for robust policy learning using multiple sensing modalities such as proprioception, images, 3D representations, force, and tactile sensing.
Scale machine learning approaches to large models trained on diverse datasets including web-scale text, images, and videos.
Utilize test-time computation to enhance embodied robot applications.
Improve learned policies efficiently and rapidly.
Advance continual learning and adaptation techniques for robotics.
Create multi-modal reasoning models and structured hierarchical reasoning systems.
Implement reinforcement learning with language-based action models.
Leverage history and memory to develop policies for long-context tasks.
Enhance robustness and few-shot generalization using sub-optimal and self-play data.
Design interactive agents capable of reducing ambiguity and seeking clarification during embodied tasks.
Requirements
A research scientist who is comfortable working with both existing large static datasets and a growing dynamic corpus of robot data.
Experience in developing data-efficient and general algorithms for learning robust policies using multiple sensing modalities including proprioception, images, 3D representations, force, and dense tactile sensing.
Experience in scaling learning approaches to large-scale models trained on diverse sources of data, including web-scale text, images, and video.
Ability to leverage test-time computation for embodied applications.
Ability to quickly and efficiently improve learned policies.
Experience or knowledge in continual learning and adaptation.
Experience in developing multi-modal reasoning models.
Experience in structured hierarchical reasoning using learned models.
Experience with reinforcement learning with language-based action models.
Ability to leverage history and memory for learning policies for long-context tasks.
Experience in improving robustness and few-shot generalization by using sub-optimal and self-play data.
Experience in developing interactive agents that can reduce embodied and instructional ambiguity and can seek help and clarification.
A minimum of a Ph.D. or equivalent experience in Computer Science, Robotics, Electrical Engineering, or related fields.
Experience in working with both simulated and real physical robots to run experiments.
Ability to collaborate in code infrastructure development and participate in publishing work to peer-reviewed venues and open-sourcing code.
Benefits & Perks
Salary range of 176,000 to 264,000 USD per year (California-based roles)
Work with both simulated and real physical robots
Collaborate with team members on shared code infrastructure
Participate in publishing work to peer-reviewed venues
Open-source code contributions
Medical, dental, and vision insurance
401(k) eligibility
Paid time off including vacation, sick leave, and parental leave
Annual cash bonus
Ready to Apply?
Join Toyota Research Institute and make an impact in robotics and machine learning