A research scientist position focused on developing large-scale machine learning models and algorithms for dexterous robot manipulation, utilizing diverse data sources and simulation to advance general-purpose robotics in unstructured environments.
Key Responsibilities
Develop and research algorithms for learning robust policies using multiple sensing modalities such as proprioception, images, 3D representations, force, and tactile sensing.
Scale learning approaches to large-scale models trained on diverse data sources including web-scale text, images, and videos.
Leverage test-time computation to enhance embodied robot applications.
Rapidly and efficiently improve learned policies after deployment.
Advance continual learning and adaptation techniques for robotics.
Design and develop multi-modal reasoning models and structured hierarchical reasoning using learned models.
Implement reinforcement learning with language action models and utilize history and memory for long-context policy learning.
Enhance robustness and few-shot generalization by incorporating sub-optimal and self-play data.
Create interactive agents capable of reducing embodied and instructional ambiguity, seeking help and clarification when needed.
Collaborate on code infrastructure, run experiments with simulated and real robots, and contribute to research publications and open-source projects.
Requirements
A research scientist who is comfortable working with both existing large static datasets as well as a growing dynamic corpus of robot data.
Experience in developing data-efficient and general algorithms for learning robust policies using multiple sensing modalities including proprioception, images, 3D representations, force, and dense tactile sensing.
Experience in scaling learning approaches to large-scale models trained on diverse sources of data, including web-scale text, images, and video.
Ability to leverage test-time computation for embodied applications.
Ability to quickly and efficiently improve learned policies.
Experience or knowledge in continual learning and adaptation.
Experience in developing multi-modal reasoning models.
Experience in structured hierarchical reasoning using learned models.
Experience in reinforcement learning with language action models.
Ability to leverage history and memory for learning policies for long-context tasks.
Experience in improving robustness and few-shot generalization by using sub-optimal and self-play data.
Experience in developing interactive agents that can reduce embodied and instructional ambiguity and can seek help and clarification.
A minimum of a Ph.D. or equivalent in Computer Science, Robotics, Machine Learning, or a related field.
Proficiency in collaborating in code infrastructure, working with team members, and running experiments with both simulated and real physical robots.
Benefits & Perks
Salary range between $176,000 and $264,000 per year (California-based roles)
Medical insurance
Dental insurance
Vision insurance
401(k) eligibility
Paid time off including vacation, sick leave, and parental leave
Annual cash bonus
Ready to Apply?
Join Toyota Research Institute and make an impact in general-purpose robotics.