This role involves designing, developing, and deploying agentic AI solutions that use large language models (LLMs) to drive enterprise automation and digital transformation, with a focus on secure, scalable, and reliable AI systems integrated into enterprise platforms.
Key Responsibilities
Design and develop production-grade Agentic AI solutions using LLMs, multi-agent workflows, and RAG pipelines
Implement and refine prompt engineering strategies, including structured prompts, tool-calling, and agent orchestration
Build secure backend services and REST APIs in Python, integrating AI agents with enterprise systems using microservices architecture
Establish robust guardrails such as output validation, fallback strategies, rate limiting, safe tool invocation, and human-in-the-loop review where appropriate
Develop and maintain model evaluation frameworks, including automated prompt testing, retrieval validation, regression testing, and performance benchmarking
Deploy and operate AI services on AWS, utilizing services like EC2, S3, Lambda, RDS, and containerization tools such as Docker and Kubernetes
Apply security best practices including access control, data protection, audit logging, and secure API design
Ensure observability, monitoring, and reliability of AI-driven services in production
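The guardrail work described above can be sketched as a small output-validation layer. In this illustrative example, `call_model` is a stub standing in for a real LLM call, and `ALLOWED_ACTIONS` is a hypothetical allow-list; both names are assumptions, not part of any specific framework:

```python
import json

def call_model(prompt: str) -> str:
    """Stub standing in for a real LLM call; returns JSON-formatted text."""
    return '{"action": "lookup_order", "order_id": "A-123"}'

# Allow-list guardrail: the agent may only take actions named here.
ALLOWED_ACTIONS = {"lookup_order", "escalate_to_human"}

def validated_action(prompt: str, retries: int = 2) -> dict:
    """Validate model output against an allow-list; fall back to a safe default."""
    for _ in range(retries + 1):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: retry rather than act on it
        if data.get("action") in ALLOWED_ACTIONS:
            return data  # output passes the guardrail
    # Fallback strategy: hand off to a human instead of acting on invalid output.
    return {"action": "escalate_to_human"}
```

The same pattern extends naturally to schema validation, rate limiting, and safe tool invocation: every model output passes through a deterministic check before any tool is called.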
Requirements
3 to 6 years of professional software engineering experience building backend or distributed systems.
Strong proficiency in Python and in developing RESTful APIs.
Hands-on experience building LLM-based or Agentic AI applications, including RAG, embeddings, and vector database integrations.
Practical experience in Prompt Engineering and understanding of LLM behavior, limitations, and optimization techniques.
Experience deploying and operating applications in AWS cloud environments, including services such as EC2, S3, Lambda, RDS, and containerization using Docker and Kubernetes.
Experience implementing AI guardrails, model validation techniques, and production monitoring.
Solid understanding of cloud-native security practices and secure system design.
Willingness to work from the Santa Clara, CA office in compliance with company policies, unless on PTO, work travel, or other approved leave.
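The model-evaluation experience called for above might, in its simplest form, look like a keyword-based prompt test harness. Here `run_prompt` is a stub standing in for a real model call, and the evaluation case is purely illustrative:

```python
def run_prompt(prompt: str) -> str:
    """Stub for an LLM call; a real harness would query the deployed model."""
    return "The capital of France is Paris."

# Illustrative evaluation set: each case pairs a prompt with required keywords.
EVAL_CASES = [
    {"prompt": "What is the capital of France?", "must_contain": ["Paris"]},
]

def evaluate(cases: list[dict]) -> float:
    """Return the fraction of cases whose output contains all required keywords."""
    passed = 0
    for case in cases:
        output = run_prompt(case["prompt"])
        if all(kw.lower() in output.lower() for kw in case["must_contain"]):
            passed += 1
    return passed / len(cases)
```

Run in CI against a frozen evaluation set, this kind of harness doubles as a regression test: a prompt or model change that drops the pass rate fails the build.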
Benefits & Perks
Salary range: $149,000 - $224,000 USD annually
Work environment: primarily in-office in Santa Clara, CA
Time off: flexible time off, with accommodations available for disabilities
Additional benefits: wellness resources, company-sponsored team events, potential incentive pay and equity, support for growth and development, inclusive and diverse workplace culture
Ready to Apply?
Join Pure Storage and make an impact in enterprise AI and data storage