This role involves developing and automating infrastructure features for fault-tolerant, distributed data platforms in Python, with a focus on managing Big Data technologies such as Kafka and Spark, in support of enterprise multi-cloud and on-premise data solutions.
Key Responsibilities
Automate data platform operations, including fault-tolerant replication, TLS, installation, and backups
Develop infrastructure features for data platforms through automation
Collaborate with a distributed team to design and implement solutions
Write high-quality Python code to create new automation features
Debug issues related to data platform automation and infrastructure
Provide domain-specific expertise on data systems to other teams
Requirements
Proven hands-on experience in software development using Python.
Proven hands-on experience in distributed systems, such as Kafka and Spark.
Bachelor’s degree (or equivalent) in Computer Science, another STEM discipline, or a similar field.
Willingness to travel up to 4 times a year for internal and external events.
Benefits & Perks
Competitive base pay based on location, experience, knowledge, and skills
Annual compensation review
Recognition rewards
Annual holiday leave
Parental Leave
Employee Assistance Programme
Fully remote working environment
Personal learning and development budget of 2,000 USD per annum
Opportunity to travel to new locations to meet colleagues twice a year
Travel upgrades and Priority Pass for long-haul company events
Additional benefits and rewards such as annual bonuses and sales incentives (depending on role and location)
Ready to Apply?
Join Canonical and make an impact in open source