INFRASTRUCTURE ENGINEER, DATA PRODUCTIVITY
Full-time · Mid-Senior level
1,001-5,000 employees · Technology, Information and Internet
About the job
About Stripe
Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career.
About The Team
The Data Platform group is responsible for the core data tools and infrastructure that move, store, process, and analyze data, both at rest and in motion. Our platform powers everything from money movement across the globe to ML-based products like Radar and Identity to analytics in the Stripe dashboard.
What you’ll do
In this role, you will join the Data Productivity team. The team works across Data Platform systems, focusing on maximizing the productivity of Stripes who build on top of our data infrastructure. You will own key data productivity metrics and build tools and abstractions that enable faster, more predictable development of data pipelines and data products.
Responsibilities
- Build tools and libraries that serve as the interface to data infrastructure for all of Stripe’s engineering teams
- Work closely with data users across Stripe to gain a deep understanding of their present and future needs
- Debug productivity blockers across data systems and identify missing user-facing abstractions
- Define data productivity metrics, instrument them across our systems, and lead projects to move the metrics in the right direction
- Improve the productivity of Data Platform users by delivering best-in-class user experience and tooling
Who you are
We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.
Minimum Requirements
- Experience developing, maintaining, and debugging distributed systems built with open-source tools
- A track record of building data infrastructure or building on top of data infrastructure at scale
- Experience building tools that improve developer productivity, and empathy for your end users
- Experience writing high-quality code in a major programming language (like Python, Java, or Scala) and using big data technologies (like Spark, Flink, Airflow, or Kafka)
- Ability to plan and manage projects that involve designing data pipelines across multiple groups or teams
Preferred Qualifications
- Experience optimizing the end-to-end performance of distributed systems
- Experience working with AWS tools and services
