Overview

Join one of the most exciting companies in Silicon Valley, one poised to disrupt the way we live our daily lives. Named one of CNBC’s Top 50 Disruptors and on its way to becoming the brand for how we learn to get jobs, Udacity is looking for people to join its Data team and have a 10X impact on the world. If you love a challenge, can take risks, and truly want to make a difference, read on.

At Udacity, our mission is to democratize education. We aim to bring accessible, affordable, engaging, and highly relevant higher education to the world. We believe that education is no longer limited to four years or four walls – it’s a lifelong pursuit. Technology is advancing rapidly, and there is a growing skills gap between job-seekers and career opportunities, a gap Udacity is dedicated to closing through education. To do this, we’re rethinking how education is made and delivered, empowering our students to advance themselves personally and professionally and helping them land their dream jobs.

The data team’s mission is to make simple insights easy and complex insights possible. Your mission: make it happen!

  • We use state-of-the-art tools to design data pipelines that power personalization, recommendations, analysis, emails, and notifications.
  • Our stack includes AWS Redshift, PostgreSQL, and Apache Airflow for job orchestration. We’re also experimenting with Druid and Elasticsearch to enable real-time notifications.
  • One day you will set up Kafka streaming listeners (a minimal consumer sketch follows this list); the next, you will work with data scientists to productionize an algorithm that helps students learn better.
  • We ship fast and iterate rapidly to provide the best experience for our students.
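To give a concrete flavor of the streaming work, here is a minimal sketch of a Kafka consumer in Python. It assumes the open-source kafka-python library; the topic name, broker address, and consumer group below are illustrative placeholders, not details of Udacity’s actual pipelines.

    # Minimal Kafka consumer sketch using the kafka-python library.
    # Topic, broker, and group names are illustrative placeholders.
    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "student-events",                      # hypothetical topic name
        bootstrap_servers=["localhost:9092"],  # illustrative broker address
        group_id="data-team-listeners",        # consumer group for offset tracking
        auto_offset_reset="earliest",          # start from the oldest unread message
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # In a real pipeline this would feed personalization, recommendation,
        # or notification jobs downstream; printing stands in for that here.
        print("offset={} event={}".format(message.offset, event))

A producer is the mirror image: a KafkaProducer constructed with a value_serializer, followed by send() calls against the same topic.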

What we’re looking for:

  • 7+ years of relevant work experience writing and maintaining distributed data pipelines.
  • Knowledge of Kafka and ZooKeeper, including experience writing Kafka consumers and/or producers.
  • Prior experience with AWS Redshift and/or PostgreSQL preferred.
  • Ability to take algorithms provided by data scientists and implement them in production.
  • Ability to troubleshoot at the Linux, network, file system, and database levels.
  • Ability to manage, mentor, and grow a team.
  • Experience with Python and/or Java.

Our Tech Stack

  • AWS Redshift & PostgreSQL – Data warehousing
  • Airflow – Data pipelines and ETL (a minimal DAG sketch follows this list)
  • AWS Database Migration Service (DMS) – ETL
  • scikit-learn – ML algorithms
  • Docker Stacks – Shared common dev environment
  • GitHub, Docker Hub, CircleCI, Datadog, New Relic, Airbrake, PagerDuty
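To illustrate how Airflow fits in, below is a minimal sketch of a DAG with a daily extract-then-load flow into Redshift. The DAG id, schedule, and task callables are hypothetical placeholders written against the classic Airflow 1.x API, not our production code.

    # Minimal Airflow DAG sketch: a daily extract-then-load pipeline.
    # The dag_id, schedule, and callables are hypothetical placeholders.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator

    def extract_events():
        pass  # placeholder: pull raw events from a source system

    def load_to_redshift():
        pass  # placeholder: COPY transformed events into Redshift

    default_args = {
        "owner": "data-team",
        "retries": 1,
        "retry_delay": timedelta(minutes=5),
    }

    dag = DAG(
        dag_id="daily_events_pipeline",  # hypothetical pipeline name
        default_args=default_args,
        start_date=datetime(2017, 1, 1),
        schedule_interval="@daily",
    )

    extract = PythonOperator(
        task_id="extract_events",
        python_callable=extract_events,
        dag=dag,
    )

    load = PythonOperator(
        task_id="load_to_redshift",
        python_callable=load_to_redshift,
        dag=dag,
    )

    extract >> load  # ensure extract completes before load runs

The >> operator sets task ordering, so the scheduler only runs the load step after the extract step succeeds for a given day’s run.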

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.