About the Data Engineer position
Data fuels everything we do at Gravyty. It is part of our core value – turning data into action so nonprofits can achieve their missions. We place high value on fast, no-BS implementations, data quality and transparency, and building scalable data systems. As a Data Engineer at Gravyty, you will be a core part of designing and implementing ETL (extract-transform-load) data flows for our customers. We are looking for someone who thrives in the fast-paced environment of a startup operating in the mission-driven world of nonprofits and wants to grow into a senior data engineer with world-class abilities in ETL workflows, database systems for processing, and big data applications.
Data Engineer Responsibilities
- Scheduling jobs that recover from errors such as rate limits or data-loading failures, and that retry automatically when failures occur
- Storing data in various targets such as flat files, data stores, or outgoing REST APIs
- Performing transformation tasks on data – such as cleansing, validation, filtering, partitioning, aggregation, metric calculation, and reporting
- Understanding existing schemas and defining the best possible new data functionality
- Helping customers understand how processing is best done to achieve their non-technical objectives
- Understanding the input/output formats of the data and constraints on data processing
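The retry and cleansing responsibilities above can be sketched in a few lines of Python. This is a minimal illustration, not Gravyty code: `RateLimitError`, `with_retries`, `clean_rows`, and `flaky_fetch` are hypothetical names invented for this example.

```python
import time

class RateLimitError(Exception):
    """Hypothetical error a data source raises when requests come too fast."""

def with_retries(fetch, max_attempts=3, base_delay=0.01):
    """Call fetch(), retrying with exponential backoff on RateLimitError."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except RateLimitError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

def clean_rows(rows):
    """Cleansing + validation: drop rows with no id, strip whitespace from names."""
    return [
        {"id": r["id"], "name": r["name"].strip()}
        for r in rows
        if r.get("id") is not None and r.get("name")
    ]

# Hypothetical flaky source: rate-limited twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return [{"id": 1, "name": " Ada "}, {"id": None, "name": "x"}]

rows = with_retries(flaky_fetch)
print(clean_rows(rows))  # [{'id': 1, 'name': 'Ada'}]
```

The same pattern scales up: a scheduler wraps each job in a retry policy, and each transformation step is a small, testable function applied to the fetched rows.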
Data Engineer Requirements
- At least 1 year of full-time backend programming experience in a major programming language such as Python, Java, C#, Scala, PHP, or Ruby
- Good practical knowledge of (ANSI) SQL, database design, and data modeling
- Experience programming ETL (extract-transform-load) workflows in a language such as Python or Scala
- As part of the job, be prepared to learn Python for writing ETL scripts and Django for integrating your work into web applications
- Knowledge of programmatically fetching and sending data over REST APIs in a language such as Python, Ruby, Java, or PHP
- A basic understanding of, and some experience with, data quality and query performance
- Optional, but a big plus: prior work with nonprofit CRM systems such as Salesforce, Ellucian, or Blackbaud, and experience with or interest in big data, including deploying systems such as Hadoop