**Job Summary**
We are seeking an experienced Data Engineer to join our team and help build and maintain our data infrastructure. The successful candidate will be responsible for designing, building, and maintaining large-scale data systems, including data pipelines, data warehouses, and data lakes.
Responsibilities:
* Design, build, and maintain large-scale data systems.
* Design and implement data warehouses using tools such as Amazon Redshift, Google BigQuery, and Snowflake.
* Develop and maintain data pipelines using tools such as Apache Beam, Apache Spark, and AWS Glue.
* Develop and maintain data lakes using tools such as Apache Hadoop, Apache Spark, and Amazon S3.
* Work with data architects to design and implement data models and data architectures.
* Collaborate with data scientists to develop and deploy machine learning models and data products.
* Ensure data quality and integrity by developing and implementing data validation and data cleansing processes.
Qualifications:
* 5+ years of experience in data engineering or a related field.
* 2-4 years of experience with the Ruby programming language.
* 5+ years of experience with programming languages such as Python, Java, and Scala.
* 3+ years of experience with data modeling and data architecture.
* 3+ years of experience with data engineering tools such as Apache Beam, Apache Spark, AWS Glue, Amazon Redshift, Google BigQuery, and Snowflake.
* Strong experience with data warehousing and data lakes.
* Strong experience with data validation and data cleansing.
* Strong collaboration and communication skills.
Benefits:
* Fully remote opportunity, with the potential to become a permanent position.