We are seeking a seasoned Data Engineer to design and implement large-scale data systems, working closely with data architects, data scientists, and stakeholders. This fully remote opportunity offers the chance to make a lasting impact on our business.
Key Responsibilities:
* Design, build, and maintain data pipelines using Apache Beam, Apache Spark, and AWS Glue.
* Build scalable automated testing solutions using Ruby- and Selenium-based frameworks.
* Develop and maintain data warehouses using Amazon Redshift, Google BigQuery, and Snowflake.
* Collaborate with data architects to design and implement data models and architectures.
* Work with data scientists to develop and deploy machine learning models and data products.
Qualifications:
* 5+ years of experience in data engineering or a related field.
* 2–4 years of experience in Ruby programming, including the Ruby on Rails framework.
* 5+ years of experience with programming languages such as Python, Java, and Scala.
* Strong experience with data modeling, data architecture, data warehousing, and data lakes.
* Excellent collaboration and communication skills.
Nice to Have:
* Experience with machine learning and data science.
* Familiarity with cloud-based data platforms such as AWS, GCP, or Azure.
This is an exciting opportunity for professionals looking to deepen their expertise in data engineering. We look forward to hearing from qualified candidates who share our passion for innovative technology.