Data Engineer (Ruby, AWS, Python, Apache Spark)
">
Design and build large-scale data systems to drive business growth.
About the Role
We are seeking a skilled Data Engineer with expertise in Ruby, AWS, Python, and Apache Spark to join our team. You will be responsible for designing, building, and maintaining the data systems that support our business. This is an opportunity to work on complex projects, collaborate with cross-functional teams, and contribute to cutting-edge data solutions.
Key Responsibilities
* Design and implement scalable automated testing solutions using Ruby/Selenium-based frameworks.
* Develop and maintain data pipelines using tools such as Apache Beam, Apache Spark, and AWS Glue.
* Collaborate with data architects to design and implement data models and data architectures.
* Work closely with data scientists to develop and deploy machine learning models and data products.
* Ensure data quality and integrity by developing and implementing data validation and data cleansing processes.
* Stay up-to-date with new technologies and trends in data engineering and make recommendations for adoption.
Requirements
* 5+ years of experience in data engineering or a related field.
* 2-4 years of experience with Ruby, including the Ruby on Rails framework.
* 5+ years of experience with programming languages such as Python, Java, and Scala.
* Strong experience with data warehousing and data lakes.
* Strong collaboration and communication skills.
* Bachelor's degree in Computer Science, Engineering, or a related field.
Nice to Have
* Experience with machine learning and data science.
* Experience with cloud-based data platforms such as AWS, GCP, or Azure.
* Experience with containerization using Docker and Kubernetes.
* Experience with agile development methodologies such as Scrum or Kanban.
Note: We thank all applicants in advance; only candidates selected for an interview will be contacted.