Data Engineering Position
We are seeking a skilled Data Engineer to join our team and help build and maintain our data infrastructure. This is a remote opportunity with the potential to become a permanent position.
Key Responsibilities:
* Design, build, and maintain large-scale data systems.
* Implement data warehouses using tools such as Amazon Redshift, Google BigQuery, and Snowflake.
* Develop scalable, automated testing solutions using Ruby- and Selenium-based frameworks.
* Develop and maintain data pipelines using tools such as Apache Beam, Apache Spark, and AWS Glue.
* Develop and maintain data lakes using tools such as Apache Hadoop, Apache Spark, and Amazon S3.
* Collaborate with data architects to design and implement data models and data architectures.
* Work with data scientists to develop and deploy machine learning models and data products.
* Evaluate and improve data quality by developing and implementing data validation and cleansing processes.
* Collaborate with other teams to ensure that data systems meet business needs.
* Stay up to date with new technologies and trends in data engineering and recommend adoption where appropriate.
Requirements:
* 5+ years of experience in data engineering or a related field.
* 2-4 years of experience with Ruby, including the Ruby on Rails framework.
* 5+ years of experience with programming languages such as Python, Java, and Scala.
* 3+ years of experience with data modeling and data architecture.
* Strong experience with data warehousing and data lakes.
* Strong collaboration and communication skills.
* Bachelor's degree in Computer Science, Engineering, or a related field.
Nice to Have:
* Experience with machine learning and data science.
* Experience with cloud-based data platforms such as AWS, GCP, or Azure.
* Experience with containerization using Docker and Kubernetes.
* Experience with agile development methodologies such as Scrum or Kanban.