Data Engineer - Remote Opportunity
This is an exciting opportunity for a Data Engineer to join a collaborative environment and help build and maintain our data infrastructure.
About the Role:
* Design, build, and maintain large-scale data systems.
* Implement data warehouses using tools such as Amazon Redshift, Google BigQuery, and Snowflake.
* Develop and maintain data pipelines using tools such as Apache Beam, Apache Spark, and AWS Glue.
* Work with data architects to design and implement data models and data architectures.
This is a fully remote opportunity with the potential to become a permanent position. We are looking for someone with strong experience with data engineering tools, data warehousing, and data lakes.
Key Responsibilities:
* Develop and maintain data lakes using tools such as Apache Hadoop, Apache Spark, and Amazon S3.
* Collaborate with data scientists to develop and deploy machine learning models and data products.
* Ensure data quality and integrity by developing and implementing data validation and data cleansing processes.
We are looking for someone who is passionate about working with big data and has a strong background in programming languages such as Python, Java, and Scala.
Qualifications:
* 5+ years of experience in data engineering or a related field.
* Strong experience with data modeling and data architecture.
* Strong experience with data engineering tools such as Apache Beam, Apache Spark, AWS Glue, Amazon Redshift, Google BigQuery, and Snowflake.
Benefits:
* Fully remote work environment.
* Potential for long-term employment.
Nice to Have:
* Experience with machine learning and data science is a plus.
* Experience with cloud-based data platforms such as AWS, GCP, or Azure is a plus.
* Experience with containerization using Docker and Kubernetes is a plus.
We thank all candidates for their interest. Only candidates selected for an interview will be contacted.