Big Data Engineer Position
We are seeking a highly skilled and experienced Big Data Engineer to join our team. As a key member of our data infrastructure group, you will be responsible for designing, building, and maintaining large-scale data systems.
The successful candidate will have a strong background in data engineering and experience with data modeling, data architecture, and data warehousing. They will also have expertise in programming languages such as Python, Java, and Scala, as well as experience with data engineering tools like Apache Beam, Apache Spark, AWS Glue, Amazon Redshift, Google BigQuery, and Snowflake.
You will work closely with data architects, data scientists, and other stakeholders to ensure that our data systems meet the needs of the business. This is a fully remote opportunity with the potential to become a permanent position.
Main Responsibilities:
* Design, build, and maintain large-scale data systems.
* Design and implement data warehouses using tools such as Amazon Redshift, Google BigQuery, and Snowflake.
* Develop and maintain data pipelines using tools such as Apache Beam, Apache Spark, and AWS Glue.
* Develop and maintain data lakes using tools such as Apache Hadoop, Apache Spark, and Amazon S3.
* Work with data architects to design and implement data models and data architectures.
* Collaborate with data scientists to develop and deploy machine learning models and data products.
* Ensure data quality and integrity by developing and implementing data validation and data cleansing processes.
* Collaborate with other teams to ensure that data systems meet the business's needs.
* Stay up to date with new technologies and trends in data engineering, and make recommendations for adoption.
Required Skills and Qualifications:
* 5+ years of experience in data engineering or a related field.
* 2-4 years of experience with Ruby, including the Ruby on Rails framework.
* 5+ years of experience with programming languages such as Python, Java, and Scala.
* 3+ years of experience with data modeling and data architecture.
* 3+ years of experience with data engineering tools such as Apache Beam, Apache Spark, AWS Glue, Amazon Redshift, Google BigQuery, and Snowflake.
* Strong experience with data warehousing and data lakes.
* Strong experience with data validation and data cleansing.
* Strong collaboration and communication skills.
* Bachelor's degree in Computer Science, Engineering, or a related field.