An exciting opportunity awaits a skilled Data Engineer to design and maintain large-scale data systems.
As a key member of our team, you will work closely with data architects, data scientists, and other stakeholders to ensure that our data systems meet the needs of our business.
Your primary responsibilities will include designing and implementing data warehouses using tools such as Amazon Redshift, Google BigQuery, and Snowflake.
You will also be responsible for developing and maintaining data pipelines using tools like Apache Beam, Apache Spark, and AWS Glue.
Additionally, you will collaborate with data scientists to develop and deploy machine learning models and data products.
To succeed in this role, you will need strong experience with data engineering tools, data modeling, and data architecture.
A Bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering or a related field.
Nice-to-have skills include experience with machine learning and data science, cloud-based data platforms, containerization, agile development methodologies, and data governance.
Additional Responsibilities:
* Design and implement scalable automated testing solutions using Ruby/Selenium-based frameworks.
* Develop and maintain data lakes using tools such as Apache Hadoop, Apache Spark, and Amazon S3.
* Ensure data quality and integrity by developing and implementing data validation and cleansing processes.
If you are passionate about building and maintaining complex data systems, we encourage you to apply.
This is a fully remote opportunity with the potential to become a permanent position.
We thank all candidates for their interest; only those selected for an interview will be contacted. For other exciting opportunities, please visit VTRAC Consulting Corporation (Toronto | Houston | New York | Palo Alto).
Benefits and Qualifications
Benefits:
* Competitive salary based on experience
* Comprehensive health benefits package
* Generous paid time off policy
* Opportunities for career growth and professional development
Qualifications:
* 5+ years of experience in data engineering or a related field
* 2-4 years of experience with Ruby-based products, including the Ruby on Rails framework
* 5+ years of experience with programming languages such as Python, Java, and Scala
* 3+ years of experience with data modeling and data architecture
* 3+ years of experience with data engineering tools such as Apache Beam, Apache Spark, AWS Glue, Amazon Redshift, Google BigQuery, and Snowflake
* Strong experience with data warehousing and data lakes
* Strong experience with data validation and data cleansing
* Strong collaboration and communication skills
* Bachelor's degree in Computer Science, Engineering, or a related field
Nice to Have:
* Experience with machine learning and data science
* Experience with cloud-based data platforms such as AWS, GCP, or Azure
* Experience with containerization using Docker and Kubernetes
* Experience with agile development methodologies such as Scrum or Kanban
* Experience with data governance and data security
We are an equal-opportunity employer and welcome applications from diverse candidates. Please submit your application, including your resume and cover letter, through our online portal.