Data Engineer

Rio Grande

buscojobs Brasil
Posted 22 hours ago
Description



Overview

We are seeking a Data Engineer (short-term contractor) to design and implement data acquisition and ETL processes in Microsoft Azure to support enterprise reporting needs. You will be responsible for integrating multiple technical datasets (ServiceNow, PeopleSoft, and vendor systems) into a central Azure SQL Database, ensuring cleansed, reliable, and query-ready data is available for downstream dashboards.


Core Responsibilities

* Build and maintain ETL pipelines in Azure Data Factory (or equivalent) to automate ingestion from APIs, files, and vendor systems.
* Develop and optimize SQL stored procedures, scheduled jobs, and views in Azure SQL Database.
* Apply data governance and quality checks to ensure dataset accuracy and consistency.
* Collaborate with technical staff and SMEs to document sources, transformations, and dependencies.
* Support Power BI developers by delivering clean, structured datasets.
* (Optional) Utilize Alteryx or similar tools for advanced data transformations.
* Provide knowledge transfer to internal teams for sustainability post-engagement.


Required Skills

* Strong experience with Microsoft Azure (Data Factory, SQL Database, Blob/Data Lake).
* Advanced SQL (stored procedures, scheduled jobs, indexing, views).
* Proven ETL development experience (batch + scheduled pipelines).
* Solid understanding of data governance, lineage, and quality frameworks.
* Familiarity with Power BI data models (not visualization).
* Excellent communication skills for collaboration and mentoring.


Nice-to-Have

* Hands-on experience with Alteryx for data preparation.
* Prior exposure to healthcare IT data (helpful, but not required).

This position is fully remote under a PJ (pessoa jurídica) contract. The pay range is USD 25-29/hour.

Required Skills & Experience

* 5+ years of experience as a Data Engineer, including at least 3 years with the Azure stack (Databricks, ADF, Synapse Analytics).
* Experience with other Azure services (Azure Functions, Event Grid, etc.) is good to have.
* Strong expertise in writing complex SQL queries and transformations.
* Ability to write, review, and examine code written in Python and PySpark.
* High-level understanding of DevOps CI/CD.
* Expertise in RDBMS.
* Fundamental knowledge of data warehousing concepts.
* SQL and SQL tuning within a Data Lake (Delta format).
* Cost optimization.
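The SQL-tuning requirement comes down to one core instinct: make the engine seek on a filter column instead of scanning the whole dataset. A self-contained sketch using SQLite's EXPLAIN QUERY PLAN (the same seek-versus-scan reasoning carries over to partitioning or Z-ordering a Delta table, though the tooling differs; table and index names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_date TEXT, site TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(f"2024-01-{d:02d}", "BR-01", d) for d in range(1, 29)],
)

query = "SELECT SUM(qty) FROM events WHERE event_date = '2024-01-15'"

# Before indexing: the planner falls back to a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]
print(before)  # e.g. 'SCAN events'

# An index on the filter column lets the planner seek instead of scan.
conn.execute("CREATE INDEX ix_events_date ON events (event_date)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]
print(after)  # e.g. 'SEARCH events USING INDEX ix_events_date (event_date=?)'
```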

Nice to Have Skills & Experience

* Azure Solution Architecture Experience (IaaS/PaaS/SaaS)

Job Description

A Fortune 100 client is seeking to add an experienced Data Engineer to their team, specifically within LATAM. Responsibilities include:

* Provide solutions for various integrations and data preprocessing.
* Support Sustain activities on data assets developed within the domain.
* Integrate data from different sources into the enterprise data foundation.
* Follow up on issues with stakeholder teams for resolution.
* Work on user stories assigned for data ingestion into bronze and silver tables.
* Coordinate with Data Governance, Steward, and Modelling teams for any questions.
* Track backlog tech debt items in the POD and collaborate with the Scrum Master to create stories for each iteration.

Our client is a U.S.-based company that provides technical expertise, testing, and certification services to the global food and agricultural industry. Their mission is to ensure food safety, quality, and sustainability across international supply chains.

This role is critical to building, maintaining, and modernizing data pipelines that process large-scale regulatory data from around the world and transform it into usable datasets for downstream applications and APIs. The engineer will work hands-on with Python, SQL, and related tools to untangle legacy “spaghetti code” pipelines, migrate processes to more maintainable platforms such as Airflow, and ensure that the data is accurate, reliable, and ready for client-facing products. This role requires both strong technical ability and a consulting mindset: the ability to learn undocumented systems, troubleshoot gaps, and design forward-looking solutions that will scale as the data environment evolves.
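Airflow, mentioned above as the migration target, models a pipeline as a directed acyclic graph of tasks. That dependency-ordering idea can be sketched with the standard library's graphlib; the task names below are hypothetical, not from the posting:

```python
from graphlib import TopologicalSorter

# Hypothetical tasks mirroring a regulatory-data flow; each task maps
# to the set of tasks that must complete before it runs.
dag = {
    "extract_regulations": set(),
    "clean_records": {"extract_regulations"},
    "load_warehouse": {"clean_records"},
    "publish_api_dataset": {"load_warehouse", "clean_records"},
}

# static_order() yields every task only after all of its predecessors,
# which is exactly the guarantee an Airflow scheduler provides.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

In Airflow proper, each key would be an operator and each dependency an explicit `upstream >> downstream` edge, with the scheduler adding retries, backfills, and monitoring on top.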


Qualifications

* Minimum 7 years’ experience using Python for analyzing, extracting, creating, and transforming large datasets.
* Proficiency in Python 3+ and common Python libraries and tools for data engineering, specifically Pandas, NumPy, and Jupyter Notebooks.
* Deep experience with SQL and relational data using Oracle, Postgres, or MS SQL Server.
* Solid understanding of database design principles, data modeling, and data warehousing concepts.
* Excellent troubleshooting skills and instincts.
* Curious, self-motivated, and self-directed; comfortable working within an Agile software development team with short, iterative delivery cycles.
* College degree or equivalent experience in computer science, software development, engineering, information systems, math, food science, or other applicable field of study.
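Much of the Pandas work described above is extract-group-aggregate. A dependency-free sketch of that shape using only the standard library (the dataset is invented for illustration; in pandas this would be a one-line groupby):

```python
import csv
import io
from collections import defaultdict

# Hypothetical extract of regulatory limits; in practice a large file or table.
raw = io.StringIO(
    "country,substance,limit_ppm\n"
    "BR,glyphosate,0.05\n"
    "BR,malathion,0.10\n"
    "US,glyphosate,0.70\n"
)

# Group the parsed rows by country.
by_country = defaultdict(list)
for row in csv.DictReader(raw):
    by_country[row["country"]].append(float(row["limit_ppm"]))

# Aggregate: equivalent in spirit to df.groupby("country")["limit_ppm"].max()
max_by_country = {country: max(vals) for country, vals in by_country.items()}
print(max_by_country)  # {'BR': 0.1, 'US': 0.7}
```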


Preferred Qualifications

* NoSQL database design and development using MongoDB, AWS DynamoDB, or Azure Cosmos DB.
* Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and related data storage/processing services.
* Exposure to Terraform or other Infrastructure-as-Code tooling.
* Proficient in Azure DevOps for source code and pipeline management.


Company and Compliance

We are looking for a Senior Data Engineer to design and maintain scalable data pipelines on AWS, ensuring performance, quality, and security. You will collaborate with data scientists and analysts to integrate data from multiple sources and support AI/ML initiatives.

What can you expect from us?

* Professional development and constant evolution of your skills, always in line with your interests.
* Opportunities to work outside Brazil.
* A collaborative, diverse and innovative environment that encourages teamwork.

What do we offer?

* Health insurance
* Life insurance
* Gympass
* TCS Cares – a free 0800 hotline providing psychological assistance (24 hrs/day), plus legal, social, and financial assistance to associates
* Partnership with SESC
* Reimbursement of Certifications
* Free TCS Learning Portal – Online courses and live training
* International experience opportunity
* Discount Partnership with Universities and Language Schools
* Bring Your Buddy – By referring people you become eligible to receive a bonus for each hire
* TCS Gems – Recognition for performance
* Xcelerate – Free Mentoring Career Platform


Compliance and Privacy

Aubay Portugal privacy notice: Personal Data collected by Aubay Portugal as Data Controller will be processed for application analysis in accordance with data protection laws. Data may be transferred to clients and other CompassUol Group companies as needed for the selection process. For questions regarding data protection rights, contact Aubay Portugal’s DPO.

We are looking for a Senior Data Engineer to join our Threat Research team with responsibilities including threat intelligence ingestion, validation, export automation, and data pipeline maintenance.


Threat Research Role – Responsibilities

* Design, develop, and maintain data pipelines for ingesting threat intelligence data from various sources into our data ecosystem.
* Implement data validation processes to ensure data accuracy, completeness, and consistency.
* Collaborate with threat analysts to understand data requirements and design appropriate solutions.
* Develop automation scripts and workflows for data export processes to external systems or partners.
* Optimize and enhance existing data pipelines for improved performance and scalability.
* Monitor data pipelines and troubleshoot issues as they arise, ensuring continuous data availability and integrity.
* Document technical specifications, data flows, and procedures for data pipeline maintenance and support.
* Stay updated on emerging technologies and best practices in data engineering and incorporate them into our data ecosystem.
* Provide technical guidance and support to other team members on data engineering best practices and methodologies.
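The validation responsibility above is typically a schema-style gate in front of ingestion: reject records with missing fields or unknown indicator types before they enter the pipeline. A minimal sketch with a hypothetical schema (field names and types are illustrative, not from the posting):

```python
# Hypothetical schema for incoming threat-intelligence indicators.
REQUIRED = {"indicator", "type", "first_seen"}
KNOWN_TYPES = {"ip", "domain", "hash"}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record may be ingested."""
    problems = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    if "type" in record and record["type"] not in KNOWN_TYPES:
        problems.append(f"unknown type: {record['type']!r}")
    return problems

batch = [
    {"indicator": "198.51.100.7", "type": "ip", "first_seen": "2024-06-01"},
    {"indicator": "evil.example", "type": "url"},  # bad type, missing first_seen
]

# Route valid records to ingestion and quarantine the rest with their reasons.
clean = [r for r in batch if not validate(r)]
rejected = [(r["indicator"], validate(r)) for r in batch if validate(r)]
print(len(clean), len(rejected))  # 1 1
```

A production pipeline would usually express the same checks declaratively (e.g., a JSON Schema or a validation library) so analysts can review the rules without reading code.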

Requirements

* Proven experience as a Data Engineer or similar role, with a focus on data ingest, validation, and export automation.
* Strong proficiency in Python.
* Experience with data pipeline orchestration tools such as Apache Airflow, Apache NiFi, or similar.
* Familiarity with cloud platforms such as Snowflake, AWS, Azure, or Google Cloud Platform.
* Experience with data validation techniques and tools for ensuring data quality.
* Experience building and deploying images using containerization technologies such as Docker and Kubernetes.
* Excellent problem-solving skills and attention to detail.
* Strong communication and collaboration skills, with the ability to work effectively in a team environment.


© 2025 Jobijoba Brasil - All rights reserved
