Senior Data Engineer (Oracle / ODI) – Data Warehouse & ETL Specialist
About the Role
We are looking for a Senior Data Engineer with strong expertise in Oracle and ODI (Oracle Data Integrator) to join our team and support enterprise‑grade data initiatives.
This role is ideal for professionals who thrive in complex data environments, working with large‑scale Data Warehouses and mission‑critical systems, especially within financial services or similar industries.
You will be responsible for designing, developing, and optimizing ETL/ELT pipelines, ensuring high standards of data quality, performance, and reliability.
Key Responsibilities
Design and develop ETL/ELT processes using ODI
Build and maintain scalable Data Warehouse solutions
Develop complex SQL and PL/SQL logic (procedures, packages, functions)
Implement data quality, reconciliation, and auditing frameworks (see the reconciliation sketch after this list)
Develop and manage ODI Load Plans and workflows
Troubleshoot and debug data pipelines and mappings
Translate business requirements into technical specifications
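To make the reconciliation responsibility above concrete, here is a minimal sketch of a source‑to‑target row‑count check in Python using the python‑oracledb driver. The table names, DSNs, and credentials are illustrative assumptions, not part of this posting; a production framework would compare checksums and column aggregates as well.

```python
# Minimal source-to-target reconciliation sketch (illustrative only).
# Table names, DSNs, and credentials below are assumptions for the example.
import oracledb

def count_rows(conn: oracledb.Connection, table: str) -> int:
    """Return the row count of a table."""
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")  # table name is a trusted constant here
        return cur.fetchone()[0]

def reconcile(src_dsn: str, tgt_dsn: str, user: str, password: str) -> None:
    """Compare staging vs. warehouse row counts and fail loudly on a mismatch."""
    with oracledb.connect(user=user, password=password, dsn=src_dsn) as src, \
         oracledb.connect(user=user, password=password, dsn=tgt_dsn) as tgt:
        src_count = count_rows(src, "STG_ORDERS")      # hypothetical staging table
        tgt_count = count_rows(tgt, "DW_FACT_ORDERS")  # hypothetical warehouse table
        if src_count != tgt_count:
            raise ValueError(
                f"Reconciliation failed: source={src_count}, target={tgt_count}"
            )
        print(f"Reconciliation OK: {src_count} rows in both systems")
```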
Required Qualifications
6+ years of experience in Data Engineering / ETL development
3+ years of hands‑on experience with ODI
Strong expertise in Oracle SQL and PL/SQL
Proven experience in Data Warehouse implementations (end‑to‑end)
Deep understanding of ETL/ELT concepts and data modeling
Experience with Load Plans and scheduling
Experience with error handling and data reconciliation
Strong analytical and problem‑solving skills
Ability to work independently in fast‑paced environments
Fluent or advanced English communication skills
Nice to Have
Experience in financial services industry
Exposure to modern data platforms (Databricks, Snowflake, etc.)
Cloud experience (AWS, Azure, or GCP)
What We Offer
Opportunity to work on high‑impact enterprise data projects
Collaborative and technically strong team environment
Exposure to both legacy and modern data architectures
Career growth aligned with evolving data technologies
Data Engineer – Salesforce integration and ETL in Azure‑based environments
Job Description
Data Extraction – Extract legacy data from SQL Server using Azure Synapse / Azure Data Factory.
Salesforce Loads – Load, validate, and reconcile data into Salesforce platforms.
Quality & Testing – Conduct unit testing and support UAT.
Delivery – Manage work items, defects, and priorities in Jira.
Standards – Contribute to reusable migration processes and playbooks.
Required Skills
Data Mapping – Develop and implement source‑to‑target mappings aligned to Salesforce data models.
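As a small illustration of source‑to‑target mapping (the column and field names below are hypothetical, not the client's actual Salesforce data model), a minimal Python sketch:

```python
# Illustrative source-to-target mapping sketch; the column and field names
# are hypothetical, not the client's actual Salesforce data model.
from typing import Any

# Maps legacy SQL Server column -> Salesforce Contact field
CONTACT_MAPPING: dict[str, str] = {
    "first_nm": "FirstName",
    "last_nm": "LastName",
    "email_addr": "Email",
    "acct_id": "AccountId",
}

def map_row(legacy_row: dict[str, Any]) -> dict[str, Any]:
    """Apply the source-to-target mapping to one legacy record,
    dropping columns with no Salesforce target."""
    return {
        sf_field: legacy_row[src_col]
        for src_col, sf_field in CONTACT_MAPPING.items()
        if src_col in legacy_row
    }

# Example:
#   map_row({"first_nm": "Ada", "last_nm": "Lovelace", "email_addr": "ada@example.com"})
#   -> {"FirstName": "Ada", "LastName": "Lovelace", "Email": "ada@example.com"}
```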
Preferred/Desired Skills
Data Transformation – Build transformations using Databricks
Other relevant experience as listed
Equality & Opportunity for All
We are a proud equal‑opportunity employer committed to providing equal employment opportunities to all applicants and employees without regard to race, religion, sex, color, age, national origin, pregnancy, sexual orientation, disability or genetic information, or any other protected classification, in accordance with federal, state and/or local laws.
Principal Data Engineer – Boldin
About This Role
The Principal Data Engineer is a senior technical authority responsible for defining Boldin’s data architecture, setting long‑term technical strategy, and tackling our most complex data engineering challenges. This role shapes company‑wide data standards and partners with executive and cross‑functional leaders to ensure our data platform scales with the business.
Key Responsibilities
Define and evolve long‑term data architecture and vision
Design resilient and scalable data platform and pipelines
Set standards for data modeling, reliability, observability, and governance
Lead complex, high‑risk technical initiatives and migrations
Influence tool selection and technology adoption across the data stack
Partner with leadership to align data strategy and business goals
Enable analytics, ML, and product use cases
KPIs & Targets
Uptime: Consistently meets SLA for business‑critical pipelines
Freshness: All Tier 1 datasets delivered within SLA (see the freshness-check sketch after this list)
Delivery predictability: Majority of sprint commitments completed as planned
Cost optimization: Year‑over‑year efficiency improvement as data scales
Documentation: Full coverage for all production‑grade assets
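For example, a Tier 1 freshness check might look like the following Python sketch. The dataset names, SLA windows, and the source of the last‑loaded timestamps are assumptions for illustration, not Boldin's actual tiering.

```python
# Illustrative freshness-SLA check; dataset names and SLA windows are
# hypothetical, not Boldin's actual tiering.
from datetime import datetime, timedelta, timezone

# Hypothetical registry: Tier 1 dataset -> maximum allowed staleness
FRESHNESS_SLA: dict[str, timedelta] = {
    "fact_subscriptions": timedelta(hours=4),
    "dim_users": timedelta(hours=24),
}

def breached_datasets(last_loaded_at: dict[str, datetime]) -> list[str]:
    """Return the Tier 1 datasets that have exceeded their freshness SLA."""
    now = datetime.now(timezone.utc)
    return [
        name for name, sla in FRESHNESS_SLA.items()
        if now - last_loaded_at[name] > sla
    ]
```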
Qualifications
Technical Skills
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
10+ years of experience in data engineering or related disciplines
Proficient in SQL, Python, or related languages
Strong experience with data warehouses, data lakes, and distributed systems
Strong experience with the modern data stack (Athena, BigQuery, Glue, Spark, Dataproc, Kafka, Flink, dbt, Kestra, Fivetran, or equivalent)
Proven ability to build and maintain production‑grade ELT/ETL pipelines
Experience with workflow orchestration (Airflow, Dagster, Prefect, Cloud Composer)
Experience implementing data quality and observability frameworks
Performance and cost optimization in cloud warehouses
Good spoken English
Product & Business Partnership
Experience supporting product analytics and experimentation
Ability to translate business requirements into scalable data models
Strong ownership and accountability for SLAs
Nice to Have
Experience working with Kubernetes
Experience structuring data for ML or AI use cases
Familiarity with Amplitude or product event pipelines
Experience in a high‑growth SaaS or fintech environment
Influencing technical direction without direct managerial authority
What We Offer
Collaborative and innovative work environment.
Flex PTO for any reason, including sick days (no specified limits).
Flexible work schedule.
Personal laptop.
Health and wellness package.
Budget for English lessons.
Kindly submit your application and CV in English.
Senior Data Engineer – Signify Technology
Location
Fully remote. The team is based on the US East Coast, and some working‑hours overlap is required.
Contract
Until the end of 2026 (likely extension). Rate: $40+ per hour.
Interview Process
2 stages (fast turnaround).
What You’ll Do
Deliver the final 30% of a complex data migration (GCP → AWS)
Work with and adapt an existing Scala‑based Spark codebase
Translate pipelines for AWS compatibility and performance
Support Airflow (Python) orchestration (see the DAG sketch after this list)
Ensure robust testing, validation, and monitoring across pipelines
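To illustrate the Airflow side of the role, here is a minimal Python DAG sketch. The DAG id, task names, and callables are hypothetical stand‑ins, not the client's migration codebase.

```python
# Minimal Airflow DAG sketch (illustrative; task names and callables are
# hypothetical stand-ins for the client's migration pipeline).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_gcs() -> None:
    """Placeholder: read source data from the legacy GCP bucket."""
    ...

def validate_on_aws() -> None:
    """Placeholder: run row-count and schema checks against the AWS copy."""
    ...

with DAG(
    dag_id="gcp_to_aws_migration",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # the `schedule` argument assumes Airflow 2.4+
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_gcs)
    validate = PythonOperator(task_id="validate", python_callable=validate_on_aws)
    extract >> validate  # validation runs only after extraction succeeds
```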
What We’re Looking For
Proven experience with data pipeline migrations (GCP/AWS)
Solid understanding of testing, validation, and data quality
Comfortable working with high‑impact, business‑critical datasets
Senior Data Engineer – Remote from Brazil (MLOps)
About the Role
We’re looking for a Senior Data Engineer with strong MLOps expertise to join a top‑tier US‑based company. The role is full‑time, 100% remote, exclusive to candidates located in Brazil.
What You’ll Do
Design, develop, and maintain robust, scalable data infrastructure across real‑time and batch workloads.
Build and support ML pipelines for model training, deployment, and monitoring.
Collaborate cross‑functionally with data scientists, engineers, and product teams to deliver high‑performance data and ML solutions.
Develop APIs and services for data ingestion, transformation, and querying (see the ingestion-API sketch after this list).
Ensure the reliability of ML systems through strong observability and operational tools.
Contribute to architectural decisions and mentor team members.
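As a small illustration of the ingestion‑API responsibility above, a minimal FastAPI sketch. The event schema and endpoint path are assumptions for the example, not the company's actual service.

```python
# Illustrative ingestion-API sketch; the event schema and endpoint path are
# hypothetical, not the company's actual service.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Event(BaseModel):
    """Hypothetical ingestion payload."""
    user_id: str
    event_type: str
    payload: dict

@app.post("/ingest")
def ingest(event: Event) -> dict:
    """Accept one validated event; a real service would publish it to a
    queue (e.g., Kafka or Pub/Sub) rather than returning immediately."""
    return {"status": "accepted", "event_type": event.event_type}
```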
Requirements
5+ years as a Data Engineer or MLOps Engineer.
Strong experience with Python, Java, or Scala.
Hands‑on with GCP (preferred), AWS or Azure.
Experience with BigQuery, ML frameworks (TensorFlow, PyTorch), and container orchestration (Docker, Kubernetes).
Familiarity with Apache Kafka, Spark or similar tools is a big plus.
Experience with ETL, CI/CD, git, and monitoring pipelines.
Strong communication skills and fluency in English (written and spoken) are mandatory.
Bachelor’s or Master’s degree in Computer Science or related fields.
What We Offer
Top‑tier hourly rate paid in USD
Long‑term contract opportunity
Fully remote work – collaborate with international teams from the comfort of your home
A high‑impact role within a data‑driven, mission‑oriented company
Senior Data Engineer – Oil & Gas ERP Client
Industry Overview
The client provides enterprise resource planning (ERP) accounting software that automates business operations for the oil & gas industry. The platform streamlines complex processes such as revenue distribution, billing, order management, production accounting, accounts payable, contract management, and more for over 1,700 customers across 9 countries.
Responsibilities
Build and maintain scalable data pipelines powering the client’s products, supporting core product functionality, customer reporting, and internal analytics.
Take end‑to‑end ownership of data flows, from ingestion to delivery, focusing on data quality and reliability.
Create and maintain production‑grade database views and transformation logic.
Develop, optimize, and manage SQL view scripts and transformations.
Design, configure, and maintain database architectures, distributed systems, and data storage solutions.
Ensure data integrity by implementing validation processes for accuracy, consistency, and reliability.
Monitor system performance, troubleshoot issues, and optimize data queries and workflows for efficiency.
Build and support data integrations with internal services and third‑party APIs, managing authentication, schema evolution, rate limits, and error handling (see the sketch after this list).
Research and recommend innovative approaches for project execution, providing status reports to stakeholders.
Assist the team or supervisor in identifying data anomalies and providing business solutions.
Document procedures using text, workflow diagrams, and screenshots from the application.
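As an example of the rate‑limit and error‑handling concerns above, a minimal Python sketch using requests. The URL, bearer token, and retry policy are illustrative assumptions, not the client's actual integration.

```python
# Illustrative third-party API call with retry and rate-limit handling;
# the URL, token, and retry policy are hypothetical.
import time

import requests

def fetch_with_retries(url: str, token: str, max_retries: int = 3) -> dict:
    """GET a JSON resource, backing off on HTTP 429 and retrying on 5xx."""
    for attempt in range(max_retries):
        resp = requests.get(
            url, headers={"Authorization": f"Bearer {token}"}, timeout=30
        )
        if resp.status_code == 429:
            # Honor the server's Retry-After header if present
            wait = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
            continue
        if resp.status_code >= 500:
            time.sleep(2 ** attempt)  # simple exponential backoff
            continue
        resp.raise_for_status()       # surface 4xx errors other than 429
        return resp.json()
    raise RuntimeError(f"Failed to fetch {url} after {max_retries} attempts")
```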
Required Experience
Bachelor’s degree in Computer Science, Information Systems, Finance, Accounting, or related field.
Excellent English verbal and written communication skills.
5+ years of data engineering experience.
3+ years SQL and Python experience.
Knowledge of database architecture and management.
Experience working with Cloud Platforms (Azure, AWS, GCP, etc.).
Ability to read code and convert programming logic from one language to another.
Familiarity with APIs.
Preferred Experience
Experience with data analytics tools (Power BI, QuickSight, Looker, Sisense).
Experience with Apache Airflow.
Experience with AI coding tools and/or AI assistants.
Background in Math, Stats, Machine Learning.
Additional Information
Knowing your ideas are heard and matter.
Getting to own your job and be recognized for contributions.
Working with smart and creative people.
Making mistakes is human – learn from them; be transparent.
No presumptions or judgments – be extraordinary.
15 days paid time off, 1 floating day, 3 sick days, and designated national holidays.
Start: ASAP.
About Velozient
Velozient is a privately held, nearshore software development company providing outsourced development resources to North American companies. Its mission is to offer development talent who enjoy taking on challenging work, want to grow their skills, and excel in a fast‑paced, dynamic team environment.