We’re building a talent pool for Data Engineers. This pipeline role is for engineers with a strong foundation in data modelling (schema design) and solid SQL experience. You’ll be working across a modern data stack (Airflow, Snowflake, Superset/Looker/HEX, Python) to help power analytics, decision‑making, and product insights.
Key Responsibilities
Taking raw production data and transforming it for analytics use (a rough pipeline sketch follows this list).
Creating data marts organized by business use rather than by data origin.
Building an AI‑ready semantic layer for LLM querying.
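As a reference point for this stack, here is a minimal, hypothetical sketch (not Kake's actual pipeline) of an Airflow DAG that rebuilds an analytics table in Snowflake from raw production data. The table names, connection details, and SQL are placeholder assumptions.

```python
from datetime import datetime

import snowflake.connector
from airflow.decorators import dag, task

RAW_TABLE = "RAW.PRODUCTION.ORDERS"        # hypothetical source table
MART_TABLE = "ANALYTICS.SALES.FCT_ORDERS"  # hypothetical data-mart table


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def build_orders_mart():
    @task
    def transform_orders() -> int:
        """Recreate the orders fact table from raw production data; return its row count."""
        conn = snowflake.connector.connect(
            account="my_account",  # placeholders; real values would come from a secrets backend
            user="etl_user",
            password="***",
            warehouse="TRANSFORM_WH",
        )
        try:
            cur = conn.cursor()
            cur.execute(f"""
                CREATE OR REPLACE TABLE {MART_TABLE} AS
                SELECT order_id,
                       customer_id,
                       order_ts::date       AS order_date,
                       amount_cents / 100.0 AS amount_usd
                FROM {RAW_TABLE}
                WHERE NOT is_test_order
            """)
            cur.execute(f"SELECT COUNT(*) FROM {MART_TABLE}")
            return cur.fetchone()[0]
        finally:
            conn.close()

    transform_orders()


build_orders_mart()
```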
You’ll be an ideal fit if you have
Expertise in SQL and data modelling.
Proficiency with Airflow, Snowflake, and data visualization tools like Looker, Superset, or HEX.
Strong Python skills for data workflows and transformations.
Bonus: experience with LLM prompt engineering, RAG, or related AI tools.
Problem solver with a strong sense of ownership.
Clear communication skills, both written and verbal.
Business‑minded: focused on outcomes, value, and impact.
Thrives in fast‑paced environments and ambiguity.
Background in consumer tech or SaaS is a plus.
Why Join Kake?
Kake is a remote‑first company with a global community—fully believing that it’s not where your table is, but what you bring to the table that matters. We provide top‑tier engineering teams to support some of the world’s most innovative companies, and we’ve built a culture where great people stay, grow, and thrive. We’re proud to be more than just a stop along the way in your career — we’re the destination.
The icing on the Kake
Competitive Pay in USD – Work globally, get paid globally.
Fully Remote – Simply put, we trust you.
Better Me Fund – We invest in your personal growth and passions.
Compassion is Badass – Join a community that invests in social good.
SQL & Snowflake Engineer (Encora)
As a SQL & Snowflake Engineer, you will play a key role in the full development lifecycle, translating complex data structures into actionable technical insights with a specific focus on Snowflake and SQL queries. You will be responsible for both querying and analyzing large datasets within the Data Lake to support feature development, ensuring seamless data availability from ingestion through to reporting within an Agile framework. Beyond technical execution, you will act as a key contributor in data modeling, actively participating in technical discussions to help the engineering team understand underlying data patterns and quality issues.
Responsibilities and Duties
Gather requirements from the engineering team and specify the implementation of complex SQL queries;
Analyze and map underlying data schemas within Snowflake to support application logic;
Optimize SQL performance for large‑scale data retrieval and reporting;
Follow Agile processes and participate actively in data exploration phases;
Participate in technical discussions, data quality reviews, and schema design sessions.
Required Skills
Solid experience with Snowflake (architecture, warehousing, and data sharing);
Strong proficiency in advanced SQL, including CTEs, window functions, and stored procedures (a short illustrative query follows this list);
Familiarity with ETL/ELT processes and data ingestion tools;
Solid experience with version control for database scripts (Git, Liquibase, or dbt).
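To make the "advanced SQL" requirement concrete, here is a small, hypothetical query in that style: a CTE combined with a window function and Snowflake's QUALIFY clause, executed from Python. The table and column names are invented for the example.

```python
import snowflake.connector

RANK_RECENT_ORDERS = """
WITH recent_orders AS (
    SELECT customer_id,
           order_id,
           order_ts,
           amount_usd
    FROM ANALYTICS.SALES.FCT_ORDERS      -- hypothetical data-mart table
    WHERE order_ts >= DATEADD(day, -30, CURRENT_TIMESTAMP())
)
SELECT customer_id,
       order_id,
       amount_usd,
       ROW_NUMBER() OVER (
           PARTITION BY customer_id
           ORDER BY order_ts DESC
       ) AS recency_rank
FROM recent_orders
QUALIFY recency_rank <= 3
"""


def top_recent_orders(conn):
    """Return up to three of the most recent orders per customer from the last 30 days."""
    cur = conn.cursor()
    try:
        cur.execute(RANK_RECENT_ORDERS)
        return cur.fetchall()
    finally:
        cur.close()
```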
Highly Desirable Skills
Experience with Python or scripting for data manipulation;
Familiarity with semi‑structured data (JSON, Parquet) in Snowflake (a brief example follows).
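For the semi-structured data point, a brief illustrative sketch: flattening a JSON VARIANT column with Snowflake's LATERAL FLATTEN, run from Python. The table, payload shape, and field names are assumptions.

```python
import snowflake.connector

FLATTEN_CHECKOUT_ITEMS = """
SELECT e.event_id,
       f.value:"name"::string  AS item_name,
       f.value:"quantity"::int AS quantity
FROM RAW.APP.EVENTS e,        -- hypothetical events table with a VARIANT payload column
     LATERAL FLATTEN(input => e.payload:"items") f
WHERE e.event_type = 'checkout'
"""


def checkout_items(conn):
    """Expand the items array inside each checkout event's JSON payload into rows."""
    cur = conn.cursor()
    try:
        cur.execute(FLATTEN_CHECKOUT_ITEMS)
        return cur.fetchall()
    finally:
        cur.close()
```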
About Encora
Encora is the preferred digital engineering and modernization partner of some of the world’s leading enterprises and digital native companies. With over 9,000 experts in 47+ offices and innovation labs worldwide, Encora’s technology practices include Product Engineering & Development, Cloud Services, Quality Engineering, DevSecOps, Data & Analytics, Digital Experience, Cybersecurity, and AI & LLM Engineering.
Encora EEO Statement
At Encora, we hire professionals based solely on their skills and qualifications, and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.
Senior Data Engineer – Oracle / ODI (Encora)
About the Role: We are looking for a Senior Data Engineer with strong expertise in Oracle and ODI (Oracle Data Integrator) to join our team and support enterprise‑grade data initiatives. This role is ideal for professionals who thrive in complex data environments, working with large‑scale Data Warehouses and mission‑critical systems—especially within financial services or similar industries. You will be responsible for designing, developing, and optimizing ETL/ELT pipelines, ensuring high standards of data quality, performance, and reliability.
Key Responsibilities
Design and develop ETL/ELT processes using ODI;
Build and maintain scalable Data Warehouse solutions;
Develop complex SQL and PL/SQL logic (procedures, packages, functions);
Implement data quality, reconciliation, and auditing frameworks (a small reconciliation sketch follows this list);
Develop and manage ODI Load Plans and workflows;
Troubleshoot and debug data pipelines and mappings;
Translate business requirements into technical specifications.
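To make the reconciliation duty concrete, below is a minimal sketch (an assumption, not Encora's actual framework) of a row-count check between a source table and its warehouse target after an ODI load, using the python-oracledb driver.

```python
import oracledb


def reconcile_row_counts(dsn: str, user: str, password: str,
                         source_table: str, target_table: str) -> bool:
    """Return True when the source and target tables hold the same number of rows."""
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {source_table}")
            source_count = cur.fetchone()[0]
            cur.execute(f"SELECT COUNT(*) FROM {target_table}")
            target_count = cur.fetchone()[0]

    if source_count != target_count:
        # A real framework would write this to an audit table or raise an alert.
        print(f"Row-count mismatch: {source_table}={source_count}, "
              f"{target_table}={target_count}")
    return source_count == target_count
```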
Required Qualifications
6+ years of experience in Data Engineering / ETL development;
3+ years of hands‑on experience with Oracle Data Integrator (ODI);
Strong expertise in Oracle SQL and PL/SQL;
Proven experience in Data Warehouse implementations (end‑to‑end);
Deep understanding of ETL/ELT concepts and data modeling;
Experience with Load Plans and scheduling;
Experience with error handling and data reconciliation;
Strong analytical and problem‑solving skills;
Ability to work independently in fast‑paced environments;
Fluent or advanced English communication skills.
Nice to Have
Experience in financial services industry;
Exposure to modern data platforms (Databricks, Snowflake, etc.);
Cloud experience (AWS, Azure, or GCP).
What We Offer
Opportunity to work on high‑impact enterprise data projects;
Collaborative and technically strong team environment;
Exposure to both legacy and modern data architectures;
Career growth aligned with evolving data technologies.
Senior Data Engineer – Remote Brazil
Are you passionate about building scalable data platforms and cutting‑edge MLOps solutions? Do you want to work with a top‑tier US company revolutionizing e‑commerce and circular fashion? We’re looking for a Senior Data Engineer with strong MLOps expertise to join our growing data engineering team. This is a full‑time, 100% remote opportunity exclusive to candidates located in Brazil. You’ll be working with a fast‑paced US‑based company, leading innovations in sustainable commerce and data‑driven operations.
What You’ll Do
Design, develop, and maintain robust, scalable data infrastructure across real‑time and batch workloads.
Build and support ML pipelines for model training, deployment, and monitoring.
Collaborate cross‑functionally with data scientists, engineers, and product teams to deliver high‑performance data and ML solutions.
Develop APIs and services for data ingestion, transformation, and querying (a toy ingestion endpoint follows this list).
Ensure the reliability of ML systems through strong observability and operational tools.
Contribute to architectural decisions and mentor team members.
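As a rough illustration of the ingestion-API responsibility, here is a toy FastAPI endpoint that validates incoming events before handing them to a downstream sink. The event schema and the in-memory buffer are placeholder assumptions; a real service would write to something like Pub/Sub or BigQuery.

```python
from datetime import datetime
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
_buffer: List[dict] = []  # stand-in for a real sink such as Pub/Sub or BigQuery


class ListingEvent(BaseModel):
    """Hypothetical event shape for items moving through a resale marketplace."""
    listing_id: str
    event_type: str          # e.g. "view", "purchase", "resale"
    occurred_at: datetime


@app.post("/events")
def ingest_event(event: ListingEvent) -> dict:
    """Validate an incoming event and hand it to the (mocked) downstream sink."""
    _buffer.append(event.model_dump())  # Pydantic v2; use .dict() on v1
    return {"status": "accepted", "buffered": len(_buffer)}
```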
Requirements
5+ years of experience as a Data Engineer or MLOps Engineer.
Strong experience with Python, Java, or Scala.
Hands‑on with GCP (preferred), AWS or Azure.
Experience with BigQuery, ML frameworks (TensorFlow, PyTorch), and container orchestration (Docker, Kubernetes).
Familiarity with Apache Kafka, Spark, or similar tools is a big plus.
Experience with ETL, CI/CD, git, and monitoring pipelines.
Strong communication skills and fluency in English (written and spoken) are mandatory.
Bachelor’s or Master’s degree in Computer Science or related fields.
What We Offer
Top‑tier hourly rate paid in USD;
Long‑term contract opportunity;
Fully remote work – collaborate with international teams from the comfort of your home;
A high‑impact role within a data‑driven, mission‑oriented company.
Senior Data Engineer – Signify (GCP → AWS Data Migration)
Signify Technology is partnering with a client on a large‑scale GCP → AWS data migration. We’re looking for a Senior Data Engineer to help deliver the final phase of a business‑critical data platform.
Location: Fully remote; the team is based on the US East Coast, so some working-hours overlap is required.
Contract: Until the end of 2026, with a likely extension.
Rate: $40+ per hour.
What You’ll Do
Deliver the final 30% of a complex data migration (GCP → AWS).
Work with and adapt an existing Scala‑based Spark codebase.
Translate pipelines for AWS compatibility and performance.
Support Airflow (Python) orchestration (a small transfer-task sketch follows this list).
Ensure robust testing, validation, and monitoring across pipelines.
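For the orchestration piece, a minimal sketch (an assumption, not the client's actual codebase) of an Airflow task that copies a single object from GCS to S3 using the standard google-cloud-storage and boto3 clients. Bucket names and the object key are placeholders.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule=None, start_date=datetime(2025, 1, 1), catchup=False)
def gcs_to_s3_copy():
    @task
    def copy_object(gcs_bucket: str, s3_bucket: str, key: str) -> str:
        """Download one object from GCS and upload it to S3 under the same key."""
        import boto3
        from google.cloud import storage

        blob = storage.Client().bucket(gcs_bucket).blob(key)
        data = blob.download_as_bytes()

        boto3.client("s3").put_object(Bucket=s3_bucket, Key=key, Body=data)
        return f"s3://{s3_bucket}/{key}"

    copy_object("legacy-gcp-bucket", "new-aws-bucket", "events/2025-01-01.parquet")


gcs_to_s3_copy()
```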
What We’re Looking For
Proven experience with data pipeline migrations (GCP/AWS).
Solid understanding of testing, validation, and data quality.
Comfortable working with high‑impact, business‑critical datasets.