Senior Data Pipeline Engineer
Location: Brazil (Remote)

About Sphise Technologies
Sphise Technologies is a global outsourcing and talent solutions company, partnering with high-growth businesses to build strong, high-impact teams across finance, technology, and operations.

Position Overview:
For our client, a trusted high-growth healthcare technology company, we're hiring a Senior Data Pipeline Engineer to own and scale the data infrastructure powering their platform. You'll build integrations from marketing and sales platforms, transform raw data into analytics-ready models, and orchestrate pipelines that run reliably across hundreds of customer workspaces. This is a foundational role that requires architectural thinking and experience with ELT patterns, columnar storage, and multi-tenant systems.
If you love designing clean data models, building robust pipelines, and connecting new data sources end-to-end, this is the role for you.

What You'll Do
- Expand our data integrations — add new source connectors (ad platforms, social media, payment processors) with full extraction, loading, transformation, and orchestration flows
- Write and maintain incremental SQL transformation models across staging, core, and analytics layers, optimized for a columnar analytics engine
- Design and maintain orchestration workflows — event-driven sensors, schedules, partitions, and automated pipeline chains
- Optimize our analytics warehouse — schema design, partitioning strategies, materialized views, sort keys, and data retention policies for performance at scale
- Scale our multi-tenant architecture — each customer workspace gets an isolated database, and you'll manage the per-tenant pipeline lifecycle
- Build data quality checks, freshness monitoring, alerting, and pipeline health dashboards

What You Bring
- 5+ years of progressive experience in data engineering — we care about your growth trajectory, not just time served
- Strong SQL skills — CTEs, window functions, complex aggregations, and performance tuning for analytical workloads
- Production experience with columnar analytics databases (ClickHouse, Snowflake, BigQuery, Redshift, or similar OLAP engines)
- Hands-on experience with ELT pipelines and SQL transformation frameworks (dbt or similar)
- Experience with modern data orchestration tools (Dagster, Airflow, Prefect, or similar)
- Comfortable writing data pipeline code in Python
- Experience with cloud-based data integration platforms for managing source connectors and sync configurations
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience

Nice to Have
- Experience with marketing and ads platform APIs (TikTok, Meta, Google Ads, Pinterest)
- Background in multi-tenant data architecture — per-customer database isolation and credential management
- E-commerce domain knowledge — orders, customers, transactions, RFM segmentation, customer lifetime value
- Experience with PostgreSQL and cross-database patterns between relational and analytical databases
- Familiarity with Next.js or TypeScript — our console is built with Next.js, and wiring integrations into the frontend is a plus
- Understanding of how data models feed into AI/LLM agent systems

Who You Are
- You have a genuine drive to learn and grow — we hire for potential and curiosity as much as current skill
- You think in systems — you see how extraction, transformation, orchestration, and the analytics layer fit together as one whole
- You take ownership end-to-end — from designing the schema to monitoring the pipeline in production
- You're comfortable with ambiguity and can make sound architectural decisions without a playbook for every situation
- You communicate clearly and enjoy working directly with founders and cross-functional teammates

Benefits:
- Competitive salary to recognise and reward your achievements
- Flexible work environment
- Opportunities for professional and personal growth