About The Product

At Boldin, we believe financial confidence should be accessible to everyone. Money decisions shape our lives, yet too often people are left without the clarity or tools they need to make informed choices. We exist to change that. Boldin is a comprehensive financial planning platform that helps people understand their financial picture, make smarter decisions with their money and time, and plan for the future with confidence.

With over $20M raised and strong momentum, Boldin is entering a pivotal phase of growth. This is an opportunity to join a mission-driven team and help shape the future.

About This Role

The Principal Data Engineer is a senior technical authority responsible for defining Boldin's data architecture, setting long-term technical strategy, and tackling our most complex data engineering challenges. This role shapes company-wide data standards and partners with executive and cross-functional leaders to ensure our data platform scales with the business.

Key Responsibilities:
- Define and evolve the long-term data architecture and vision
- Design resilient, scalable data platforms and pipelines
- Set standards for data modeling, reliability, observability, and governance
- Lead complex, high-risk technical initiatives and migrations
- Influence tool selection and technology adoption across the data stack
- Elevate engineering excellence
- Partner with leadership to align data strategy with business goals
- Enable analytics, ML, and product use cases

KPIs + Targets:
- Uptime: consistently meets SLAs for business-critical pipelines
- Freshness: all Tier 1 datasets delivered within SLA
- Delivery predictability: the majority of sprint commitments completed as planned
- Cost optimization: year-over-year efficiency improvements as data scales
- Documentation: full coverage for all production-grade assets

Qualifications

Technical Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
- 10+ years of experience in data engineering or related disciplines
- Proficiency in SQL, Python, or related languages
- Experience with cloud platforms (AWS, GCP)
- Strong experience with data warehouses, data lakes, and distributed systems
- Strong experience with the modern data stack (Athena, BigQuery, Glue, Spark, Dataproc, Kafka, Flink, dbt, Kestra, Fivetran, or equivalent)
- Proven ability to build and maintain production-grade ELT/ETL pipelines
- Experience with workflow orchestration (e.g., Airflow, Dagster, Prefect, Cloud Composer)
- Experience implementing data quality and observability frameworks
- Performance and cost optimization in cloud warehouses
- Good spoken English

Product & Business Partnership:
- Experience supporting product analytics and experimentation
- Ability to translate business requirements into scalable data models
- Strong ownership of and accountability for SLAs

Nice to Have:
- Experience working with Kubernetes
- Experience structuring data for ML or AI use cases
- Familiarity with Amplitude or product event pipelines
- Experience in a high-growth SaaS or fintech environment
- Ability to influence technical direction without direct managerial authority

Benefits:
- Collaborative and innovative work environment
- Flex PTO for any reason, including sick days (no specified limits); flexible work schedule
- Personal laptop
- Health and wellness package
- Budget for English lessons

Kindly submit your application and CV in English.