About The Product
At Boldin, we believe financial confidence should be accessible to everyone. Money decisions shape our lives, yet too often people are left without the clarity or tools they need to make informed choices. We exist to change that. Boldin is a comprehensive financial planning platform that helps people understand their financial picture, make smarter decisions with their money and time, and plan for the future with confidence.
With over $20M raised and strong momentum, Boldin is entering a pivotal phase of growth. This is an opportunity to join a mission-driven team and help shape the future.
About This Role
The Principal Data Engineer is a senior technical authority responsible for defining Boldin’s data architecture, setting long-term technical strategy, and tackling our most complex data engineering challenges. This role shapes company-wide data standards and partners with executive and cross-functional leaders to ensure our data platform scales with the business.
Key Responsibilities:
* Define and evolve long-term data architecture and vision
* Design resilient and scalable data platform and pipelines
* Set standards for data modeling, reliability, observability, and governance
* Lead complex, high-risk technical initiatives and migrations
* Influence tool selection and technology adoption across the data stack
* Elevate engineering excellence through standards, reviews, and mentorship
* Partner with leadership to align data strategy with business goals
* Enable analytics, ML, and product use cases
KPIs + Targets:
* Uptime: Consistently meets SLA for business-critical pipelines
* Freshness: All Tier 1 datasets delivered within SLA
* Delivery predictability: Majority of sprint commitments completed as planned
* Cost optimization: Year-over-year efficiency improvement as data scales
* Documentation: Full coverage for all production-grade assets
Qualifications
Technical Skills:
* Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience)
* 10+ years of experience in data engineering or related disciplines
* Proficient in SQL, Python, or related languages
* Hands-on experience with cloud platforms (AWS, GCP)
* Strong experience with data warehouses, data lakes, and distributed systems
* Strong experience with modern data stack (Athena, BigQuery, Glue, Spark, Dataproc, Kafka, Flink, dbt, Kestra, Fivetran or equivalent)
* Proven ability to build and maintain production-grade ELT/ETL pipelines
* Experience with workflow orchestration (e.g., Airflow, Dagster, Prefect, Cloud Composer)
* Experience implementing data quality and observability frameworks
* Experience with performance and cost optimization in cloud warehouses
* Good spoken English
Product & Business Partnership:
* Experience supporting product analytics and experimentation
* Ability to translate business requirements into scalable data models
* Strong ownership and accountability for SLAs
Nice to Have:
* Experience working with Kubernetes
* Experience structuring data for ML or AI use cases
* Familiarity with Amplitude or product event pipelines
* Experience in a high-growth SaaS or fintech environment
* Influencing technical direction without direct managerial authority
Benefits:
* Collaborative and innovative work environment
* Flex PTO for any reason, including sick days (no specified limits), and a flexible work schedule
* Personal laptop
* Health and wellness package
* Budget for English lessons
Kindly submit your application and CV in English.