Key Data Architect Role
* Achieve business value through data-driven decision making.
* Design and deploy Databricks Lakehouse instances tailored to product-level needs.
The ideal candidate has experience with modern dimensional modeling and lakehouse design patterns for mixed workloads, and is proficient with monitoring tools such as CloudWatch, Datadog, or New Relic.
-----------------------------------
Key Skills and Qualifications:
* Build robust data ingestion pipelines using Spark (PySpark/Scala) and Delta Lake.
* Integrate AWS-native services (S3, Glue, Athena, Redshift, Lambda) with Databricks.
* Design data models, optimize query performance, and apply warehouse governance best practices.
* Collaborate cross-functionally with product teams, data scientists, and DevOps.
* Maintain CI/CD for data pipelines (preferably with DBX), using GitOps and Infrastructure-as-Code.
* Monitor data jobs and resolve performance bottlenecks or failures across environments.
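The ingestion bullet above can be sketched as a minimal PySpark job that appends raw JSON landed in S3 to a partitioned Delta table. The bucket layout, the "bronze" prefix, and the `ingest_date` partition column are illustrative assumptions, not a prescribed convention:

```python
# Minimal sketch of a bronze-layer Delta Lake ingestion job.
# Assumes raw JSON lands under s3://<bucket>/raw/<source>/<date>/ (hypothetical layout).

def bronze_path(bucket: str, source: str) -> str:
    """S3 location of the bronze Delta table for a given source (assumed layout)."""
    return f"s3://{bucket}/bronze/{source}"

def ingest_events(spark, bucket: str, source: str, ingest_date: str) -> None:
    """Append one day of raw JSON to the bronze Delta table, partitioned by date."""
    from pyspark.sql import functions as F  # imported lazily so bronze_path stays usable without Spark

    raw = spark.read.json(f"s3://{bucket}/raw/{source}/{ingest_date}/")
    (raw.withColumn("ingest_date", F.lit(ingest_date))
        .write.format("delta")
        .mode("append")
        .partitionBy("ingest_date")
        .save(bronze_path(bucket, source)))
```

In a real deployment this would run as a scheduled Databricks job, with the Spark session and Delta Lake configuration supplied by the cluster rather than created in the script.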
-----------------------------------
Benefits and Advantages:
The role offers opportunities to deepen data engineering expertise while working closely with cross-functional teams.
-----------------------------------