Data Warehouse Expertise
Lead the design and implementation of a cutting-edge data warehouse for a key product line.
* Create a bespoke Databricks Lakehouse instance tailored to the client’s product-level data needs, driving business insights and informed decision-making.
* Design and implement robust data ingestion pipelines using Spark (PySpark/Scala) and Delta Lake, ensuring seamless integration with AWS-native services for optimized performance and scalability.
Key Responsibilities:
1. Develop and refine data models, optimize query performance, and establish best practices for data governance.
2. Collaborate with cross-functional teams, including product managers, data scientists, and DevOps engineers, to streamline data workflows and drive process efficiency.
3. Maintain continuous integration and delivery (CI/CD) for data pipelines, leveraging DBX and GitOps principles.
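As a rough illustration of the DBX/GitOps item above, a `dbx` project typically declares its workflows in a versioned `conf/deployment.yml` that CI applies on merge. The workflow name, cluster settings, and file path below are illustrative assumptions, not part of the role description.

```yaml
# conf/deployment.yml — illustrative sketch, names are hypothetical
environments:
  default:
    workflows:
      - name: "product-ingest"
        job_clusters:
          - job_cluster_key: "main"
            new_cluster:
              spark_version: "13.3.x-scala2.12"
              node_type_id: "i3.xlarge"
              num_workers: 2
        tasks:
          - task_key: "ingest"
            job_cluster_key: "main"
            spark_python_task:
              python_file: "file://src/pipelines/ingest.py"
```

Keeping this file in Git means the deployed job definitions are reviewed and rolled back the same way as application code, which is the GitOps principle the responsibility refers to.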
About This Role
This is an exciting opportunity for an experienced Data Warehouse Architect to spearhead the design and implementation of a new data warehouse instance. The successful candidate will have a strong background in data warehousing, data engineering, and cloud-based technologies, with expertise in Databricks, Spark, and Delta Lake.
What We Offer
A competitive compensation package, comprehensive benefits, and opportunities for career growth and professional development.