Job Title: Data Warehouse Architect
We are seeking an experienced Data Warehouse Architect to design and deploy a new Databricks Lakehouse instance tailored to our client's product-level data needs. The role involves architecting and implementing robust data ingestion pipelines using Spark (PySpark/Scala) and Delta Lake, as well as integrating AWS-native services (S3, Glue, Athena, Redshift, Lambda) with Databricks for optimized performance and scalability.
Key Responsibilities:
1. Design and deploy a new Databricks Lakehouse instance that meets the client's product-level data requirements
2. Architect and implement data ingestion pipelines using Spark and Delta Lake to ensure reliable, consistent data flow
3. Integrate AWS-native services with Databricks to optimize performance and scalability
4. Develop and maintain data models to support business intelligence and analytics
5. Collaborate cross-functionally with product teams, data scientists, and DevOps engineers to streamline data workflows and improve data quality
6. Monitor data jobs and resolve performance bottlenecks or failures across environments
About Us: We value collaboration, innovation, and continuous learning in a fast-paced environment.