Data Warehouse Architect
Design and implement a Databricks Lakehouse instance tailored to the client's product-level data needs.
About This Role:

We're seeking a skilled Data Warehouse Architect to design and deploy a new Databricks Lakehouse instance, architect robust data ingestion pipelines using Spark (PySpark/Scala) and Delta Lake, and integrate AWS-native services with Databricks for optimized performance and scalability.

Your Key Responsibilities Will Include:
* Designing and deploying a Databricks Lakehouse instance tailored to the client's product-level data needs.
* Architecting and implementing robust data ingestion pipelines using Spark (PySpark/Scala) and Delta Lake.
* Integrating AWS-native services with Databricks for optimized performance and scalability.
* Defining data models, optimizing query performance, and establishing warehouse governance best practices.
* Collaborating cross-functionally with product teams, data scientists, and DevOps to streamline data workflows.
* Maintaining CI/CD for data pipelines, preferably with DBX, using GitOps and Infrastructure-as-Code.
* Monitoring data jobs and resolving performance bottlenecks or failures across environments.
You'll Thrive in This Role If You Have:
* Strong Technical Skills:
* Experience with Databricks, Spark (PySpark/Scala), and Delta Lake.
* Familiarity with AWS-native services.
* Knowledge of data modeling, query optimization, and warehouse governance.
* Excellent Collaboration and Communication Skills:
* Ability to work effectively with cross-functional teams.
* Strong written and verbal communication skills.
* A Passion for Innovation and Problem-Solving:
* Willingness to learn and adapt to new technologies.
* Ability to analyze complex problems and develop creative solutions.