Job Summary:
We are seeking a seasoned data professional to lead the development of a modern data warehouse infrastructure. This pivotal role entails designing and deploying scalable pipelines, optimizing lakehouse performance, and integrating with diverse real-time and batch data sources on AWS.
Key Responsibilities:
* Design and deploy a new Databricks Lakehouse instance tailored to meet the client's product-level data requirements.
* Architect and implement robust data ingestion pipelines using Spark (PySpark/Scala) and Delta Lake.
* Define data models, optimize query performance, and establish warehouse governance best practices.
* Collaborate cross-functionally with product teams, data scientists, and DevOps to streamline data workflows.
Requirements:
* Strong expertise in data warehousing, cloud computing, and big data technologies.
* Experience working with Databricks, Spark, and Delta Lake.
* Strong understanding of data modeling, data governance, and data integration.
What We Offer:
This is an exceptional opportunity to join a dynamic team and contribute to innovative data solutions. As a member of our team, you can expect a collaborative and supportive work environment, opportunities for growth and professional development, and a competitive compensation package.