Job Opportunity
We are seeking a skilled Databricks Data Engineer to rebuild five data quality scorecards using SAP ECC data in the Lakehouse. The ideal candidate will design and validate profiling logic, build rule-based data quality checks in PySpark, generate field-level and row-level results, and publish business-facing scorecards in Power BI.
Responsibilities
* Rebuild Data Quality scorecards in Databricks
* Develop profiling logic (nulls, distincts, pattern checks)
* Build PySpark-based Data Quality rules and row/column-level metrics
* Create curated DQ datasets for Power BI scorecards
* Establish reusable DQ rule templates and standardized development patterns
* Work with SAP ECC data models
* Support and mentor a junior developer on rule logic and development standards
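To give candidates a concrete sense of the profiling and rule-logic work described above, here is a minimal sketch of metadata-driven data quality rules (null checks, distinct counts, pattern checks) producing field-level pass rates. It is written in plain Python for readability; the actual role would implement this against PySpark DataFrames and Delta tables. All rule names, field names, and the `evaluate_rules` helper are hypothetical illustrations, not part of any existing codebase.

```python
import re

# Hypothetical metadata-driven rule definitions: each rule names a field,
# a check type, and any parameters (e.g. a regex for pattern checks).
RULES = [
    {"field": "material_id", "check": "not_null"},
    {"field": "material_id", "check": "pattern", "regex": r"^\d{8}$"},
    {"field": "plant", "check": "distinct_min", "min_distinct": 2},
]

def evaluate_rules(rows, rules):
    """Return field-level pass rates for each rule over a list of dict rows."""
    results = []
    for rule in rules:
        values = [r.get(rule["field"]) for r in rows]
        if rule["check"] == "not_null":
            # Fraction of rows where the field is populated.
            rate = sum(v is not None for v in values) / len(values)
        elif rule["check"] == "pattern":
            # Pattern checks are evaluated over non-null values only.
            non_null = [v for v in values if v is not None]
            passed = sum(bool(re.match(rule["regex"], v)) for v in non_null)
            rate = passed / len(non_null) if non_null else 1.0
        elif rule["check"] == "distinct_min":
            # Pass/fail on whether the field has enough distinct values.
            rate = 1.0 if len(set(values)) >= rule["min_distinct"] else 0.0
        results.append(
            {"field": rule["field"], "check": rule["check"], "pass_rate": rate}
        )
    return results

# Toy sample rows standing in for an SAP ECC material master extract.
rows = [
    {"material_id": "12345678", "plant": "P100"},
    {"material_id": None, "plant": "P200"},
    {"material_id": "ABC", "plant": "P100"},
]

for result in evaluate_rules(rows, RULES):
    print(result)
```

In the Lakehouse, the same rule metadata would typically live in a Delta table, with each check translated into PySpark column expressions so results scale to large SAP ECC extracts and can feed the curated DQ datasets behind the Power BI scorecards.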
Qualifications
* Strong Databricks engineering experience (PySpark, SQL, Delta Lake)
* Hands-on experience building Data Quality rules, frameworks, or scorecards
* Experience profiling large datasets and implementing metadata-driven DQ logic
* Ability to mentor, review code, and explain concepts clearly
* Excellent communication skills
* Familiarity with SAP ECC tables and key fields
* Experience with Unity Catalog or Purview
* Exposure to Lakehouse Monitoring or DQX accelerators
What We Offer
The successful candidate will work on a challenging, high-visibility project, deepen their Databricks and data quality engineering skills, and contribute directly to the organization's success.