Key Responsibilities
We are seeking an experienced Data Engineer to rebuild data quality scorecards using SAP ECC data. This role will involve:
* Designing and validating profiling logic for large datasets;
* Building rule-based data quality checks in PySpark;
* Generating field-level and row-level results;
* Publishing business-facing scorecards in Power BI.
The ideal candidate will have strong experience with Databricks (PySpark, SQL, Delta Lake) and be able to mentor junior developers.
Across all of these responsibilities, the role demands high-quality deliverables that meet or exceed customer requirements and expectations.
This is a Data Engineering position focused on building scalable systems to extract, transform, load, and analyze complex data sets. The successful candidate will demonstrate hands-on expertise with data engineering tools such as PySpark, AWS Glue, and Apache Airflow.