Businesses worldwide depend on efficient data systems to operate effectively.
As a seasoned Data Engineer specializing in PySpark, you will play a crucial role in designing and implementing these systems.
Key Responsibilities
* Develop and maintain large-scale data processing pipelines using PySpark.
* Collaborate with cross-functional teams to identify business needs and develop tailored solutions.
* Design and implement data architectures that meet the evolving needs of the organization.
* Analyze complex data sets to identify trends and insights, informing business decisions.
Requirements
* At least 4 years of experience in data engineering.
* Strong expertise in PySpark, including its core APIs and Spark SQL.
* Proficiency in AWS services such as Lambda and Glue.
* Familiarity with the Databricks platform.
* Excellent communication and problem-solving skills.
Benefits
* Opportunity to work on challenging projects and collaborate with experienced professionals.
* Chance to grow professionally and take on increasing responsibilities.
* Competitive salary and benefits package.