Big Data Specialist
A skilled Big Data Specialist with a strong background in SQL-based frameworks and orchestration tools, proficiency in Python, and expertise in Spark for data transformation, streaming, performance tuning, and debugging.
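For illustration only, the sketch below shows the kind of Spark streaming transformation work this role involves: reading events, parsing them, and producing a windowed aggregate. The Kafka broker, topic, schema, and checkpoint path are hypothetical, and the spark-sql-kafka connector is assumed to be available.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal Structured Streaming sketch; all names and paths are placeholders.
spark = SparkSession.builder.appName("clickstream-cleanse").getOrCreate()

# Read raw events from a hypothetical Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Parse the JSON payload (assumed schema) and aggregate page hits
# into 5-minute windows with a 10-minute watermark for late data.
cleaned = (
    events.selectExpr("CAST(value AS STRING) AS raw")
    .withColumn("event", F.from_json("raw", "user_id STRING, url STRING, ts TIMESTAMP"))
    .select("event.*")
    .withWatermark("ts", "10 minutes")
    .groupBy(F.window("ts", "5 minutes"), "url")
    .count()
)

# Write incremental updates to the console for demonstration purposes.
query = (
    cleaned.writeStream
    .outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/clickstream")
    .start()
)
query.awaitTermination()
```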
Key Responsibilities:
1. Design and implement data pipelines using AWS Glue and dbt for seamless data integration.
2. Develop and maintain data warehouses using Apache Spark and Apache Iceberg for high-performance, reliable storage (a sketch follows this list).
3. Collaborate with cross-functional teams to uphold data quality, privacy, and security standards.
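As an example of responsibility 2, here is a minimal PySpark sketch that loads raw data, applies a simple transformation, and writes it to an Apache Iceberg table. The bucket, catalog, and table names are placeholders, and the iceberg-spark-runtime package is assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Configure a local Iceberg catalog; names and warehouse path are placeholders.
spark = (
    SparkSession.builder
    .appName("orders-warehouse-load")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Read raw order data (hypothetical path), derive a partition column,
# and drop invalid rows.
orders = (
    spark.read.parquet("s3://example-bucket/raw/orders/")
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Create (or replace) a date-partitioned Iceberg table in the warehouse.
(
    orders.writeTo("lake.analytics.orders")
    .partitionedBy(F.col("order_date"))
    .createOrReplace()
)
```

In practice, ingestion like this would typically be scheduled through an orchestration tool and the table maintained incrementally rather than replaced.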
Benefits of Working as a Big Data Specialist:
* Opportunity to work on diverse projects involving data engineering, architecture, and analytics.
* Chance to collaborate with experienced professionals in the field.
* Potential for professional growth and development in a dynamic environment.
Skills and Qualifications:
* Strong proficiency in the Python programming language.
* Familiarity with SQL-based frameworks and orchestration tools.
* Experience with modern data architectures, data ingestion tools, and open table formats such as Iceberg.