Data Science Specialist
Job Description:
* API Development
* Data Ingestion and Processing
* Pipeline Optimization
* Data Management
* Quality Assurance
The ideal candidate will have extensive experience with AWS, proficiency in PySpark, and expertise in integrating with and manipulating data from REST APIs. They should also follow good version control (Git) and CI/CD practices and have knowledge of infrastructure as code (Terraform or CloudFormation). Additionally, they should be able to work with data quality tools such as Great Expectations or Soda and be familiar with Step Functions, EventBridge, or Kinesis.
Key Skills: AWS, PySpark, API Gateway, Lambda, Fargate, S3, Glue Data Catalog, Great Expectations, Soda, Terraform, CloudFormation, Step Functions, EventBridge, Kinesis.
Required Skills and Qualifications:
* Experience with AWS services, particularly API Gateway, Lambda, and S3.
* Proficiency in PySpark for data processing and analysis.
* Experience integrating with and manipulating data from REST APIs.
* Knowledge of Git and CI/CD practices.
* Familiarity with infrastructure as code using Terraform or CloudFormation.
Benefits:
* Opportunity to work with a cutting-edge technology stack.
* Chance to develop skills in data science and engineering.
* Collaborative team environment.
About the Role:
This is an exciting opportunity to join our team as a Data Science Specialist. You will be responsible for designing, developing, and maintaining APIs using AWS API Gateway, implementing integrations with Lambda or Fargate for data ingestion, developing processing pipelines in PySpark, and managing data in S3 and the Glue Data Catalog. The ideal candidate will have a strong background in data science and engineering, with experience working with big data technologies and cloud-based platforms. If you are passionate about data-driven decision-making and want to be part of a dynamic team, apply now!
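For a sense of the day-to-day work, a PySpark processing pipeline of the kind described above might look roughly like the sketch below. It is illustrative only: the database, table, column, and bucket names are placeholders, and it assumes a Spark environment configured to use the Glue Data Catalog as its metastore.

```python
# Illustrative sketch only: database, table, column, and S3 names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Spark session with Hive support so tables registered in the Glue Data Catalog
# can be queried by name (assumes the cluster uses Glue as its metastore).
spark = (
    SparkSession.builder
    .appName("orders-ingestion-pipeline")  # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

# Read a raw table registered in the Glue Data Catalog (placeholder names).
raw = spark.table("raw_db.orders")

# Example transformation: basic filtering and a daily aggregate.
daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("order_count"),
       )
)

# Write curated output to S3 as partitioned Parquet (placeholder bucket/prefix).
(
    daily.write
         .mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-curated-bucket/orders_daily/")
)
```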