Azure Data Specialist – Bimeda

Responsible for developing and managing our cloud data infrastructure in Microsoft Azure (Data Factory). This professional will design and optimize data pipelines, integrate information from diverse sources, and ensure data quality, security, and availability for analysis and reporting, while ensuring the efficiency, security, and scalability of corporate data solutions in support of BI, data science, and data modernization initiatives.

Key Responsibilities
- Design and implement scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and other Azure services.
- Create and maintain data models, ETL/ELT processes, and integrations between different data sources.
- Ensure data governance, security, and compliance in accordance with corporate and regulatory policies.
- Collaborate with engineering, BI, and data science teams, as well as business stakeholders, to translate requirements into technical solutions.
- Monitor, diagnose, and improve the performance of data environments and pipelines in Azure.
- Automate data ingestion, transformation, and delivery processes.
- Support the implementation of data lakes, data warehouses, and modern data architecture.
- Stay up to date with best practices and innovations in the Azure data ecosystem and services.
- Data Integration: consolidate structured and unstructured data into analysis-ready formats.
- Pipeline Development: create and optimize ingestion, transformation, and movement pipelines, with a focus on incremental loads.
- Data Platform Management: implement, monitor, and tune solutions using Azure Data Lake Storage Gen2, Azure Synapse Analytics, and Azure Databricks.

Knowledge, Skills, and Abilities Required for the Role
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
- Hands-on experience with Azure tools such as:
  - Azure Data Factory (ADF)
  - Azure Synapse Analytics
  - Azure Data Lake Storage (ADLS)
  - Azure Databricks
  - Azure SQL / SQL Server
- Proficiency in data manipulation and transformation languages: SQL, Python, PySpark, or Scala.
- Knowledge of data modeling (dimensional, relational, and domain-oriented).
- Experience with code versioning (e.g., Git) and data DevOps practices.

Desirable
- Experience with Power BI (data modeling, integration, performance optimization).
- Knowledge of Scala, especially for advanced Spark/Databricks workloads.
- Background in data governance and data quality initiatives.
- Microsoft DP-203 certification (Data Engineering on Microsoft Azure).

Behavioral Skills
- Analytical and solution-oriented thinking
- Proactivity and autonomy
- Good interpersonal communication
- Ability to work in a multidisciplinary team

Seniority level: Entry level
Employment type: Full-time
Job function: Information Technology
Industries: Veterinary