Senior Data Engineer

  • Type: Time and materials (régie)
  • Budget: Rate depending on profile
  • Duration (months): 6
  • Country: United Kingdom
  • Remote: NO
  • Offers: 0
  • Average: Rate depending on profile

Published on 22 May 2024

Active

Mission description

Senior Data Engineer

Location: London, UK

Duration: 9 months

Role / Position Overview

As a Senior Data Engineer, you will lead the team’s thinking when it comes to building, deploying, and hosting data engineering pipelines and solutions in production. You will be responsible for designing and implementing scalable data pipelines, managing data infrastructure, and tooling, and working closely with stakeholders to deliver quality solutions.

This role will be a key contributor to the ongoing development of our data architecture and data governance capabilities. We want to minimise manual processes, so strong software engineering proficiency is key to this role.

Key Responsibilities:

– Design and implement scalable data pipelines that extract, transform and load data from various sources into the data lakehouse (a minimal illustrative sketch follows this list).

– Help teams push the boundaries of analytical insights, creating new product features using data.

– Develop and automate large-scale, high-performance data processing systems (batch and real-time) to drive growth and improve product experience.

– Develop and maintain infrastructure tooling for our data systems.

– Collaborate with software teams and business analysts to understand their data requirements and deliver quality, fit-for-purpose data solutions.

– Ensure data quality and accuracy by implementing data quality checks, data contracts and data governance processes.

– Contribute to the ongoing development of our data architecture and data governance capabilities.

– Develop and maintain data models and data dictionaries.
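
As an illustration of the first responsibility above, here is a minimal sketch of a batch ETL job that loads source data into a Delta Lake table with PySpark. All paths, table layouts and column names are hypothetical assumptions, not details from the posting.

```python
# Minimal sketch of a batch ETL job loading raw files into a Delta Lake table.
# Paths, columns and partitioning are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")
    # Delta Lake support; assumes the delta-spark package is on the classpath
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Extract: read raw CSV exports from a landing zone (hypothetical path)
raw = spark.read.option("header", True).csv("s3://landing-zone/orders/")

# Transform: basic typing, a derived partition column, and a simple quality filter
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .filter(F.col("order_id").isNotNull())
)

# Load: append into a partitioned Delta table in the lakehouse
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("s3://lakehouse/silver/orders"))
```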

Skills and Qualifications:

– Significant experience with data modelling, ETL processes, and data warehousing.

– Significant exposure to, and hands-on experience with, at least two of the following programming languages: Python, Java, Scala, Go.

– Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks.

– Experience working with open table/storage formats such as Delta Lake, Apache Iceberg or Apache Hudi.

– Experience developing and managing real-time data streaming pipelines using change data capture (CDC), Kafka and Apache Spark (see the streaming sketch after this list).

– Experience with SQL and database management systems such as Oracle, MySQL or PostgreSQL.

– Strong understanding of data governance, data quality, data contracts, and data security best practices.

– Exposure to data governance, catalogue, lineage and associated tools.

– Experience in setting up SLAs and contracts with the interfacing teams.

– Experience working with and configuring data visualisation tools such as Tableau.

– Ability to work independently and as part of a team in a fast-paced environment.

– Experience working in a DevOps culture and a willingness to drive it. You are comfortable working with CI/CD tools (ideally IBM UrbanCode Deploy, TeamCity or Jenkins), monitoring tools and log aggregation tools. Ideally, you would have worked with VMs and/or Docker and orchestration systems like Kubernetes/OpenShift.
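
As a hedged illustration of the streaming requirement above, the following sketch consumes CDC events from a Kafka topic with Spark Structured Streaming and appends them to a Delta table. The topic name, event schema (assumed Debezium-style) and paths are assumptions, not details from the posting.

```python
# Minimal sketch: read CDC events from Kafka and stream them into a Delta table.
# Assumes the spark-sql-kafka connector and delta-spark are on the classpath.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("orders-cdc-stream").getOrCreate()

# Assumed schema of a simplified Debezium-style change event payload
event_schema = StructType([
    StructField("op", StringType()),        # c = create, u = update, d = delete
    StructField("order_id", StringType()),
    StructField("amount", StringType()),
    StructField("ts_ms", LongType()),
])

# Read change events from a Kafka topic (broker address and topic are hypothetical)
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "cdc.orders")
         .load()
)

# Kafka values arrive as bytes: decode and parse the JSON payload into columns
parsed = (
    events.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*")
          .withColumn("event_time", (F.col("ts_ms") / 1000).cast("timestamp"))
)

# Write the parsed stream to a Delta table, with checkpointing for fault tolerance
query = (
    parsed.writeStream
          .format("delta")
          .option("checkpointLocation", "s3://lakehouse/_checkpoints/orders_cdc")
          .outputMode("append")
          .start("s3://lakehouse/bronze/orders_cdc")
)
query.awaitTermination()
```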

Morgan McKinley is acting as an Employment Agency and references to pay rates are indicative.

BY APPLYING FOR THIS ROLE YOU ARE AGREEING TO OUR TERMS OF SERVICE WHICH TOGETHER WITH OUR PRIVACY STATEMENT GOVERN YOUR USE OF MORGAN MCKINLEY SERVICES.

Required Technical Skills

Docker, Java, Service

Required Functional Skills

Design, DevOps, ETL, PostgreSQL, Services

About the Client

Frédérique
14,326 mission(s) published, 0 deal(s) won
FREELANCER BIDDING (0)

There are no offers.