The Data Engineering Technical Lead will be part of the existing central Data Team, working to take the team's data practices to the next level. You will help refine our current platform while contributing to the design and implementation of our next-generation data platform, applying industry best practices such as the Medallion Architecture, a best-fit collection of cloud-based data technologies, and automation through DevOps.

As technical lead, you will collaborate with leaders, senior engineers, and architects across the broader Engineering team to drive the technical data platform architecture roadmap, in alignment with the technology roadmap. You will then lead the Data Engineers and Data Analysts to implement the data platform architecture effectively and to build data and analytics use cases that meet end-user requirements and demand.
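
For context, the Medallion Architecture mentioned above organises data into progressively refined layers: bronze (raw, as ingested), silver (cleansed and validated), and gold (aggregated, ready for reporting). A minimal pure-Python sketch of the idea (illustrative only; all names are hypothetical, and in practice these layers would be implemented with tools like Spark on Databricks):

```python
# Illustrative Medallion Architecture sketch: bronze -> silver -> gold.
# Hypothetical example, not this team's actual pipeline.

raw_events = [  # "bronze": raw records ingested as-is, including bad rows
    {"user": "a", "amount": "10.5"},
    {"user": "b", "amount": "not-a-number"},
    {"user": "a", "amount": "4.5"},
]

def to_silver(bronze):
    """Cleanse: keep only rows whose amount parses as a number."""
    silver = []
    for row in bronze:
        try:
            silver.append({"user": row["user"], "amount": float(row["amount"])})
        except ValueError:
            continue  # skip (or quarantine) malformed rows
    return silver

def to_gold(silver):
    """Aggregate: total amount per user, ready for reporting."""
    totals = {}
    for row in silver:
        totals[row["user"]] = totals.get(row["user"], 0.0) + row["amount"]
    return totals

print(to_gold(to_silver(raw_events)))  # {'a': 15.0}
```

The value of the pattern is that each layer has a clear contract: downstream consumers only ever read from silver or gold, so raw-data quality issues stay contained in bronze.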

Key Responsibilities:

  • Lead the design, orchestration, and implementation of robust data pipelines for batch and real-time processing.
  • Ensure best practices in data engineering, monitoring, and observability are followed, utilising tools like Databricks, Azure Data Factory, Synapse, or Airflow.
  • Mentor junior team members and foster their growth into experts.
  • Collaborate with executives and stakeholders to scale data platform capabilities and integrate them with other digital products.
  • Ensure the ongoing development and performance management of the Data Engineering team.
  • Maintain a keen focus on customer service and value delivery.
  • Take a leading role in owning the team's systems, platform, products, and services, ensuring they follow appropriate data development practices and have platform and pipeline monitoring and observability in place, while maintaining service delivery SLAs (performance SLOs, data value metrics, security, availability, etc.).
  • Establish sound data practices across all teams by refining and evangelising good processes that facilitate flow and understanding.
  • Work within the team, with our preferred data partner, and with other teams to discuss, decide on, and unblock technical projects.
  • Work with the executive leadership team to ensure the ongoing maturation and scaling of the data platform and data capabilities, including data governance and data management.
  • Work with other engineering teams to ensure integration and interoperability with other XXXXX digital products and platforms, as well as alignment with IT governance and standards.
  • Communicate and present clearly, both verbally and in writing.
  • Lead the team towards growth and success, manage performance, and resolve conflict.

In the Data team, we believe in delivering value along the quickest sustainable path, leaning on automation to make us more efficient and collaborative, and investigating and diagnosing problems thoroughly. We operate as a shared-services team for internal teams and stakeholders. We advocate and promote shared data ownership and responsible usage of data through lean, practical data governance. These values guide our collaboration with stakeholders so that we can optimally meet their analytics and reporting needs and contribute to their strategic success through the insights they gain from XXXXX's data.

What we are looking for:

We are seeking someone who fulfils the requirements below and will enhance our team's capabilities.

Key behaviours and functional skills:

Required experience to be successful in this role:

  • Leadership Experience: 2+ years of experience leading teams, mentoring, and coaching, with a passion for leadership and team growth.
  • Data Engineering Expertise: 5+ years of experience building modern, cloud-native data platforms, with deep hands-on expertise in:
    • Designing scalable and resilient pipelines for big data processing and real-time streaming using technologies like Apache Spark and Databricks.
    • Orchestrating data workflows using tools like Airflow, Databricks Jobs, Azure Synapse or Azure Data Factory.
    • Working with streaming platforms like Apache Kafka, RabbitMQ, or Azure Event Hubs for real-time data processing.
    • Developing metadata-driven pipelines for flexible and dynamic data workflows.
    • Applying data security, governance, metadata management, and data architecture principles.
    • Using Linux, object-oriented Python, and scripting for data automation.
    • Implementing DevOps, DataOps, and CI/CD pipelines with best practices in version control, testing, and deployment.
  • Cloud Technology: 3–5 years of experience with Microsoft Azure (Azure Synapse, Azure Data Factory, Databricks); exposure to other cloud platforms (AWS/GCP) is a plus.
  • Real-Time Processing: Hands-on experience building and maintaining streaming architectures using Kafka, Event Hubs, or similar technologies.
  • BI & Visualization: 2+ years working with tools like Power BI, Grafana, or other visualization platforms, focusing on secure and optimised deployments.
  • Monitoring & Observability: Familiarity with tools like Application Insights, Azure Monitor, and Grafana for tracking pipeline health, logging, and optimising resource usage.
  • Leadership Skills: Proven ability to lead teams, drive performance, resolve conflicts, and keep the team aligned.
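
The "metadata-driven pipelines" requirement above refers to pipelines whose steps are described as configuration rather than hard-coded, so new workflows can be added by editing metadata alone. A minimal sketch of the pattern in plain Python (illustrative only; all names are hypothetical, and in a real platform the metadata would live in a catalog or JSON config consumed by a tool such as Azure Data Factory or Airflow):

```python
# Metadata-driven pipeline sketch: transformation steps are data, not code.
# All names are illustrative.

# Registry of reusable transforms, keyed by name.
TRANSFORMS = {
    "drop_nulls": lambda rows, col: [r for r in rows if r.get(col) is not None],
    "uppercase": lambda rows, col: [{**r, col: r[col].upper()} for r in rows],
}

# The pipeline definition is metadata (could be loaded from JSON/a catalog).
pipeline_metadata = [
    {"transform": "drop_nulls", "column": "name"},
    {"transform": "uppercase", "column": "name"},
]

def run_pipeline(rows, metadata):
    """Apply each configured transform step in order."""
    for step in metadata:
        rows = TRANSFORMS[step["transform"]](rows, step["column"])
    return rows

data = [{"name": "alice"}, {"name": None}, {"name": "bob"}]
print(run_pipeline(data, pipeline_metadata))
# [{'name': 'ALICE'}, {'name': 'BOB'}]
```

The design choice here is separation of the *what* (the metadata) from the *how* (the transform registry), which keeps workflows flexible and reviewable without code changes.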

Additional Skills/Experience (Beneficial):

  • Data Model Design: Familiar with data lakes, data warehousing, and Kimball Dimensional Modelling principles.
  • Data Development Lifecycle (DDLC): Experience guiding teams from requirements to deployment.
  • Mentorship & Documentation: Ability to coach team members and create high-quality documentation to support knowledge transfer.
  • DevOps & Security: Experience with DevEx in Agile environments and working knowledge of the ISO 27001 security framework.