Data Engineer / DevOps Engineer
Strong people, strong brands. Engineering solutions for efficient, group-wide data exploitation is at the heart of the company's transformational media strategy. Data is behind everything our users engage with on television, radio, in print, and online. It's up to the architecture built by our Data Platform Engineers to keep that data flowing.
As a Cloud Data Platform Engineer you are a specialist in building services that bring a variety of internal and external data sources together. You build the tools and services that engineers, analysts, and data scientists need to ingest large volumes of data, perform reliable ETL on them, analyse them reproducibly, and run predictions in production, whatever their nature, in both batch and streaming. Others rely on your infrastructure and processes 24 hours a day, 7 days a week to meet exacting business demands. As a Cloud Data Platform Engineer you ensure the reliability and stability the rest of the organisation needs to meet and exceed broad business objectives. You understand how to balance the costs of running cloud processes against the benefits of reliable speed of delivery.
We expect a mindset of continually improving production systems. You understand what makes a good Service Level Indicator and how to set and measure appropriate Service Level Objectives. You understand how to alert people to real problems without fatiguing them, and what the appropriate reactions are depending on the criticality of the events. We need creative development solutions to hard operational problems. Much of our focus is on building infrastructure and eliminating toil. We live by our post-mortems and iteratively improve the lives of the engineers that we serve.
How will you make a difference?
- Co-develop and jointly operate the cloud-based data platform, from inception and design through deployment, operation, and refinement
- Support internal customers (engineers) in designing cost-efficient data flows, help them improve their monitoring capabilities, and ensure best practices are followed
- Maintain infrastructure as code and ensure overall system health
- Oversee the automatic scaling of evolving systems
- Push for changes to our communities of practice that improve reliability and velocity of business insight and response
Your key strengths
- Practise best-effort incident response and blameless post-mortems
Your must-have knowledge and experience:
- Two or more years implementing highly available, scalable, self-healing systems on big data platforms (e.g. Cloudera, Hortonworks, MapR, AWS, Google Cloud)
- Experience with at least one cloud provider, with a preference for AWS
- Experience developing in at least one of the following in the context of data engineering: Scala, Python, Go, Java, Shell scripting
- Understanding of modern development and operations processes and methodologies
- DevOps experience (setting up CI/CD pipelines, provisioning systems, etc.)
Nice-to-have knowledge and experience:
- Experience building highly automated infrastructures
- Experience implementing and managing continuous delivery systems and methodologies
- Expertise in designing, analysing, and troubleshooting large-scale distributed systems
- Experience defining and deploying monitoring, metrics, and logging systems, and automated security controls
- Ability to debug and optimise code and automate routine tasks
- A systematic problem-solving approach, coupled with strong communication skills
- Drive to deliver value and provide excellent customer service
You either hold an AWS certification or are willing to achieve one within 6 months (minimum AWS Certified Associate level).