
Data Engineer (Medior)

Amsterdam, Netherlands

Job Type

Hybrid (remote / on-site)


Requirements

  • 3+ years of professional experience in a data engineering or backend data role

  • MSc in Computer Science, Data Engineering, Software Engineering, or any other relevant field

  • Strong Python skills (upper-intermediate or better)

  • Solid understanding of relational databases and proficiency in SQL

  • Proven experience with data-processing libraries such as pandas and NumPy

  • Experience designing, building, and maintaining ETL/ELT pipelines

  • AWS experience (S3, Lambda)

  • Good coding practices (PEP 8, unit tests, mocking, logging, debugging, code reviews, GitHub, etc.)

  • Strong verbal and written communication skills in English

  • Living in Amsterdam or willing to relocate


Nice to Have

  • Experience working with time series or event-based data

  • Exposure to MLOps or machine learning workflows

  • Knowledge of data warehousing and dimensional modeling

About the Role

As part of the overall data strategy, the Medior Data Engineer is responsible for designing, implementing, and maintaining scalable data pipelines, ensuring high data availability, quality, and performance. This role plays a key part in enabling analytics, reporting, and machine learning workflows across the company. This position is for you if:

  • You enjoy working across all phases of the data lifecycle: data ingestion, transformation, validation, monitoring, and optimization.

  • You take pride in ensuring data quality, availability, and performance.

  • You enjoy working collaboratively, and you raise your hand quickly when you’re blocked or uncertain.

  • You’re intentional in your work: you plan first, code second.

  • You stand by your architectural decisions and take ownership of the systems you build.

  • You can clearly communicate technical concepts to non-technical colleagues, and enjoy cross-functional collaboration.


Key Responsibilities

  • Designing, building, and maintaining scalable and high-performance ETL/ELT pipelines.

  • Ensuring data integrity, availability, and performance across the data infrastructure.

  • Overseeing and optimizing database performance and reliability.

  • Ensuring high standards of data quality and reliability.

  • Identifying and mitigating risks associated with data processing and storage.

  • Developing and maintaining unit tests, logging, debugging, and code review processes.

  • Utilizing cloud-based services, particularly AWS, to enhance data processing capabilities.

  • Collaborating with cross-functional teams to translate business needs into scalable data engineering solutions.

  • Supporting downstream analytics and machine learning use cases with clean, well-structured data.

  • Processing and managing big data workflows efficiently.

  • Documenting technical implementations and processes for knowledge sharing and collaboration.

  • Adhering to security and compliance policies for data handling.


Key Performance Indicators

  • Data pipeline reliability and performance.

  • Compliance with best practices in coding, security, and data management.

  • Efficient processing and transformation of large-scale datasets.

  • Effective collaboration across teams and successful delivery of data solutions.
