Requirements
Must have:
- Solid understanding of database optimisation techniques, including parallelisation, partitioning, and indexing
- Strong SQL skills on at least one major database platform (e.g. PostgreSQL, Oracle), including transformations, validation, and troubleshooting
- 5+ years of professional experience in data engineering or data wrangling
- Experience with at least one scripting language, preferably Python or Bash
- Fluency in English; B2 German required
Nice to have:
- Experience with CI/CD pipelines and containerisation (Docker, Kubernetes)
- Hands-on experience with stream processing frameworks (e.g. Kafka Streams, Apache Flink)
Responsibilities:
- Design and implement high-performance data pipelines for a modern, event-driven, API-centric product
- Write efficient SQL queries and stored procedures; own data quality and pipeline reliability
- Build and maintain automated testing setups to monitor pipeline performance and data integrity
Company:
A hands-on role within a growing engineering organisation, with ownership of data infrastructure in a product that demands high availability and data integrity at scale. Flexible and hybrid working arrangements are available, along with some travel within Switzerland and strong investment in continuous learning and professional development.