Bechtle is Germany’s largest IT system house and a leading IT e-commerce provider in Europe. Corporate and public clients of all shapes and sizes rely on our unique blend of streamlined IT procurement and end-to-end services. That’s how we drive your future. On site. In Europe. Around the world.
Our mission
For an ongoing project, with an assignment running from now until the end of September 2026, we are looking for a pragmatic Data Analyst who combines strong technical craftsmanship with coordination and team leadership. The position is split roughly 50/50 between administrative/coordination duties and hands‑on engineering: you will keep projects on track, maintain clear documentation and governance, and also design, build and optimise large‑scale data workflows using functional programming approaches.
Data Analyst (100%)
Responsibilities
Operational leadership & coordination
* Lead and coordinate the team’s day‑to‑day data activities and stakeholder communication.
* Organise, track and report on engineering tasks, timelines and deliverables.
* Keep documentation up to date for pipelines, workflows and system landscapes.
* Ensure processes and outputs comply with internal standards and data governance.
* Act as a bridge between technical and non‑technical stakeholders and drive continuous process improvements.
Technical engineering
* Design, implement and maintain distributed, resilient data applications with a functional programming mindset.
* Develop and tune large‑scale Spark workloads in Databricks (or equivalent frameworks), ensuring performance and elasticity.
* Build robust, testable ETL/data transformation pipelines and production‑grade code in Python.
* Apply strong knowledge of runtime environments, execution contexts and functional design to deliver predictable systems.
* Contribute to architecture discussions for parallel and distributed data processing.
Requirements
Must‑have qualifications
* Proven hands‑on experience with functional programming paradigms applied to ETL or data pipeline development.
* Practical experience with Apache Spark (primary requirement) or comparable technologies (Flink, Hadoop, Kafka Streams).
* Familiarity with actor-model systems (e.g., Akka / Apache Pekko), map‑reduce concepts, or category-theory-inspired libraries.
* Solid understanding of runtime/execution models (compiled, JIT, interpreted) and pure function design.
* Proficient in Python and at least one of Scala/Java, C/C++ or Rust.
* Experience working in project environments with planning, coordination and multi‑disciplinary teamwork.
Nice to have
* Prior work with Databricks in production.
* Experience with Cats / Cats Effect or similar functional libraries.
Key information
* Start: immediately
* Duration: until 30.09.2026
* Contract model: temporary
Note
* International applicants can only be considered if they are from EU-27 countries or hold a valid work permit for Switzerland.
* Billing through your own company is not possible.
* We do not consider applications via recruitment agencies.