Streaming Data Engineer — On-site in Geneva (Banking Industry)
Client: Leading company in the Banking Industry (name confidential)
Location: On-site — Geneva, Switzerland
Duration: 12-month contract
Seniority: Senior (8+ years' experience)
Rate: Based on experience
Application deadline: Friday, 10 October 2025
DropaCode is supporting NTT DATA in the search for an experienced Streaming Data Engineer to join their team on-site in Geneva, Switzerland, for a major project with a leading global financial institution.
The client is one of the world's largest and most respected banks. Its business model focuses on commercial banking products and services for retail, SMEs, and corporate clients. With a global presence serving more than 140 million customers across North America, Europe, and South America, the bank operates a network of 14,400 branches and manages over 1 trillion euros in customer assets. It employs 187,000 people worldwide and continues to innovate within the financial sector.
This role focuses on designing, implementing, and maintaining real-time data processing systems that handle the high-volume, high-velocity data streams critical to the client's data-driven decision-making. The Streaming Data Engineer will work closely with data engineers, data scientists, and business stakeholders to build scalable, high-performance pipelines that process streaming data with low latency, so that insights are delivered in near real time to drive operational efficiency and improve customer experience.
Key Responsibilities:
* Real-time Data Pipeline Development: design, implement, and manage scalable real-time data pipelines that process streaming data from various sources, including transactional systems, APIs, and external data providers.
* Data Processing Frameworks: leverage stream processing technologies (e.g., Apache Kafka, Apache Flink, Apache Pulsar) to build reliable and performant data workflows (see the illustrative sketch after this list).
* System Integration: collaborate with backend and front-end teams to ensure seamless integration between real-time data systems and business applications, dashboards, and reporting tools.
* Data Quality & Monitoring: implement monitoring solutions that track the health and performance of streaming pipelines, address issues proactively, and safeguard data quality and integrity.
* Optimization & Performance: optimize streaming data platforms for low-latency processing, high throughput, and efficient resource usage.
* Scalability: design systems that scale horizontally to handle growing data volumes, ensuring high availability and fault tolerance.
* Collaboration with Data Scientists & Analysts: work closely with data scientists, business analysts, and other stakeholders to deliver real-time analytics capabilities, ensuring that insights are actionable and aligned with business needs.
* Compliance & Security: ensure that all data processing complies with industry regulations (e.g., GDPR, PSD2) and internal security standards to safeguard sensitive financial data.
* Automation: automate repetitive tasks such as deployment, monitoring, and scaling to streamline operations and reduce manual overhead.
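For candidates who want a concrete picture of the day-to-day work, the sketch below shows a minimal stream processing topology of the kind described above, written in Java with the Kafka Streams DSL. It is illustrative only: the topic names (payments-raw, payments-enriched), the keying by account ID, and the enrichment step are hypothetical placeholders, not details of the client's actual platform.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PaymentsPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payments-pipeline");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Consume raw payment events (assumed here to be JSON strings keyed by account ID).
        KStream<String, String> payments = builder.stream("payments-raw");

        payments
            // Filter: drop malformed or empty events before they reach downstream consumers.
            .filter((accountId, event) -> event != null && !event.isBlank())
            // Transform/enrich: tag each event with a processing timestamp (a stand-in
            // for a real enrichment step such as a reference-data join).
            .mapValues(event ->
                "{\"processedAt\":" + System.currentTimeMillis() + ",\"event\":" + event + "}")
            // Publish enriched events for business applications, dashboards, and reporting.
            .to("payments-enriched");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Close the topology cleanly on shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

A production pipeline would extend this pattern with schema-aware serdes, windowed aggregations, and stronger delivery guarantees (for example Kafka's processing.guarantee=exactly_once_v2 setting); Flink and Pulsar offer equivalent constructs.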
Required Skills & Qualifications:
* Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
* Proven experience (8+ years) in developing and maintaining real-time streaming data platforms.
* Expertise in stream processing frameworks and technologies, including Apache Kafka, Apache Flink, or Apache Pulsar.
* Strong knowledge of cloud platforms (AWS, Azure, or Google Cloud) and tools such as AWS Kinesis, Google Cloud Pub/Sub, or Azure Stream Analytics.
* Proficiency in programming languages such as Java, Scala, Python, or Go.
* Experience with distributed systems and message-oriented middleware.
* Solid understanding of data integration techniques, including real-time data transformation, filtering, and enrichment.
* Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
* Experience in working with large-scale, high-throughput data environments.
* Knowledge of data security and privacy concerns, particularly in the context of financial data.
* Excellent troubleshooting and problem-solving skills in a complex and high-performance environment.
* Strong communication skills, with the ability to clearly convey technical concepts to both technical and non-technical stakeholders.
Preferred Qualifications:
* Experience with machine learning or AI-based stream analytics.
* Familiarity with data governance and real-time data quality practices.
* Experience with DevOps or CI/CD pipelines in data engineering environments.
* Certifications in cloud platforms or big data technologies (e.g., AWS Certified Big Data - Specialty, Google Cloud Professional Data Engineer).