About the Company:
Our client is one of the world's leading independent commodity trading companies, operating worldwide. Known for innovation and integrity, the company is a market leader in the trading, transport, storage, and optimization of energy products. This is an opportunity to work in a global, dynamic environment at the heart of the energy sector.
Role Overview:
This position is a high-priority engagement focused on developing business-facing capabilities, particularly the foundational data needed by the research and trading teams. The Data Engineer will operate within the Hub and Spokes operational model, likely embedded within a Spoke Pod in Geneva to ensure proximity to the business and provide direct support for a critical trading desk project.
Key Responsibilities:
• Time-Series & Curve Mastery: Design, implement, and maintain complex time-series and curve data structures for trading, covering multi-feed validation, unit and calendar normalization, versioned curve stores, and scenario/scalarisation support for quant strategies.
• Streaming Data Pipelines: Architect and implement scalable data ingestion and processing systems, specializing in streaming at scale with technologies such as Kafka and Flink, and ensuring robust handling of stateful operations, exactly-once semantics, and late/out-of-order data.
• Industrialization and Discipline: Apply rigorous engineering discipline, including strong testing practices (unit, contract, and property-based tests), packaging, CI/CD, and GitOps.
• Data Quality and Orchestration: Use orchestration tools (e.g., Airflow, Azure Data Factory, Dagster) and enforce data quality contracts using dbt, Spark, and SQL. Pipelines must achieve Data Quality Compliance > 95%.
• Platform Reusability: Develop reusable ETL/ELT templates, contributing to the goal of achieving a Template Reuse Rate > 70%.
Core Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a quantitative discipline.
• At least 5 years of experience as a Data Engineer, ideally in a cloud-native environment.
• Prior experience in a trading, commodity, energy, or financial services environment.
• Strong expertise in Azure Data Factory, Azure Synapse, SQL Server, and Azure Data Lake.
• Solid experience with SQL and ETL/ELT development.
• Proven experience with data modeling (fact/dimension tables), data warehouse design, and BI support.
• Comfortable working with big data sets and optimizing data delivery and pipeline performance.
• Fluency in English (written and spoken) is mandatory.
• Excellent analytical, problem-solving, and communication skills.
Desirable Skills:
• Domain Fluency: Trades/orders, ETRM/CTRM integration patterns (e.g., using CDC/event adapters), positions/exposures maths, near-real-time/telemetry data.
• Knowledge of data governance, data quality, and metadata management.
• Azure certification (e.g., Azure Data Engineer Associate) is a plus.
Your Data:
By submitting your resume, you agree to the retention and use of your personal data by TSG for recruitment purposes, including sharing with our clients in the context of your application.