About the job
You'll work on the C++ layer that powers local AI, porting and enhancing inference engines such as llama.cpp and ONNX Runtime to run efficiently on edge devices. Your focus is the runtime: making models load faster, run leaner, and perform well across different hardware. You'll ensure the inference layer is stable, optimized, and ready for integration with the rest of the stack.
This role is for engineers who want to work close to the metal, enabling private and fast on-device AI without relying on cloud infrastructure.
Responsibilities
Deploy machine learning models to edge devices using frameworks such as llama.cpp, ggml, and ONNX Runtime.
Collaborate closely with researchers on coding, training, and transitioning models from research to production environments.
Integrate AI features into existing products, enriching them with the latest advancements in machine learning.
Qualifications
Excellent programming skills in C++; experience with JavaScript is a bonus
Strong experience with the llama.cpp and ggml inference engines, enabling deployment of models to specific GPU architectures
Good understanding of deep learning concepts and model architectures
Experience with transformers and LLMs
Demonstrated ability to rapidly assimilate new technologies and techniques
A degree in Computer Science, AI, Machine Learning, or a related field, complemented by a solid track record in AI R&D