About the job
We are looking for an experienced AI Model Engineer with deep expertise in kernel development, model optimization, fine‑tuning, and GPU acceleration. The engineer will extend the inference framework to support inference and fine‑tuning for language models with a strong focus on mobile and integrated GPU acceleration using Vulkan.
Responsibilities
Implement and optimize custom inference and fine‑tuning kernels for small and large language models across multiple hardware backends.
Implement and optimize full and LoRA fine‑tuning for small and large language models across multiple hardware backends.
Design and extend datatype and precision support (int, float, mixed precision, ternary QTypes, etc.).
Design, customize, and optimize Vulkan compute shaders for quantized operators and fine‑tuning workflows.
Investigate and resolve GPU acceleration issues on Vulkan and integrated/mobile GPUs.
Architect and prepare support for advanced quantization techniques to improve efficiency and memory usage.
Debug and optimize GPU operators (int8, fp16, fp4, ternary).
Integrate and validate quantization workflows for training and inference.
Conduct evaluation and benchmarking (e.g., perplexity testing, fine‑tuned adapter performance).
Conduct GPU testing across desktop and mobile devices.
Collaborate with research and engineering teams to prototype, benchmark, and scale new model optimization methods.
Deliver production‑grade, efficient language model deployment for mobile and edge use cases.
Work closely with cross‑functional teams to integrate optimized serving and inference frameworks into production pipelines for edge and on‑device applications; define clear success metrics (real‑world performance, error rates, scalability, memory usage) and ensure continuous monitoring and iterative refinement for sustained improvements.
Qualifications
Proficiency in C++ and GPU kernel programming.
Proven expertise in GPU acceleration with the Vulkan API.
Strong background in quantization and mixed‑precision model optimization.
Experience and expertise in Vulkan compute shader development and customization.
Familiarity with LoRA fine‑tuning and parameter‑efficient training methods.
Ability to debug GPU‑specific performance and stability issues on desktop and mobile devices.
Hands‑on experience with mobile GPU acceleration and model inference.
Familiarity with large language model architectures (e.g., Qwen, Gemma, LLaMA, Falcon, etc.).
Experience implementing custom backward operators for fine‑tuning.
Experience creating and curating custom datasets.
Demonstrated ability to apply empirical research to overcome challenges in model development.