* Design, build, and operate scalable AI-powered features (e.g., generative AI assistants, RAG pipelines, predictive analytics) used by real users in production
* Integrate, evaluate, and monitor LLMs, including prompting strategies, quality metrics, and observability to ensure reliable outcomes
* Collaborate closely with product, design, front-end, and infrastructure teams to turn ambiguous requirements into valuable AI solutions
* Communicate complex AI and engineering concepts in clear, simple terms to support cross-functional decision-making
* Own the end-to-end lifecycle of AI systems, from experimentation and MLOps pipelines to deployment, monitoring, and continuous improvement
Requirements
* BSc/MSc in Computer Science, Software Engineering, or similar
* 5+ years of professional software engineering experience, including cloud services in production
* Proven experience with LLMs (prompting, evaluation, monitoring)
* Strong skills in Python or a statically typed language such as TypeScript or Java
* Solid understanding of cloud-native architectures (AWS or equivalent), Docker, and Kubernetes/ECS
* Hands-on MLOps experience (CI/CD for models, pipelines, evaluations, feature stores)
* Strong data intuition – comfortable digging into logs, metrics, and quality signals
* A product-oriented mindset and empathy for end users