Responsibilities

- Design, build, and operate scalable AI-powered features (e.g. Generative AI assistants, RAG, predictive analytics) used by real users in production
- Integrate, evaluate, and monitor LLMs, including prompting strategies, quality metrics, and observability, to ensure reliable outcomes
- Collaborate closely with product, design, front-end, and infrastructure teams to turn ambiguous requirements into valuable AI solutions
- Communicate complex AI and engineering concepts in clear, simple terms to support cross-functional decision-making
- Own the end-to-end lifecycle of AI systems, from experimentation and MLOps pipelines to deployment, monitoring, and continuous improvement

Requirements

- BSc/MSc in Computer Science, Software Engineering, or similar
- 5+ years of professional software engineering experience, including cloud services in production
- Proven experience with LLMs (prompting, evaluation, monitoring)
- Strong skills in Python or a strongly typed language (TypeScript or Java)
- Solid understanding of cloud-native architectures (AWS or equivalent), Docker, and Kubernetes/ECS
- Hands-on MLOps experience (CI/CD for models, pipelines, evaluations, feature stores)
- Strong data intuition: comfortable digging into logs, metrics, and quality signals
- A product-oriented mindset and empathy for end users

Maybe not for you, but for someone else?