Location:
Other, Eastern Europe
Seniority:
Senior
Technologies:
Data, Data Science, DevOps

Pandora, the world’s largest jewellery brand, is undergoing a significant digital transformation aimed at redefining the global retail experience through data and AI. As part of this initiative, our team is supporting Pandora in building scalable, production-grade machine learning systems that deliver business-critical capabilities, from personalization and inventory optimization to Generative AI applications.

The MLOps Engineer will play a vital role in helping Pandora’s AI and Data Platform teams deploy, monitor, and maintain machine learning models across their cloud infrastructure. You will work closely with Pandora’s internal teams to streamline MLOps pipelines, enhance model observability, and support the operationalization of cutting-edge AI solutions in a fast-moving, cloud-native environment.

Responsibilities:

  • Design, develop, and maintain end-to-end MLOps pipelines to support the deployment, monitoring, and retraining of machine learning models in production.

  • Collaborate with Pandora’s AI, data engineering, and platform teams to ensure smooth integration of ML models into scalable systems.

  • Automate key processes including model validation, testing, and rollout, with a strong focus on reliability and repeatability.

  • Monitor model performance in live environments and implement feedback loops to improve model accuracy and robustness over time.

  • Ensure compliance, security, and cost-efficiency across the ML infrastructure.

  • Support infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Bicep.

  • Contribute to the deployment of Generative AI use cases and assist in the evolution of Pandora’s AI capabilities.

  • Stay current with industry best practices in MLOps, ML system design, and DevOps automation.

Requirements:

  • Proven experience as an MLOps Engineer, ML Engineer, DevOps Engineer, or a similar role with a focus on operationalizing machine learning models.

  • Strong programming skills in Python and hands-on experience with ML frameworks such as TensorFlow or PyTorch.

  • Solid experience with Infrastructure as Code (IaC) tools such as Terraform, Bicep, or ARM templates.

  • Proficiency in containerization (Docker) and orchestration tools (Kubernetes) for scalable deployment.

  • Familiarity with CI/CD practices and tools for automating ML workflows.

  • Experience working with cloud platforms, preferably Azure; experience with AWS or Google Cloud is also valuable.

  • Understanding of model monitoring, model versioning (e.g., MLflow or DVC), and related evaluation metrics.

  • Practical experience with big data processing tools such as Databricks, Apache Spark, Snowflake, or Hadoop.

  • Exposure to Generative AI model deployment is a plus.

  • Understanding of system-level architecture including data pipelines, scaling strategies, and reliability considerations for production ML systems.
