Building the bank of tomorrow requires a unique combination of skills and collaboration.
We are the Swiss Leader in Online Banking, providing trading, investing, and banking services to over 500,000 clients through our digital platforms.
* We have a team of over 1,000 employees who work in a flexible environment with no dress code and collaborate in multicultural teams.
* Our employees make a meaningful contribution to the industry, broaden their skill sets, and advance their careers in a fast-paced environment.
* We welcome candidates from diverse backgrounds, experiences, and perspectives to join our organization and contribute to our shared success.
Job Description
We are seeking a versatile Machine Learning Operations Engineer to establish ML Ops practices within our organization.
This role is critical in taking models handed over by data scientists, deploying them to production environments, and ensuring efficient model maintenance and continuous improvement.
Key Responsibilities:
* Collaborate with data scientists and ML engineers to deploy models into production environments.
* Work with the IT department to define current and future infrastructure requirements.
* Develop CI/CD pipelines tailored for ML workflows, automating model versioning, testing, and deployment.
* Implement monitoring systems to track model performance, data drift, and system health, enabling proactive maintenance and scalability (see the sketch after this list).
* Establish ML Ops best practices, laying the foundation for future growth and scalability.
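To give a flavour of the day-to-day work, here is a minimal, illustrative sketch of the kind of data-drift check this role would automate in production monitoring. It is not part of our stack: the choice of NumPy, the population stability index (PSI) metric, the feature values, and the 0.2 threshold are assumptions for illustration only.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference (training) sample and a production sample
    of the same feature; a common signal for data drift."""
    # Bin edges are taken from the reference distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log of zero.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_feature = rng.normal(0.0, 1.0, 10_000)  # reference distribution
    prod_feature = rng.normal(0.3, 1.2, 10_000)   # shifted production data
    psi = population_stability_index(train_feature, prod_feature)
    # Rule of thumb (assumption): PSI above roughly 0.2 warrants investigation.
    print(f"PSI = {psi:.3f}")
```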
Qualifications:
* Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
* At least 2 years of experience as an ML Ops Engineer or in a similar role, preferably in dynamic and growing teams.
* Experience working closely with data scientists to transition models from development to production.
* Strong programming skills in Python, with knowledge of Java being an asset.
* Proficiency in ML frameworks (TensorFlow, PyTorch, Scikit-learn).
* Experience with generative AI models, including proprietary APIs and on-premise retrieval-augmented generation (RAG) systems.
* Strong experience with cloud platforms.
* Proficiency in CI/CD tools (GitLab CI or similar).
* Expertise in containerization and orchestration (Docker, Kubernetes).
* Experience with data pipeline orchestration tools (e.g., Airflow) is an asset.
* Familiarity with data streaming and monitoring tools like Kafka and Elasticsearch is an asset.
* A high degree of autonomy and a proactive approach.
* Versatility and adaptability to take on new challenges in a fast-growing environment.
* Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
* A problem-solving mindset with a self-starter attitude.