* As part of a DevOps team, you are responsible for developing and maintaining an AWS-based data infrastructure and pipelines.
* You design, implement and optimize Spark ETL to ensure efficient data processing, and you support the gradual operationalization of machine learning models in the long term.
* You work closely with data owners and data analysts to develop and maintain robust and reliable data pipelines.
* You analyse requirements, design data products and implement them in our big data stack.
* Through process automation, you ensure trouble-free operation and continuously improve our development stack.
* In close cooperation with business stakeholders and the architecture team, you align technical solutions directly with business needs.
* For this role, you have a university degree in IT or equivalent professional experience, as well as extensive specialist knowledge in data processing with Apache Spark, SQL and Python.
* Ideally, you have experience in software development (Scala, Java) and AWS analytics services (Glue, SageMaker, Athena, Lambda, Batch).
* You are interested in CI/CD, infrastructure-as-code (Terraform) and DevOps best practices.
* You thrive in an agile work environment and have a growth-oriented mindset.
* You have a good written and spoken command of German or French and good proficiency in English.
* 6 weeks' holiday
* Parental leave
* Mobile and flexible working
* Fair employment conditions
* Half Fare Travelcard or contribution towards GA
* Support with basic and advanced training
* Staff vouchers