Be an essential part of everyday life as a Big Data Architect.
We are looking for a talented individual to join our AI, Data & Analytics global centre of excellence.
Your role will be to provide best-fit architectural solutions to shape our global data ecosystem.
You will work with key stakeholders, business partners, IT counterparts and enterprise architects to identify use-case requirements, define architectures, support the development and delivery of data products and promote data quality.
Key Responsibilities:
* Scout for new technologies, execute proofs of concept (PoCs) and manage delivery of minimum viable products (MVPs) for innovative solutions, exploring new technology in cloud and Big Data environments.
* Understand data analytics use-cases: Be it Reporting, GenAI or a predictive Machine Learning algorithm, you will need to understand the use-case requirements and propose and design a technology/data architecture that fulfils them.
* Understand applications & data sources: Although the application teams are responsible for providing the details of application data, you must understand where data is created and maintained.
* Design data movement: You will lead and coordinate data engineering efforts around data movement, defining the source and destination of each step, how the data is transformed as it moves, and any aggregations or calculations applied.
* Define Data Products: Data Products will draw together data from various source systems across the domains. Some source systems will expose data via a database or API, while others will push data in real time.
* Work in conjunction with the Data Engineering Team to design, develop, test and review data ingestion and processing pipelines that meet high quality criteria for stability, performance, cost effectiveness and resiliency.
* Engage in architectural discussions and workshops with stakeholders and customers to understand their business and technical needs and develop tailored technical architectures and solutions in the Cloud, focusing on data engineering, data lakes, lakehouses, business intelligence and machine learning/AI.
* Cost Optimization: You will continuously work to optimize run costs, both at the platform level and by ensuring that our pipelines and applications are cost effective.
Requirements:
* University Degree in Computer Science, Information Systems, Statistics, or related field.
* Minimum 7 years of experience in the IT industry with expert knowledge of on-premises and/or cloud Data Management technologies.
* At least 5 years of experience designing, developing, and supporting Big Data solutions for data lakes and data warehouses.
* Expertise in cloud-based Big Data solutions is required, preferably with Azure Data Lake and the related technology stack: ADLS Gen2, Spark/Databricks, Delta Lake, Kafka/Event Hubs, Stream Analytics, Azure Data Factory, Azure DevOps, etc.
* Experience with designing and running Data Warehouses and/or Lakehouses, e.g. Databricks SQL, Snowflake, Synapse Analytics, Microsoft Fabric, etc.
* Comfortable understanding and writing complex SQL queries in Data Analytics projects.
* Excellent Python/Scala and Spark programming skills.
* Solid understanding of delivery methodology and process.