Lead Big Data Engineer

Salary: 100.00 - 125.00
Posted: 06-03-2025
Category: Architecture, Planning & Construction
Banana Shire

Job Description

The project focuses on building a modern cloud-based data processing platform that enables efficient management and real-time analysis of large datasets. Key aspects include implementing ETL/ELT pipelines, optimizing data processing, and ensuring compliance with industry regulations. The solution leverages Azure, Apache Spark, and a Data Lakehouse architecture to support strategic business decisions.

Responsibilities:
- Engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using cloud (Azure) data platform infrastructure effectively.
- Transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques.
- Develop, train, and apply data engineering techniques to automate manual processes and solve challenging business problems.
- Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements.
- Build observability into our solutions, monitor production health, help resolve incidents, and remediate the root causes of risks and issues.
- Understand, represent, and advocate for client needs.
- Codify best practices and methodology, and share knowledge with other engineers at UBS.
- Shape the Data and Distribution architecture and technology stack within our new cloud-based data lakehouse.
- Be a hands-on contributor and senior lead in the big data and data lake space, capable of collaborating on and influencing architectural and design principles across batch and real-time flows.
- Bring a continuous-improvement mindset, always on the lookout for ways to automate and reduce time to market for deliveries.
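To illustrate the kind of layered pipeline the role describes, here is a minimal sketch of the medallion (bronze/silver/gold) refinement pattern. It uses plain Python dicts as hypothetical stand-ins for the layers; a real implementation would write Delta tables on ADLSv2 via PySpark or Databricks, and all record and column names here are illustrative assumptions, not part of the posting.

```python
from datetime import datetime, timezone

def ingest_bronze(raw_events):
    """Bronze layer: land raw records as-is, stamped with ingestion metadata."""
    ts = datetime.now(timezone.utc).isoformat()
    return [{**e, "_ingested_at": ts} for e in raw_events]

def refine_silver(bronze_rows):
    """Silver layer: cleanse and deduplicate; drop rows missing the key."""
    seen, silver = set(), []
    for row in bronze_rows:
        key = row.get("order_id")
        if key is None or key in seen:
            continue  # discard malformed or duplicate records
        seen.add(key)
        silver.append({"order_id": key, "amount": float(row.get("amount", 0))})
    return silver

def aggregate_gold(silver_rows):
    """Gold layer: business-level aggregate ready for reporting."""
    return {"order_count": len(silver_rows),
            "total_amount": sum(r["amount"] for r in silver_rows)}

raw = [{"order_id": 1, "amount": "10.5"},
       {"order_id": 1, "amount": "10.5"},  # duplicate record
       {"amount": "3.0"},                  # missing primary key
       {"order_id": 2, "amount": "4.5"}]
gold = aggregate_gold(refine_silver(ingest_bronze(raw)))
# gold == {"order_count": 2, "total_amount": 15.0}
```

The point of the layering is that each stage is independently observable and replayable, which is what the observability and incident-remediation responsibilities above depend on.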
Mandatory Skills:
- Experience building data processing pipelines using various ETL/ELT design patterns and methodologies on Azure data solutions, including ADLSv2, Azure Data Factory, Databricks, Python, and PySpark.
- Experience with at least one of the following languages: Scala/Java or Python.
- Deep understanding of the software development craft, with a focus on cloud-based (Azure), event-driven solutions and architectures, especially Apache Spark batch and streaming and data lakehouses using the medallion architecture.
- Knowledge of Data Mesh principles is an added plus.
- Ability to debug using tools such as the Ganglia UI; expertise in optimizing Spark jobs.
- Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate datasets.
- Expertise in creating data structures optimized for storage and various query patterns, e.g. Parquet and Delta Lake.
- Experience with traditional data warehousing concepts (Kimball methodology, star schema, slowly changing dimensions) and ETL tools (Azure Data Factory, Informatica).
- Experience in data modelling with at least one database technology, such as a traditional RDBMS (MS SQL Server, Oracle, PostgreSQL).
- Understanding of information security principles to ensure compliant handling and management of data.

Nice-to-Have Skills:
- Ability to clearly communicate complex solutions.
- Strong problem-solving and analytical skills.
- Working experience with Agile methodologies (Scrum).

Languages:
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: IT Services and IT Consulting
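The warehousing concepts listed above include slowly changing dimensions. As a rough sketch of what an SCD Type 2 merge does, here is a stdlib-only Python version; in practice this would be a Delta Lake MERGE in Databricks, and the `customer_id`/`city` columns and function name are illustrative assumptions.

```python
from datetime import date

def scd2_apply(dim_rows, updates, as_of):
    """SCD Type 2: close out changed current rows, append new current versions."""
    current = {r["customer_id"]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for upd in updates:
        live = current.get(upd["customer_id"])
        if live is not None and live["city"] == upd["city"]:
            continue  # attribute unchanged: no new version needed
        if live is not None:
            live["is_current"] = False  # expire the old version
            live["valid_to"] = as_of
        out.append({"customer_id": upd["customer_id"], "city": upd["city"],
                    "valid_from": as_of, "valid_to": None, "is_current": True})
    return out

dim = [{"customer_id": 1, "city": "Zurich",
        "valid_from": date(2024, 1, 1), "valid_to": None, "is_current": True}]
dim = scd2_apply(dim, [{"customer_id": 1, "city": "Basel"}], date(2025, 3, 6))
# dim now holds two versions: the expired Zurich row and a current Basel row.
```

Keeping the full version history like this is what lets a star-schema fact table join to the dimension as it looked on any given date.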

