Senior Data Engineer

Salary: 80.00 - 100.00
Posted: 15-02-2025
Category: Database, Analytics BI
South Canberra, ACT

Job Description

Location: ACT
Security Clearance: Australian Citizen; must be able to obtain a Baseline clearance.

We are looking for a Data Engineer to join the Digital Transformation Program at the Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES), working across several data and analytics platforms. The successful candidate will develop and optimise data pipelines in Azure Databricks, with a strong focus on Python and SQL, and will bring expertise in Azure Data Factory, Azure DevOps, CI/CD, and Git version control, as well as a deep understanding of Kimball dimensional modelling and Medallion architecture. This role requires strong collaboration skills to translate business requirements into effective technical solutions.

Key Duties and Responsibilities

• Develop, optimise, and maintain data pipelines using Python and SQL within Azure Databricks notebooks.
• Design and implement ETL/ELT workflows in Azure Data Factory, ensuring efficient data transformation and loading.
• Apply Kimball dimensional modelling and Medallion architecture best practices for scalable, structured data solutions.
• Collaborate with team members and business stakeholders to understand data requirements and translate them into technical solutions.
• Implement and maintain CI/CD pipelines using Azure DevOps, ensuring automated deployments and version control with Git.
• Monitor, troubleshoot, and optimise Databricks jobs and queries for performance and efficiency.
• Work closely with data analysts and business intelligence teams to provide well-structured, high-quality datasets for reporting and analytics.
• Ensure compliance with data governance, security, and privacy best practices.
• Contribute to code quality through peer reviews, best practices, and knowledge sharing.

Preferred Skills & Experience

• Strong proficiency in Python for data transformation, automation, and pipeline development.
• Advanced SQL skills for query optimisation and performance tuning in Databricks notebooks.
• Hands-on experience with Azure Databricks for large-scale data processing.
• Expertise in Azure Data Factory for orchestrating and automating data workflows.
• Experience with Azure DevOps, including setting up CI/CD pipelines and managing code repositories with Git.
• Strong understanding of Kimball dimensional modelling (fact and dimension tables, star/snowflake schemas) for enterprise data warehousing (see the second sketch after this description).
• Knowledge of Medallion architecture for structuring data lakes with bronze, silver, and gold layers (see the first sketch after this description).
• Familiarity with data modelling best practices for analytics and business intelligence.
• Strong analytical and problem-solving skills with a proactive approach to identifying and resolving issues.
• Excellent collaboration and communication skills, with the ability to engage both technical and business stakeholders effectively.

Essential Criteria

• Demonstrated experience developing ETL/ELT processes for complex and/or large data movement, transformation, and/or visualisation, particularly in a cloud environment.
• Experience preparing data optimised for query performance in cloud compute engines, e.g. distributed computing engines (Spark), Azure SQL, Python, R.
• Experience working collaboratively within agile development teams and with DevOps practices.
• Strong problem-solving skills, with the ability to analyse and resolve complex integration challenges.
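For illustration only (not part of the role description): a minimal sketch of the Medallion (bronze/silver/gold) pattern named above, as it might look in an Azure Databricks notebook using PySpark and Delta tables. All paths, table names, and columns (raw_sales, sales_bronze, sales_silver, sales_gold, order_id, amount, region) are hypothetical placeholders.

```python
# Minimal Medallion-architecture sketch for a Databricks notebook (PySpark + Delta).
# Every table, path, and column name here is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Databricks provides `spark` automatically; getOrCreate() keeps the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

# Bronze: land the raw source data as-is, adding ingestion metadata.
bronze = (
    spark.read.format("json").load("/mnt/landing/raw_sales")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("sales_bronze")

# Silver: clean and conform - fix types, de-duplicate, drop invalid rows.
silver = (
    spark.table("sales_bronze")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("amount").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("sales_silver")

# Gold: business-level aggregates ready for BI and reporting.
gold = (
    spark.table("sales_silver")
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_sales"),
        F.countDistinct("order_id").alias("order_count"),
    )
)
gold.write.format("delta").mode("overwrite").saveAsTable("sales_gold")
```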
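In the same spirit, a hedged sketch of Kimball-style dimensional modelling: deriving a customer dimension with surrogate keys and a star-schema fact table from the hypothetical sales_silver table above. A production design would persist surrogate keys and handle slowly changing dimensions, which this sketch omits.

```python
# Kimball star-schema sketch (hypothetical names; continues from sales_silver above).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Assumes the hypothetical sales_silver table also carries customer_id and customer_name.
silver = spark.table("sales_silver")

# Dimension table: one row per customer with a generated surrogate key.
dim_customer = (
    silver.select("customer_id", "customer_name", "region")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_key", F.monotonically_increasing_id())
)
dim_customer.write.format("delta").mode("overwrite").saveAsTable("dim_customer")

# Fact table: measures at order grain, referencing the dimension by surrogate key.
fact_sales = (
    silver.join(dim_customer.select("customer_id", "customer_key"),
                on="customer_id", how="left")
    .select("customer_key", "order_id", "order_date", "amount")
)
fact_sales.write.format("delta").mode("overwrite").saveAsTable("fact_sales")
```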
