Data Engineer, Sydney

Employment Type: Permanent (Full Time)
Business Area: Operations & Commercial
Division: Advanced Analytics
Location: Barangaroo, New South Wales, AU

About GrainCorp
What did you have for breakfast today?
Whether it’s the flour in your toast or the grain in your cereal, it’s highly likely that GrainCorp helped get it onto your plate!  As we find new ways to connect rural communities with food, animal feed and industrial customers around the world, we are proud to say we’re leading the way in sustainable agriculture.


About the role
GrainCorp is currently seeking a Data Engineer to work with a motivated and collaborative team, delivering the data, infrastructure and system activities that support the success of advanced analytics use cases. The responsibilities of the role include:
•    Transform the design and product vision into working products for end users; foster a culture of sharing, re-use, scalability, stability and user-first design.
•    Design, develop, optimise and maintain data engineering solutions for analytical use cases, including architecture, data products/pipelines and infrastructure, in line with industry best practices.
•    Keep learning new techniques and understand how to apply evolving industry best practices to individual projects.
•    Support the team’s continuous growth.

About your experience
Candidates will be able to demonstrate experience delivering data engineering solutions in collaboration with other teams. Candidates will also display:
•    5+ years of demonstrated experience delivering data engineering solutions for analytics use cases, with strong problem-solving and communication skills.
•    Demonstrated experience with DataOps, InfraOps, DevOps and (optionally) MLOps best practices (Azure preferred), plus development experience in at least one programming language (Python preferred).
•    Data warehouse design and implementation to support BI and analytical reporting.
•    Experience in SAP/Azure integration and Gen-AI solution implementation.
•    Bachelor’s degree in Computer Science, MIS or Engineering preferred.

About your skills

Technical:
•    ETL using PySpark, Python, SQL, etc. to create data products for AI/ML and BI consumers.
•    Data warehouse / lakehouse design and implementation.
•    Delta Lake storage and Apache Spark engine performance optimisation.
•    Azure: Databricks (Data Engineering and Gen-AI), IaC (Bicep preferred), DevOps, Azure ML workspaces, VNet, etc.
•    SAP BTP (DataSphere, SAC, etc.)
Non-technical:
•    Good problem solving and effective communication.
•    Good team player with agile project delivery management skills.

Ready to apply?
The next steps are easy! Simply submit your application and one of our team will reach out for a chat to discuss your background in more detail. We look forward to speaking with you.
Candidates who progress will be asked to provide proof of working rights (citizenship or permanent residency) and suitable professional references.