Job title - Senior Engineer, Hive, Hadoop & Spark

Domain - Banking & Financial Services


Location - Melbourne, Australia (visa will be sponsored by the organization)


Reporting to - Lead Engineer


If you are an extraordinary developer who loves to push boundaries and solve complex business problems with creative solutions, then we wish to talk with you. As a Senior Data Engineer, you will work with the Technology team that delivers our Data Engineering offerings at large scale to some of the biggest clients in Australia. The role is responsible for building and maintaining technology services.

As a Senior Data Engineer, you are passionate about data and technology solutions, driven to learn about them and keep up with market evolution. You are hands-on throughout the entire engagement cycle, specializing in modern data solutions including data ingestion/data pipeline frameworks, data warehouse and data lake architectures, cognitive computing and cloud services. You are enthusiastic about all things data, have strong problem-solving and analytical skills, and have a solid understanding of the software development life cycle. For this role you must currently be based in Australia.



Role and responsibilities:


  • Be a technical expert with strong domain knowledge in Big Data technologies such as Hadoop and Spark, covering both infrastructure and data solutions.
  • Understand a client's overall data estate, IT and business priorities, and success measures in order to design implementation architectures and solutions.
  • Understand the needs of the client and how they will impact the design and development of enablement solutions.
  • Independently drive change through to production to support client solutions, as necessary.
  • Assist the business in bringing new data sets together to facilitate impact analysis.
  • Source, stitch and curate large volumes of data from multiple sources across multiple technologies, continually validating that the data remains fit for purpose.
  • Drive continuous improvement of the system, development practices and processes.
  • Review all deliverables to ensure quality and conformance to standards.
  • Extract data and understand how to leverage it to create value and respond to customer requirements for information.
  • Automate application and infrastructure deployments; produce build and deployment automation scripts to integrate services.

Qualifications & Experience



• 6+ years' experience working with the Hadoop ecosystem/EMR and Spark

• Development experience in at least one of these programming languages: Python, Java, Scala

• Extensive experience with Spark optimization techniques and performance tuning

• Strong SQL skills

• Strong analytical and data cleansing skills

• Expert-level understanding of data modelling and data warehousing concepts

• Experience leading a team of data engineers working in an onsite-offshore model

• Solid understanding of end-to-end behaviour of enterprise applications, from front end through to database, authentication, and access control

• Experience with tools such as Bitbucket, JIRA and Confluence, and with test-driven development methodologies

• Banking data experience is a plus

• BTech/MTech or equivalent in Computer Science or a related field


Email: info@whizrobo.com
