Key Responsibilities
· Writing defensive, fault-tolerant, and efficient code for production-level data processing systems (a minimal sketch of this style follows this list).
· Configuring and deploying software using tools such as Spark, Hadoop, Scala, and Elasticsearch; our platform is hosted on both private and public virtual clouds, including Google Cloud, Microsoft Azure, and Amazon Web Services (AWS).
· Collaborating with both our solution architects and our R&D engineers to champion solutions and standards for complex big data challenges.
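For illustration only, here is a minimal Scala sketch of the defensive, fault-tolerant style described above (the Record type and parseRecord helper are hypothetical, not part of our platform): malformed records are collected for later inspection rather than aborting the whole batch.

import scala.util.{Try, Success, Failure}

object DefensiveBatchExample {
  // Hypothetical record type and parser, purely for illustration.
  final case class Record(id: Long, value: Double)

  def parseRecord(line: String): Try[Record] = Try {
    val parts = line.split(",")
    require(parts.length == 2, s"expected 2 fields, got ${parts.length}")
    Record(parts(0).trim.toLong, parts(1).trim.toDouble)
  }

  // Defensive batch processing: bad lines are captured alongside the
  // error that caused them instead of failing the entire run.
  def processBatch(lines: Seq[String]): (Seq[Record], Seq[(String, Throwable)]) = {
    val parsed = lines.map(line => line -> parseRecord(line))
    val good = parsed.collect { case (_, Success(r)) => r }
    val bad  = parsed.collect { case (line, Failure(e)) => line -> e }
    (good, bad)
  }

  def main(args: Array[String]): Unit = {
    val (records, errors) = processBatch(Seq("1, 2.5", "oops", "2, 3.0"))
    println(s"parsed: $records")             // Record(1,2.5), Record(2,3.0)
    println(s"failed: ${errors.map(_._1)}")  // the "oops" line
  }
}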
Required Qualifications
· Hands-on technical development skills, with at least 18 months of industry experience in a data engineering role or equivalent.
· Experience with a variety of modern development tooling (e.g., Git, Gradle, Nexus) and with technologies supporting automation and DevOps (e.g., Jenkins, Docker, and Bash scripting).
· Knowledge of testing libraries for common programming languages (such as ScalaTest or equivalent).
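For context, here is a minimal ScalaTest sketch of the kind of unit test this requirement refers to, assuming ScalaTest 3.x on the classpath (the clamp function under test is hypothetical, purely for illustration):

import org.scalatest.funsuite.AnyFunSuite

// Hypothetical function under test, purely for illustration.
object MathUtil {
  def clamp(x: Int, lo: Int, hi: Int): Int = math.max(lo, math.min(hi, x))
}

// A minimal ScalaTest 3.x suite using the FunSuite style.
class MathUtilSuite extends AnyFunSuite {
  test("clamp keeps values inside the range") {
    assert(MathUtil.clamp(5, 0, 10) == 5)
    assert(MathUtil.clamp(-3, 0, 10) == 0)
    assert(MathUtil.clamp(42, 0, 10) == 10)
  }
}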