For a large financial client, we are looking for 4 Big Data Engineers.
You will be working in the client's Data Engineering team as a Big Data Engineer. Here you'll have a once-in-a-lifetime opportunity to be a key part of designing and building the client's next core data platforms.
As a Big Data Engineer you will develop, maintain, test and evaluate Big Data solutions within the client's next-generation Big Data platform.
Key responsibilities include:
- Building distributed, highly parallelized Big Data processing pipelines that process massive amounts of data (both structured and unstructured) in near real time
- Leveraging Spark to enrich and transform corporate data to enable search, data visualization, and advanced analytics
- Working closely with DevOps, QA and Product Management teams in a Continuous Delivery environment
Required skills:
- Experience with Spark in Scala, or in Java with a willingness to learn Scala,
- Knowledge of the Hadoop stack: YARN, MapReduce, Sqoop, Hive, Kafka, Impala,
- Hands-on development experience with Jenkins or Bamboo, JIRA, Bitbucket and Git/Stash,
Ideally you also have:
- Experience implementing data security capabilities such as encryption and anonymization,
- Excellent communication skills and experience working in distributed global teams,
- Data-driven thinking,
- Experience with Agile methods,
- A solid grounding in Financial Services.
Start: 1st July 2019
Duration: 3-6 Months
Work location: Copenhagen, Denmark
Requirements: Min. 5 years of professional IT experience.
Job type: Freelance
Please write in your application that you've seen the job at Jobfinder.