What is the scope of duties in this role?

  • you will translate complex functional and technical requirements into detailed designs,
  • you will implement scalable, high-performance data processing solutions using Spark and Scala,
  • you will design and implement software to process large and unstructured datasets (NoSQL, Data Lake architecture),
  • you will optimize and test modern Big Data solutions, including in cloud and Continuous Delivery / Continuous Integration environments.

We are looking for you if:

  • you have commercial experience working on Data Engineering, Big Data, and/or Cloud projects using Apache Spark and Scala,
  • you know at least one relational or non-relational database system well, along with SQL,
  • you are familiar with one or more of the following (or similar) technologies and tools: Oozie, Hive, Hadoop, Sqoop, Kafka, Flume, HBase,
  • you have a very good command of English.