What will you be doing with us?

  • you will implement data processing solutions using modern Big Data and/or Cloud technologies (including streaming, cloud services, clustered computing, real-time processing, and advanced analytics),
  • you will design and implement PySpark applications on the Cloudera platform (built on the Apache Hadoop ecosystem),
  • you will optimize and test modern Big Data solutions, including in cloud and Continuous Integration/Continuous Delivery (CI/CD) environments.

We are looking for you if:

  • you have commercial experience working on Data Engineering, Big Data and/or Cloud projects using Apache Spark,
  • you can code in Python or Scala,
  • you know at least one relational or non-relational database system well, along with SQL or a NoSQL query language,
  • you have a very good command of English; German is an advantage.