Data Engineer (with Scala)

Poznan, Wroclaw


Department description:

Our Insights & Data practice delivers cutting-edge, data-centric solutions.

 

Most of our projects involve Cloud and Big Data engineering. We develop solutions that process large, often unstructured, datasets using dedicated cloud data services on AWS, Azure or GCP.

 

We are responsible for the full SDLC of the solution: apart from using data processing tools (e.g., ETL), we write a lot of code in Python, Scala or Java and apply DevOps tools and best practices. The data is either exposed to downstream systems via APIs and outbound interfaces, or visualized in reports and dashboards.

 

Within our AI CoE, we deliver Data Science and Machine Learning projects, with a focus on NLP, Anomaly Detection and Computer Vision.

 

Additionally, we are exploring the area of Quantum Computing, searching for practical growth opportunities for both us and our clients. 

 

Currently, over 250 of our Data Architects, Engineers and Scientists work on exciting projects for more than 30 clients from different sectors (Financial Services, Logistics, Automotive, Telco and others).

 

Come on Board!

Your daily tasks:

  • translating complex functional and technical requirements into detailed designs;
  • implementing scalable, high-performance data processing solutions using Spark and Scala;
  • designing and implementing software to process large and unstructured datasets (NoSQL, Data Lake architecture);
  • optimizing and testing modern Big Data solutions, including in cloud and Continuous Integration / Continuous Delivery environments.
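To give a flavour of the day-to-day work, here is a minimal plain-Scala sketch of the kind of pipeline described above: parsing semi-structured input lines into typed records and aggregating them. It uses standard collections only (no Spark dependency) as a collection-based analogue of a Spark Dataset pipeline; all names and the input format are illustrative, not a project specification.

```scala
// Hypothetical event record; in a Spark project this would back a Dataset[Event].
final case class Event(userId: String, action: String, bytes: Long)

object EventPipeline {
  // Parse a "userId,action,bytes" line; malformed lines are dropped (None),
  // mirroring how a real ingestion job filters out bad records.
  def parse(line: String): Option[Event] =
    line.split(",") match {
      case Array(u, a, b) => b.trim.toLongOption.map(Event(u.trim, a.trim, _))
      case _              => None
    }

  // Total bytes per user -- the collections analogue of
  // groupBy("userId").agg(sum("bytes")) in Spark SQL.
  def bytesPerUser(lines: Seq[String]): Map[String, Long] =
    lines
      .flatMap(parse)
      .groupBy(_.userId)
      .view.mapValues(_.map(_.bytes).sum)
      .toMap
}
```

On a real engagement the same shape of transformation runs distributed over Spark partitions rather than in-memory collections, but the parse-filter-aggregate structure is the same.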

Frequently used technologies:

  • Spark — ★★★★★
  • Scala — ★★★★★
  • AWS/Azure/GCP — ★★★★☆
  • Python — ★★★☆☆
  • SQL — ★★★☆☆

Our expectations:

  • at least 3 years of commercial experience on Data Engineering, Big Data and/or Cloud projects using Apache Spark and Scala;
  • knowledge of at least one relational or non-relational database system and of the SQL language;
  • familiarity with one or more of the listed (or similar) technologies and tools: Oozie, Hive, Hadoop, Sqoop, Kafka, Flume, HBase;
  • very good command of English (willingness to learn German would be an advantage).

Our offer:

  • permanent employment contract from the first day;
  • hybrid, flexible working model;
  • possibility of applying increased tax-deductible costs for creative work;
  • co-financing for home-office equipment;
  • development opportunities:
    • substantive support from project leaders,
    • a wide range of internal and external trainings (technical, language, leadership),
    • certification support in various areas,
    • mentoring and a real impact on shaping your career path,
    • access to a database of over 2,000 training courses on the Pluralsight, Coursera and Harvard platforms,
    • internal communities (including Agile, IoT, Digital, Security, Women@Capgemini),
    • the opportunity to participate in conferences both as a listener and as an expert;
  • relocation package;
  • benefits as part of the social package (including a Multisport card, medical care for the whole family, group insurance on preferential terms, and a cafeteria system).

 

 

See all our benefits!

Do you have any questions? Feel free to contact us!

Aniela Kicała

Recruitment Specialist
