Job Title

Big Data Engineer: Spark, Scala

8 Active Positions

Job Description

Primary mission:

As a Data Engineer, you will interact with data scientists, big data architects, software developers, and business experts to understand how data needs to be converted, loaded, processed and presented for analytics products.

You will contribute to the design, development, testing, deployment, production performance, and maintenance of data-centric software, including APIs, libraries, and tooling.

Core activities:

Development to industrialize the following workflow:
• Collecting data (batch + real-time streaming)
• Data backend development (ETLs, APIs, and data preparation)
• Algorithm development (data mining and machine learning)
• BI development (SQL, ETLs, data warehouse)

Technical and professional skills:

• Languages (required): Python, Scala, SQL
• Languages (appreciated): .NET, Java
• Big data frameworks: Spark, Hive, Kafka, Kubernetes
• Databases: relational (SQL Server) and non-relational (MongoDB, ELK, HBase, CosmosDB…)
• Experience with Agile methodology
• As a bonus, knowledge of machine learning (Scikit-Learn, R, Spark ML…)

Soft skills and competencies:

  • Passion for learning new tools, languages, and frameworks
  • Creativity and an innovation-minded, self-motivated, proactive attitude
  • Fast adaptation to changing requirements and strong problem-solving skills
  • Ability to work collaboratively, side by side with the business
  • Fluency in English to work in a multicultural environment


Education and experience:

  • Master’s degree in Computer Science, Engineering, or a related field
  • 4+ years’ experience in software development
  • 2+ years’ experience with big data frameworks
  • As a bonus, contributions to open-source projects
Tags: .net, Agile, API, BI, big data, data engineer, ETL, Hive, java, Kafka, Kubernetes, python, R, scala, Scikit-learn, Scrum, spark, SparkML, SQL, Squads

Questions?

We are here to help. Write to us here and we will get back to you as soon as possible.