Data Engineer
Agilytic
Brussels, Belgium
Description
Do you love to work on various projects and implement end-to-end data solutions?
Are you ready to expand your skillset in different tools, methodologies, and across various industries?
Are you passionate about every technical aspect of a data project, with the entrepreneurial drive to help build a new practice within a consultancy?
As a Data Engineer, you will tackle complex technological challenges on our most demanding assignments and play a key role in preparing the (big) data infrastructure that our Data Scientists rely on to deliver client projects.
What will you do day-to-day?
- Conceive and build maintainable (cloud) data architectures (infrastructure, ETL, infra-as-code…);
- Integrate data sources by developing data pipelines;
- Create large data warehouses and data marts fit for further reporting or advanced analytics;
- Monitor the quality and stability of data solutions;
- Optimize the performance and costs of data ecosystems;
- Support Agilytic Data Scientists & Data Analysts in deploying data solutions for the clients;
- Stay up to date with the latest developments in data technology;
- Coach and actively share your knowledge with your colleagues.
Requirements
We are looking for strong candidates with the following academic and professional experience. Don’t worry if you don’t tick all the boxes; if the motivation is there, let’s talk.
Your background
- A Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or another quantitative discipline;
- Ideally 2+ years of demonstrated experience with big data platforms (Hadoop, Cloudera, EMR, Databricks...), cloud environments (Azure, AWS, or GCP), or end-to-end BI projects;
- But being eager to learn new topics in the Data Engineering field is what matters most!
Your skills
- Technical knowledge in one or more of the following competencies:
- Data pipeline management/ETL processing;
- Database management (SQL and NoSQL databases);
- Data modeling basics (3NF, Star Schema…);
- Large file storage (HDFS, Data Lake, S3, Blob storage…);
- Management of cloud platforms such as AWS, Google Cloud, or Azure in a data context;
- Docker;
- DevOps best practices integration (CI/CD, Infra-as-Code, unit testing…) is a plus;
- Workflow management tools such as Airflow or Oozie are a plus;
- Stream processing tools such as Kafka, Kinesis, or Elasticsearch are a plus.
- Programming languages requirements:
- Working experience with Python and SQL (Java and Scala are also considered);
- Knowledge of Spark with Scala or PySpark is a plus.
- Other requirements:
- Fluency in English;
- Fluency in either French or Dutch;
- Eligibility to work in Belgium and the European Union (please be aware that we cannot provide relocation support).
Don't forget to mention BrusselsJobs when applying.