89 Jobs for Google Cloud Certified Professional Data Engineer - Belgium

Manager Data Architecture

Zaventem, Vlaams Brabant KPMG Belgium

Posted 3 days ago

Job Description

Roles & Responsibilities

Becoming an architect inside KPMG Lighthouse will allow you to demonstrate, develop, and extend your architecture skills in Data, AI, automation, and system integration. You will be working in a modern environment and find yourself surrounded by a multi-disciplinary peer group of over 60 data enthusiasts with a strong entrepreneurial spirit. You will be able to work on architecture activities in various industries, which will allow you to develop your skills and knowledge to the full extent.

You will also get the opportunity to grow, both in terms of career opportunities and financial benefits.

Skills & Qualifications

  • Minimum of 5 years of experience in a technical role with extensive exposure to data: data architect, data engineer, data analyst, software engineer/tester, DevOps engineer, data scientist, or similar relevant experience.
  • You are solution driven and you have a strong analytical mindset, combined with creative skills.
  • You demonstrate leadership potential with the ability to communicate complex technical concepts to a variety of audiences.
  • You are an enthusiastic, driven, and proactive person, taking ownership and accountability in your work.
  • You have strong business acumen and enjoy working in client-facing roles.
  • You have hands-on experience in projects relating to data solutions (data lake, system integration, data migration, and data warehousing with cloud-based data platforms).
  • Sound knowledge of modern data solutions such as data lakes, data platforms, data streaming, and data security best practices.
  • General knowledge of cloud-native computing and public hyperscalers (Azure, AWS, and/or GCP).
  • You show an interest in (new) technologies, methodologies, and concepts such as data governance & management, data mesh, data vault 2.0, delta lake, DWH automation, event streaming.
  • Fluent in Dutch & English. (French is a plus).

We offer

  • A competitive salary and profit sharing – the compensation package can be shaped to your individual needs in terms of mobility, IT devices, etc.
  • You will get fringe benefits including continuous training that builds and extends professional, technical, and managerial skills in an international and dynamic environment.
  • Flexible, hybrid work arrangements that enable working from different locations: home office, on-site, or on the go.
  • “Together” is one of our KPMG values, so you can count on a wide range of social activities such as team buildings, get-togethers, and after-work opportunities with your colleagues.
  • Professional experiences in an international and dynamic working environment with inspiring colleagues.
  • An inclusive workspace that encourages diversity and pursues mutual respect for each other’s beliefs and background.

Consultant - Data Engineering

Brussels Exsolvæ

Posted 3 days ago

Job Description

About Exsolvæ

Exsolvæ is a Brussels-based consultancy specializing in Data and Artificial Intelligence. We provide Consultancy, Audit, and Solution Development services to help our partners overcome unique challenges. Our holistic approach fosters collaboration across diverse domains, including Data Engineering, Advanced Analytics, and AI.

At Exsolvæ, our Solvers are passionate professionals equipped with cutting-edge tools, a culture of continuous learning, knowledge sharing, and an unwavering commitment to delivering high-impact solutions.

We are seeking a Data Engineering Specialist with strong AWS expertise to join our expert team and contribute to a large-scale data engineering transformation in Belgium.

Your Role

As a Data Engineering Specialist, you will design and implement cloud-native AWS-based data solutions that enable our client to unlock the full potential of their data assets. You will collaborate with multidisciplinary teams, applying modern big data frameworks, automated testing practices, and solid cloud engineering principles to deliver scalable, efficient, and high-quality pipelines, models, and analytics workflows.

Required Skills and Qualifications

  • Education: Master’s degree in Computer Science, Data Engineering, or related field
  • Languages: Written & spoken fluency in French AND English (mandatory)
  • Experience: 3+ years of professional experience in data engineering, including on-premise and cloud projects
  • AWS Expertise: Strong hands-on experience with AWS services such as S3, Glue, Redshift, EMR, Athena, and Lambda
  • PySpark: Proven ability to build and optimize distributed data processing jobs in a cloud environment
  • SQL: Advanced proficiency in writing complex queries, optimizing performance, and working with large datasets in the cloud
  • Apache Airflow: Hands-on experience orchestrating pipelines, scheduling workflows, and managing dependencies in AWS or multi-cloud environments (see the sketch after this list)
  • Data Quality & Migration: Demonstrated ability to ensure data accuracy and reliability in complex migration projects
  • ETL/ELT Development: Expertise in designing and optimizing ETL/ELT pipelines for diverse and high-volume datasets
  • Soft Skills: Strong communication, problem-solving, and collaboration skills to engage effectively with clients and cross-functional teams
  • Mobility: Ability to work on-site within 1h30 travel time from Brussels when required
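
To make the Apache Airflow requirement above a little more concrete, here is a minimal DAG sketch. The DAG id, schedule, and task callables are illustrative assumptions rather than project specifics; the point is simply daily scheduling and an explicit dependency between an extract step and a transform step (Airflow 2.4+ API).

# Minimal, hypothetical Airflow DAG: a daily pipeline with two dependent tasks.
# All names (dag_id, task ids, callables) are illustrative assumptions only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_raw_data(**context):
    # Placeholder: pull raw files (e.g. from S3) for the logical date.
    print("extracting raw data for", context["ds"])


def transform_to_curated(**context):
    # Placeholder: trigger a PySpark/Glue transformation of the extracted data.
    print("transforming data for", context["ds"])


with DAG(
    dag_id="daily_curated_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # "schedule" requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,       # do not backfill historical runs
) as dag:
    extract = PythonOperator(task_id="extract_raw_data", python_callable=extract_raw_data)
    transform = PythonOperator(task_id="transform_to_curated", python_callable=transform_to_curated)

    extract >> transform  # transform only runs after a successful extract

In a real AWS setup the Python placeholders would typically be replaced by provider-supplied operators for Glue or EMR, with retries and alerting configured on the DAG.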

Preferred Skills

  • Experience with data visualization tools such as Power BI or Tableau
  • Familiarity with Azure or GCP environments (multi-cloud exposure)
  • Understanding of real-time/streaming data frameworks (e.g., Kafka, Kinesis)
  • Professional certifications such as AWS Data Analytics Specialty or equivalent

What You’ll Do

Core Expertise

  • Collaborate with business and technical stakeholders to assess data needs and design efficient AWS-based solutions
  • Develop, maintain, and optimize ETL/ELT pipelines using AWS Glue, PySpark, and SQL
  • Implement automated data quality checks and testing frameworks for reliability and compliance
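
As a minimal sketch of the kind of PySpark ETL step and automated data quality gate described in the two points above (bucket paths, column names, and the 5% threshold are assumptions for illustration, not client specifics):

# Minimal, hypothetical PySpark ETL step with a simple data quality gate.
# Bucket paths and column names are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read raw JSON orders from a landing zone on S3.
raw = spark.read.json("s3://example-landing-zone/orders/")

# Transform: keep valid rows, normalise types, add a load date.
curated = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("load_date", F.current_date())
)

# Data quality gate: fail the job if too many rows were dropped.
dropped_ratio = 1 - curated.count() / max(raw.count(), 1)
if dropped_ratio > 0.05:
    raise ValueError(f"Data quality gate failed: {dropped_ratio:.1%} of rows dropped")

# Load: write partitioned Parquet to the curated zone (Glue catalog / Athena friendly).
curated.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-curated-zone/orders/"
)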

Technical Implementation

  • Design and optimize scalable data models and warehousing solutions on AWS Redshift or equivalent services
  • Integrate and process large datasets using AWS EMR and Apache Spark
  • Deploy and manage data solutions with a strong focus on performance, scalability, and security

Research and Innovation

  • Stay ahead of advancements in AWS and big data technologies
  • Explore new methodologies to improve scalability, automation, and governance in data engineering

Project Management

  • Lead data engineering projects from design to deployment, ensuring timely delivery and business value
  • Collaborate with data scientists, architects, and analysts to deliver end-to-end data solutions that support gaming analytics and operational intelligence

Why Join Us?

  • Be Part of the Solver Community: Join a team where collaboration, curiosity, and problem-solving are at the heart of everything we do. You’ll work alongside experts in Data and AI, sharing knowledge, insights, and best practices through our internal Cerebro Sessions.
  • Continuous Growth: Benefit from a dedicated €2000 annual training budget for certifications, conferences, and professional development. You’ll also get access to exclusive training on emerging technologies and AI tools.
  • Cutting-Edge Tools: Through our partnership with OpenAI, you’ll have access to a secure GenAI workspace, expert-led sessions, and the most advanced tools in the market to elevate your technical excellence.
  • Impactful Projects: Contribute to large-scale, cloud-native data initiatives in the gaming and entertainment industry, helping our client transform their data ecosystem and deliver high-value analytics.
  • Hybrid & Flexible: Enjoy a balanced work model combining remote flexibility with on-site collaboration
  • Attractive Package: Competitive salary, performance-based bonuses, and a benefits package tailored to your mobility and lifestyle preferences.

Data Engineering Business Unit Manager

Cartière

Yesterday

Job Description

Oost-Vlaanderen - 2 days of home working per week - freelance or permanent employment

COMPANY:

This leading consultancy organization, active in various sectors and several European countries, is looking for an enthusiastic Data Engineering Business Unit Manager. Alongside their strong expertise in business & finance, they have grown into an established player in data & analytics in today's market. Today they are fully committed to building out a Microsoft Fabric competence center. To lead and further grow the Data Engineering team, they are looking for a Data Engineering Business Unit Manager.

JOB DESCRIPTION:

As Data Engineering Business Unit Manager you play a key role in the further development of the data engineering team. You work closely with the Data Analytics Manager and combine technical leadership with team coaching.

Your responsibilities:

  • You further build out the team, coach team members, define growth paths, and make sure everyone's strengths are used to the fullest.
  • You develop and optimize modern data architectures within the Microsoft stack (Azure, Fabric, Databricks, etc.).
  • You analyze client needs, devise smart solutions, and safeguard a high-quality implementation.
  • You build and maintain data lakes, data warehouses, pipelines, and ETL processes in an Azure environment.
  • You help build the client portfolio of your unit.
  • You stay up to date with technological developments and implement improvements where needed.

PROFILE:

They are looking for a dynamic, no-nonsense colleague with energy, ambition, and self-confidence. The organization is action-oriented, moves quickly, and works in a flat structure without political games. You are a natural leader with strategic insight and strong communication skills.

What do you bring?

  • At least 5 years of experience as a data engineer or in a comparable technical role in a consultancy
  • Thorough knowledge of the Azure platform and Fabric
  • Strong skills in Python and SQL
  • You think in terms of solutions and translate complex challenges into feasible solutions
  • You have leadership potential and the ability to make a team grow
  • You communicate fluently in both Dutch and English

OFFER:

  • You get the chance to grow into a management role while continuing to use and develop your technical expertise.
  • Your work is visible and meaningful: you get the room to leave your mark and help build a new department.
  • You join an organization with a strong culture and high employee satisfaction.
  • No hierarchy, no internal politics, but a team that works together towards shared success.
  • Help build out a brand-new Microsoft Fabric competence center.
  • Hybrid working: 3 days at the office, 2 days at home.

Are you the manager we are looking for?

Then send your CV to and call +32 3 343 94 91 for more information.

Check our website for more BI & data vacancies.

Keywords: Lead Data Engineer, Head of Data Engineering, Teamlead Data Engineering, Data Team Manager.


Data Engineer

3070 Kortenberg, Vlaams Brabant Universitair Psychiatrisch Centrum KU Leuven

Today

Job Description

The Universitair Psychiatrisch Centrum KU Leuven (UPC KU Leuven) is looking for new, additional staff. It is the largest psychiatric centre in the country, has more than a thousand employees, and is regarded as the leading academic reference hospital for mental health care.
UPC KU Leuven is organized into three services: one for child and adolescent psychiatry, one for adult psychiatry, and one for geriatric psychiatry. The services are housed on campus Gasthuisberg in Leuven and on campus Kortenberg.

View all our vacancies via

Universitair Psychiatrisch Centrum KU Leuven
Campus Kortenberg
Leuvensesteenweg 517
3070 Kortenberg

Campus Gasthuisberg
Herestraat 49
3000 Leuven

Job description

Z.Org KU Leuven is looking for a Data Engineer (GCP Stack) for its Universitair Psychiatrisch Centrum KU Leuven.

Vision

UPC KU Leuven is a leading university psychiatric care institution with a strong societal and scientific commitment and a growing focus on data-driven working. To further steer and support our care provision and processes, our operational workings, and our scientific research, we are building a modern data warehouse on the Google Cloud stack. We are therefore looking for a seasoned Data Engineer who wants to help lay and maintain the foundations of this central data infrastructure.

Your role

As a Data Engineer you are responsible for the design, implementation, and maintenance of our data warehouse in Google BigQuery. You work closely with data analysts, data scientists, and IT colleagues to develop reliable and scalable data models that enable insights into care quality, logistics, and policy. Your tasks:

  • design, set-up, and management of the data warehouse on Google BigQuery
  • developing and managing ETL processes and data pipelines
  • modelling datasets in Dataform and building out a layered data architecture (source → staging → model); see the sketch after this list
  • designing cost-efficient and maintainable data solutions
  • collaborating with data analysts and domain experts to set up flexible and efficient data models
  • supporting scientific teams in setting up reliable datasets
  • staying up to date with recent developments in data technology
  • acting as the point of contact for data questions, in collaboration with IT, privacy, and security
  • documenting data processes and safeguarding data quality
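
As a minimal illustration of the layered source → staging → model approach described above, the sketch below uses the google-cloud-bigquery Python client to rebuild a staging table from a raw source table. The project, dataset, table, and column names are invented for the example; in practice this kind of transformation would typically be defined in Dataform rather than in an ad-hoc script.

# Minimal, hypothetical sketch: rebuild a staging table on top of a raw source
# table in BigQuery. Project, dataset, table, and column names are assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-upc-dwh")

staging_sql = """
CREATE OR REPLACE TABLE `example-upc-dwh.staging.admissions` AS
SELECT
  CAST(admission_id AS STRING) AS admission_id,
  DATE(admission_timestamp)    AS admission_date,
  LOWER(TRIM(care_unit))       AS care_unit
FROM `example-upc-dwh.source.raw_admissions`
WHERE admission_id IS NOT NULL
"""

client.query(staging_sql).result()  # blocks until the staging table is rebuilt
print("staging.admissions refreshed")
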
Profile
  • you preferably hold a master's degree in (applied) informatics, computer science, engineering, data science, or an equivalent field
  • you have at least 5 years of experience as a data engineer or in a comparable role
  • you have excellent knowledge of SQL (performance, modelling, transformation)
  • familiarity with cloud platforms, and in particular Google Cloud products (BigQuery, Dataform, Cloud Functions, etc.), is a plus, but with a healthy interest you will already get a long way
  • some knowledge of Python is an advantage
  • experience with data modelling, version control (Git), and CI/CD is a nice-to-have
  • you can build bridges between technical and non-technical profiles and manage to translate functional wishes into technical data models
  • affinity with the healthcare sector or with societal impact is an extra asset
Offer
  • a meaningful job in an organization where people and well-being are central
  • the chance to build a modern data environment from scratch with the latest technologies
  • collaboration in a small but ambitious team with a lot of autonomy

Data Engineer

Akkodis

Today

Job Description

At Akkodis, we're Hiring: Data Engineers for an Advanced R&D Data Platform (Semiconductor Domain)

Location: Leuven, Belgium (Hybrid: 1–3 days on-site/week)

Duration: Up to 24 months

Start date: Within 20 working days of contract award / September

Engagement: FTE (1 year initially)

The Opportunity

Join a cutting-edge R&D data platform project with one of Europe’s most advanced research organizations. This is your chance to work on a mission-critical platform powering digital transformation in the semiconductor industry.

The program is redefining how researchers access, transform, and visualize data — turning raw, complex cleanroom data into actionable insights.

We're looking for experienced Data Engineers to help onboard semiconductor equipment data into a state-of-the-art, cloud-based platform that’s used daily by top researchers.

Your Mission

As a Data Engineer, you’ll join a growing team responsible for:

  • Ingesting raw tool data into a central R&D data platform
  • Creating event-driven pipelines triggered by Azure Event Hubs (see the sketch after this list)
  • Transforming complex logs into standardized tabular formats using Python
  • Writing and storing data in Azure SQL, Cosmos DB, and Azure Data Lake
  • Building and maintaining REST APIs for the UI and researcher access
  • Implementing lot-based access control mechanisms
  • Working closely with architects, frontend teams, and R&D users to improve data availability and usability
  • Proposing platform improvements and new feature development as the system evolves
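
As a minimal sketch of the event-driven ingestion pattern listed above: an Event Hubs consumer receives a raw tool event and flattens it into a tabular record that could then be written to Azure SQL or the data lake. The connection string, hub name, and payload fields are assumptions for illustration, not details of the actual platform.

# Minimal, hypothetical Event Hubs consumer that flattens raw tool events into
# tabular records. Connection details and the event format are assumptions only.
import json

from azure.eventhub import EventHubConsumerClient

CONNECTION_STR = "<event-hub-namespace-connection-string>"
EVENTHUB_NAME = "tool-raw-events"


def on_event(partition_context, event):
    payload = json.loads(event.body_as_str())
    # Flatten the nested log payload into a single tabular row.
    row = {
        "tool_id": payload.get("tool", {}).get("id"),
        "lot_id": payload.get("lot"),
        "step": payload.get("process", {}).get("step"),
        "timestamp": payload.get("ts"),
    }
    print(row)  # placeholder: insert into Azure SQL / append to the data lake here


client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME
)
with client:
    # "-1" starts from the beginning of each partition; a real pipeline would add
    # a checkpoint store so processing can resume where it left off.
    client.receive(on_event=on_event, starting_position="-1")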

You’ll Excel in This Role If You Have:

Required Skills & Experience:

  • 3+ years of experience in data engineering
  • Strong Python skills, especially in data parsing & transformation

  • Experience with Azure Data Services (Data Lake, SQL DB, Cosmos DB, Event Hubs)
  • Experience with Airflow, Spark, containers, and REST APIs
  • Experience with SQL and structured data modeling
  • Excellent communication in English (written and spoken)
  • Strong analytical thinking and ability to work independently

Highly Valued:

  • Experience in semiconductor or research environments
  • Understanding of MES systems, wafer/lot data, pilot lines
  • Familiarity with PowerBI, data governance, and lakehouse architecture
  • Prior work on data platforms used in scientific or R&D contexts

What We Offer

  • Long-term consulting opportunity (up to 24 months)
  • Clear onboarding and mobilization (start within 20 working days post-award)
  • Flexibility: Hybrid work setup, with 1–3 days/week in Leuven
  • Opportunity to shape a platform used by leading-edge researchers
  • Stable, high-impact work in a collaborative, technical environment

Ready to Apply?

Send us your CV highlighting your relevant experience with:

  • Python data transformation
  • Azure data stack
  • Semiconductor or R&D platforms (if applicable)

Data Engineer

Vivid Resourcing

Today

Job Description

Data Engineer Opportunity in Belgium – Fully Remote:

I'm partnered with a leading metals company based in Belgium, dedicated to innovation and excellence in the metals industry.

They are seeking a skilled and experienced Data Engineer to join their team. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining their data infrastructure and pipelines.

You will have the exciting opportunity to work closely with data scientists, engineers, and stakeholders across the organization to build and optimize data-driven solutions that support our business objectives.

Key Responsibilities:

  • Design, build, and maintain scalable and robust data pipelines to ingest, process, and transform large volumes of structured and unstructured data from various sources, including databases, APIs, and streaming platforms (see the streaming sketch after this list).
  • Develop and implement efficient data models and architectures to support analytics, reporting, and machine learning initiatives
  • Collaborate with data scientists to deploy machine learning models into production environments.
  • Implement data quality monitoring and governance processes to ensure the accuracy, completeness, and reliability of data assets.
  • Identify opportunities for performance optimization and efficiency improvements within the data infrastructure and pipelines.
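
As a sketch of the streaming ingestion mentioned in the first responsibility above, the snippet below uses Spark Structured Streaming to read a Kafka topic, parse the JSON payload, and append it as Parquet. The broker address, topic, schema, and paths are assumptions for illustration, and the job assumes the spark-sql-kafka connector is available on the cluster.

# Minimal, hypothetical Spark Structured Streaming job reading a Kafka topic.
# Broker, topic, schema, and output paths are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("sensor_stream_sketch").getOrCreate()

schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", StringType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "sensor-readings")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("parquet")
          .option("path", "/data/curated/sensor_readings")
          .option("checkpointLocation", "/data/checkpoints/sensor_readings")
          .outputMode("append")
          .start()
)
query.awaitTermination()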

Qualifications:

  • Bachelor's or master's degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience working as a Data Engineer or similar role, preferably in a fast-paced and dynamic environment.
  • Strong understanding of data modeling, database design, and SQL query optimization.
  • Experience with big data technologies and distributed computing frameworks (e.g., Hadoop, Spark, Kafka).
  • Knowledge of cloud platforms and services, such as AWS, Azure, or Google Cloud Platform.

Benefits:

  • €90,000+
  • Company car
  • Opportunity to work on innovative projects at the intersection of technology and healthcare.
  • Collaborative and inclusive work culture that values diversity and creativity.
  • Professional development opportunities and training programs.
  • Flexible work arrangements and a healthy work-life balance.

If interested, feel free to get in touch below:

Email:


Data engineer

Leuven, Vlaams Brabant Spott (YC W25)

Yesterday

Job Description

About Spott

Spott is a YC-backed AI startup with its office in Leuven, Belgium. We are on a mission to transform the recruitment industry by making recruiting faster, smarter, and more efficient. Together, we're building the first AI-native recruitment platform (ATS & CRM) that empowers professional recruiters to close significantly more placements.

Why?

Recruitment is a $700B+ industry still running on outdated methods. The biggest players are slow to innovate. Spott isn't a tool bolted onto legacy software; it is a full reset. We are the system of record: AI-driven, fast, scalable, and personal. And we're here to take over.

What We’re Looking For

To help us achieve our mission, we’re looking for a Data Engineer specialized in migrations. You’ll make sure recruitment firms can move smoothly from their legacy systems to Spott.

Direct migration experience is a plus. You should have the skills and confidence to take ownership and learn fast.

You can navigate undocumented systems, reverse-engineer data models on the fly, and solve problems through experimentation.

You approach complex data challenges with practical, solutions-focused thinking.

Experience with dbt, SQL, PostgreSQL, Snowflake and Python

Experience with Terraform and Azure is a plus

What You’ll Do

You will build our migration strategy from the ground up. You can expect data coming from ~20 different systems. Our CTO has already created migration scripts for a good number of platforms and will be your point of contact for support.

Data Migrations: Design, manage, and optimize complex data migrations (a minimal mapping sketch follows this section)

Pipeline Development: Develop and maintain ETL/ELT processes using dbt and Snowflake to efficiently manage and transform large datasets

Own the end-to-end process: including loading the migrated data into our production environments and supporting customer success to delight the client during the migration

Documentation & Process Creation: Establish clear, efficient processes for data handling and write documentation for end-to-end pipeline management
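
As a minimal, hypothetical sketch of the mapping work behind such migrations: read an export from a legacy ATS and reshape its candidate records into a target schema before loading them downstream (for example via Snowflake and dbt). Every field name here is invented for illustration; the real mappings depend on each of the ~20 source systems.

# Minimal, hypothetical migration step: map candidate records exported from a
# legacy ATS onto a target schema. All field names are invented for illustration.
import csv
import json

# How legacy export columns map onto target fields (per source system, usually
# discovered by reverse-engineering the legacy data model).
FIELD_MAP = {
    "cand_first": "first_name",
    "cand_last": "last_name",
    "mail": "email",
    "curr_role": "current_title",
}


def transform_row(legacy_row: dict) -> dict:
    record = {target: (legacy_row.get(source) or "").strip()
              for source, target in FIELD_MAP.items()}
    # Basic validation before the record is allowed anywhere near production.
    if "@" not in record["email"]:
        record["email"] = None
    return record


with open("legacy_candidates.csv", newline="", encoding="utf-8") as f:
    migrated = [transform_row(row) for row in csv.DictReader(f)]

# Staged output to be loaded into the warehouse downstream.
with open("candidates_staged.jsonl", "w", encoding="utf-8") as f:
    for record in migrated:
        f.write(json.dumps(record) + "\n")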

What We Offer

Competitive salary and generous stock options

A high-impact role at a fast-growing YC-backed startup

Full ownership and autonomy in your domain

Regular team dinners, sports activities, and international offsites

About the interview process

1. Application Review

We'll review your application; you will hear back from us within 48 hours

2. Founder call (30min)

A 30min call with Manu to learn about your motivations to join Spott, determine why you’d be a great fit, and answer any questions you may have for us

3. Technical Interview (60 min)

We talk through a typical data migration problem we face at Spott to learn more about how you think as an engineer

4. Founder Chat

A final chat with two of the founders to align on mission, expectations, and team fit.

Overall, if you're a data engineer and joining a “San Francisco/YC company in Belgium” sounds appealing to you, then you should apply, even if you don't match all the criteria.


Data Engineer

Herentals, Antwerpen Robert Half

Yesterday

Job Description

For an international player active in the retail sector, we are looking for a driven Data Engineer. You will be part of a dynamic team where collaboration and innovation are central. The company offers a modern working environment and opportunities to grow both personally and professionally.

The responsibilities for this role are:

  • You are part of the IT Data & AI Team and play a key role in setting up and improving the Data Lake using Azure Databricks;
  • You establish connections with various systems via APIs, ODBC, Event Hubs, and sFTP protocols;
  • You design and optimize ETL/ELT pipelines using Azure Data Factory and Databricks;
  • You implement the Medallion Architecture, transforming data across Bronze, Silver, and Gold layers using Delta, Parquet, and JSON formats (a minimal sketch follows this list);
  • You develop and optimize data models, such as star schemas and dimensions, for analytics and operational purposes;
  • You safeguard robust data governance through Role-Based Access Control (RBAC) and Unity Catalog;
  • You optimize the performance of pipelines and queries and implement CI/CD pipelines via Azure DevOps;
  • You work closely with data scientists, IT teams, and business stakeholders to deliver end-to-end solutions.
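
As a minimal sketch of the Bronze → Silver → Gold flow mentioned in the list above (paths, table names, and columns are assumptions; on Azure Databricks the actual locations and catalog names would come from the workspace and Unity Catalog setup):

# Minimal, hypothetical medallion flow on Databricks: bronze JSON -> silver Delta
# -> gold aggregate. Paths, table names, and columns are assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Bronze: raw sales events landed as JSON.
bronze = spark.read.json("/mnt/lake/bronze/sales/")

# Silver: cleaned, typed, deduplicated records stored as Delta.
silver = (
    bronze.dropDuplicates(["transaction_id"])
          .withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("store_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/sales/")

# Gold: business-level aggregate feeding star-schema models and reporting.
gold = silver.groupBy("store_id", F.to_date("sold_at").alias("sale_date")).agg(
    F.sum("amount").alias("daily_revenue")
)
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/daily_revenue/")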

The required qualifications for this role are:

  • You have at least 3 years of experience as an Azure Data Engineer;
  • You have in-depth technical expertise, including:
  • Professional knowledge of Azure Data Factory, Databricks, and Azure Storage;
  • Skills in SQL, Python, and data modelling techniques;
  • Familiarity with data formats such as Parquet and JSON;
  • You have experience with AI/ML model management on Azure Databricks;
  • You hold a bachelor's degree in IT, Computer Science, or a related field;
  • You hold a Microsoft Certified: Azure Data Engineer Associate certification;
  • You are fluent in both Dutch and English; French is a plus;
  • Knowledge of Power BI is an extra asset;
  • You have an analytical mind and an eye for detail;
  • You work pragmatically and with a customer focus;
  • You are a team player with excellent communication skills;
  • You are passionate about technology and data-driven innovation.

The offer:

  • You get a challenging and varied full-time role with a lot of independence and responsibility;
  • You are part of a healthy and growing organization where employees are central and initiative is appreciated;
  • We provide extensive training opportunities to keep you up to date with the latest techniques and tools;
  • An attractive salary package supplemented with fringe benefits, including:
  • An electric company car with charging card and charging station;
  • Hospitalization insurance;
  • Eco vouchers and meal vouchers;
  • Staff discounts and access to the Benefits At Work platform with numerous perks;
  • You work from the headquarters in Herentals in a modern, congestion-free environment, with the option of working from home (2 days per week);
  • The working week is 40 hours, including 12 paid ADV days (compensation days) per year;
  • You join a growing infrastructure and a top team where collegiality and knowledge sharing are central.

Data Engineer

Hasselt, Limburg CDH Professionals

Yesterday

Job Description

CDH Professionals’ client is looking for an experienced Freelance Data Engineer to join a project team in Hasselt. The consultant will play a key role in designing, developing, and maintaining data solutions, while ensuring that information is accurate, accessible, and valuable for decision-making.

Role Overview

As a Freelance Data Engineer, you will:

  • Support the design and optimisation of data pipelines and workflows
  • Work with SQL databases to ensure data accuracy, performance, and scalability
  • Integrate and process data within Linux-based environments
  • Build and maintain dashboards and reporting solutions using Power BI
  • Collaborate with stakeholders to translate business needs into reliable technical solutions

Who You Are

You’re a hands-on data professional who thrives in fast-paced environments. You combine technical expertise with a problem-solving mindset, and you’re comfortable working independently as a freelancer while engaging with diverse teams.

Your Experience

  • Proven experience as a Data Engineer in professional environments
  • Strong skills in SQL (query optimisation, data modelling, performance tuning)
  • Experience working with Linux-based systems
  • Hands-on knowledge of Power BI for reporting and visualisation
  • Ability to work collaboratively with technical and non-technical stakeholders

Nice to Have

  • Experience with data warehousing or ETL tools
  • Familiarity with cloud platforms (Azure, AWS, or GCP)
  • Knowledge of Python or other scripting languages for data engineering tasks

Data Engineer

Brussels OneSource Consulting

Posted 3 days ago

Job Description

Job Title: Data Engineer

Language: English, French or Dutch or German

Location: Brussels, Belgium

Duration: 8/09/2025 - 31/03/2026

Job Description:

  • Within the corporate Digital & Data department, we aim to facilitate and drive our organization to leverage data.
  • At the client, data is considered a major asset for the business to achieve its targets (strategic ambitions, innovation and transformation themes, objectives, and regulatory compliance).

With our Data Platform consisting of:

  • Cutting-edge Integration Toolset,
  • Business Intelligence, Analytics and storage capabilities,
  • A platform for the Management of Master Data,
  • A Data Governance platform that encompasses Data Lineage and Data Quality.

With our Data Organization consisting of:

  • Business Product Teams (supporting the business stakeholder initiatives, organized according to the Agile principles)
  • Backbone/Platform Teams (supporting the data platform initiatives, organized according to the Agile principles)
  • Transversal competencies Team (supporting the various profiles and technologies initiatives, organized according to the Agile principles (Spotify))

Our Data Organization never stands still: No status-quo!

Accordingly, we challenge ourselves and the organization continuously.

We work in a fairly flat hierarchy, with our teammates spread across agile teams, and according to open-source principles.

We are looking for a Data Engineer who will primarily join one of our product teams. It is an opportunity to put your signature on our corporate Data organization and to contribute, indirectly, to better products for managing our customers, through higher-quality data and data-related services for our customers.

We are looking for Data Engineers able to create data pipelines, efficient storage structures, and powerful materialized views in different analytical technologies, as well as data exchange endpoints for our users. To some extent, you will also have to interact with aspects related to governance tools (glossary, modeling, lineage, or data quality).

As we are looking for data engineers for a couple of different teams, we are particularly interested in data engineers knowledgeable in the Azure stack, but also in Power BI reporting, or with a passion for or experience in software engineering.

What will be your mission, and which competences will you need? Extensive hands-on experience (more than 3 years, preferably 5) in 2 out of the 4 following categories:

  • Usage of SQL Server (see the sketch after this list)
  • Implementing data pipelines using Azure services such as Azure Databricks, but also to some extent Azure Data Factory, Azure Functions, Azure Stream/Log Analytics, and Azure DevOps
  • Implementing data pipelines or data enrichments with Python in a Databricks environment
  • Open to learning and jumping on new technologies (those listed above, or also Redis, RabbitMQ, Neo4j, Apache Arrow, etc.)
  • Able and willing to interact with business analysts and stakeholders to refine requirements and to present reusable and integrated solutions
  • Able and willing to contribute to extensive testing of the solution, as well as to the reinforcement of DevOps principles within the team
  • Able and willing to contribute to the writing and structuring of documentation
  • Experience with Power BI
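
As a minimal sketch of the SQL Server category above: a small Python job that refreshes a reporting table which a Power BI dataset could read from. The server, database, schema, and table names are assumptions for illustration only.

# Minimal, hypothetical refresh of a reporting table in SQL Server via pyodbc.
# Server, database, schema, and table names are illustrative assumptions only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-sqlsrv.database.windows.net;"
    "DATABASE=reporting;UID=etl_user;PWD=<secret>;Encrypt=yes;"
)

REFRESH_SQL = """
TRUNCATE TABLE rpt.daily_contract_totals;

INSERT INTO rpt.daily_contract_totals (contract_date, product_line, total_amount)
SELECT CAST(created_at AS date), product_line, SUM(amount)
FROM dbo.contracts
GROUP BY CAST(created_at AS date), product_line;
"""

with conn:
    # pyodbc's connection context manager commits on success and rolls back on error.
    conn.cursor().execute(REFRESH_SQL)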

What you will NOT be doing

  • You shall not act from a technical ivory tower.
  • You do not make decisions for the business, but rather advise them with careful logic and by showing your reasoning.
  • You shall not deliver technical solutions, but rather product features.

Requirements:

  • Able to challenge your interlocutors by leveraging your rational thinking and no-nonsense philosophy.
  • Continuously look at data in a transversal way (no silos), across the entire enterprise, to maximize coherence, reuse, and adoption.
  • Track record of driving results through continuous improvement
  • Customer-oriented
  • Iterative thinking
  • Analytical approach to problem-solving
  • A team player who breathes respect, open-mindedness, daring, challenge, innovation, and one team/one voice for the customer
  • Product-oriented mindset
  • Enjoy sharing ideas with the team (or with other teams) and contribute to the product success
  • Language skills: fluent in English (must have) AND fluent in German/French or Dutch (soft requirement)
  • Good communication skills