88 Data Engineer Jobs - Belgium
Data Engineer
Today
Job Description
UPC KU Leuven is organised into three services: child and adolescent psychiatry, adult psychiatry, and geriatric psychiatry. The services are housed on the Gasthuisberg campus in Leuven and on the Kortenberg campus.
View all our vacancies via
Universitair Psychiatrisch Centrum KU Leuven
Campus Kortenberg
Leuvensesteenweg 517
3070 Kortenberg
Campus Gasthuisberg
Herestraat 49
3000 Leuven
Job description
Z.Org KU Leuven is looking for a Data Engineer (GCP Stack) for its Universitair Psychiatrisch Centrum KU Leuven.
Vision
UPC KU Leuven is a leading university psychiatric care institution with a strong social and scientific commitment and a growing focus on data-driven working. To further steer and support our care provision and processes, our operational workings and our scientific research, we are building a modern data warehouse on the Google Cloud stack.
We are therefore looking for a seasoned Data Engineer who wants to help lay and maintain the foundations of this central data infrastructure.
Your role
As a Data Engineer you are responsible for the design, implementation and maintenance of our data warehouse in Google BigQuery. You work closely with data analysts, data scientists and IT colleagues to develop reliable and scalable data models that enable insights into care quality, logistics and policy. Your responsibilities:
- design, set-up and management of the data warehouse on Google BigQuery
- developing and managing ETL processes and data pipelines
- modelling datasets in Dataform and building out a layered data architecture (source → staging → model); see the sketch after this list
- designing cost-efficient and maintainable data solutions
- collaborating with data analysts and domain experts to set up flexible and efficient data models
- supporting scientific teams in setting up reliable datasets
- staying up to date with recent developments in data technology
- acting as the point of contact for data questions, in collaboration with IT, privacy and security
- documenting data processes and safeguarding data quality
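As an illustration of the layered set-up (source → staging → model), the following is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset and table names are hypothetical, and in practice these models would typically be defined in Dataform (SQLX) rather than driven from Python.

```python
# Minimal sketch of the source -> staging -> model layering, using the
# google-cloud-bigquery client. Project, dataset and table names are
# hypothetical; Dataform (SQLX) would normally own these definitions.
from google.cloud import bigquery

client = bigquery.Client(project="upc-dwh-demo")  # hypothetical project id

STAGING_SQL = """
CREATE OR REPLACE VIEW `upc-dwh-demo.staging.admissions` AS
SELECT
  CAST(patient_id AS STRING) AS patient_id,
  DATE(admission_ts)         AS admission_date,
  LOWER(TRIM(ward_code))     AS ward_code
FROM `upc-dwh-demo.source.raw_admissions`
WHERE admission_ts IS NOT NULL
"""

MODEL_SQL = """
CREATE OR REPLACE TABLE `upc-dwh-demo.model.daily_admissions` AS
SELECT ward_code, admission_date, COUNT(*) AS admissions
FROM `upc-dwh-demo.staging.admissions`
GROUP BY ward_code, admission_date
"""

for sql in (STAGING_SQL, MODEL_SQL):
    client.query(sql).result()  # run each layer in order and wait for completion
```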
Your profile:
- you preferably hold a master's degree in (applied) informatics, computer science, engineering, data science or equivalent
- you have at least 5 years of experience as a data engineer or in a comparable role
- you have an excellent command of SQL (performance, modelling, transformation)
- familiarity with cloud platforms, and in particular Google Cloud products (BigQuery, Dataform, Cloud Functions, etc.), is a plus, but a healthy interest will already take you a long way
- some knowledge of Python is an advantage
- experience with data modelling, version control (Git) and CI/CD is a nice-to-have
- you can build bridges between technical and non-technical profiles and are able to translate functional requirements into technical data models
- affinity with the healthcare sector or with societal impact is an extra asset
Our offer:
- a meaningful job in an organisation where people and well-being are central
- the chance to build a modern data environment from scratch with the latest technologies
- collaboration in a small but ambitious team with a lot of autonomy
Data Engineer
Today
Job Description
At Akkodis, we're Hiring: Data Engineers for an Advanced R&D Data Platform (Semiconductor Domain)
Location: Leuven, Belgium (Hybrid: 1–3 days on-site/week)
Duration: Up to 24 months
Start date: Within 20 working days of contract award / September
Engagement: FTE (1 year initially)
The Opportunity
Join a cutting-edge R&D data platform project with one of Europe’s most advanced research organizations. This is your chance to work on a mission-critical platform powering digital transformation in the semiconductor industry.
The program is redefining how researchers access, transform, and visualize data — turning raw, complex cleanroom data into actionable insights.
We're looking for experienced Data Engineers to help onboard semiconductor equipment data into a state-of-the-art, cloud-based platform that’s used daily by top researchers.
Your Mission
As a Data Engineer, you’ll join a growing team responsible for:
- Ingesting raw tool data into a central R&D data platform
- Creating event-driven pipelines triggered by Azure Event Hubs
- Transforming complex logs into standardized tabular formats using Python (see the sketch after this list)
- Writing and storing data in Azure SQL, Cosmos DB, and Azure Data Lake
- Building and maintaining REST APIs for the UI and researcher access
- Implementing lot-based access control mechanisms
- Working closely with architects, frontend teams, and R&D users to improve data availability and usability
- Proposing platform improvements and new feature development as the system evolves
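As a minimal sketch of the first two items above (event-driven ingestion from Azure Event Hubs and Python-based transformation into a tabular format), the following uses the azure-eventhub and pandas libraries; the connection settings, hub name and "key=value" log format are assumptions, not details from the posting.

```python
# Hedged sketch: consume raw tool events from Azure Event Hubs and flatten them
# into a tabular (pandas) structure. Connection string, hub name and the
# semicolon-separated "key=value" payload format are assumptions.
import os
import threading
import pandas as pd
from azure.eventhub import EventHubConsumerClient

rows = []

def on_event(partition_context, event):
    payload = event.body_as_str()
    # Parse "key=value;key=value;..." into a flat record.
    record = dict(pair.split("=", 1) for pair in payload.split(";") if "=" in pair)
    rows.append(record)
    partition_context.update_checkpoint(event)

client = EventHubConsumerClient.from_connection_string(
    conn_str=os.environ["EVENTHUB_CONN_STR"],  # assumed environment variable
    consumer_group="$Default",
    eventhub_name="tool-events",               # hypothetical hub name
)

with client:
    # Stop after 30 seconds for this sketch; a real pipeline runs as a long-lived worker.
    threading.Timer(30, client.close).start()
    client.receive(on_event=on_event, starting_position="-1")

df = pd.DataFrame(rows)  # standardized tabular format, ready to land in SQL / Data Lake
print(df.head())
```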
You’ll Excel in This Role If You Have:
Required Skills & Experience:
- 3+ years of experience in data engineering
- Strong Python skills, especially in data parsing & transformation
Experience with:
- Azure Data Services (Data Lake, SQL DB, Cosmos DB, Event Hubs)
- Airflow, Spark, Containers, REST APIs
- SQL and structured data modeling
- Excellent communication in English (written and spoken)
- Strong analytical thinking and ability to work independently
Highly Valued:
- Experience in semiconductor or research environments
- Understanding of MES systems, wafer/lot data, pilot lines
- Familiarity with PowerBI, data governance, and lakehouse architecture
- Prior work on data platforms used in scientific or R&D contexts
What We Offer
- Long-term consulting opportunity (up to 24 months)
- Clear onboarding and mobilization (start within 20 working days post-award)
- Flexibility: Hybrid work setup, with 1–3 days/week in Leuven
- Opportunity to shape a platform used by leading-edge researchers
- Stable, high-impact work in a collaborative, technical environment
Ready to Apply?
Send us your CV highlighting your relevant experience with:
- Python data transformation
- Azure data stack
- Semiconductor or R&D platforms (if applicable)
Data Engineer
Today
Job Description
Data Engineer Opportunity in Belgium – Fully Remote:
I'm partnered with a leading metals company based in Belgium, dedicated to innovation and excellence in the metals industry.
They are seeking a skilled and experienced Data Engineer to join their team. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining their data infrastructure and pipelines.
You will have the exciting opportunity to work closely with data scientists, engineers, and stakeholders across the organization to build and optimize data-driven solutions that support their business objectives.
Key Responsibilities:
- Design, build, and maintain scalable and robust data pipelines to ingest, process, and transform large volumes of structured and unstructured data from various sources, including databases, APIs, and streaming platforms (see the sketch after this list).
- Develop and implement efficient data models and architectures to support analytics, reporting, and machine learning initiatives
- Collaborate with data scientists to deploy machine learning models into production environments.
- Implement data quality monitoring and governance processes to ensure the accuracy, completeness, and reliability of data assets.
- Identify opportunities for performance optimization and efficiency improvements within the data infrastructure and pipelines.
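As a minimal illustration of the streaming-ingestion item above, the following is a hedged PySpark Structured Streaming sketch that reads events from Kafka and lands them as Parquet; the broker address, topic, schema and paths are hypothetical, and the spark-sql-kafka package is assumed to be available.

```python
# Hedged sketch: stream sensor events from Kafka and land them as Parquet.
# Broker, topic, schema and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

schema = StructType([
    StructField("line_id", StringType()),
    StructField("temperature_c", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")                                   # requires the spark-sql-kafka package
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "furnace-sensors")            # hypothetical topic
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/bronze/furnace_sensors")      # hypothetical landing path
         .option("checkpointLocation", "/data/_chk/furnace")  # needed for fault tolerance
         .start())

query.awaitTermination()
```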
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience working as a Data Engineer or similar role, preferably in a fast-paced and dynamic environment.
- Strong understanding of data modeling, database design, and SQL query optimization.
- Experience with big data technologies and distributed computing frameworks (e.g., Hadoop, Spark, Kafka).
- Knowledge of cloud platforms and services, such as AWS, Azure, or Google Cloud Platform.
Benefits:
- €90,000+
- Company car
- Opportunity to work on innovative projects at the intersection of technology and healthcare.
- Collaborative and inclusive work culture that values diversity and creativity.
- Professional development opportunities and training programs.
- Flexible work arrangements and a healthy work-life balance.
If interested, feel free to get in touch below:
Email:
Data Engineer
Yesterday
Job Description
About Spott
Spott is a YC-backed AI startup with its office in Leuven, Belgium. We are on a mission to transform the recruitment industry, by making recruiting faster, smarter and more efficient. Together, we’re building the first AI-native recruitment platform (ATS & CRM) that empowers professional recruiters to close significantly more placements.
Why?
Recruitment is a $700B+ industry still running on outdated methods. The biggest players are slow to innovate. Spott isn't a tool that gets added on top of legacy software; it is a full reset. We are the system of record: AI-driven, fast, scalable, and personal. And we're here to take over.
What We’re Looking For
To help us achieve our mission, we’re looking for a Data Engineer specialized in migrations. You’ll make sure recruitment firms can move smoothly from their legacy systems to Spott.
Direct migration experience is a plus. You should have the skills and confidence to take ownership and learn fast.
You can navigate undocumented systems, reverse-engineer data models on the fly, and solve problems through experimentation.
You approach complex data challenges with practical, solutions-focused thinking.
Experience with dbt, SQL, PostgreSQL, Snowflake and Python
Experience with Terraform and Azure is a plus
What You’ll Do
You will build our migration strategy from the ground up. You can anticipate data coming from ~20 different systems. So far, our CTO has already created migration scripts for a good number of platforms and will be your point of contact (PoC) for support.
Data Migrations: Design, manage, and optimize complex data migrations (see the sketch after this list)
Pipeline Development: Develop and maintain ETL/ELT processes using dbt and Snowflake to efficiently manage and transform large datasets
Own the end-to-end process: Including the injection of new data into our production environments and supporting customer success to delight the client during this migration
Documentation & Process Creation: Establish clear, efficient processes for data handling and write documentation for end-to-end pipeline management
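As a minimal, hedged sketch of one migration step, the following maps a legacy ATS export onto a target candidate schema with pandas and stages it into Snowflake with the snowflake-connector-python helper write_pandas; all column names, the target table and the connection parameters are hypothetical, and downstream modelling would typically live in dbt.

```python
# Hedged sketch of one migration step: map a legacy ATS export onto a target
# candidate schema and stage it into Snowflake. Column names, target table and
# credentials are hypothetical; downstream transformations would live in dbt.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# 1. Read the legacy export (assumed to be a CSV dump from the old system).
legacy = pd.read_csv("legacy_candidates.csv")

# 2. Map legacy fields onto the target schema.
candidates = pd.DataFrame({
    "CANDIDATE_ID": legacy["cand_ref"].astype(str),
    "FULL_NAME": legacy["first_name"].str.strip() + " " + legacy["last_name"].str.strip(),
    "EMAIL": legacy["email_addr"].str.lower(),
    "CREATED_AT": pd.to_datetime(legacy["created"], errors="coerce"),
})

# 3. Stage into Snowflake for validation before injection into production.
conn = snowflake.connector.connect(
    account="xy12345",       # hypothetical account and credentials
    user="MIGRATION_SVC",
    password="***",
    warehouse="MIGRATION_WH",
    database="STAGING",
    schema="LEGACY_IMPORT",
)
write_pandas(conn, candidates, table_name="CANDIDATES", auto_create_table=True)
conn.close()
```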
What We Offer
Competitive salary and generous stock options
A high-impact role at a fast-growing YC-backed startup
Full ownership and autonomy in your domain
Regular team dinners, sports activities, and international offsites
About the interview process
1. Application Review
We'll review your application; you will hear back from us within 48 hours
2. Founder call (30min)
A 30min call with Manu to learn about your motivations to join Spott, determine why you’d be a great fit, and answer any questions you may have for us
3. Technical Interview (60 min)
We talk through a typical data migration problem we face at Spott to learn more about how you think as an engineer
4. Founder Chat
A final chat with two of the founders to align on mission, expectations, and team fit.
Overall, if you're a data engineer and joining a “San Francisco/YC company in Belgium” sounds appealing to you, then you should apply - even if you don't match all the criteria.
Data Engineer
Yesterday
Job Description
For an international player in the retail sector, we are looking for a driven Data Engineer. You will join a dynamic team where collaboration and innovation are central. The company offers a modern working environment and opportunities to grow both personally and professionally.
The responsibilities for this role are:
- You are part of the IT Data & AI team and play a key role in setting up and improving the Data Lake using Azure Databricks;
- You establish connections with various systems via APIs, ODBC, Event Hubs and sFTP protocols;
- You design and optimise ETL/ELT pipelines using Azure Data Factory and Databricks;
- You implement the Medallion Architecture, transforming data across Bronze, Silver and Gold layers using Delta, Parquet and JSON formats (see the sketch after this list);
- You develop and optimise data models, such as star schemas and dimensions, for analytics and operational purposes;
- You ensure robust data governance through Role-Based Access Control (RBAC) and Unity Catalog;
- You optimise the performance of pipelines and queries and continuously implement CI/CD pipelines via Azure DevOps;
- You work closely with data scientists, IT teams and business stakeholders to deliver end-to-end solutions.
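As a minimal sketch of the Medallion item above, the following shows a Bronze-to-Silver step on Azure Databricks: raw JSON from the Bronze layer is cleaned, deduplicated and written as a Delta table in the Silver layer. Paths and column names are hypothetical, and `spark` is the session Databricks provides.

```python
# Hedged sketch of a Bronze -> Silver step in the Medallion architecture on
# Databricks. Paths and column names are hypothetical; `spark` is the session
# provided by the Databricks runtime.
from pyspark.sql.functions import col, to_date

bronze = spark.read.format("json").load("/mnt/datalake/bronze/sales/")

silver = (bronze
          .filter(col("order_id").isNotNull())               # drop records without a key
          .withColumn("order_date", to_date(col("order_ts")))
          .dropDuplicates(["order_id"]))

(silver.write
 .format("delta")
 .mode("overwrite")
 .option("overwriteSchema", "true")
 .save("/mnt/datalake/silver/sales/"))
```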
The required qualifications for this role are:
- You have at least 3 years of experience as an Azure Data Engineer;
- You have in-depth technical expertise, such as:
- Expert knowledge of Azure Data Factory, Databricks and Azure Storage;
- Skills in SQL, Python and data modelling techniques;
- Familiarity with data formats such as Parquet and JSON;
- You have experience with AI/ML model management on Azure Databricks;
- You hold a Bachelor's degree in IT, Computer Science or a related field;
- You hold a Microsoft Certified: Azure Data Engineer Associate certification;
- You are fluent in both Dutch and English; French is a plus;
- Knowledge of Power BI is an extra asset;
- You have an analytical mind and an eye for detail;
- You work pragmatically and with a customer focus;
- You are a team player with excellent communication skills;
- You are passionate about technology and data-driven innovation.
The offer:
- You get a challenging and varied full-time role with a high degree of independence and responsibility;
- You become part of a healthy and growing organisation where employees come first and initiative is valued;
- We provide extensive training opportunities to keep you up to date with the latest techniques and tools;
- An attractive salary package supplemented with extra-legal benefits, including:
- An electric company car with charging card and charging station;
- Hospitalisation insurance;
- Eco vouchers and meal vouchers;
- Staff discounts and access to the Benefits At Work platform with numerous perks;
- You work from the head office in Herentals in a modern, traffic-jam-free environment with the option of working from home (2 days per week);
- The working week is 40 hours, including 12 paid ADV (reduced working time) days per year;
- You join a growing infrastructure and a top team where collegiality and knowledge sharing are central.
Data Engineer
Yesterday
Job Description
CDH Professionals’ client is looking for an experienced Freelance Data Engineer to join a project team in Hasselt. The consultant will play a key role in designing, developing, and maintaining data solutions, while ensuring that information is accurate, accessible, and valuable for decision-making.
Role Overview
As a Freelance Data Engineer, you will:
- Support the design and optimisation of data pipelines and workflows
- Work with SQL databases to ensure data accuracy, performance, and scalability
- Integrate and process data within Linux-based environments
- Build and maintain dashboards and reporting solutions using Power BI
- Collaborate with stakeholders to translate business needs into reliable technical solutions
Who You Are
You’re a hands-on data professional who thrives in fast-paced environments. You combine technical expertise with a problem-solving mindset, and you’re comfortable working independently as a freelancer while engaging with diverse teams.
Your Experience
- Proven experience as a Data Engineer in professional environments
- Strong skills in SQL (query optimisation, data modelling, performance tuning)
- Experience working with Linux-based systems
- Hands-on knowledge of Power BI for reporting and visualisation
- Ability to work collaboratively with technical and non-technical stakeholders
Nice to Have
- Experience with data warehousing or ETL tools
- Familiarity with cloud platforms (Azure, AWS, or GCP)
- Knowledge of Python or other scripting languages for data engineering tasks
Data Engineer
Posted 3 days ago
Job Description
Job Title: Data Engineer
Language: English, French or Dutch or German
Location: Brussels, Belgium
Duration: 8/09/2025 - 31/03/2026
Job Description:
- Within the corporate Digital & Data department, we aim to facilitate and drive our organization to leverage data.
- At Client, data is considered a major asset for the business to achieve its targets (strategic ambitions, innovation or transformation themes, objectives and regulatory compliance).
With our Data Platform consisting of:
- Cutting-edge Integration Toolset,
- Business Intelligence, Analytics and storage capabilities,
- A platform for the Management of Master Data,
- A Data Governance platform that encompasses Data Lineage and Data Quality.
With our Data Organization consisting of:
- Business Product Teams (supporting the business stakeholder initiatives, organized according to the Agile principles)
- Backbone/Platform Teams (supporting the data platform initiatives, organized according to the Agile principles)
- Transversal competencies Team (supporting the various profiles and technologies initiatives, organized according to the Agile principles (Spotify))
Our Data Organization never stands still: no status quo!
Accordingly, we challenge ourselves and the organization continuously.
We work in a fairly flat hierarchy, with our teammates spread across agile teams, and according to open-source principles.
We are looking for a Data Engineer who will primarily join one of our product teams. It is an opportunity to put your signature on our corporate Data organization and to contribute, indirectly, to better products for managing our customers, through higher-quality data and data services for our customers.
We are looking for Data Engineers able to create data pipelines, efficient storage structures and powerful materialized views in different analytical technologies, but also data exchange endpoints for our users. To some extent, you’ll also have to interact with aspects related to governance tools (glossary, modeling, lineage, or data quality).
As we are looking for data engineers for a couple of different teams, we are particularly interested in data engineers knowledgeable in the Azure stack, but also in PowerBI reporting or with some software engineering passion or experience.
What will your mission be, and for which competences will you need extensive hands-on experience (preferably more than 3 to 5 years) in 2 out of the 4 following categories:
- Usage of SQL Server (see the sketch after this list)
- Implementing data pipelines using Azure services such as Azure Databricks, but also to some extent Azure Data Factory, Azure Functions, Azure Stream/log Analytics, and Azure DevOps
- Implementing data pipelines or data enrichments with Python in a Databricks environment
- Open to learning and jumping on new technologies (the ones listed above, but also Redis, RabbitMQ, Neo4j, Apache Arrow, …)
- Able and willing to interact with business analysts and stakeholders to refine requirements and to present reusable and integrated solutions
- Able and willing to contribute to extensive testing of the solution, as well as to the reinforcement of DevOps principles within the team
- Able and willing to contribute to the writing and structuring of documentation
- Experience with Power BI
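As an illustration of the SQL Server and "materialized views" items above: in SQL Server a materialized view is an indexed view (a schema-bound view plus a unique clustered index). The following is a hedged sketch driven from Python with pyodbc; the connection string, tables and view names are hypothetical, and the usual SET options required for indexed views are assumed to be in effect on the session.

```python
# Hedged sketch: create an indexed ("materialized") view in SQL Server via pyodbc.
# Connection string, table and view names are hypothetical; the SET options that
# SQL Server requires for indexed views are assumed to be in effect.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sqlsrv;DATABASE=dwh;"
    "UID=data_eng;PWD=***;TrustServerCertificate=yes"
)
conn.autocommit = True
cur = conn.cursor()

# Schema-bound aggregate view (COUNT_BIG(*) is mandatory for indexed views).
cur.execute("""
CREATE VIEW dbo.v_daily_claims
WITH SCHEMABINDING
AS
SELECT claim_date,
       product_code,
       COUNT_BIG(*)      AS nb_claims,
       SUM(claim_amount) AS total_amount
FROM dbo.claims
GROUP BY claim_date, product_code
""")

# The unique clustered index is what materializes the view.
cur.execute("""
CREATE UNIQUE CLUSTERED INDEX ix_v_daily_claims
ON dbo.v_daily_claims (claim_date, product_code)
""")

conn.close()
```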
What you will NOT be doing
- You shall not act from an Ivory Technical Tower.
- You do not make decisions for the Business, but rather advise them with careful logic, showing your reasoning.
- You shall not deliver technical solutions, but rather product features.
Requirements:
- Able to challenge your interlocutors by leveraging your rational thinking and a no-nonsense philosophy.
- Continuously look at data in a transversal way (no siloes), across the entire Enterprise, to maximize coherence, reuse and adoption.
- Track record of driving results through continuous improvement
- Customer-oriented
- Iterative thinking
- Analytical approach to problem-solving
- Team player who embodies respect, open-mindedness, daring, challenge, innovation and one team/one voice for the Customer
- Product-oriented mindset
- Enjoy sharing ideas with the team (or with other teams) and contribute to the product success
- Language skills: fluent in English (must have) AND fluent in German/French or Dutch (soft requirement)
- Good communication skills
Data Engineer
Posted 3 days ago
Job Description
Join our client and help them build their data backbone.
You will design, develop, and optimize scalable data solutions supporting innovation, regulatory compliance, and operational excellence.
What you will do
- Build and optimize data pipelines using Azure services (Databricks, Data Factory, Functions, Stream Analytics).
- Develop efficient storage structures, materialized views, and exchange endpoints for business users.
- Contribute to data quality and governance (lineage, glossary, modeling).
- Collaborate with business analysts & stakeholders to refine requirements and deliver reusable solutions.
- Support DevOps practices: testing, CI/CD, documentation, continuous improvement.
- Create and enhance Power BI reports for business insights.
What you bring
- 3–5+ years’ hands-on experience in at least two of these:
- SQL Server
- Azure Databricks / Data Factory / Functions / DevOps
- Python for data pipelines
- Power BI
- Strong analytical mindset and ability to challenge requirements constructively.
- Product-oriented, iterative approach, focus on delivering business value.
- Team player with good communication skills in English.
- Open to learning new technologies (e.g. Redis, RabbitMQ, Neo4j, Apache Arrow).
Interested? Apply immediately!
myNEBIRU: Not into this role, but interested in what NEBIRU does? That’s totally fine.
Visit to see how we can support you - even outside our client missions. Let’s build the bridge to your next step, together.
Data Engineer
Posted 3 days ago
Job Description
I am representing one of the biggest energy companies in Belgium. They are looking to expand their Data Engineering team for an ongoing project, and I think you'd be a very good fit for what they're looking for.
They have a 100% market share for electricity and gas in the Flemish region, making them one of the leading energy companies in the country.
Role - Kafka Streams Data Engineer
Responsibilities
- Designing the solution
- Working in a scrum team
- Delivering the solution
- Taking charge of the design
Experience Required
- Kafka Streams / Apache Kafka (see the sketch after this list)
- Kubernetes
- Data Engineering
- Good Leading and Guidance Skills
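Kafka Streams itself is a JVM library; purely as a language-neutral illustration of the consume-transform-produce pattern this role centres on, here is a hedged Python sketch using confluent-kafka. The broker address, topic names and transformation logic are hypothetical.

```python
# Hedged sketch of a consume -> transform -> produce loop with confluent-kafka.
# (Kafka Streams itself is a JVM library; this only illustrates the pattern.)
# Broker, topics and enrichment logic are hypothetical.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "meter-enrichment",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "broker:9092"})

consumer.subscribe(["raw-meter-readings"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        reading = json.loads(msg.value())
        # Enrich: convert Wh to kWh and tag the record (hypothetical logic).
        enriched = {**reading, "kwh": reading["wh"] / 1000.0, "source": "smart-meter"}
        producer.produce("enriched-meter-readings", value=json.dumps(enriched).encode())
        producer.poll(0)  # serve delivery callbacks
finally:
    producer.flush()
    consumer.close()
```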
Next week I have a meeting with the company regarding the CVs they would like to review. If this is an opportunity you're interested in, I'd be more than happy to discuss it further with you.
Data Engineer
Posted 3 days ago
Job Description
Data Engineer (Databricks) - Freelance Contract
For a client of ours in the energy sector, we are seeking a skilled Data Engineer with hands-on experience in Databricks to design, build, and optimize scalable data pipelines in a modern data platform environment.
Key Responsibilities
- Collaborate on Business Intelligence, Big Data, and AI projects in the Azure Cloud.
- Build scalable data platforms for clients across diverse industries.
- Design robust ETL pipelines to extract, transform, and load data into central platforms.
- Focus on data quality and reliable processing (see the sketch after this list).
- Contribute to a culture of innovation and knowledge sharing.
- Grow your skills through Microsoft and Databricks certifications.
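As a minimal illustration of the data-quality item above, the following is a hedged PySpark sketch that runs a few basic checks on a Delta table in Databricks before it is promoted downstream; the table name, key columns and thresholds are assumptions, and `spark` is the session Databricks provides.

```python
# Hedged sketch: basic data-quality checks on a Delta table in Databricks.
# Table name, key columns and thresholds are assumptions; `spark` is the
# session provided by the Databricks runtime.
from pyspark.sql.functions import col

df = spark.read.table("energy.silver_meter_readings")  # hypothetical Delta table

total = df.count()
null_keys = df.filter(col("meter_id").isNull()).count()
duplicates = total - df.dropDuplicates(["meter_id", "reading_ts"]).count()

checks = {
    "null_meter_id_ratio": null_keys / max(total, 1),
    "duplicate_rows": duplicates,
}
print(checks)

# Fail the run if thresholds are breached (thresholds are assumptions).
assert checks["null_meter_id_ratio"] < 0.01, "too many readings without a meter_id"
assert checks["duplicate_rows"] == 0, "duplicate readings detected"
```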
Required Qualifications
- A genuine passion for data-driven solutions and a strong interest in cloud services.
- Hands-on experience as a Databricks Data Engineer with a focus on Azure Cloud technologies.
- Familiarity with several Azure services such as Data Lake, Data Factory, Azure Databricks, Azure SQL Server, Azure Synapse, Azure Data Explorer, Azure DevOps, etc.
- Proficiency in Python, SQL, and/or Scala.
- A flexible, proactive, and team-oriented mindset.
- Preferably, prior experience working in a consulting environment.
- Strong communication skills in English, with a customer-focused and detail-oriented approach.
Offer:
Location: Brussels
Onsite Requirements: 50%
Contract Type: Freelance
Duration: 12 Months
Daily Rate: Up to €750 (Dependent on Experience)
Languages: English