Data Engineer
Role: Data Engineer (Python, PySpark, SQL)
Day rate: 475 - 520 per day (Inside IR35)
Contract: 6 months initial

We are currently recruiting for a Data Engineer to join a team on a Business Data Service Project, a Data Warehouse replacement and report simplification project. You will be responsible for ensuring that all data products and solutions created in the business insights ecosystem are fit for purpose, resilient, robust and reliable. You will play a pivotal role in building, testing and deploying Data Warehouse solutions, covering the full lifecycle of planning, ingestion, transformation, consolidation and aggregation of data from source to target in the Data Warehouse environment.

Skills and experience required:
- Strong experience developing ETL/ELT pipelines using PySpark and Python (see the sketch below)
- Hands-on experience with Microsoft Fabric lakehouse or similar cloud data platforms (Azure Synapse Analytics, Databricks)
- Proficiency working with Jupyter/Fabric notebooks for data engineering workflows
- Solid understanding of data lakehouse architecture patterns and the medallion architecture
- Experience working with Delta Lake or similar lakehouse storage formats
- Strong SQL skills for data manipulation, transformation and quality validation

This role requires 2-3 days per month onsite in Dudley, West Midlands; please consider this when applying. If you are interested in the role and would like to apply, please click on the link for immediate consideration.
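
As a rough illustration of the kind of pipeline step described above, the sketch below shows a minimal PySpark bronze-to-silver (medallion) transformation with a simple quality check before publishing to a Delta table. The table names, columns and environment (a Fabric lakehouse or Databricks workspace with Delta Lake available) are assumptions made for illustration, not details taken from the role.

# Minimal illustrative sketch: one bronze-to-silver medallion step.
# Table names and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_sketch").getOrCreate()

# Ingest: read the raw (bronze) Delta table -- hypothetical name.
bronze_df = spark.read.table("bronze.sales_raw")

# Transform: standardise types, drop rows without a key, deduplicate.
silver_df = (
    bronze_df
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
)

# Quality validation: fail the load if any amounts did not parse.
bad_rows = silver_df.filter(F.col("amount").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows have an unparseable amount; aborting load")

# Load: publish the cleaned data to the silver Delta table.
silver_df.write.format("delta").mode("overwrite").saveAsTable("silver.sales_clean")

In a Fabric or Databricks notebook the SparkSession is normally pre-created, so the builder call above would usually be unnecessary.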