What you will do
- Build and maintain scalable, distributed, fault-tolerant data pipelines using Microsoft Fabric;
- Develop and manage lakehouse layers and Delta Lake workflows for data processing;
- Collaborate with stakeholders across data engineering, compliance, and business teams;
- Design and implement pipelines to acquire, normalize, transform, and release large volumes of financial data;
- Design and implement bitemporal data models for regulatory-grade time-series datasets (illustrated in the sketch after this list);
- Build and maintain testing frameworks for data pipelines and transformation logic;
- Own end-to-end solutions including ingestion pipelines, QA workflows, correction management, and audit trails;
- Contribute to shared platform services in a collaborative environment;
- Support implementation of AI solutions including data ingestion, anomaly detection, and semantic search.
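For candidates unfamiliar with the bitemporal pattern mentioned above, here is a minimal, illustrative sketch in Python/pandas. The column names (`valid_from`/`valid_to` for business time, `system_from`/`system_to` for system time), the ACME ticker, and the prices are all invented for illustration; they are not the team's actual schema or data.

```python
from datetime import datetime

import pandas as pd

# Illustrative bitemporal price table. Every row carries two time dimensions:
# valid_from/valid_to   -> when the fact is true in the real world (business time)
# system_from/system_to -> when the system believed the row (system time)
OPEN_ENDED = pd.Timestamp.max  # sentinel for "still current"

prices = pd.DataFrame(
    {
        "ticker": ["ACME", "ACME", "ACME"],
        "close": [10.0, 10.5, 10.4],  # 10.5 was a bad tick, later corrected to 10.4
        "valid_from": pd.to_datetime(["2024-01-02", "2024-01-03", "2024-01-03"]),
        "valid_to": [pd.Timestamp("2024-01-03"), OPEN_ENDED, OPEN_ENDED],
        "system_from": pd.to_datetime(["2024-01-02", "2024-01-03", "2024-01-05"]),
        "system_to": [OPEN_ENDED, pd.Timestamp("2024-01-05"), OPEN_ENDED],
    }
)

def as_of(df: pd.DataFrame, valid_at: datetime, known_at: datetime) -> pd.DataFrame:
    """Rows valid at `valid_at`, as the system knew them at `known_at`."""
    mask = (
        (df["valid_from"] <= valid_at) & (df["valid_to"] > valid_at)
        & (df["system_from"] <= known_at) & (df["system_to"] > known_at)
    )
    return df[mask]

# What did we believe on 2024-01-04 about the Jan 3 close? -> 10.5 (pre-correction)
print(as_of(prices, datetime(2024, 1, 3, 12), datetime(2024, 1, 4)))
# What do we believe on 2024-01-06? -> 10.4, with the bad tick still auditable
print(as_of(prices, datetime(2024, 1, 3, 12), datetime(2024, 1, 6)))
```

The two time axes make corrections non-destructive: the bad tick remains queryable as of the date it was believed, which is the audit-trail behavior regulatory-grade datasets require.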
Must haves
- 6–8 years of experience in data engineering;
- Proficiency in Python for data pipelines, transformation logic, and automation;
- Proficiency in SQL including window functions, partitioning, and time-series queries;
- Hands-on experience with Microsoft Fabric (OneLake, Data Factory, Lakehouse, Warehouse);
- Working knowledge of Delta Lake including incremental merges and Change Data Feed (see the sketch after this list);
- Experience with AI-assisted development tools such as GitHub Copilot or similar;
- Experience with Git version control and collaboration workflows;
- Familiarity with REST APIs for integrations;
- Familiarity with Azure technologies (Azure Data Factory, Azure SQL, Azure Key Vault, RBAC);
- Understanding of financial data concepts related to equities and other asset classes;
- Upper-intermediate English level.
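To give a flavor of the Delta Lake skills listed above, here is a sketch of an incremental upsert via MERGE followed by a Change Data Feed read, using the open-source delta-spark API. The table path, column names, and data are hypothetical; on Microsoft Fabric you would typically target a Lakehouse table and the Spark session would come preconfigured for Delta.

```python
from delta import DeltaTable, configure_spark_with_delta_pip
from pyspark.sql import SparkSession

# Local Delta-enabled Spark session; on Fabric this setup is not needed.
builder = (
    SparkSession.builder.appName("incremental-merge-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

PATH = "/tmp/prices_delta"  # hypothetical path; a Fabric Lakehouse table in practice

# Seed the target table and enable the Change Data Feed (one-time setup).
seed = spark.createDataFrame([("ACME", "2024-01-02", 10.0)],
                             ["ticker", "trade_date", "close"])
seed.write.format("delta").mode("overwrite").save(PATH)
spark.sql(f"ALTER TABLE delta.`{PATH}` "
          "SET TBLPROPERTIES (delta.enableChangeDataFeed = true)")

# Incremental merge: upsert an incoming batch keyed by (ticker, trade_date).
updates = spark.createDataFrame([("ACME", "2024-01-02", 10.1),
                                 ("ACME", "2024-01-03", 10.5)],
                                ["ticker", "trade_date", "close"])
(
    DeltaTable.forPath(spark, PATH).alias("t")
    .merge(updates.alias("s"), "t.ticker = s.ticker AND t.trade_date = s.trade_date")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Read the row-level changes the merge produced via the Change Data Feed.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 2)  # version 2 = the merge commit in this toy history
    .load(PATH)
)
changes.show()
```

Downstream consumers can poll the change feed instead of re-scanning the full table, which is what makes incremental pipelines over large financial datasets tractable.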
Nice to haves
- Knowledge of data libraries such as pandas or PySpark (see the sketch after this list);
- Experience with columnar storage and time-series analytics tools such as ClickHouse;
- Familiarity with Microsoft Purview for data governance;
- Understanding of bitemporal data modeling concepts;
- Knowledge of financial reference data such as equities, fixed income, or corporate actions;
- Experience with CI/CD pipelines and automated deployments;
- Exposure to LLMs and Agentic AI for data-related use cases.
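To make the overlap between the SQL window functions in the must-haves and the pandas/PySpark item above concrete, here is a small illustrative sketch that reproduces two common window functions (LAG and a rolling average, partitioned by ticker) in pandas. The tickers and prices are invented.

```python
import pandas as pd

# Illustrative daily closes for two tickers (made-up data).
df = pd.DataFrame(
    {
        "ticker": ["ACME"] * 4 + ["GLOBX"] * 4,
        "trade_date": pd.to_datetime(
            ["2024-01-02", "2024-01-03", "2024-01-04", "2024-01-05"] * 2
        ),
        "close": [10.0, 10.5, 10.4, 10.8, 55.0, 54.5, 56.0, 55.5],
    }
).sort_values(["ticker", "trade_date"])

# LAG(close) OVER (PARTITION BY ticker ORDER BY trade_date)
df["prev_close"] = df.groupby("ticker")["close"].shift(1)
df["daily_return"] = df["close"] / df["prev_close"] - 1

# AVG(close) OVER (PARTITION BY ticker ORDER BY trade_date ROWS 2 PRECEDING)
df["close_3d_avg"] = (
    df.groupby("ticker")["close"]
    .rolling(3, min_periods=1)
    .mean()
    .reset_index(level=0, drop=True)  # drop the group key to realign with df
)

print(df)
```

The same logic ports almost one-to-one to PySpark's Window API, which matters once the data outgrows a single machine.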
About the role
As a Data Engineer, you will build and scale robust, cloud-native data pipelines that power large-scale financial data processing and analytics. Working with Python, SQL, and Microsoft Fabric on Azure, you’ll design distributed, fault-tolerant systems, implement advanced data models, and ensure high data quality and compliance. This role offers strong ownership, cross-functional collaboration, and the opportunity to integrate AI-driven solutions into modern data platforms.
The benefits of joining us
Professional growth
Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps
Competitive compensation
We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities
A selection of exciting projects
Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands
Flextime
Tailor your schedule for an optimal work-life balance by choosing to work from home or at the office, whichever makes you the happiest and most productive.
Your AgileEngine journey starts here
- Tell us about yourself (2 min)
- Confirm requirements (2 sec)
- Pass a short test (30–60 min)
- Record a short video (5 min) → Introduce yourself on video instead of waiting for an interview
- Live interview: ace the technical interview with our team → Schedule a call yourself right away after your video is reviewed
- Live interview: final interview with your team → Get to know the team you will be working with
- Get an offer, as quick as possible