What you will do
- Design, develop, and maintain scalable and reliable ETL/ELT pipelines;
- Build and optimize data workflows using tools such as dbt and Airflow;
- Develop and maintain data solutions leveraging technologies such as Snowflake, Redshift, Kafka, Spark, and Hive;
- Work with both SQL and NoSQL databases to support data ingestion, transformation, and analytics needs;
- Monitor, troubleshoot, and improve data pipeline performance, scalability, and reliability;
- Collaborate closely with cross-functional teams including Engineering, Product, Analytics, and Business stakeholders;
- Participate in Agile ceremonies and contribute to sprint planning and delivery commitments;
- Manage multiple priorities effectively while ensuring timely and high-quality deliverables;
- Contribute to data architecture discussions and help drive engineering best practices.
Must haves
- 5+ years of professional experience in Data Engineering or Data Warehousing roles;
- Strong programming skills in Python;
- 5+ years of experience building data pipelines using ETL/ELT tools, with dbt preferred;
- 3+ years of hands-on experience with big data technologies such as Snowflake, Redshift, Kafka, Spark, or Hive;
- Extensive experience working with both SQL and NoSQL databases;
- Strong expertise with workflow orchestration and pipeline management tools such as Airflow;
- Strong understanding of scalable data architecture and engineering best practices;
- Excellent communication and stakeholder management skills;
- Proven ability to manage competing priorities and deliver within agreed sprint commitments;
- Comfortable working in highly collaborative, cross-functional Agile teams;
- Self-starter mindset with strong analytical, problem-solving, and critical-thinking abilities;
- Master’s degree in Computer Science, Mathematics, Statistics, or a related technical field preferred; a Bachelor’s degree with relevant experience is also accepted;
- Upper-intermediate English level.
Nice to haves
- Experience working in cloud-based data environments;
- Familiarity with modern data warehousing and distributed systems concepts;
- Exposure to data governance, observability, or data quality frameworks.
About the role
We are looking for a Senior Data Engineer to design and maintain scalable data pipelines and platforms supporting Accounting, Finance, Payments, and Tax functions within a large-scale financial systems platform. You will build ETL/ELT workflows using Python, dbt, and Airflow, working across Snowflake, Redshift, Kafka, and Spark to ensure data availability and reliability for critical business operations. The role operates in a cross-functional Agile environment with direct stakeholder engagement across Engineering, Product, and Analytics teams.
The benefits of joining us
Professional growth
Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
Competitive compensation
We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
A selection of exciting projects
Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands.
Flextime
Tailor your schedule for an optimal work-life balance: choose between working from home and going to the office, whatever makes you the happiest and most productive.
Your AgileEngine journey starts here
- Tell us about yourself (2 min);
- Confirm requirements (2 sec);
- Pass a short test (30–60 min);
- Record a short video (5 min): introduce yourself on video instead of waiting for an interview;
- Live interview: ace the technical interview with our team; you can schedule the call yourself right after your video is reviewed;
- Final live interview: get to know the team you will be working with;
- Get an offer, as quickly as possible.
