What you will do
- Build and support ETL pipelines;
- Monitor data pipelines, identify bottlenecks, and optimize data processing and storage for performance and cost-effectiveness;
- Collaborate effectively with cross-functional teams, including data scientists, analysts, software engineers, and business stakeholders;
- Work with Terraform to build AWS infrastructure;
- Analyze data sources and build cloud data warehouse and data lake solutions.
Must haves
- 3+ years of professional experience with Python;
- 3+ years of professional experience in a Data Engineering role;
- Proficiency in programming languages commonly used in data engineering, such as Python and SQL (and optionally Scala), for working with data processing frameworks like Spark and libraries like Pandas;
- Proficiency in designing, deploying, and managing data pipelines using Apache Airflow for workflow orchestration and scheduling;
- Ability to design, develop, and optimize ETL processes to move and transform data from various sources into the data warehouse, ensuring data quality, reliability, and efficiency;
- Knowledge of big data technologies and frameworks such as Apache Spark for processing large volumes of data efficiently;
- Extensive hands-on experience with various AWS services relevant to data engineering, including but not limited to Amazon MWAA, Amazon S3, Amazon RDS, Amazon EMR, AWS Lambda, AWS Glue, Amazon Redshift, AWS Data Pipeline, and Amazon DynamoDB;
- Deep understanding and practical experience in building and optimizing cloud data warehousing solutions;
- Ability to monitor data pipelines, identify bottlenecks, and optimize data processing and storage for performance and cost-effectiveness;
- Excellent communication skills to collaborate effectively with cross-functional teams, including data scientists, analysts, software engineers, and business stakeholders;
- Bachelor’s degree in computer science, engineering, or another technical field, or equivalent experience;
- Upper-intermediate English level.
Nice to haves
- Familiarity with the fintech industry, understanding of financial data, regulatory requirements, and business processes specific to the domain;
- Documentation skills to document data pipelines, architecture designs, and best practices for knowledge sharing and future reference;
- GCP services relevant to data engineering;
- Snowflake;
- OpenSearch, Elasticsearch;
- Jupyter for data analysis;
- Bitbucket, Bamboo;
- Terraform.
AgileEngine is one of the Inc. 5000 fastest-growing companies in the US and a top-3 ranked dev shop according to Clutch. We create award-winning custom software solutions that help companies across 15+ industries change the lives of millions.
If you like a challenging environment where you’re working with the best and are encouraged to learn and experiment every day, there’s no better place — guaranteed! 🙂
The benefits of joining us
- Professional growth: accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps;
- Competitive compensation: we match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities;
- A selection of exciting projects: join projects with modern solutions and top-tier clients, including Fortune 500 enterprises and leading product brands;
- Flextime: tailor your schedule for an optimal work-life balance, with the option of working from home or going to the office, whatever makes you the happiest and most productive.
Your AgileEngine journey starts here
- Tell us about yourself (2 min)
- Confirm requirements (2 sec)
- Pass a short test (30–60 min)
- Record a short video (5 min): introduce yourself on video instead of waiting for an interview
- Live interview: ace the technical interview with our team; schedule a call yourself right away after your video is reviewed
- Live interview: final interview with your team and get to know the people you will be working with
- Get an offer, as quickly as possible