At Verto, we’re passionate about helping businesses in emerging markets reach the world. What first started life as an FX solution for trading the Nigerian Naira has become a market-leading platform, changing the way thousands of businesses transfer money in and out of emerging markets.
We believe that where you do business shouldn’t determine how successful you are or your ability to scale. Every day, millions of companies juggle long settlement periods, high transaction fees and difficulty accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to the easy payment and liquidity solutions that are already a given in developed markets.
We’re not alone in realising the opportunity and the need to solve for emerging markets. We’re backed by world-class investors including Y Combinator, Quona and MEVP; we power payments for some of the most disruptive start-ups in the world; and we’ve earned accolades from leading publications, including being voted ‘Fintech Start Up of the Year’ at the Fintech Awards London 2022.
Each year we process billions of dollars of payments and provide companies with solutions that help them save money, automate processes and grow, but we’re only just getting started.
We’re seeking a driven, results-oriented Senior Data Engineer who is excited to help build a best-in-class data platform. In this role, you will be expected to achieve key milestones such as improving our existing data warehouse, implementing a CI/CD framework, and enabling core technologies such as dbt and git. You will play a pivotal role in enabling long-term scalability and efficiency across all things data, leveraging your data engineering expertise to drive measurable impact.
In this role you will:
Conceptualise, maintain and improve the data architecture
Evaluate design and operational cost-benefit trade-offs within systems
Design, build and launch collections of data models that support multiple use cases across different products or domains
Solve our most challenging data integration problems, optimising ELT pipelines and frameworks that source from structured and unstructured data
Implement a CI/CD framework
Create and contribute to frameworks that improve accuracy, efficiency and overall data integrity
Design and implement best-in-class schemas
Evaluate and implement additional data tooling where needed
Define and manage refresh schedules, load balancing and SLAs for all data sets in your areas of ownership
Collaborate with engineers, product managers and data analysts to understand data needs, identify and resolve issues, and help set best practices for efficient data capture
Determine and implement the data governance model and processes within your areas of ownership (GDPR, PII, etc.)
You’ll be responsible for:
Taking ownership of the data engineering process, from project scoping, design and communication through execution and conclusion
Supporting and strengthening data infrastructure together with the data and engineering teams
Supporting the organisation in understanding the importance of data and advocating for best-in-class infrastructure
Mentoring and educating team members on best-in-class data engineering practices
Prioritising workload effectively
Supporting quarterly and half-year planning from a data engineering perspective
Skills and Qualifications:
University degree, ideally in data engineering, software engineering, computer science or another numerate discipline
4+ years of data engineering experience or equivalent
Expert experience building data warehouses and ETL pipelines
Expert experience with SQL, Python, git and dbt (including query efficiency and optimisation)
Expert experience with the AWS stack (Athena, Redshift, Glue, CloudWatch, etc.); a formal qualification is preferred but not mandatory
Significant experience with automation and integration tools (Zapier, Fivetran, Airflow, Astronomer or similar)
Significant experience with infrastructure-as-code (IaC) tools (Terraform, Docker, Kubernetes or similar)
Significant experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI or similar)
Preferred Experience:
Experience with real-time data pipelines (AWS Kinesis, Kafka, Spark)
Experience with observability tools (Metaplane, Monte Carlo, Datadog or similar)
Experience within FinTech, finance or FX