About BetaNXT:
BetaNXT is a leading provider of frictionless wealth management infrastructure, real-time data solutions, and an enhanced advisor experience. We invest in platforms, products, and partnerships to accelerate growth for the ecosystem we serve. Our connective approach empowers our clients to deliver a comprehensive, end-to-end advisor and investor experience.
BetaNXT is a premier provider of technology, data, and operations as services to a rich client base of wealth managers, institutional wealth firms, and digital brokers. It comprises three industry-leading businesses that, combined, provide end-to-end solutions across the investment lifecycle.
Overview of the Lead Associate:
BETA is a self-clearing securities processing solution for wealth management firms. Built on years of expert knowledge and hands-on experience, BETA provides critical back-office controls including corporate action management, middle office exception management, and front office account management and trading tools for advisors.
We are seeking a Lead Associate with hands-on experience building production-level, cloud-hosted data-as-a-service applications running on top of Snowflake. In this role, you will primarily work on data warehousing, including data modeling, ETL, and reverse ETL processes, with hands-on use of Data Build Tool (DBT) or a comparable build tool. You are required to be hands-on with Snowflake, DBT, and Terraform IaC deployments. Awareness and basic knowledge of current data delivery tools such as Kafka, Airflow, and Kinesis will be an added advantage. On any given day, you might be architecting data ingestion and warehousing solutions or delivering data to our customers and internal applications through queries and functions. You will follow best practices in code and design, including secure coding practices, and review data architecture, modeling, and queries from development through to completion in a live production environment.
Reporting to the Manager, Technology Development, this role is part of a Scrum team developing products within the Data Delivery team. The role involves building customer-facing, cloud-hosted data-as-a-service products and requires heavy collaboration within and across many teams. We work on a hybrid model from our Bangalore campus.
Duties and Responsibilities of the Lead Associate:
- Architect and implement data ingestion and warehousing solutions.
- Hands-on, day-to-day work in Snowflake and DBT.
- Develop and maintain data models, ETL, and reverse ETL processes.
- Write and optimize SQL queries, functions, and stored procedures.
- Ensure best practices in code and design, including secure coding practices.
- Review and improve data architecture, modeling, and queries.
- Deliver data solutions to customers and internal applications.
- Collaborate with cross-functional teams to meet project goals.
- Design, develop, and maintain streaming data pipelines using Apache Kafka, with Confluent Cloud as the preferred platform for near-real-time data processing.
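To illustrate the core Kafka ideas behind these streaming pipelines — topics split into partitions, key-based routing, and consumer groups committing offsets — here is a stdlib-only Python sketch. It is a toy in-memory model for intuition only, not production code (real Kafka uses the murmur2 hash and a broker; md5 stands in here):

```python
import hashlib
from collections import defaultdict

class InMemoryTopic:
    """Toy model of a Kafka topic: one append-only log per partition."""
    def __init__(self, num_partitions: int):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key: str, value: dict) -> int:
        # Keyed records are routed by hashing the key modulo the partition
        # count, so the same key always lands on the same partition,
        # preserving per-key ordering.
        p = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.partitions)
        self.partitions[p].append(value)
        return p

class ConsumerGroup:
    """Tracks a committed offset per partition, as a consumer group does."""
    def __init__(self, topic: InMemoryTopic):
        self.topic = topic
        self.offsets = defaultdict(int)  # partition -> next offset to read

    def poll(self, partition: int) -> list:
        log = self.topic.partitions[partition]
        records = log[self.offsets[partition]:]
        self.offsets[partition] = len(log)  # "commit": resume here next poll
        return records

topic = InMemoryTopic(num_partitions=3)
p = topic.produce("acct-42", {"event": "trade"})
topic.produce("acct-42", {"event": "settle"})  # same key -> same partition
group = ConsumerGroup(topic)
```

Because offsets are tracked per consumer group, a second group polling the same partition would independently re-read the log from offset 0.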
Skills and Experience of the Lead Associate:
- 5+ years of professional experience in a data-related software engineering role.
- Bachelor’s in Computer Science, Software Engineering, or equivalent.
- Hands-on coding, analytical, and data-querying skills.
- Experience with all phases of the software development life cycle and Agile software development.
- Solid design knowledge to create extensible, reusable data deliveries that meet architectural objectives and security assessments.
- Willingness to learn new technologies and quickly get up to speed.
- Ability to work autonomously and in teams.
- Excitement for new technologies and challenges of scale.
- Proficiency in:
- Data Build Tool (DBT) or any analogous tool
- Data warehousing, including data modeling, ETL, and reverse ETL
- SQL queries, functions, and stored procedures
- Various data delivery methodologies (files, streams, APIs)
- Working experience with Kafka concepts and components, including:
- Topics, partitions, offsets, and consumer groups
- Producers and consumers
- Schema evolution using Schema Registry (Avro/JSON/Protobuf)
- Understanding of data-related application SDKs
- Strong hands-on experience with the AWS services required for data translation.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform is a plus
- Knowledge of programming languages such as Python and PowerShell preferred
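As a minimal illustration of the reverse-ETL pattern listed above — pushing curated warehouse data back into operational tools — here is a hedged Python sketch. The row list stands in for a Snowflake query result and the destination dict for a CRM or ops-tool API; both are hypothetical stand-ins, not any specific product's interface:

```python
def reverse_etl_sync(warehouse_rows: list, destination: dict, key: str = "account_id") -> int:
    """Upsert warehouse rows into an operational store, keyed by primary key.

    Re-running the sync with the same rows is idempotent: each row is
    inserted or overwritten in place, so the destination converges to the
    same state rather than accumulating duplicates.
    """
    synced = 0
    for row in warehouse_rows:
        destination[row[key]] = row  # upsert: insert new key or overwrite
        synced += 1
    return synced

# Stand-in for `SELECT account_id, aum, tier FROM analytics.accounts`
rows = [
    {"account_id": "A1", "aum": 125000, "tier": "gold"},
    {"account_id": "A2", "aum": 4000, "tier": "bronze"},
]
crm = {}  # stand-in for the downstream operational system
reverse_etl_sync(rows, crm)  # -> 2 rows synced
```

In a real pipeline the loop body would call the destination system's bulk-upsert API, typically filtered by a high-water mark (e.g. an `updated_at` column) so only changed rows are pushed.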