Join the C3 Data Warehouse team with one of our Investment Banking clients! We are building a next-generation data platform that centralizes data from various technology systems to empower advanced reporting and analytics for the Technology Risk functions.
Role Summary :
As a Python/PySpark Data Platform Engineer, you'll play a vital role in developing a unified data pipeline framework using cutting-edge technologies like Airflow, DBT, Spark, and Snowflake. You'll collaborate with a dynamic team of data engineers, analysts, and developers to implement and optimize our data solutions in a hybrid on-premises and cloud environment.
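To give a flavor of the stack described above, here is a minimal, illustrative sketch of an Airflow DAG orchestrating a DBT build and test cycle. All identifiers, paths, and the schedule are hypothetical and not our production framework; it assumes Airflow 2.4+ and a DBT project already configured with a Snowflake profile.

```python
# Illustrative sketch only: a minimal Airflow DAG that triggers a DBT run
# followed by DBT tests. The dag_id, project path, and schedule below are
# hypothetical placeholders, not our actual framework.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_dbt_pipeline",       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # requires Airflow 2.4+
    catchup=False,
) as dag:
    # Build the DBT models that load curated tables into Snowflake.
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/example_project",
    )

    # Validate the freshly built tables with DBT's built-in tests.
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/example_project",
    )

    # Only test after the build succeeds.
    run_models >> test_models
```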
Key Responsibilities :
- Develop components of the unified data pipeline framework in Python
- Establish best practices for Airflow, DBT, and Snowflake
- Assist in testing and deploying the framework using CI/CD tools
- Monitor and optimize query and data load performance
- Collaborate with QA and UAT teams to address issues and provide resolutions
What We're Looking For :
- Minimum 7 years of data development experience in complex data environments with large data volumes
- Proficiency in Python (Pandas, NumPy, PySpark), SQL/PLSQL, Snowflake, Apache Spark, and Airflow; familiarity with DBT is a plus
- Hands-on experience integrating structured, semi-structured, and unstructured data
- Strong analytical and problem-solving skills, excellent communication, and the ability to work across diverse teams
- Bachelor's degree in Computer Science, Software Engineering, or a related field
Why Join Us?
- Opportunity to work with state-of-the-art technology
- Be part of a team that's driving innovation in data engineering
- Collaborate with industry experts and build impactful solutions
Ready to Apply?
If you're passionate about data engineering and thrive in a fast-paced, collaborative environment, we'd love to hear from you!