Senior Data Engineer

RBC - Royal Bank
Toronto, ON
$120K a year (estimated)
Full-time

Job Description

What is the Opportunity?

The GRM Portfolio Risk Oversight group provides independent and effective on-site monitoring, control and communication on the nature and extent of material risks for Business Financial Services (BFS).

We are looking for someone who can deliver custom risk analytics and insights to business partners by building and operating a modern data stack for reporting and analytics.

As a Senior Data Engineer, you will contribute to the overall success of the BFS risk oversight strategies and objectives.

You are accountable for architecting, implementing and managing data models and data pipelines, and for developing, maintaining and scaling the cloud data platform and services vital to the continued growth of the broader team.

Portfolio Risk Oversight (PRO) is an extremely dynamic team, capable of proactively uncovering insights and risk trends by applying data science methodologies.

To succeed you will have to develop and maintain strong ties with business leaders to support decision making through a combination of custom analytics and the evolution of the PRO data infrastructure.

The PRO team leverages a modern data stack, which requires constant refinement and enhancement to provide business users with relevant and timely self-serve analytics.

You will ensure that our team is successful by leveraging the data in the best possible way. You will be a hands-on analytics practitioner and consultant to ensure that we track, maintain and analyze data in a way that leads us to optimize operational processes and make better business decisions.

What will you do?

Support the PRO Business Analytics & Innovation leadership in the design, coordination, execution and monitoring of key transformational initiatives

Provide architecture guidance, performance tuning and big data engineering expertise for use cases that require capabilities in Federated Queries, Data Ingestion and Distributed Computing.

Build and support data engineering pipelines in Python using PySpark and Apache Airflow

Manage and optimize an inventory of risk data sources. Responsible for compiling, aggregating, testing and validating different data repositories/sources for the risk dashboards, ensuring completeness, accuracy, timeliness and integrity of information

Develop data products by writing code that is modular, reliable, maintainable and replicable, leveraging open-source data science libraries (pandas, SQLAlchemy, scikit-learn, Airflow, PyArrow, NLTK, spaCy)
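For illustration only (not part of the posting), a minimal sketch of the kind of modular, testable data-product code the bullet above describes, using only the Python standard library; the record fields and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    # Hypothetical risk record: an obligor and its outstanding balance
    obligor_id: str
    balance: float

def validate(records):
    """Keep only complete, non-negative records (a completeness/accuracy check)."""
    return [r for r in records if r.obligor_id and r.balance >= 0]

def aggregate_by_obligor(records):
    """Aggregate validated balances per obligor, e.g. to feed a risk dashboard."""
    totals = {}
    for r in validate(records):
        totals[r.obligor_id] = totals.get(r.obligor_id, 0.0) + r.balance
    return totals

raw = [Exposure("A1", 100.0), Exposure("A1", 50.0),
       Exposure("B2", 200.0), Exposure("", 10.0)]   # last record is incomplete
print(aggregate_by_obligor(raw))  # {'A1': 150.0, 'B2': 200.0}
```

Keeping validation and aggregation as separate small functions is what makes such code replicable and unit-testable.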

Identify, design, and implement internal process improvements: automating manual processes, orchestrating and optimizing data delivery across on-premises data platforms, redesigning data models for greater scalability, etc.

Build on and optimize the existing foundation of data pipeline architectures required for optimal extraction, transformation, and loading of data from a wide variety of data sources using various methods, programming languages, and software technologies, adjusting as needed to optimize effectiveness.

Coordinate Linux VM maintenance, maintain Docker containers, and perform basic shell scripting

What do you need to succeed?

Must-have:

Bachelor's degree in Computer Science, Information Technology, or a related field

Experience working with containers and orchestration tools (e.g. Docker, Kubernetes, Apache Airflow) and with CI/CD pipelines

Proficiency working on Linux (RHEL 8 and 9)

Experience with environment technologies such as Hadoop/Spark, virtual servers, SQL and NoSQL databases (Oracle, DB2), and SAN/NAS storage

Expertise in SQL and coding, experience with a broad array of development tools and platforms, Linux/UNIX shell scripting, and analytics data management tools/languages such as Python (required) and Spark

Experience developing and maintaining reporting environments across multiple platforms (e.g. ETL, reporting data layer, etc.), with exposure to programming and data environments (e.g. SQL, Hadoop, Python, etc.)

Strong core competency in SQL, with experience writing complex SQL queries to extract and integrate data from multiple database sources
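As a hypothetical illustration of the multi-source SQL work this requirement describes (table names and columns are invented, and an in-memory SQLite database stands in for the production sources):

```python
import sqlite3

# In-memory SQLite stands in for the production databases (Oracle, DB2, etc.).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loans   (loan_id INTEGER, obligor TEXT, balance REAL);
    CREATE TABLE ratings (obligor TEXT, grade TEXT);
    INSERT INTO loans   VALUES (1, 'A1', 100.0), (2, 'A1', 50.0), (3, 'B2', 200.0);
    INSERT INTO ratings VALUES ('A1', 'BBB'), ('B2', 'AA');
""")

# Integrate two sources: total exposure per rating grade.
rows = conn.execute("""
    SELECT r.grade, SUM(l.balance) AS total_balance
    FROM loans AS l
    JOIN ratings AS r ON r.obligor = l.obligor
    GROUP BY r.grade
    ORDER BY r.grade
""").fetchall()
print(rows)  # [('AA', 200.0), ('BBB', 150.0)]
```

The same join-and-aggregate pattern scales to the cross-repository extracts that feed risk dashboards.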

Nice-to-have:

Knowledge of Credit Risk Modelling techniques

Experience working with Cloud Technologies

Experience working with both structured and unstructured data

Extensive hands-on experience in designing, developing and maintaining software frameworks using Python, Spark, and Shell Scripts.

What's in it for you?

We thrive on the challenge to be our best, progressive thinking to keep growing, and working together to deliver trusted advice to help our clients thrive and communities prosper.

We care about each other, reaching our potential, making a difference to our communities, and achieving success that is mutual.

A comprehensive Total Rewards Program including bonuses and flexible benefits, competitive compensation, commissions, and stock where applicable

Leaders who support your development through coaching and managing opportunities

Ability to make a difference and lasting impact

Work in a dynamic, collaborative, progressive, and high-performing team

A world-class training program in financial services

Flexible work/life balance options

Opportunities to do challenging work

Job Skills

Big Data Management, Cloud Computing, Database Development, Data Mining, Data Warehousing (DW), ETL Processing, Group Problem Solving, Quality Management, Requirements Analysis
