Talent.com

SQL DBA Jobs in Ajax, ON

Last updated: 6 days ago
25-199 - Data Engineer - Azure Databricks - 11 month contract - Oshawa - Hybrid remote

CorGTA · Oshawa, ON, Canada
Remote
Temporary
Data Engineer - Azure Databricks. Oshawa, ON (Hybrid - 3 days remote). We have a great new opportunity to support one of our Energy Sector clients in a contract capacity! Please see below for more in...
Last updated: 9 days ago
Finance Manager

Vaco by Highspring · Ajax, ON, CA
Permanent
Our client is a Real Estate development company. They are looking for a Finance Manager to join their team! Direct exposure to ownership through enhanced reporting and analytics. Work closely with a...
Last updated: 6 days ago
Senior Database Administrator

360insights · Whitby, Ontario
Full-time
Starting base salary: CAD $90,000 – 110,000. Our salary ranges are based on the role, level, and location. These ranges represent the typical starting salary and do not reflect the long-term earning po...
Last updated: 14 days ago
Remote Senior SQL Engineer - AI Trainer

SuperAnnotate · Pickering, Ontario, CA
Remote
Full-time
As a Senior SQL Engineer, you will work remotely on an hourly paid basis to review AI-generated SQL queries, database designs, and data-processing logic, as well as generate high-quality reference...
Last updated: 30+ days ago
Remote Senior SQL Engineer - AI Trainer

SuperAnnotate · Ajax, Ontario, CA
Remote
Full-time
As a Senior SQL Engineer, you will work remotely on an hourly paid basis to review AI-generated SQL queries, database designs, and data-processing logic, as well as generate high-quality reference...
Last updated: 30+ days ago
Remote Senior SQL Engineer - AI Trainer

SuperAnnotate · Oshawa, Ontario, CA
Remote
Full-time
As a Senior SQL Engineer, you will work remotely on an hourly paid basis to review AI-generated SQL queries, database designs, and data-processing logic, as well as generate high-quality reference...
Last updated: 30+ days ago
Database Administrator 5990

Foilcon · Oshawa, Ontario, Canada
Temporary
Creates physical database designs and operates and administers the database management system (DBMS), including database optimization, capacity planning, installation and migration, database design,...
Last updated: 30+ days ago
Senior Software Developer

Randstad Canada · Oshawa, Ontario, CA
Temporary
We are seeking a highly skilled Software Developer to join our team. This role focuses on translating technical specifications into robust, tested CRM applications. You will be responsible for the fu...
Last updated: 13 days ago
25-199 - Data Engineer - Azure Databricks - 11 month contract - Oshawa - Hybrid remote

CorGTA · Oshawa, ON, Canada
9 days ago
Job type
  • Temporary
  • Remote
Job description

Position: Data Engineer - Azure Databricks
Location: Oshawa, ON (Hybrid - 3 days remote)
Structure: Contract 12 months
Pay: $72.00 - $90.00 p/h inc.

--

We have a great new opportunity to support one of our Energy Sector clients in a contract capacity!

Please see below for more information on the position and if interested, apply with an updated resume aligned to the needs of the role.

--

Job Overview

  • As an Azure and Databricks Data Engineer, you will be responsible for designing, building, and supporting the data-driven applications that enable innovative, customer-centric digital experiences.
  • You will work as part of a cross-discipline agile team whose members help each other solve problems across all business areas.
  • You will build reliable, supportable, and performant data lake and data warehouse products to meet the organization’s need for data to drive reporting, analytics, applications, and innovation.
  • You will employ best practices in development, security, accessibility, and design to achieve the highest quality of service for our customers.
  • Build and productionize modular and scalable data ELT/ETL pipelines and data infrastructure leveraging the wide range of data sources across the organization.
  • Build curated common data models designed by the Data Modelers that offer an integrated, business-centric single source of truth for business intelligence, reporting, and downstream system use, in collaboration with the Data Architect.
  • Work closely with infrastructure and cyber teams, and with Senior Data Developers, to ensure data is secure in transit and at rest.
  • Clean, prepare and optimize datasets for performance, ensuring lineage and quality controls are applied throughout the data integration cycle.
  • Support Business Intelligence Analysts in modelling data for visualization and reporting, using dimensional data modeling and aggregation optimization methods.
  • Troubleshoot issues related to ingestion, data transformation and pipeline performance, data accuracy and integrity.
  • Collaborate with Business Analysts, Data Scientists, Senior Data Engineers, Data Analysts, Solution Architects, and Data Modelers to develop data pipelines to feed our data marketplace.
  • Assist in identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with tools in the Microsoft stack: Azure Data Factory, Azure Data Lake, Azure SQL Databases, Azure Data Warehouse, Azure Synapse Analytics Services, Azure Databricks, Microsoft Purview, and Power BI.
  • Work within the agile SCRUM work management framework in delivery of products and services, including contributing to feature & user story backlog item development, and utilizing related Kanban/SCRUM toolsets.
  • Assist in building data catalog and maintenance of relevant metadata for datasets published for enterprise use.
  • Develop optimized, performant data pipelines and models at scale using technologies such as Python, Spark and SQL, consuming data sources in XML, CSV, JSON, REST APIs, or other formats.
  • Document as-built pipelines and data products within the product description, and utilize source control to ensure a maintainable code-base.
  • Implement orchestration of data pipeline execution designed by Senior Data Engineers to ensure data products meet customer latency expectations, dependencies are managed, and datasets are as up-to-date as possible, with minimal disruption to end-customer use.
  • Create tooling in collaboration with senior data engineers and data architects to help with day-to-day tasks, and reduce toil via automation wherever possible.
  • Work with Continuous Integration/Continuous Delivery and DevOps pipelines to assist in automating infrastructure, code delivery, product enhancement isolation, and proper release management and versioning.
  • Monitor the ongoing operation of in-production solutions, assist in troubleshooting issues, and provide Tier 2 support for datasets produced by the team, on an as-required basis.
  • Implement and manage appropriate access to data products via role-based access control based on guidance from senior data engineers.
  • Write and perform automated unit and regression testing for data product builds, assist with user acceptance testing and system integration testing as required, and assist in design of relevant test cases based on guidance from Data Architects.
  • Participate in peer code review sessions.
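The cleaning, deduplication, and quality-control responsibilities above can be sketched in miniature in plain Python (in practice this work would run in PySpark on Databricks, as the posting describes); the field names and rules below are hypothetical illustrations, not taken from the posting:

```python
def clean_records(raw_records):
    """Drop records missing a customer_id, coerce usage to float,
    and deduplicate on customer_id (hypothetical schema)."""
    seen = set()
    cleaned = []
    for rec in raw_records:
        cid = rec.get("customer_id")
        if cid is None or cid in seen:
            continue  # simple quality rule: unique, non-null key
        seen.add(cid)
        cleaned.append({
            "customer_id": cid,
            "usage_kwh": float(rec.get("usage_kwh", 0.0)),
        })
    return cleaned

# One hypothetical micro-batch with a duplicate and a bad record:
raw = [
    {"customer_id": "A1", "usage_kwh": "12.5"},
    {"customer_id": "A1", "usage_kwh": "12.5"},  # duplicate key, dropped
    {"usage_kwh": "3.0"},                        # missing key, dropped
]
print(clean_records(raw))  # [{'customer_id': 'A1', 'usage_kwh': 12.5}]
```

In a Spark pipeline the same rules would typically be expressed declaratively, e.g. a null filter followed by dropDuplicates on the key column.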

Qualifications

  • Completion of a four-year university degree in computer science, computer/software engineering, or another relevant program in data engineering, data analysis, artificial intelligence, or machine learning.
  • Experience as a Data Engineer designing and building data pipelines.
  • Fluent in creating data processing frameworks using Python, PySpark, Spark SQL, and SQL
  • Experience with Azure Data Factory, ADLS, Synapse Analytics, and Databricks
  • Experience building data pipelines for Data Lakehouses and Data Warehouses
  • Good understanding of data structures and data processing frameworks
  • Knowledge of data governance and data quality principles
  • Effective communication skills to translate technical details to non-technical stakeholders

--
CorGTA is an equal opportunity employer. Please apply with an updated resume and ensure it includes the required skills you are able to speak to for this position.

At times, CorGTA or its client partners may utilize AI tools to assist with the hiring process.

For more roles like this please go to www.corgta.com/find-a-job/