Hi there,
We have a contract role with one of our clients. Please find the details below.
Role: Databricks Engineer
Location: Toronto, Ontario (Hybrid)
Type: Contract
Required Skills:
- Proficiency in Databricks and Apache Spark.
- Experience with one or more cloud platforms (Azure Data Lake, AWS S3, GCP BigQuery).
- Strong programming skills in Python, Scala, or Java.
- Proficiency in SQL for data querying and manipulation.
- Familiarity with data orchestration tools such as Apache Airflow or Azure Data Factory.
- Data Expertise:
  - Knowledge of big data technologies (Delta Lake, Hive, Kafka).
  - Understanding of data modeling, warehousing, and database concepts.
- Soft Skills:
  - Strong analytical and problem-solving skills.
  - Excellent communication and teamwork abilities.
- Databricks certifications (e.g., Databricks Certified Data Engineer).
- Experience with machine learning frameworks and Databricks MLflow.
- Exposure to REST APIs for data integration.
- Knowledge of CI/CD tools like Jenkins, GitHub Actions, or Azure DevOps.
Roles and Responsibilities:
1. ETL Development:
   - Design, develop, and deploy ETL pipelines using Databricks.
   - Implement end-to-end data solutions, including ingestion, transformation, and storage.
   - Utilize Apache Spark within Databricks for large-scale data processing.
2. Platform Expertise:
   - Develop solutions on Databricks integrated with cloud platforms (Azure, AWS, or GCP).
   - Optimize Databricks clusters and workflows for performance and cost efficiency.
3. Collaboration:
   - Work closely with data scientists, analysts, and stakeholders to understand business requirements.
   - Collaborate with DevOps teams on CI/CD pipeline integration and automation.
4. Data Governance & Security:
   - Implement and maintain data security and compliance measures.
   - Ensure data quality and reliability using robust validation techniques.
5. Troubleshooting & Support:
   - Monitor, troubleshoot, and resolve issues in Databricks workflows and pipelines.
   - Stay current with platform upgrades, best practices, and new features.