Job Summary
We are seeking an experienced Hadoop Developer with strong hands‑on expertise in Spark and Scala to build and support large‑scale data processing solutions. The ideal candidate will also bring domain knowledge in Liquidity Reporting and Capital Markets, along with strong communication and presentation skills.
Key Responsibilities
Design, develop, and optimize Hadoop‑based data processing solutions.
Build and maintain applications using Apache Spark and Scala.
Develop and support data pipelines using Hadoop ecosystem components.
Work with Cassandra and other NoSQL databases for high‑volume data processing.
Develop backend components using Java.
Collaborate with business and technology teams to support liquidity reporting and capital markets use cases.
Analyze performance issues and implement tuning and optimization.
Present technical solutions and status updates to stakeholders.
Required Skills & Qualifications
Strong hands‑on experience in Hadoop development.
Expertise in Apache Spark and Scala.
Experience with Hadoop and Cassandra.
Strong programming skills in Java.
Good understanding of Liquidity Reporting and Capital Markets products.
Strong presentation, communication, and stakeholder interaction skills.
Scala Developer • Toronto, Canada