Position Description:
The QA Analyst is responsible for validating the quality of deliverables. The QA Analyst must demonstrate a strong aptitude for automation and collaborate closely with client teams to structure and industrialize the testing practice through a defined, repeatable approach.
We aim to foster a greater understanding of quality assurance practices among all our contributors. We are committed to optimizing various types of tests and developing a quality assurance process from the very beginning of the delivery cycle, tailored to the specific context of each project.
Your future duties and responsibilities:
AI Systems and Models Testing:
Design and execute comprehensive test strategies for AI systems and models, including prompt engineering, output evaluation, and bias/safety testing. Develop a deep understanding of LLM behavior (tokenization, embeddings, attention mechanisms, and inference) to anticipate failure modes. Construct effective prompts, recognize hallucinations and off-target outputs, and assess quality across accuracy, tone, coherence, and bias dimensions. Apply evaluation metrics specific to generative AI and establish appropriate thresholds.
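For illustration only (not part of the role description): output evaluation of this kind is often automated with simple rule-based checks before adding semantic scoring or human review. A minimal sketch, with hypothetical function and field names:

```python
# Hypothetical rule-based LLM output evaluation sketch. Real evaluation
# suites would add semantic similarity, rubric scoring, and human review.

def evaluate_output(answer: str, required_facts: list[str],
                    forbidden_claims: list[str], threshold: float = 0.8) -> dict:
    """Score an LLM answer for factual coverage and flag likely hallucinations."""
    text = answer.lower()
    hits = sum(1 for fact in required_facts if fact.lower() in text)
    accuracy = hits / len(required_facts) if required_facts else 1.0
    hallucinations = [c for c in forbidden_claims if c.lower() in text]
    return {
        "accuracy": accuracy,
        "hallucinations": hallucinations,
        "passed": accuracy >= threshold and not hallucinations,
    }

result = evaluate_output(
    "The deal closed in Q3 2024 with a AAA senior tranche.",
    required_facts=["Q3 2024", "AAA"],
    forbidden_claims=["guaranteed returns"],
)
```

The threshold gate mirrors the idea of establishing explicit pass/fail criteria per quality dimension rather than judging outputs ad hoc.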
Test AI systems integrated with RAG pipelines and knowledge bases, validating data quality and retrieval accuracy as they impact model outputs. Understand vector database mechanics, similarity-search thresholds, and embedding drift, and test edge cases including near-duplicate documents, sparse vs. dense embeddings, and performance at scale. Leverage the LangChain and LangGraph frameworks to read code, understand chain and graph construction, identify failure points, and write test harnesses. Validate integration points built on the Model Context Protocol (MCP), testing tool availability and error handling.
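As a sketch of what a retrieval-threshold test can look like (toy vectors stand in for real embedding-model output; names are illustrative, not from an actual platform):

```python
import math

# Hypothetical test harness for a similarity-search threshold.
# Vectors are toy embeddings, not output of a real embedding model.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], corpus, threshold: float = 0.9) -> list[str]:
    """Return documents whose embedding clears the similarity threshold."""
    return [doc for doc, vec in corpus if cosine(query_vec, vec) >= threshold]

# Edge case from the duties above: near-duplicate documents should
# both clear the threshold, while unrelated material should not.
corpus = [
    ("deal_summary_v1", [0.9, 0.1, 0.0]),
    ("deal_summary_v2", [0.89, 0.12, 0.01]),   # near-duplicate
    ("unrelated_memo",  [0.0, 0.2, 0.98]),
]
query = [0.9, 0.1, 0.0]
hits = retrieve(query, corpus)
```

Against a real vector store the same assertion pattern applies; only the embedding and search calls change.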
Test Strategy and Planning:
Define and execute comprehensive test strategies for securitization platforms, ensuring coverage across functional, regression, integration, and performance testing. Establish testing standards and best practices that span both traditional QA and AI-specific validation.
Test Automation and Framework Development:
Design, build, and maintain automated test suites to accelerate release cycles and improve coverage. Leverage AI and ML tools to enhance test coverage, improve efficiency, and reduce regression cycles.
Securitization Lifecycle QA:
Validate end-to-end deal workflows including setup, structuring, processing, and distributions. Ensure data integrity across upstream and downstream systems through reconciliation testing and reporting.
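For illustration, a reconciliation check between two systems reduces to set and value comparisons; the record shapes and deal IDs below are hypothetical:

```python
# Hypothetical reconciliation between an upstream deal system and a
# downstream reporting feed. Keys are deal IDs; values are balances.

def reconcile(upstream: dict, downstream: dict) -> dict:
    """Report records missing on either side and value mismatches."""
    missing_downstream = sorted(upstream.keys() - downstream.keys())
    missing_upstream = sorted(downstream.keys() - upstream.keys())
    mismatched = sorted(
        k for k in upstream.keys() & downstream.keys()
        if upstream[k] != downstream[k]
    )
    return {
        "missing_downstream": missing_downstream,
        "missing_upstream": missing_upstream,
        "mismatched": mismatched,
        "clean": not (missing_downstream or missing_upstream or mismatched),
    }

upstream = {"DEAL-001": 1_000_000, "DEAL-002": 250_000, "DEAL-003": 75_000}
downstream = {"DEAL-001": 1_000_000, "DEAL-002": 255_000}
report = reconcile(upstream, downstream)
```

In practice the two dictionaries would be loaded from database queries or extracts, which is where SQL-based data validation comes in.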
Release and Regression Testing:
Coordinate regression testing for platform releases, patches, and infrastructure changes. Ensure stability and backward compatibility, particularly during critical processing windows.
Cross-Functional Collaboration:
Partner with developers, business analysts, and product owners to clarify requirements, identify edge cases, and ensure testability of new features. Lead defect triage sessions, prioritize issues based on business impact, and maintain clear documentation through to closure.
Quality Leadership and Reporting:
Define and monitor key quality indicators including defect density, test coverage, and automation rates. Present findings to leadership and recommend improvements. Mentor junior QA team members and foster a culture of quality across the team.
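The indicators named here have standard arithmetic definitions. A sketch with invented sample figures (inputs would normally come from a defect tracker and a test-management tool):

```python
# Hypothetical calculation of common QA quality indicators.
# All input figures below are illustrative sample data.

def quality_indicators(defects: int, ksloc: float,
                       tests_total: int, tests_automated: int,
                       requirements_total: int, requirements_tested: int) -> dict:
    return {
        "defect_density": defects / ksloc,                 # defects per KSLOC
        "automation_rate": tests_automated / tests_total,  # share of suite automated
        "test_coverage": requirements_tested / requirements_total,
    }

kpis = quality_indicators(defects=18, ksloc=120.0,
                          tests_total=400, tests_automated=300,
                          requirements_total=250, requirements_tested=230)
```

Trending these ratios release over release is what makes them useful to leadership, not any single snapshot.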
Production Support:
Provide production support during critical processing windows, investigate incidents, and coordinate root cause analysis and remediation efforts.
Required qualifications to be successful in this role:
Experience:
7+ years of quality assurance or quality engineering experience, with at least 3 years in a lead or senior capacity. Strong domain knowledge in securitization, capital markets, or similar asset classes.
Technical Skills:
Hands-on experience with test automation tools (Selenium, Robot Framework, Playwright, or similar). Proficiency in programming languages including Java and Python, with demonstrated framework implementation expertise. Hands-on API automation and backend system validation experience. Proficiency in database query development, data validation, and reconciliation testing. Experience with CI/CD pipelines and DevOps practices (Jenkins, GitHub, or similar).
AI and ML Competencies:
Core understanding of LLM architecture and behavior. Hands-on experience with the LangChain and/or LangGraph frameworks. Knowledge of RAG pipelines, vector databases, and agentic solutions. Familiarity with the Model Context Protocol (MCP) and integration testing. Understanding of bias, safety, and red-team testing methodologies for AI systems.
CGI is providing a reasonable estimate of the pay range for this role. The determination of this range includes factors such as skill set level, geographic market, experience and training, and licenses and certifications. Compensation decisions depend on the facts and circumstances of each case. A reasonable estimate of the current range is $60,–$,. This role is an existing vacancy.
Skills:
- English
- French
- DevOps
- GIT
- Jenkins
- Postman
- Selenium