eDNA Explorer is expanding through partnership with Dr. Caren Helbing's laboratory at the University of Victoria to create eDNA Explorer Canada! We are building a cutting‑edge platform for processing and analyzing environmental DNA (eDNA) data. Our system processes biological samples to identify species based on their genetic material, integrates environmental data, and provides insights into biodiversity and ecological patterns. We’re using modern cloud‑native data engineering principles to build robust, scalable pipelines for scientific data analysis.
We're seeking a Full‑stack Engineer to enhance and maintain our comprehensive eDNA Explorer Canada platform, which includes both cutting‑edge web applications and scientific data processing systems. This role involves building sophisticated data visualization components, implementing complex user workflows, developing type‑safe APIs, and maintaining Python‑based data processing pipelines and report generation services.
The ideal candidate will have strong React / TypeScript experience with a passion for creating intuitive interfaces for complex scientific data, combined with solid Python backend development skills for data‑intensive applications.
Our platform consists of:
- Front‑end Web Applications: Modern React‑based interfaces for scientific data analysis and research collaboration
- Python Data Processing Services: Flask‑based APIs and report generation systems handling large‑scale scientific datasets
- Data Pipeline Infrastructure: Dagster‑based workflows for processing genomic and environmental data
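The split above between web APIs and background processing can be sketched with a minimal, stdlib‑only stand‑in for the enqueue‑and‑process pattern that Redis/RQ provides on the real platform (all names here are illustrative, not actual eDNA Explorer code):

```python
import queue
import threading

# Illustrative stand-in for an RQ-style job queue: the web layer enqueues
# work and returns immediately; a worker thread processes jobs in the
# background. In production the queue would be Redis and the worker an
# RQ process behind a Flask endpoint.
jobs = queue.Queue()
results = {}

def enqueue_report(sample_id: str) -> str:
    """Called by the API layer: enqueue a report job, return a job id."""
    job_id = f"report-{sample_id}"
    jobs.put({"id": job_id, "sample_id": sample_id})
    return job_id

def worker() -> None:
    """Background worker: drain the queue, 'generating' each report."""
    while True:
        job = jobs.get()
        if job is None:  # sentinel to stop the worker
            break
        results[job["id"]] = f"report for sample {job['sample_id']}"
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
job_id = enqueue_report("S001")
jobs.join()      # wait for the queued job to finish (demo only)
jobs.put(None)   # stop the worker
t.join()
print(results[job_id])  # -> report for sample S001
```

The point of the pattern is that long‑running report generation never blocks a web request; the client polls (or receives push updates) using the returned job id.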
Requirements
Core Experience (Required)
- 4+ years of full‑stack web development experience
- Strong experience with React 18+ and TypeScript
- Solid understanding of Next.js (App Router and Pages Router)
- Experience with Python web development using Flask or FastAPI
- Knowledge of modern database technologies (PostgreSQL, SQLAlchemy)
- Experience with tRPC for type‑safe APIs
- Familiarity with modern testing frameworks (Vitest, Playwright, React Testing Library, pytest)

Preferred Experience
- Component‑driven development and design systems
- Understanding of monorepo architecture, Turborepo (for TypeScript), and Poetry (for Python)
- Knowledge of cloud services and deployment pipelines (Google Cloud Platform preferred)
- Experience with data visualization libraries and scientific applications
- Background in Redis/RQ for job queuing systems
- Experience with scientific data processing or bioinformatics applications
- Knowledge of containerization (Docker) and orchestration (Kubernetes)
- Experience with AI‑powered development tools such as Claude Code, GitHub Copilot, or similar agentic coding assistants
- Familiarity with AI frameworks such as Google AI SDK or PydanticAI (a plus)

Technology Stack
Front‑end Technologies
- React & Next.js: React 19 with functional components and hooks; Next.js 15 with both App Router and Pages Router patterns
- TypeScript: Comprehensive type safety across the entire application
- React 19 compatibility: With React Compiler integration
- UI & Styling: Custom component library (@cal-edna/ui) with Storybook documentation; Tailwind CSS for utility‑first styling
- State Management: Zustand for client state, tRPC for server state management
- Data Fetching: tRPC for type‑safe API calls with automatic TypeScript generation
- Forms: React Hook Form with Zod validation for type‑safe form handling
- Testing: Vitest for unit testing, Playwright for E2E testing, React Testing Library for component testing

Back‑end Technologies
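To give a flavor of the back‑end work listed below, here is a stdlib‑only sketch of the signed‑token idea behind JWT‑based authentication (a deliberately simplified illustration, not a real JWT; production code would use a vetted library such as PyJWT, and the secret would come from Secret Manager):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative; never hard-code secrets in practice

def sign_token(payload: dict) -> str:
    """Serialize a payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str):
    """Return the payload if the signature checks out, else None."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

token = sign_token({"user": "researcher", "role": "viewer"})
print(verify_token(token))        # round-trips the payload
print(verify_token(token + "x"))  # tampered token -> None
```

A real JWT adds a standard header, expiry claims, and algorithm negotiation, but the verify‑before‑trust shape is the same.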
- Python Web Frameworks: Flask 3.0+ for API services, with potential FastAPI integration
- Database: PostgreSQL with SQLAlchemy 2.0+ ORM for robust data modeling
- Job Processing: Redis with RQ (Redis Queue) for background job processing
- Authentication: JWT‑based authentication
- Cloud Services: Google Cloud Platform (BigQuery, Cloud Storage, Secret Manager)
- Data Visualization: Plotly for interactive scientific visualizations
- Containerization: Docker with Kubernetes deployment
- Data Processing: Polars for scientific data manipulation
- Scientific Computing: SciPy, scikit‑bio, scikit‑learn for data analysis

Development & Infrastructure
- Monorepo Architecture: Turborepo for efficient builds and dependency management
- Package Management: Yarn for the frontend, Poetry for Python
- Version Control: Git with conventional commits
- CI/CD: GitHub Actions with automated testing and deployment
- Code Quality: ESLint, Prettier, Ruff (Python), pre‑commit hooks
- Documentation: Storybook for component documentation, comprehensive API documentation

Data Processing Pipeline
- Workflow Orchestration: Dagster for data pipeline management
- Data Storage: Google Cloud Storage and BigQuery for large‑scale data analytics
- Data Formats: Support for scientific data formats (FASTA, TSV, compressed formats)
- Performance Optimization: Polars for high‑performance data processing

Key Responsibilities
Front‑end Development
- Build and maintain React applications for scientific data visualization and analysis
- Develop reusable UI components following design system principles
- Implement complex data visualization dashboards using modern charting libraries
- Create intuitive user workflows for researchers and scientists
- Ensure type safety across the entire frontend application stack
- Optimize application performance for large scientific datasets

Back‑end Development
- Design and implement Flask APIs for data processing and report generation
- Manage database operations using SQLAlchemy for complex scientific data models
- Develop background job processing systems using Redis and RQ
- Build report generation services that process large‑scale genomic and environmental data
- Integrate with Google Cloud services for scalable data processing
- Implement robust authentication and authorization systems

System Integration
- Connect frontend applications with Python backend services via tRPC
- Maintain data consistency across web applications and processing pipelines
- Optimize system performance for handling large scientific datasets
- Implement monitoring and logging for both web and data processing components
- Ensure security best practices across the entire platform

Data & Analytics
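The data validation and quality‑assurance work in this area can be illustrated with a small, stdlib‑only FASTA sanity checker (FASTA is one of the formats the pipeline supports; the specific checks and names below are assumptions for illustration, not the platform's actual rules):

```python
# Minimal FASTA parser with basic quality checks: every record needs a
# header, and its sequence may contain only valid nucleotide codes.
# Illustrative only -- real pipelines would use scikit-bio or similar.
VALID_BASES = set("ACGTN")

def parse_fasta(text: str) -> dict:
    """Parse FASTA text into {record_id: sequence}, validating each record."""
    records = {}
    header = None
    for line in text.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            header = line[1:].split()[0]  # record id up to first whitespace
            records[header] = ""
        elif header is None:
            raise ValueError("sequence data before any FASTA header")
        else:
            seq = line.upper()
            if not set(seq) <= VALID_BASES:
                raise ValueError(f"invalid bases in record {header!r}")
            records[header] += seq
    return records

sample = """\
>seq1 Oncorhynchus kisutch
ACGTACGT
NNACGT
>seq2
TTTTAACC
"""
print(parse_fasta(sample))
# -> {'seq1': 'ACGTACGTNNACGT', 'seq2': 'TTTTAACC'}
```

Failing fast on malformed records, rather than propagating bad data into downstream analyses, is the core idea behind the validation services described here.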
- Work with scientific datasets including genomic sequences, environmental data, and biodiversity information
- Implement data validation and quality assurance processes
- Build interactive dashboards for scientific data exploration
- Create data export and download functionality for researchers

What You’ll Build
Web Applications
- Interactive data visualization dashboards for biodiversity analysis
- Real‑time data processing interfaces with progress tracking
- Complex form systems for scientific metadata collection
- Responsive data tables with advanced filtering and sorting
- Map‑based visualizations for geographic species distribution

Backend Services
- Report generation APIs that process terabytes of scientific data
- Background job systems for long‑running data processing tasks
- Data validation services for scientific metadata
- Authentication and user management systems
- File processing and storage services for scientific datasets

Integration Features
- Real‑time updates between web interfaces and data processing jobs
- Type‑safe API contracts between frontend and backend systems
- Scalable file upload and processing workflows
- Advanced search and filtering across scientific datasets

Technical Challenges
- Performance optimization for applications handling large scientific datasets
- Complex state management across multiple interconnected applications
- Real‑time updates for long‑running scientific computations
- Type safety across full‑stack applications with complex data models
- Scientific data visualization with interactive and responsive charts
- Scalable architecture supporting a growing research community

Team & Culture
- AI‑native development leveraging modern coding assistants and tools for enhanced productivity
- Code quality and testing with comprehensive test coverage
- Type safety and robust error handling across all systems
- Performance and scalability for scientific computing workloads
- Documentation and knowledge sharing for complex scientific processes
- Collaborative problem‑solving with domain experts and researchers
- Continuous learning and adoption of cutting‑edge development tools and practices

Growth Opportunities
- Scientific domain expertise in environmental biology and genomics
- Advanced data engineering and pipeline optimization
- Cloud architecture and distributed systems design
- Open‑source contributions to scientific computing tools
- Research collaboration with academic institutions and environmental organizations

Benefits
This is a grant‑funded position, with the possibility of permanent employment at the end of the grant period.
eDNA Explorer Canada is committed to building a diverse team. We encourage applications from candidates of all backgrounds.
This position is remote within Canada, with preference for candidates who can occasionally visit our offices at the University of Victoria on Vancouver Island in beautiful British Columbia. Applicants must be Canadian citizens or hold a valid permit to work in Canada.
The Helbing lab is situated in the Department of Biochemistry & Microbiology at the University of Victoria. The eDNA Explorer platform can be viewed at https://www.ednaexplorer.org.
We're looking for engineers who are excited about building tools that enable groundbreaking environmental research that can truly change the world. If you're passionate about creating robust, scalable applications that help scientists understand and protect biodiversity, we'd love to hear from you.
This role offers the unique opportunity to work at the intersection of modern web development and cutting‑edge environmental science, building tools that have real impact on our understanding of the natural world.