ABOUT THE PROJECT

Our client is seeking an experienced Data Scientist to contribute to a cutting-edge data science initiative focused on advanced optimization and predictive modeling. The role supports complex analytical projects that apply mathematical programming and statistical techniques to solve real-world business problems, while also enabling knowledge transfer to internal teams. This is a fully remote engagement offered as a one-year contract, with potential for extension.

ABOUT THE RESPONSIBILITIES

In this role, you will design, build, and iterate on advanced optimization and predictive models, translating complex business requirements into scalable, solver-ready mathematical formulations. You will work with large datasets and complex problem spaces, applying strong analytical judgment to balance performance, accuracy, and real-world constraints. You will be expected to operate with a high degree of independence, clearly communicate assumptions and trade-offs, and proactively drive work forward while collaborating with technical and non-technical stakeholders.

Key responsibilities include:
- Formulating and implementing optimization models, with a strong focus on mixed-integer linear programming (MILP) and related mathematical programming techniques
- Translating business objectives and constraints into solver-ready formulations and iterating on models to achieve stable, performant solutions
- Working hands-on with optimization solvers and APIs in Python (e.g., Gurobi, CPLEX, OR-Tools, PuLP/COIN-OR), including debugging and refining model behavior
- Developing and applying predictive and statistical models, including Bayesian approaches where appropriate
- Processing, cleaning, and analyzing large datasets using Python and data-wrangling libraries such as Pandas or Polars
- Supporting feature engineering and analytical workflows for large-scale optimization or modeling problems
- Implementing and maintaining data pipelines, including monitoring execution, reviewing logs, and troubleshooting performance issues
- Applying DevOps practices to support reproducibility, deployment, and maintainability of data science solutions
- Working with cloud-based data platforms such as Databricks and Azure Blob Storage
- Clearly communicating assumptions, methodologies, results, and trade-offs to both technical and non-technical audiences
- Producing clear documentation, model artifacts, and analytical readouts to support transparency and knowledge transfer
- Proactively identifying risks, surfacing issues early, and seeking input as needed rather than waiting for scheduled check-ins
- Supporting knowledge transfer and training for internal staff to strengthen organizational data science capabilities
REQUIREMENTS

Must-have:
- Demonstrated hands-on experience formulating and implementing optimization models, particularly mixed-integer linear programming (MILP)
- Strong experience translating business constraints and objectives into solver-ready mathematical formulations
- Hands-on proficiency with at least one optimization solver or API in Python (e.g., Gurobi, CPLEX, OR-Tools, PuLP/COIN-OR)
- Ability to debug, iterate, and tune optimization models to achieve stable, performant results
- Strong Python skills with experience processing and analyzing large datasets using Pandas or Polars
- Experience working with large-scale data and/or large-scale optimization problems
- Clear, structured communication skills with the ability to synthesize assumptions, approaches, results, and trade-offs
- Ability to produce high-quality written artifacts such as documentation, notes, and analytical readouts
- Self-directed, proactive working style with the ability to operate independently and surface risks early
- Experience explaining complex analytical concepts to both technical and non-technical audiences

Nice-to-have:
- Experience with Databricks, Azure Blob Storage, or similar cloud-based data platforms
- Experience implementing DevOps practices within data science or analytics environments
- Familiarity with Power Apps or Power Automate for workflow automation
- Experience supporting knowledge transfer, training, or enablement activities
- Exposure to production monitoring and troubleshooting of data or machine learning pipelines

ABOUT THE ROLE
Location: Fully remote
Duration: 1-year contract (with potential for extension)
Start date: February

PAY DISCLOSURE

The average hourly pay range for this field is as follows:
Junior: 0–5 years of experience — $90–$105/hr CAD
Intermediate: 6–9 years of experience — $105–$120/hr CAD
Senior: 10+ years of experience — $120–$130/hr CAD