About the Role
Medior ML Engineer
Requirements
3+ years of professional experience in machine learning or a related field.
MSc in Computer Science, Data Science, Artificial Intelligence, or a relevant field.
Strong experience with classical machine learning techniques (regression, classification, etc.).
Proficiency in Python, with expertise in TensorFlow, PyTorch, and Scikit-learn.
Experience with AWS SageMaker, EC2, and Docker for model deployment.
Familiarity with cloud-based services and SQL.
Strong coding practices (PEP8, unit tests, mock, logging, debugging, Git, etc.).
Excellent verbal and written communication skills in English.
Living in Amsterdam or willing to relocate.
Nice to Have
Experience with ETL pipelines and Spark.
Familiarity with C++.
Familiarity with R.
Familiarity with Kubernetes.
About the Company
As a Medior ML Engineer at Neurocast, your primary focus will be designing, deploying, and maintaining machine learning models that are efficient, scalable, and perform optimally in production. The role spans the full model lifecycle: developing and fine-tuning models, deploying them to production, managing MLOps infrastructure, and experimenting with and monitoring model performance. You will be integral in driving machine learning solutions that meet real-world application requirements. This position is for you if:
You enjoy experimenting with new techniques to improve model performance.
You can translate business problems into scientific and technical steps.
You are passionate about ensuring the reliability and scalability of machine learning models.
You thrive in a collaborative environment and can effectively communicate your work to non-technical teams.
Key Responsibilities
Model Development: Build and fine-tune ML models using TensorFlow, PyTorch, or Scikit-learn.
Model Deployment: Deploy ML models into production using AWS SageMaker and EC2.
Optimization & Fine-Tuning: Continuously improve model performance, accuracy, and efficiency.
MLOps: Manage ML infrastructure, including model training pipelines and model deployment workflows.
Experimentation & Monitoring: Monitor and evaluate model performance, ensuring it meets production needs.
Scalability & Performance: Optimize models for both scalability and performance in production environments.
Programming: Use Python for model development and integration tasks.
Key Performance Indicators
Successful deployment and optimization of machine learning models in production.
High-quality, well-structured data pipelines and features for ML models.
Continuous improvement of model performance and scalability.
Effective collaboration with cross-functional teams for seamless model integration.