We are a European deep-tech leader in quantum and AI, backed by major global strategic investors and strong EU support. Our groundbreaking technology is already transforming how AI is deployed worldwide — compressing large language models by up to 95% without losing accuracy and cutting inference costs by 50–80%. Joining us means working on cutting-edge solutions that make AI faster, greener, and more accessible — and being part of a company often described as a “quantum-AI unicorn in the making.”
Requirements
- Master’s or Ph.D. in Computer Science, Machine Learning, Electrical Engineering, Physics, or a related technical field.
- 3+ years of hands-on experience training deep learning models from scratch, including designing architectures, building data pipelines, implementing training loops, and running large-scale distributed training jobs.
- Proven experience in at least one major deep learning domain where training from scratch is standard practice, such as computer vision (CNNs, ViTs), speech recognition, recommender systems (DNNs, GNNs), or large language models (LLMs).
- Strong expertise with model compression techniques, including pruning (structured/unstructured), distillation, low-rank factorization, and architecture-level optimization (a short pruning and distillation sketch follows this list).
- Demonstrated ability to analyze and improve model performance through ablation studies, error analysis, and architecture or data-driven iterative improvements.
- In-depth knowledge of foundational model architectures (computer vision and LLMs) and their lifecycle: training, fine-tuning, alignment, and evaluation.
- Solid understanding of training dynamics, optimization algorithms, initialization schemes, normalization layers, and regularization methods (see the initialization and regularization snippet after this list).
- Hands-on experience with Python, PyTorch, and modern ML stacks (HuggingFace Transformers, Lightning, DeepSpeed, Accelerate, NeMo, or equivalent).
- Experience building robust, modular, scalable ML training pipelines, including experiment tracking, reproducibility, and version control best practices (see the reproducibility sketch after this list).
- Practical experience optimizing models for real-world deployment, including latency, memory footprint, throughput, hardware constraints, and inference-cost considerations (see the latency-measurement sketch after this list).
- Excellent problem-solving, debugging, performance analysis, test design, and documentation skills.
- Excellent communication skills in English, with the ability to document and explain design decisions, experiment results, and trade-offs to both technical and non-technical stakeholders.
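To make the compression requirement concrete, here is a minimal sketch, assuming PyTorch, of two of the techniques named above: L1 magnitude pruning via `torch.nn.utils.prune` and a standard soft-target distillation loss. The model shape, sparsity level, temperature, and loss weighting are illustrative placeholders, not production settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

# --- Unstructured magnitude pruning: zero the 30% smallest weights per layer.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the mask into the weights

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"global sparsity: {zeros / total:.1%}")

# --- Distillation loss: blend soft teacher targets with hard labels.
def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps soft-target gradients on the same scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```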
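Likewise, a tiny illustrative snippet of the initialization, normalization, and regularization vocabulary we expect fluency in; the layer sizes and hyperparameters are arbitrary:

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Linear(512, 512),
    nn.LayerNorm(512),   # normalization layer
    nn.ReLU(),
    nn.Dropout(p=0.1),   # regularization
)
nn.init.kaiming_normal_(block[0].weight, nonlinearity="relu")  # init scheme
nn.init.zeros_(block[0].bias)

# Decoupled weight decay (AdamW) is another common regularizer.
opt = torch.optim.AdamW(block.parameters(), lr=3e-4, weight_decay=0.01)
```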
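On reproducibility, something like the following seed helper sits at the top of a typical training entry point; the function name is our illustration, not a library API, and it assumes PyTorch and NumPy.

```python
import os
import random

import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    """Pin every RNG a typical PyTorch pipeline touches."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Deterministic cuDNN kernels trade some speed for repeatability.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```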
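Finally, a minimal latency-measurement sketch of the kind the deployment bullet implies: warm up, synchronize around the timed region when on GPU, and report milliseconds per batch. The model, batch size, and iteration counts are placeholders.

```python
import time

import torch
import torch.nn as nn

@torch.no_grad()
def measure_latency(model: nn.Module, example: torch.Tensor,
                    warmup: int = 10, iters: int = 100) -> float:
    model.eval()
    for _ in range(warmup):          # warm-up passes (allocator, kernel caches)
        model(example)
    if example.is_cuda:
        torch.cuda.synchronize()     # don't start the clock on queued kernels
    start = time.perf_counter()
    for _ in range(iters):
        model(example)
    if example.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e3  # ms per batch

model = nn.Linear(512, 512)
print(f"{measure_latency(model, torch.randn(32, 512)):.3f} ms/batch")
```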
Benefits
- Competitive annual salary
- Two bonuses: a signing bonus when you start and a retention bonus at contract completion.
- Relocation package (if applicable).
- Fixed-term contract ending in June 2026.
- Hybrid role and flexible working hours.
- Equal pay guaranteed.
- International exposure in a multicultural, cutting-edge environment.