We're on a mission to advance the state of the art in generative deep learning for media, building powerful, creative, and open models that push what's possible.
Requirements
- Finding effective training strategies across a variety of model sizes and compute budgets
- Profiling, debugging, and optimizing single- and multi-GPU workloads
- Reasoning about the speed/quality trade-offs of quantization for model inference
- Developing and improving low-level kernel optimizations for state-of-the-art inference and training
- Developing new ideas that bring us closer to the hardware limits of the GPU