The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging machine learning models for any hardware platform. TVM provides the following main features:
- Performance: compilation and minimal runtimes commonly unlock ML workloads on existing hardware.
- Run everywhere: CPUs, GPUs, browsers, microcontrollers, FPGAs, and more. Automatically generate and optimize tensor operators for a growing set of backends.
- Flexibility: need support for block sparsity, quantization (1-, 2-, 4-, and 8-bit integers, posit), random forests/classical ML, memory planning, MISRA-C compatibility, Python prototyping, or all of the above? TVM's flexible design enables all of these and more.
- Ease of use: compile deep learning models from Keras, MXNet, PyTorch, TensorFlow, CoreML, DarkNet, and more. Start using TVM with Python today, and build out production stacks in C++, Rust, or Java the next day; see the sketch below.
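As a minimal sketch of the Python workflow, the example below traces a PyTorch model, imports it through TVM's Relay frontend, compiles it for a generic CPU (`llvm`) target, and runs it with the graph executor. The model choice, the input name `input0`, and the input shape are assumptions made for illustration, and exact API details vary between TVM releases (this uses the Relay-based flow).

```python
import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Trace an off-the-shelf PyTorch model to TorchScript so Relay can import it.
# (Model and input shape are placeholders chosen for illustration.)
model = torchvision.models.resnet18().eval()
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

# Import into Relay; each graph input is given as a (name, shape) pair.
mod, params = relay.frontend.from_pytorch(scripted, [("input0", example.shape)])

# Compile for a generic CPU target at the highest optimization level.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module with the graph executor and fetch the result.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input0", tvm.nd.array(example.numpy()))
module.run()
print(module.get_output(0).numpy().shape)  # e.g. (1, 1000) for an ImageNet classifier
```

The same compiled artifact can be exported with `lib.export_library(...)` and loaded from the C++, Rust, or Java runtimes, which is what the "prototype in Python, deploy elsewhere" workflow above refers to.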