The inference interface
Use AI hardware to accelerate inference operations in WebAssembly
This page is outdated. Please visit here for the most up-to-date content.
One of the key benefits of Rust is that it is close to the hardware and can take full advantage of new hardware features. This matters most in Artificial Intelligence (AI), where almost all major cloud providers now offer their own custom silicon for AI inference.
The SSVM inference interface enables Rust applications to directly drive natively compiled ONNX and TensorFlow models on this new hardware. Writing the application logic in Rust and WebAssembly instead of native C++ improves safety, portability, and manageability.
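To make the flow concrete, here is a minimal sketch of the preprocessing an application would do inside WebAssembly before handing data to the inference interface. The actual SSVM host call is omitted; only the pure-Rust tensor preparation shown here is real, and any reference to the host interface in the comments is an assumption, not the actual API.

```rust
// Minimal sketch: preparing image input for an inference call from Wasm.
// The SSVM host-side inference call itself is not shown; this only
// illustrates the kind of tensor preparation done in Rust beforehand.

/// Normalize 8-bit pixel values into the [0.0, 1.0] range,
/// the input layout most vision models expect.
fn pixels_to_tensor(pixels: &[u8]) -> Vec<f32> {
    pixels.iter().map(|&p| p as f32 / 255.0).collect()
}

fn main() {
    let pixels: Vec<u8> = vec![0, 128, 255];
    let tensor = pixels_to_tensor(&pixels);
    // At this point the tensor would be passed to the natively
    // compiled model through the SSVM inference interface.
    println!("{:?}", tensor);
}
```

Because the heavy model execution happens in native code on the host, the Wasm side stays small: it only shapes the input, invokes the interface, and interprets the output.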