NVIDIA TensorRT Inference Server

Model serving with TRT Inference Server

Kubeflow currently doesn't have a specific guide for the NVIDIA TensorRT Inference Server. See the NVIDIA documentation for instructions on running the TensorRT Inference Server on Kubernetes.
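Once the server is deployed on Kubernetes, a common first step is to confirm it is reachable and ready. The sketch below probes the server's HTTP readiness endpoint; it is a minimal illustration, not an official client, and the service address used in the usage line is a hypothetical in-cluster name you would replace with your own Service's address.

```python
# Minimal readiness probe for a TensorRT Inference Server instance.
# Assumption: the server exposes an HTTP health endpoint at
# /api/health/ready on its HTTP port (8000 by default); check the
# NVIDIA documentation for the endpoint layout of your server version.
import urllib.error
import urllib.request


def server_ready(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the server's readiness endpoint answers with HTTP 200."""
    url = f"{base_url.rstrip('/')}/api/health/ready"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as not ready.
        return False


if __name__ == "__main__":
    # Hypothetical in-cluster Service address; substitute your own.
    print(server_ready("http://trtis-service:8000"))
```

A probe like this also maps naturally onto a Kubernetes readiness probe, where the kubelet performs the equivalent HTTP GET against the pod.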


Last modified March 10, 2020: content i18n for zh (6c961064)