NVIDIA TensorRT Inference Server

Model serving with TRT Inference Server

Kubeflow currently doesn't provide a specific guide for the NVIDIA TensorRT Inference Server. See the NVIDIA documentation for instructions on running the TensorRT Inference Server on Kubernetes.


Last modified 10.03.2020: content i18n for zh (6c961064)