NVIDIA TensorRT Inference Server
Model serving with TRT Inference Server
Kubeflow currently doesn’t have a specific guide for the NVIDIA TensorRT Inference Server. See the NVIDIA documentation for instructions on running the TensorRT Inference Server on Kubernetes.
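For orientation only, a Kubernetes Deployment for the inference server typically looks something like the sketch below. This is a minimal, untested illustration: the image tag, the model repository path, and the GPU settings are assumptions you will need to adapt, and the NVIDIA documentation remains the authoritative reference.

```yaml
# Illustrative sketch only: the image tag, model repository path, and
# resource limits below are placeholder assumptions, not values taken
# from the Kubeflow or NVIDIA docs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: trt-inference-server
  labels:
    app: trt-inference-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trt-inference-server
  template:
    metadata:
      labels:
        app: trt-inference-server
    spec:
      containers:
      - name: trt-inference-server
        # NGC image for the TensorRT Inference Server; choose a tag
        # compatible with your node's NVIDIA driver version.
        image: nvcr.io/nvidia/tensorrtserver:19.10-py3
        command: ["trtserver"]
        # The model repository must be laid out as described in the
        # NVIDIA documentation; a Cloud Storage path is shown here as
        # a placeholder.
        args: ["--model-repository=gs://your-bucket/model-repository"]
        ports:
        - containerPort: 8000   # HTTP endpoint
        - containerPort: 8001   # gRPC endpoint
        - containerPort: 8002   # Prometheus metrics
        resources:
          limits:
            nvidia.com/gpu: 1   # requires the NVIDIA device plugin
```

Once the pod is running, a Service exposing ports 8000 (HTTP) and 8001 (gRPC) makes the server reachable by inference clients.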