Seldon Core Serving
Seldon Core comes installed with Kubeflow. The Seldon Core documentation site provides complete instructions for running inference with Seldon Core.
If you have a saved model in a PersistentVolume (PV), a Google Cloud Storage bucket, or Amazon S3, you can use one of the prepackaged model servers provided by Seldon Core.
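For illustration, a minimal SeldonDeployment using the prepackaged scikit-learn server might look like the sketch below; the deployment name sklearn-iris and the modelUri are placeholders to replace with your own values:

apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sklearn-iris        # illustrative name
  namespace: my-namespace
spec:
  predictors:
  - name: default
    replicas: 1
    graph:
      name: classifier
      implementation: SKLEARN_SERVER     # one of Seldon's prepackaged servers
      modelUri: gs://seldon-models/sklearn/iris   # point this at your saved model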
Seldon Core also provides language-specific model wrappers that let you package your own inference code to run in Seldon Core.
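As a sketch of the Python wrapper contract: Seldon loads a class and calls its predict method on each request. The class name MyModel and the model.joblib file below are illustrative:

# MyModel.py -- a minimal model class for Seldon's Python wrapper (illustrative)
import joblib

class MyModel:
    def __init__(self):
        # Load the trained model once when the server starts.
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # X holds the request payload; the return value is sent back
        # as the prediction response.
        return self.model.predict(X)

The class is then packaged into a container image, for example with Seldon's s2i builder images, and referenced from a SeldonDeployment.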
Kubeflow specifics
You need to ensure that the namespace where your models will be served has:
- An Istio gateway named kubeflow-gateway
- The label serving.kubeflow.org/inferenceservice=enabled
The following example applies the serving label to the namespace my-namespace:
kubectl label namespace my-namespace serving.kubeflow.org/inferenceservice=enabled
Create a gateway called kubeflow-gateway in the namespace my-namespace:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubeflow-gateway
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
Save the above resource and apply it with kubectl.
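With the gateway in place and a model deployed, prediction requests flow through the Istio ingress gateway. Below is a sketch of a request against the sklearn-iris example above, assuming the ingress is reachable at $INGRESS_HOST and that requests follow Seldon's /seldon/<namespace>/<deployment-name>/ path convention:

# Query the deployed model through the Istio ingress (illustrative host and payload)
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}' \
  http://$INGRESS_HOST/seldon/my-namespace/sklearn-iris/api/v1.0/predictions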
Examples
Seldon provides a large set of example notebooks showing how to run inference code for a wide range of machine learning toolkits.