Model Serving


Definition

Making trained AI models available to applications through APIs or services so they can return predictions on demand. Like opening a restaurant that serves dishes created from tested recipes.

Real-World Example

Model serving infrastructure hosts a language translation model that applications can call via API to translate text in real time.
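To make the idea concrete, here is a minimal sketch of what a real-time inference handler does: accept a JSON request, run the model, and return a JSON prediction. The `translate` function below is a stand-in stub, not a real model, and all names are illustrative; a production service would load trained weights from a model registry and run this handler behind an HTTPS server.

```python
import json

# Stand-in for a trained translation model (illustrative only);
# a real deployment would load weights from a model artifact store.
def translate(text: str, target_lang: str) -> str:
    phrasebook = {("hello", "es"): "hola", ("goodbye", "es"): "adiós"}
    return phrasebook.get((text.lower(), target_lang), text)

def handle_request(body: bytes) -> bytes:
    """Mimic a serving endpoint: parse JSON in, predict, JSON out."""
    payload = json.loads(body)
    prediction = translate(payload["text"], payload["target_lang"])
    return json.dumps({"translation": prediction}).encode("utf-8")
```

In practice this handler would be wrapped by a web framework or a managed serving runtime, which adds the HTTPS layer, autoscaling, and monitoring around the same request/predict/respond loop.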

Cloud Provider Equivalencies

All providers offer managed endpoints to deploy trained models behind HTTPS APIs with autoscaling, monitoring, and security. Managed ML platforms (SageMaker/Azure ML/Vertex AI/OCI Data Science) focus on deploying your own models, while foundation-model services (Bedrock/Azure OpenAI/OCI Generative AI) provide hosted models accessed via API.

AWS
Amazon SageMaker (Real-Time Inference, Serverless Inference, Batch Transform) and Amazon Bedrock (for foundation model APIs)
AZ
Azure Machine Learning (Online Endpoints, Managed Online Endpoints) and Azure AI Foundry / Azure OpenAI (for foundation model APIs)
GCP
Vertex AI (Online Prediction Endpoints, Batch Prediction) and Google Cloud Run/GKE for custom serving
OCI
OCI Data Science (Model Deployment) and OCI Generative AI (for foundation model APIs)
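From the application's side, all of these managed endpoints look similar: an HTTPS POST with a JSON payload and an auth header. The sketch below builds such a request without sending it; the URL and token are placeholders, and the exact path and authentication scheme differ per provider.

```python
import json
import urllib.request

# Hypothetical endpoint URL and token (placeholders, not a real service);
# each provider's managed endpoint exposes a similar HTTPS interface.
ENDPOINT_URL = "https://example-endpoint.invalid/v1/predict"
API_TOKEN = "placeholder-token"

def build_invoke_request(payload: dict) -> urllib.request.Request:
    """Build (but do not send) an HTTPS inference request."""
    return urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
```

Provider SDKs (boto3 for SageMaker, the Azure ML and Vertex AI client libraries, the OCI SDK) wrap this same pattern with signing and retries.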
