AI Model Inference (preview:2024-05-01)

2025/04/08 • 1 updated method

GetModelInfo (updated)
Description Returns information about the AI model deployed. The method makes a REST API call to the `/info` route on the given endpoint. This method will only work when using a Serverless API, Managed Compute, or Model inference endpoint. Azure OpenAI endpoints don't support it.
Reference Link

⚶ Changes

{
  "#id": "GetModelInfo",
  "Description": {
    "new": "Returns information about the AI model deployed.\nThe method makes a REST API call to the `/info` route on the given endpoint.\nThis method will only work when using Serverless API, Managed Compute, or Model inference endpoint.\nAzure OpenAI endpoints don't support it.",
    "old": "Returns information about the AI model.\nThe method makes a REST API call to the `/info` route on the given endpoint.\nThis method will only work when using Serverless API or Managed Compute endpoint.\nIt will not work for GitHub Models endpoint or Azure OpenAI endpoint."
  },
  "$responses": {
    "200": {
      "$properties": [
        {
          "#name": "model_provider_name",
          "Description": {
            "new": "The model provider name. For example: `Microsoft`",
            "old": "The model provider name. For example: `Microsoft Research`"
          }
        }
      ]
    }
  }
}

⚼ Request

GET:  /info
{
api-version: string ,
}
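The request above takes a single `api-version` query parameter and the usual authentication header. A minimal sketch of assembling that call with only the standard library — the endpoint host, version label, and bearer-token auth scheme shown here are assumptions, not values taken from this page:

```python
import urllib.parse

def build_info_request(endpoint: str, api_version: str, api_key: str):
    """Build the URL and headers for a GET /info call.

    `endpoint` and `api_key` are placeholders for your own deployment;
    depending on your auth setup the key may go in an `api-key` header
    instead of `Authorization`.
    """
    # Append the required api-version query parameter to the /info route.
    url = f"{endpoint.rstrip('/')}/info?" + urllib.parse.urlencode(
        {"api-version": api_version}
    )
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_info_request(
    "https://my-endpoint.example.com", "2024-05-01-preview", "<API_KEY>"
)
```

From here, any HTTP client (e.g. `urllib.request` or `requests`) can issue the GET with the returned `url` and `headers`.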

⚐ Response (200)

{
model_name: string ,
model_type: enum ,
model_provider_name: string ,
}
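A successful response carries exactly these three fields. A small sketch of mapping the JSON body onto a typed object — the `model_type` value shown is illustrative, and the `model_provider_name` example matches the updated description in the change block above:

```python
from dataclasses import dataclass

@dataclass
class ModelInfo:
    model_name: str
    model_type: str  # an enum on the wire; the exact allowed values are not listed on this page
    model_provider_name: str

def parse_model_info(body: dict) -> ModelInfo:
    # Raises KeyError if the service response is missing a required field.
    return ModelInfo(
        model_name=body["model_name"],
        model_type=body["model_type"],
        model_provider_name=body["model_provider_name"],
    )

info = parse_model_info(
    {
        "model_name": "Phi-3.5-mini-instruct",   # hypothetical example value
        "model_type": "chat_completions",        # hypothetical example value
        "model_provider_name": "Microsoft",
    }
)
```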

⚐ Response (default)

{
  $headers: {
    x-ms-error-code: string ,
  } ,
  $schema: {
    error: {
      code: string ,
      message: string ,
      target: string ,
      details: [ string , ] ,
      innererror: {
        code: string ,
        innererror: string ,
      } ,
    } ,
  } ,
}
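The default response pairs an `x-ms-error-code` header with an `error` object in the body. A sketch of flattening both into a one-line summary, using only the field names from the schema above (the example error values are hypothetical):

```python
def summarize_error(headers: dict, body: dict) -> str:
    """Flatten the default error response into a one-line summary.

    HTTP header names are case-insensitive, so the lookup for
    x-ms-error-code lowercases the keys first.
    """
    code = {k.lower(): v for k, v in headers.items()}.get("x-ms-error-code", "")
    err = body.get("error", {})
    # Prefer the header code; fall back to the body's error.code.
    return f"{code or err.get('code', 'UnknownError')}: {err.get('message', '')}"

msg = summarize_error(
    {"x-ms-error-code": "NotFound"},
    {"error": {"code": "NotFound", "message": "The route /info was not found."}},
)
```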