Mistral AI
    • Create Chat Completions (POST)
    • Create FIM Completions (POST)
    • Create Embeddings (POST)
    • List Available Models (GET)
    • Delete Model (DELETE)
    • Upload File (POST)
    • List Files (GET)
    • Retrieve File (GET)
    • Delete File (DELETE)
    • List Fine Tuning Jobs (GET)
    • Create Fine Tuning Job (POST)
    • Get Fine Tuning Job (GET)
    • Cancel Fine Tuning Job (POST)

      Create FIM Completions

      Develop Env
      https://dev.your-api-server.com
      POST
      /fim/completions
      Request Example
      Shell
      curl --location --request POST 'https://dev.your-api-server.com/fim/completions' \
      --header 'Content-Type: application/json' \
      --data-raw '{
          "prompt": "def",
          "suffix": "return a+b",
          "model": "codestral-latest",
          "temperature": 0.7,
          "top_p": 1,
          "max_tokens": 1024,
          "min_tokens": 0,
          "stream": false,
          "random_seed": 1337,
          "stop": "string"
      }'
      Response Example
      {
          "id": "5b35cc2e69bf4ba9a11373ee1f1937f8",
          "object": "chat.completion",
          "created": 1702256327,
          "model": "codestral-latest",
          "choices": [
              {
                  "index": 0,
                  "message": {
                      "role": "user",
                      "content": "\" add(a,b):\""
                  },
                  "finish_reason": "stop"
              }
          ],
          "usage": {
              "prompt_tokens": 8,
              "completion_tokens": 9,
              "total_tokens": 17
          }
      }
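      As a small usage sketch (assuming the jq tool is available locally), the completed
      text can be pulled out of this response by piping the same call through jq; the host
      is the placeholder dev server shown above:
      curl --silent --location --request POST 'https://dev.your-api-server.com/fim/completions' \
      --header 'Content-Type: application/json' \
      --data-raw '{"prompt": "def", "suffix": "return a+b", "model": "codestral-latest"}' \
      | jq -r '.choices[0].message.content'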

      Request

      Body Params application/json
      prompt
      string 
      required
      The text/code to complete.
      Example:
      def
      suffix
      string  | null 
      optional
      Optional text/code that adds more context for the model.
      When given a prompt and a suffix, the model will fill in
      what goes between them. When no suffix is provided, the
      model will simply complete the text starting from the
      prompt.
      Example:
      return a+b
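      As a sketch of the no-suffix case just described, a request that sends only a prompt
      simply continues it from that point; the host is the placeholder dev server from the
      example above and the prompt value is illustrative:
      curl --location --request POST 'https://dev.your-api-server.com/fim/completions' \
      --header 'Content-Type: application/json' \
      --data-raw '{
          "prompt": "def add(a, b):\n    ",
          "model": "codestral-latest",
          "max_tokens": 64
      }'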
      model
      string  | null 
      required
      ID of the model to use. Only compatible for now with:
      codestral-2405
      codestral-latest
      Example:
      codestral-latest
      temperature
      number  | null 
      optional
      What sampling temperature to use, between 0.0 and 1.0.
      Higher values like 0.8 will make the output more random,
      while lower values like 0.2 will make it more focused and
      deterministic.
      We generally recommend altering this or top_p but not both.
      >= 0, <= 1
      Default:
      0.7
      Example:
      0
      top_p
      number  | null 
      optional
      Nucleus sampling, where the model considers the results of the
      tokens with top_p probability mass. So 0.1 means only
      the tokens comprising the top 10% probability mass are considered.
      We generally recommend altering this or temperature but not both.
      >= 0, <= 1
      Default:
      1
      Example:
      1
      max_tokens
      integer  | null 
      optional
      The maximum number of tokens to generate in the completion.
      The token count of your prompt plus max_tokens cannot
      exceed the model's context length.
      >= 0
      Example:
      1024
      min_tokens
      integer  | null 
      optional
      The minimum number of tokens to generate in the completion.
      >= 0
      stream
      boolean 
      optional
      Whether to stream back partial progress. If set, tokens will be
      sent as data-only server-sent events as they become available,
      with the stream terminated by a data: [DONE] message.
      Otherwise, the server will hold the request open until the timeout
      or until completion, with the response containing the full result
      as JSON.
      Default:
      false
      Example:
      false
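      As a hedged sketch of the streaming mode described above, the request below sets
      stream to true; curl's --no-buffer flag (an assumption about how you consume the
      stream, not part of the API) prints each data: line as it arrives, with the stream
      expected to end in a data: [DONE] message:
      curl --no-buffer --location --request POST 'https://dev.your-api-server.com/fim/completions' \
      --header 'Content-Type: application/json' \
      --data-raw '{
          "prompt": "def",
          "suffix": "return a+b",
          "model": "codestral-latest",
          "stream": true
      }'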
      random_seed
      integer  | null 
      optional
      The seed to use for random sampling. If set, repeated calls with
      the same seed will generate deterministic results.
      >= 0
      Example:
      1337
      stop
      optional
      Any of
      Stop generation if this token is detected, or if one of these
      tokens is detected when providing an array.
      Examples
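      Since stop accepts either a single token or, per the Any of above, an array of
      tokens, a request might pass it as a list; this is a sketch reusing the placeholder
      host and example prompt, with illustrative stop values:
      curl --location --request POST 'https://dev.your-api-server.com/fim/completions' \
      --header 'Content-Type: application/json' \
      --data-raw '{
          "prompt": "def",
          "suffix": "return a+b",
          "model": "codestral-latest",
          "stop": ["\n\n", "return"]
      }'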

      Responses

      🟢200OK
      application/json
      Body
      id
      string 
      optional
      Example:
      5b35cc2e69bf4ba9a11373ee1f1937f8
      object
      string 
      optional
      Example:
      chat.completion
      created
      integer 
      optional
      Example:
      1702256327
      model
      string 
      optional
      Example:
      codestral-latest
      choices
      array [object {3}] 
      optional
      index
      integer 
      required
      Example:
      0
      message
      object 
      optional
      finish_reason
      enum<string> 
      required
      Allowed values:
      stop, length, model_length, error
      Example:
      stop
      usage
      object 
      optional
      prompt_tokens
      integer 
      required
      Example:
      8
      completion_tokens
      integer 
      required
      Example:
      9
      total_tokens
      integer 
      required
      Example:
      17
      Modified at 2024-07-29 08:30:19