OpenAI_API. Represents authentication to the OpenAI API endpoint. The API key is required to access the API endpoint. The Organization ID determines which organization API requests are counted against; it can be found at https://beta.openai.com/account/org-settings. Allows implicit casting from a string, so that a plain string API key can be provided in place of an instance of the authentication object, taking the API key to convert. Instantiates a new authentication object with the given API key, which may be null. A second constructor also takes the Organization ID, for users who belong to multiple organizations, to specify which organization is used; usage from these API requests will count against the specified organization's subscription quota. The default authentication is used when no other auth is specified; it can be set manually, or loaded automatically via environment variables or a config file. Attempts to load the API key from the environment variables "OPENAI_KEY" or "OPENAI_API_KEY", and also loads the organization from "OPENAI_ORGANIZATION" if present; returns the loaded authentication if any API keys were found, or null if there were no matching environment variables. Attempts to load API keys from a configuration file, by default ".openai" in the current directory, optionally traversing up the directory tree; takes the directory to look in (or null for the current directory), the filename of the config file, and whether to recursively traverse up the directory tree if the file is not found in the starting directory. Returns the loaded authentication if any API keys were found, or null if it was not successful in finding a config (or if the config file didn't contain correctly formatted API keys). Tests the API key against the OpenAI API to ensure it is valid; this hits the models endpoint, so it should not be charged for usage. Returns true if the API key is valid, or false if it is empty or not accepted by the OpenAI API. A helper method swaps out null authentication objects with the default authentication, possibly loaded from the environment or a config file; takes the specific authentication to use if not null, and returns either the provided authentication or the default.
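A minimal sketch of wiring up authentication, based on the loading behavior described above. The type names OpenAIAPI and APIAuthentication are assumptions about this library's public surface, not confirmed by this reference.

```csharp
using OpenAI_API;

class AuthDemo
{
    static void Main()
    {
        // Implicit cast: a plain string API key can stand in for the auth object.
        var api = new OpenAIAPI("sk-your-api-key-here");

        // Or load from the environment variables documented above:
        // OPENAI_KEY / OPENAI_API_KEY, plus OPENAI_ORGANIZATION if present.
        var auth = APIAuthentication.LoadFromEnv();
        if (auth != null)
        {
            var apiFromEnv = new OpenAIAPI(auth);
        }
    }
}
```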
Represents a result from calling the OpenAI API, with all the common metadata returned from every endpoint: the time when the result was generated (also available in Unix epoch format); which model was used to generate this result; the object type (i.e. text_completion, file, fine-tune, list, etc.); the organization associated with the API request, as reported by the API; the server-side processing time as reported by the API, which can be useful for debugging where a delay occurs; the request ID of this API call, as reported in the response headers; and the Openai-Version used to generate this response, as reported in the response headers. The latter two may be useful for troubleshooting or when contacting OpenAI support in reference to a specific request.

ChatGPT API endpoint. Use this endpoint to send multiple messages and carry on a conversation. It allows you to set default parameters for every request, for example a default temperature or max tokens: for every request, if you do not have a parameter set on the request but do have it set here as a default, the request will automatically pick up the default value. The name of the endpoint is the final path segment in the API URL, for example "completions". Constructor of the API endpoint: rather than instantiating this yourself, access it through an instance of the API client. Creates an ongoing chat which can easily encapsulate the conversation; this is the simplest way to use the Chat endpoint. Allows setting the parameters to use when calling the ChatGPT API, which can be useful for setting temperature, presence_penalty, and more (see the OpenAI documentation for a list of possible parameters to tweak); returns a Conversation which encapsulates a back-and-forth chat between a user and an assistant. Ask the API to complete the request using the specified parameters. This is non-streaming, so it will wait until the API returns the full result. Any non-specified parameters will fall back to default values specified in the endpoint's default request arguments, if present. Takes the request to send to the API and asynchronously returns the completion result; look in its choices for the results. A second overload additionally overrides the number of outputs to request as a convenience. A third overload takes the parameters directly: the array of messages to send to the API; the model to use (see the ChatGPT models available); the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; how many tokens to complete to (fewer may be returned if a stop sequence is hit); the scale of the penalty for how often a token is used, which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; the scale of the penalty applied if a token is already present at all, which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; a map from tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100 (mathematically, the bias is added to the logits generated by the model prior to sampling; the exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection, while values like -100 or 100 should result in a ban or exclusive selection of the relevant token); and one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence). Asynchronously returns the completion result; look in its choices for the results. Ask the API to complete the request using the specified message(s). Any parameters will fall back to default values specified in the endpoint's default request arguments, if present. Takes the messages to use in the generation and returns the result with the API response. A convenience overload takes the user message or messages as plain strings, which are all assumed to be of the user role, and returns the result with the API response.
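A sketch of a simple non-streaming chat completion. It assumes an OpenAIAPI instance as in the earlier sketch, and that the request/result types are named ChatRequest, ChatMessage, and ChatMessageRole with a CreateChatCompletionAsync method; those identifiers are assumptions based on this reference.

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;

class ChatDemo
{
    static async Task Run(OpenAIAPI api)
    {
        var request = new ChatRequest()
        {
            Model = "gpt-3.5-turbo",   // strings cast implicitly to the model type
            Temperature = 0.9,
            MaxTokens = 100,
            Messages = new[]
            {
                new ChatMessage(ChatMessageRole.System, "You are a helpful assistant."),
                new ChatMessage(ChatMessageRole.User, "What is the capital of France?")
            }
        };

        // Non-streaming: waits for the full result before returning.
        var result = await api.Chat.CreateChatCompletionAsync(request);
        Console.WriteLine(result.Choices[0].Message.Content);
    }
}
```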
Ask the API to complete the message(s) using the specified request, and stream the results to a callback as they come in. If you are on a recent version of C# that supports async enumerables, you may prefer the cleaner syntax of StreamChatEnumerableAsync instead. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and an action to be called as each new result arrives, which in one overload includes the index of the result in the overall result set. Ask the API to complete the message(s) using the specified request, and stream the results as they come in. If you are not using C# 8 with async enumerable support, or if you are using the .NET Framework, you may need to use the callback-based variant instead. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable. A second overload takes the parameters directly: the array of messages to send to the API; the model to use (see the ChatGPT models available); the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; how many tokens to complete to (fewer may be returned if a stop sequence is hit); the frequency penalty and presence penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; a logit bias map from token IDs in the tokenizer to bias values from -100 to 100, added to the logits generated by the model prior to sampling (values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token); and one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence). Returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable.
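A sketch of consuming the streaming variant with C# 8 async enumerables. StreamChatEnumerableAsync is named in this reference; the shape of the partial results (a Choices list whose Delta holds the new text) is an assumption here.

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;

class StreamingChatDemo
{
    static async Task Run(OpenAIAPI api)
    {
        var request = new ChatRequest()
        {
            Model = "gpt-3.5-turbo",
            Messages = new[] { new ChatMessage(ChatMessageRole.User, "Tell me a short story.") }
        };

        await foreach (var chunk in api.Chat.StreamChatEnumerableAsync(request))
        {
            // Each result is a partial "delta"; print the new text as it arrives.
            Console.Write(chunk.Choices[0].Delta?.Content);
        }
        Console.WriteLine();
    }
}
```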
Chat message sent or received from the API. Includes who is speaking in the "role" and the message text in the "content". One constructor creates an empty chat message with a default role; another takes the role of the message, which can be "system", "assistant", or "user", and the text to send in the message. Properties expose the role of the message, the content of the message, and an optional name of the user in a multi-user chat.

Represents the role of a chat message. Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages; see the OpenAI docs for more details about usage. The constructor is private to force usage of strongly typed values. Gets the singleton instance of the role based on its string value, which must be one of "system", "user", or "assistant". The system message helps set the behavior of the assistant. The user messages help instruct the assistant; they can be generated by the end users of an application, or set by a developer as an instruction. The assistant messages help store prior responses; they can also be written by a developer to help give examples of desired behavior. Gets the string value for this role to pass to the API, as a string. Determines whether this instance and a specified object have the same value: takes the ChatMessageRole to compare to this instance and returns true if it is a ChatMessageRole whose value is the same as this instance, otherwise false (if the object is null, the method returns false). Returns the hash code for this object as a 32-bit signed integer. A strongly typed equality overload behaves the same way for another ChatMessageRole. An implicit conversion gets the string value for this role to pass to the API, taking the ChatMessageRole to convert.
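A brief fragment showing strongly typed messages built with the role singletons described above, plus the documented implicit conversion of a role to its API string value. The optional Name property for multi-user chat is an assumed property name.

```csharp
var system = new ChatMessage(ChatMessageRole.System, "You answer tersely.");
var user = new ChatMessage(ChatMessageRole.User, "Ping?") { Name = "alice" };

// Roles convert implicitly to their API string values.
string role = ChatMessageRole.Assistant;   // "assistant"
```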
A request to the Chat API. This is similar, but not exactly the same as, the completion request; based on the OpenAI API docs. Properties include: the model to use for this request; the messages to send with this chat request; the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p, an alternative to sampling with temperature called nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each message, defaulting to 1; whether the results should stream or be returned all at once (do not set this yourself; use the appropriate streaming or non-streaming methods on the endpoint instead; it is only used for serializing the request into JSON, so do not use it directly); one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence; for convenience, if you are only requesting a single stop sequence, a single-value property is provided); how many tokens to complete to, which can return fewer if a stop sequence is hit, defaulting to 16; the scale of the penalty for how often a token is used, which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse, defaulting to 0; the scale of the penalty applied if a token is already present at all, which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse, defaulting to 0; a logit bias, which modifies the likelihood of specified tokens appearing in the completion by accepting a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100 (mathematically, the bias is added to the logits generated by the model prior to sampling; the exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection, and values like -100 or 100 should result in a ban or exclusive selection of the relevant token); and a unique identifier representing your end-user, which can help OpenAI monitor and detect abuse. One constructor creates a new, empty chat request; another creates a new chat request using the data from an input chat request.

Represents a result from calling the Chat API: the identifier of the result, which may be used during troubleshooting; the list of choices that the user was presented with during the chat interaction; and the usage statistics for the chat interaction. A convenience method returns the content of the message in the first choice of this response, not including the role. A message received from the API includes the message text, index, and the reason why the message finished: the index of the choice in the list of choices; the message that was presented to the user as the choice; the reason why the chat interaction ended after this choice was presented to the user; and the partial message "delta" from a stream (for example, the result from StreamChatEnumerableAsync; if this result object is not from a stream, this will be null). A convenience method returns the content of the message in this response, not including the role. Usage statistics report how many tokens were used in this chat message, including the number of completion tokens used during the chat interaction.
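A fragment sketching the less common request knobs described above: a single stop sequence, max tokens, and a logit bias map keyed by tokenizer token ID. The property names (StopSequence, MaxTokens, LogitBias) and the dictionary shape are assumptions based on this reference's descriptions.

```csharp
using System.Collections.Generic;

var request = new ChatRequest()
{
    Model = "gpt-3.5-turbo",
    MaxTokens = 64,                 // may return fewer if a stop sequence is hit
    StopSequence = "\n\n",          // convenience for a single stop sequence
    LogitBias = new Dictionary<string, int>
    {
        ["50256"] = -100            // a bias of -100 effectively bans this token ID
    },
    Messages = new[] { new ChatMessage(ChatMessageRole.User, "List three colors.") }
};
```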
Represents an ongoing chat with back-and-forth interactions between the user and the chatbot. This is the simplest way to interact with the ChatGPT API, rather than manually using the ChatEndpoint methods, though you do lose some flexibility. An internal reference to the API endpoint is needed for API requests. Allows setting the parameters to use when calling the ChatGPT API, which can be useful for setting temperature, presence_penalty, and more; see the OpenAI documentation for a list of possible parameters to tweak. Specifies the model to use for ChatGPT requests; this is just a shorthand to access the Model on the request parameters. After calling the API, this contains the full response object, which can contain useful metadata like token usage; it is overwritten with every call and only contains the most recent result. Creates a new conversation with the ChatGPT chat API, taking a reference to the API endpoint needed for API requests, an optional model to use for ChatGPT requests (if not specified, it uses the model from the default request arguments or falls back to the default ChatGPT model), and the parameters to use when calling the ChatGPT API, which can be useful for setting temperature, presence_penalty, and more (see the OpenAI documentation for a list of possible parameters to tweak). A list of messages exchanged so far; do not modify this list directly, instead use the append methods described below. Appends a chat message to the chat history, taking the message to append. Creates and appends a message to the chat history, taking the role for the message (typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages; see the OpenAI docs for more details about usage) and the content of the message. Creates and appends a message to the chat history with the user role; the user messages help instruct the assistant, and they can be generated by the end users of an application or set by a developer as an instruction. An overload additionally takes the name of the user in a multi-user chat. Creates and appends a message to the chat history with the system role; the system message helps set the behavior of the assistant. Creates and appends a message to the chat history with the assistant role; assistant messages can be written by a developer to help give examples of desired behavior. Calls the API to get a response, which is appended to the current chat's message history as an assistant message, and returns the string of the response from the chatbot API. OBSOLETE: GetResponseFromChatbot() has been renamed to follow .NET naming guidelines; this alias will be removed in a future version and likewise returns the string of the response from the chatbot API. Calls the API to get a response, which is appended to the current chat's message history as an assistant message, and streams the results to a callback as they come in; if you are on a recent version of C# that supports async enumerables, you may prefer the cleaner syntax of the enumerable variant instead. One overload takes an action to be called as each new result arrives, and another takes an action that also includes the index of the result in the overall result set. A further variant appends the response to the chat history in the same way but streams the results as they come in as an async enumerable; if you are not using C# 8 with async enumerable support, or if you are using the .NET Framework, you may need to use the callback-based variants instead. Returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable.
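A sketch of the Conversation helper described above. The method names follow this reference's wording (AppendSystemMessage, AppendUserInput, GetResponseFromChatbotAsync, and a streaming enumerable variant), but the exact identifiers are assumptions.

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;

class ConversationDemo
{
    static async Task Run(OpenAIAPI api)
    {
        var chat = api.Chat.CreateConversation();

        chat.AppendSystemMessage("You are a friendly math tutor.");
        chat.AppendUserInput("What is 7 * 6?");

        // Non-streaming: appends the assistant reply to the history and returns it.
        string answer = await chat.GetResponseFromChatbotAsync();
        Console.WriteLine(answer);

        // Streaming: partial results arrive as they are generated.
        chat.AppendUserInput("Now explain why.");
        await foreach (var token in chat.StreamResponseEnumerableFromChatbotAsync())
        {
            Console.Write(token);
        }
    }
}
```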
An interface for the ChatGPT API endpoint. Use this endpoint to send multiple messages and carry on a conversation. It allows you to set default parameters for every request, for example a default temperature or max tokens: for every request, if you do not have a parameter set on the request but do have it set here as a default, the request will automatically pick up the default value. Creates an ongoing chat which can easily encapsulate the conversation; this is the simplest way to use the Chat endpoint. Allows setting the parameters to use when calling the ChatGPT API, which can be useful for setting temperature, presence_penalty, and more (see the OpenAI documentation for a list of possible parameters to tweak); returns a Conversation which encapsulates a back-and-forth chat between a user and an assistant. Ask the API to complete the request using the specified parameters. This is non-streaming, so it will wait until the API returns the full result. Any non-specified parameters will fall back to default values specified in the endpoint's default request arguments, if present. Takes the request to send to the API and asynchronously returns the completion result; look in its choices for the results. A second overload additionally overrides the number of outputs to request as a convenience. A third overload takes the parameters directly: the array of messages to send to the API; the model to use; the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; how many tokens to complete to (fewer may be returned if a stop sequence is hit); the frequency penalty and presence penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; a logit bias map from token IDs in the tokenizer to bias values from -100 to 100, added to the logits generated by the model prior to sampling (values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token); and one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence). Asynchronously returns the completion result; look in its choices for the results. Ask the API to complete the request using the specified message(s).
Any parameters will fall back to default values specified in the endpoint's default request arguments, if present. Takes the messages to use in the generation and returns the result with the API response. A convenience overload takes the user message or messages as plain strings, which are all assumed to be of the user role, and returns the result with the API response. Ask the API to complete the message(s) using the specified request, and stream the results to a callback as they come in. If you are on a recent version of C# that supports async enumerables, you may prefer the cleaner syntax of the enumerable variant instead. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and an action to be called as each new result arrives, which includes the index of the result in the overall result set. Ask the API to complete the message(s) using the specified request, and stream the results as they come in. If you are not using C# 8 with async enumerable support, or if you are using the .NET Framework, you may need to use the callback-based variant instead. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable. A second overload takes the parameters directly: the array of messages to send to the API; the model to use; the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; how many tokens to complete to (fewer may be returned if a stop sequence is hit); the frequency penalty and presence penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; a logit bias map from token IDs in the tokenizer to bias values from -100 to 100, added to the logits generated by the model prior to sampling (values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token); and one or more stop sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence). Returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable. Finally, a callback-based overload again streams the results to an action as they come in (if you are on a recent version of C# that supports async enumerables, you may prefer the cleaner syntax of the enumerable variant instead), taking the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and an action to be called as each new result arrives, which includes the index of the result in the overall result set.
Text generation is the core function of the API. You give the API a prompt, and it generates a completion. The way you "program" the API to do a task is simply by describing the task in plain English or providing a few written examples. This simple approach works for a wide range of use cases, including summarization, translation, grammar correction, question answering, chatbots, composing emails, and much more (see the prompt library for inspiration). This endpoint allows you to set default parameters for every request, for example a default temperature or max tokens: for every request, if you do not have a parameter set on the request but do have it set here as a default, the request will automatically pick up the default value. The name of the endpoint is the final path segment in the API URL, for example "completions". Constructor of the API endpoint: rather than instantiating this yourself, access it through an instance of the API client. Ask the API to complete the prompt(s) using the specified request. This is non-streaming, so it will wait until the API returns the full result. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and asynchronously returns the completion result; look in its completions for the results. A second overload additionally takes a requested number of outputs as a convenience override; its completion result should have a length equal to that number. A third overload takes the parameters directly, and any non-specified parameters will fall back to the default values specified in the endpoint's default request arguments, if present: the prompt to generate from; the model to use (you can query the models endpoint to see all of your available models, or use a standard model); how many tokens to complete to (fewer may be returned if a stop sequence is hit); the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; the presence penalty and frequency penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; logprobs, which includes the log probabilities on the most likely tokens (so for example, if logprobs is 10, the API will return a list of the 10 most likely tokens; if logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response); whether to echo back the prompt in addition to the completion; and one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence). Asynchronously returns the completion result; look in its completions for the results.
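A sketch of a non-streaming text completion, assuming the endpoint is exposed as api.Completions and the request type is named CompletionRequest (both assumptions based on this reference).

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;

class CompletionDemo
{
    static async Task Run(OpenAIAPI api)
    {
        var request = new CompletionRequest()
        {
            Prompt = "One thing clearly visible from space is",
            MaxTokens = 50,
            Temperature = 0.7
        };

        var result = await api.Completions.CreateCompletionAsync(request);

        // The first completion holds the main generated text.
        Console.WriteLine(result.ToString());
    }
}
```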
Ask the API to complete the prompt(s) using the specified prompts, with other parameters drawn from the default values specified in the endpoint's default request arguments, if present. This is non-streaming, so it will wait until the API returns the full result. Takes one or more prompts to generate from. Ask the API to complete the prompt(s) using the specified request, and stream the results to a callback as they come in. If you are on a recent version of C# that supports async enumerables, you may prefer the cleaner syntax of the enumerable variant instead. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and an action to be called as each new result arrives, which in one overload includes the index of the result in the overall result set. Ask the API to complete the prompt(s) using the specified request, and stream the results as they come in. If you are not using C# 8 with async enumerable support, or if you are using the .NET Framework, you may need to use the callback-based variant instead. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable. A second overload takes the parameters directly, and any non-specified parameters will fall back to the default values specified in the endpoint's default request arguments, if present: the prompt to generate from; the model to use (you can query the models endpoint to see all of your available models, or use a standard model); how many tokens to complete to (fewer may be returned if a stop sequence is hit); the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; the presence penalty and frequency penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; logprobs, which includes the log probabilities on the most likely tokens (if logprobs is 10, the API will return a list of the 10 most likely tokens; if logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response); whether to echo back the prompt in addition to the completion; and one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence). Returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable.
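A sketch of streaming completions with await foreach. The method name StreamCompletionEnumerableAsync is an assumption based on the enumerable variant described above.

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;

class StreamingCompletionDemo
{
    static async Task Run(OpenAIAPI api)
    {
        var request = new CompletionRequest() { Prompt = "Once upon a time", MaxTokens = 100 };

        await foreach (var chunk in api.Completions.StreamCompletionEnumerableAsync(request))
        {
            // Each streamed result carries the newly generated text segment.
            Console.Write(chunk.ToString());
        }
        Console.WriteLine();
    }
}
```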
Simply returns a string of the prompt followed by the best completion: takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and returns a string of the prompt followed by the best completion. Simply returns the best completion: takes the prompt to complete and returns the best completion.

Represents a request to the Completions API. It mostly matches the parameters in the OpenAI docs, although some have been renamed or expanded into single/multiple properties for ease of use. Properties include: the ID of the model to use (you can query the models endpoint to see all of your available models, or use a standard model); the prompts, which are only used for serializing the request into JSON, so do not use that property directly (if you are requesting more than one prompt, specify them as an array of strings; for convenience, if you are only requesting a single prompt, a single-value property is provided); the suffix that comes after a completion of inserted text, defaulting to null; how many tokens to complete to, which can return fewer if a stop sequence is hit, defaulting to 16; the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); the presence penalty and frequency penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse, both defaulting to 0; how many different choices to request for each prompt, defaulting to 1; whether the results should stream or be returned all at once (do not set this yourself; use the appropriate methods on the endpoint instead); logprobs, which includes the log probabilities on the most likely tokens (so for example, if logprobs is 5, the API will return a list of the 5 most likely tokens; if logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response; the maximum value for logprobs is 5); whether to echo back the prompt in addition to the completion, defaulting to false; the stop sequences, which are only used for serializing the request into JSON, so do not use that property directly (one or more sequences where the API will stop generating further tokens; the returned text will not contain the stop sequence; for convenience, if you are only requesting a single stop sequence, a single-value property is provided); and best_of, which generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed when using best_of. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n.
Note: because this parameter generates many completions, it can quickly consume your token quota. Use it carefully and ensure that you have reasonable settings for max_tokens and stop. A unique identifier representing your end-user can help OpenAI to monitor and detect abuse. One constructor creates a new, empty completion request; another creates a new request inheriting any parameters set in the request it copies; another creates a new request using one or more specified prompts to generate from; and another takes the full set of parameters: the prompt to generate from; the model to use (you can query the models endpoint to see all of your available models, or use a standard model); how many tokens to complete to (fewer may be returned if a stop sequence is hit); the sampling temperature (it is generally recommended to use this or top_p but not both); the suffix that comes after a completion of inserted text; top_p nucleus sampling (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; the presence penalty and frequency penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; logprobs (if logprobs is 10, the API will return a list of the 10 most likely tokens; if logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response); whether to echo back the prompt in addition to the completion; and one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence).

Represents a completion choice returned by the Completion API: the main text of the completion; if multiple completion choices were returned, the index within the various choices; if the request specified logprobs, the list of the most likely tokens; and, if this is the last segment of the completion result, the reason why the completion ended. A convenience method gets the main text of this completion. API usage as reported by the OpenAI API for this request includes how many tokens are in the completion(s). Represents a result from calling the Completion API: the identifier of the result, which may be used during troubleshooting; the completions returned by the API (depending on your request, there may be 1 or many choices); and API token usage as reported by the OpenAI API for this request. A convenience method gets the text of the first completion, representing the main result.

An interface for the Completions endpoint, for ease of mock testing, etc. It allows you to set default parameters for every request, for example a default temperature or max tokens: for every request, if you do not have a parameter set on the request but do have it set here as a default, the request will automatically pick up the default value. Ask the API to complete the prompt(s) using the specified request. This is non-streaming, so it will wait until the API returns the full result.
Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and asynchronously returns the completion result; look in its completions for the results. Ask the API to complete the prompt(s) using the specified parameters. This is non-streaming, so it will wait until the API returns the full result. Any non-specified parameters will fall back to the default values specified in the endpoint's default request arguments, if present. Takes the prompt to generate from; the model to use (you can query the models endpoint to see all of your available models, or use a standard model); how many tokens to complete to (fewer may be returned if a stop sequence is hit); the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; the presence penalty and frequency penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; logprobs (if logprobs is 10, the API will return a list of the 10 most likely tokens; if logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response); whether to echo back the prompt in addition to the completion; and one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence). Asynchronously returns the completion result; look in its completions for the results. Ask the API to complete the prompt(s) using the specified prompts, with other parameters drawn from the default values specified in the endpoint's default request arguments, if present. This is non-streaming, so it will wait until the API returns the full result. Takes one or more prompts to generate from. Ask the API to complete the prompt(s) using the specified request and a requested number of outputs. This is non-streaming, so it will wait until the API returns the full result. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and a convenience override for the number of outputs; asynchronously returns the completion result, whose completions should have a length equal to the requested number. Ask the API to complete the prompt(s) using the specified request, and stream the results to a callback as they come in. If you are on a recent version of C# that supports async enumerables, you may prefer the cleaner syntax of the enumerable variant instead. One overload takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and an action to be called as each new result arrives, which includes the index of the result in the overall result set; another overload takes the request to send to the API (again with no fallback to defaults)
and an action to be called as each new result arrives. Ask the API to complete the prompt(s) using the specified request, and stream the results as they come in. If you are not using C# 8 with async enumerable support, or if you are using the .NET Framework, you may need to use the callback-based variant instead. Takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments) and returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable. A second overload takes the parameters directly, and any non-specified parameters will fall back to the default values specified in the endpoint's default request arguments, if present: the prompt to generate from; the model to use (you can query the models endpoint to see all of your available models, or use a standard model); how many tokens to complete to (fewer may be returned if a stop sequence is hit); the sampling temperature, where higher values mean the model will take more risks (try 0.9 for more creative applications, and 0, argmax sampling, for ones with a well-defined answer; it is generally recommended to use this or top_p but not both); top_p nucleus sampling, where the model considers the results of the tokens with top_p probability mass, so 0.1 means only the tokens comprising the top 10% probability mass are considered (it is generally recommended to use this or temperature but not both); how many different choices to request for each prompt; the presence penalty and frequency penalty, each of which should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse; logprobs (if logprobs is 10, the API will return a list of the 10 most likely tokens; if logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response); whether to echo back the prompt in addition to the completion; and one or more sequences where the API will stop generating further tokens (the returned text will not contain the stop sequence). Returns an async enumerable with each of the results as they come in; see the C# docs for more details on how to consume an async enumerable. Simply returns a string of the prompt followed by the best completion: takes the request to send to the API (this does not fall back to the default values specified in the endpoint's default request arguments). Simply returns the best completion: takes the prompt to complete and returns the best completion.

OpenAI's text embeddings measure the relatedness of text strings by generating an embedding, which is a vector (list) of floating point numbers. The distance between two vectors measures their relatedness: small distances suggest high relatedness and large distances suggest low relatedness. This endpoint allows you to send requests to the recommended model without needing to specify one; every request uses the default embedding model. The name of the endpoint is the final path segment in the API URL, for example "embeddings". Constructor of the API endpoint: rather than instantiating this yourself, access it through an instance of the API client.
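A sketch of the embeddings endpoint, assuming it is exposed as api.Embeddings with a convenience method that returns the vector as a float[] (described below); the method name GetEmbeddingsAsync is an assumption. The cosine-similarity loop illustrates the relatedness measure described above.

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;

class EmbeddingDemo
{
    static async Task Run(OpenAIAPI api)
    {
        float[] a = await api.Embeddings.GetEmbeddingsAsync("A cat sat on the mat.");
        float[] b = await api.Embeddings.GetEmbeddingsAsync("A feline rested on the rug.");

        // Relatedness between two embeddings is often scored with cosine similarity.
        double dot = 0, magA = 0, magB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }
        Console.WriteLine(dot / (Math.Sqrt(magA) * Math.Sqrt(magB)));
    }
}
```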
Ask the API to embed text using the default embedding model: takes the text to be embedded and asynchronously returns the embedding result; look in its data to find the vector of floating point numbers. Ask the API to embed text using a custom request: takes the request to be sent and asynchronously returns the embedding result; look in its data to find the vector of floating point numbers. Ask the API to embed text using the default embedding model: takes the text to be embedded and asynchronously returns the first embedding result as an array of floats. Represents a request to the Embeddings API, matching the OpenAI docs: the ID of the model to use (you can query the models endpoint to see all of your available models, or use a standard model) and the main text to be embedded. One constructor creates a new, empty embedding request; another creates a new request with the specified parameters (the model to use and the prompt to transform); and another creates a new request with the specified input (the prompt to transform) and the default model. Represents an embedding result returned by the Embedding API: the list of results of the embedding, and usage statistics of how many tokens have been used for this request. Allows an EmbeddingResult to be implicitly cast to the array of floats representing the first embedding result, taking the result to cast. Data returned from the Embedding API: the type of the response (in the case of data, this will be "embedding"), the input text represented as a vector (list) of floating point numbers, and the index. An interface for the Embeddings endpoint, for ease of mock testing, etc. It allows you to send requests to the recommended model without needing to specify one; every request uses the default embedding model. Ask the API to embed text using the default embedding model: takes the text to be embedded and asynchronously returns the embedding result; look in its data to find the vector of floating point numbers. Ask the API to embed text using a custom request: takes the request to be sent and asynchronously returns the embedding result. Ask the API to embed text using the default embedding model: takes the text to be embedded and asynchronously returns the first embedding result as an array of floats.

A base object for any OpenAI API endpoint, encompassing common functionality: the internal reference to the API, mostly used for authentication. Constructor of the API endpoint base, to be called from the constructor of any derived classes; rather than instantiating any endpoint yourself, access it through an instance of the API client. The name of the endpoint is the final path segment in the API URL and must be overridden in a derived class. Gets the URL of the endpoint, based on the base OpenAI API URL followed by the endpoint name, for example "https://api.openai.com/v1/completions". Gets an HttpClient with the appropriate authorization and other headers set, returning the fully initialized HttpClient; throws if there is no valid authentication. Formats a human-readable error message relating to calling the API and parsing the response, taking the full content returned in the HTTP response, the HTTP response object itself, the name of the endpoint being used, and optional additional details about the endpoint of this request; returns a human-readable string error message. Sends an HTTP request and returns the response; it does not do any parsing, but does do error handling. Optionally takes a URL that overrides the endpoint for this request (if omitted, the endpoint's URL is used), the HTTP verb to use (if omitted, "GET" is assumed),
an optional JSON-serializable object to include in the request body, and optionally whether to stream the response (otherwise it waits for the entire response before returning); returns the HttpResponseMessage of the response, which is confirmed to be successful, and throws an exception if a non-success HTTP response was returned. Sends an HTTP GET request and returns the string content of the response without parsing, with error handling: optionally takes a URL that overrides the endpoint for this request (if omitted, the endpoint's URL is used); returns the text string of the response, which is confirmed to be successful, and throws an exception if a non-success HTTP response was returned. Sends an HTTP request and does initial parsing into a result class derived from the base API result: optionally takes a URL that overrides the endpoint for this request (if omitted, the endpoint's URL is used), the HTTP verb to use (if omitted, "GET" is assumed), and a JSON-serializable object to include in the request body; returns an awaitable Task with the parsed result, and throws an exception if a non-success HTTP response was returned or if the result couldn't be parsed. Convenience wrappers send an HTTP GET, POST, DELETE, or PUT request and do the same initial parsing, each optionally taking a URL that overrides the endpoint for this request and (except for GET) a JSON-serializable object to include in the request body; each returns an awaitable Task with the parsed result, and throws an exception if a non-success HTTP response was returned or if the result couldn't be parsed. Sends an HTTP request and handles a streaming response, doing basic line splitting and error handling: optionally takes a URL that overrides the endpoint for this request (if omitted, the endpoint's URL is used), the HTTP verb to use (if omitted, "GET" is assumed), and a JSON-serializable object to include in the request body; returns the HttpResponseMessage of the response, which is confirmed to be successful, and throws an exception if a non-success HTTP response was returned.

Represents a single file used with the OpenAI Files endpoint. Files are used to upload and manage documents that can be used with features like fine-tuning.
Its properties include: the unique ID for this file, so that it can be referenced in other operations; the name of the file; the purpose of this file (fine-tune, search, etc.); the size of the file in bytes; the timestamp for the creation time of this file; an attribute used in the delete file operation when the object is deleted; the status of the file (e.g. "uploaded" when an upload operation is done); and the status details, which may be null.

The API endpoint for the List, Upload, Delete, and Retrieve file operations. Constructor of the API endpoint: rather than instantiating this yourself, access it through an instance of the API client. The name of the endpoint is the final path segment in the API URL, for example "files". Get the list of all files. Returns information about a specific file, taking the ID of the file to use for this request. Returns the contents of the specific file as a string, taking the ID of the file to use for this request. Delete a file, taking the ID of the file to use for this request. Upload a file that contains document(s) to be used across various endpoints/features; currently, the size of all the files uploaded by one organization can be up to 1 GB (please contact OpenAI if you need to increase the storage limit). Takes the name of the file to use for this request and the intended purpose of the uploaded documents; use "fine-tune" for fine-tuning, which allows the format of the uploaded file to be validated. A helper class deserializes the JSON API responses and should not be used directly. An interface for the Files endpoint, for ease of mock testing, etc., exposes the same operations: get the list of all files; return information about a specific file by ID; return the contents of a specific file as a string by ID; delete a file by ID; and upload a file that contains document(s) to be used across various endpoints/features, taking the name of the file and the intended purpose of the uploaded documents (use "fine-tune" for fine-tuning, which allows the format of the uploaded file to be validated; currently, the size of all the files uploaded by one organization can be up to 1 GB, and you should contact OpenAI if you need to increase the storage limit).

An interface for the image generation endpoint: given a prompt, the model will generate a new image. Ask the API to create an image given a prompt: one method takes a request to be sent, and another takes a text description of the desired image(s); each asynchronously returns the image result, and you should look in its data for the results. Given a prompt, the model will generate a new image. The name of the endpoint is the final path segment in the API URL, for example "image". Constructor of the API endpoint: rather than instantiating this yourself, access it through an instance of the API client. Ask the API to create an image given a prompt: one method takes a text description of the desired image(s), and another takes a request to be sent; each asynchronously returns the image result, and you should look in its data for the results.
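A sketch of image generation, assuming the endpoint is exposed as api.ImageGenerations and the request type is ImageGenerationRequest as described below; the type, member, and method names here are assumptions based on this reference.

```csharp
using System;
using System.Threading.Tasks;
using OpenAI_API;

class ImageDemo
{
    static async Task Run(OpenAIAPI api)
    {
        var request = new ImageGenerationRequest(
            "A watercolor painting of a lighthouse at dawn",  // prompt, max 1000 chars
            1,                                                // number of images
            ImageSize._512);                                  // 256x256, 512x512, or 1024x1024

        var result = await api.ImageGenerations.CreateImageAsync(request);

        // Depending on the response format, results carry a URL or base64 data.
        Console.WriteLine(result.Data[0].Url);
    }
}
```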
Given a prompt, the model will generate a new image. Ask the API to create an image given a prompt. The request to send Asynchronously returns the image result. Look in its Ask the API to create an image given a prompt. A text description of the desired image(s) Asynchronously returns the image result. Look in its Given a prompt, the model will generate a new image. The name of the endpoint, which is the final path segment in the API URL. For example, "image". Constructor of the api endpoint. Rather than instantiating this yourself, access it through an instance of as . Ask the API to create an image given a prompt. A text description of the desired image(s) Asynchronously returns the image result. Look in its Ask the API to create an image given a prompt. The request to send Asynchronously returns the image result. Look in its Represents a request to the Images API. Mostly matches the parameters in the OpenAI docs, although some have been renamed or expanded into single/multiple properties for ease of use. A text description of the desired image(s). The maximum length is 1000 characters. How many different choices to request for each prompt. Defaults to 1. A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Optional. The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. Defaults to 1024x1024. The format in which the generated images are returned. Must be one of url or b64_json. Defaults to Url. Creates a new, empty Creates a new with the specified parameters A text description of the desired image(s). The maximum length is 1000 characters. How many different choices to request for each prompt. Defaults to 1. The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. The format in which the generated images are returned. Must be one of url or b64_json. Represents available response formats for image generation endpoints Requests the generated image to be returned as a url Requests the generated image to be returned as base64-encoded image data (b64_json) Gets the string value for this response format to pass to the API The response format as a string Gets the string value for this response format to pass to the API The ImageResponseFormat to convert Represents an image result returned by the Image API. List of the generated image results Gets the url or base64-encoded image data of the first result, or null if there are no results Data returned from the Image API. The url of the image result The base64-encoded image data as returned by the API Represents available sizes for image generation endpoints Requests an image that is 256x256 Requests an image that is 512x512 Requests an image that is 1024x1024 Gets the string value for this size to pass to the API The size as a string Gets the string value for this size to pass to the API The ImageSize to convert
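Putting the image request pieces together, here is a hedged sketch of generating images. The constructor parameter names, the ImageSize._512 member, and the ImageResponseFormat members are assumptions based on the descriptions above.

    using System;
    using System.Threading.Tasks;
    using OpenAI_API;
    using OpenAI_API.Images;

    public static class ImageUsageSketch
    {
        public static async Task RunAsync()
        {
            var api = new OpenAIAPI();

            // Simplest call: just a prompt; defaults apply (1 image, 1024x1024, url).
            var result = await api.ImageGenerations.CreateImageAsync(
                "A watercolor painting of a lighthouse at dawn");
            Console.WriteLine(result.Data[0].Url);

            // Full request: two 512x512 images returned as base64-encoded data.
            var request = new ImageGenerationRequest(
                prompt: "A pixel-art robot reading a book",
                numOfImages: 2,
                size: ImageSize._512,
                responseFormat: ImageResponseFormat.B64_json);
            var detailed = await api.ImageGenerations.CreateImageAsync(request);
            Console.WriteLine(detailed.Data.Count);
        }
    }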
An interface for , for ease of mock testing, etc Base url for OpenAI. For OpenAI, should be "https://api.openai.com/{0}/{1}". For Azure, should be "https://(your-resource-name).openai.azure.com/openai/deployments/(deployment-id)/{1}?api-version={0}". Version of the REST API The API authentication information to use for API calls Text generation in the form of chat messages. This interacts with the ChatGPT API. Classify text against the OpenAI Content Policy. Text generation is the core function of the API. You give the API a prompt, and it generates a completion. The way you “program” the API to do a task is by simply describing the task in plain English or providing a few written examples. This simple approach works for a wide range of use cases, including summarization, translation, grammar correction, question answering, chatbots, composing emails, and much more (see the prompt library for inspiration). The API lets you transform text into a vector (list) of floating point numbers. The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness. The API endpoint for querying available Engines/models The API lets you do operations with files. You can upload, delete or retrieve files. Files can be used for fine-tuning, search, etc. The API lets you do operations with images. Given a prompt and/or an input image, the model will generate a new image. An interface for , for ease of mock testing, etc Get details about a particular Model from the API, specifically properties such as and permissions. The id/name of the model to get more details about Asynchronously returns the with all available properties Get details about a particular Model from the API, specifically properties such as and permissions. The id/name of the model to get more details about Obsolete: IGNORED Asynchronously returns the with all available properties List all models via the API Asynchronously returns the list of all s Represents a language model The id/name of the model The owner of this model. Generally "openai" is a generic OpenAI model, or the organization if a custom or fine-tuned model. The type of object. Should always be 'model'. The time when the model was created The time when the model was created in unix epoch format Permissions for use of the model Currently (2023-01-27) seems like this is a duplicate of but including for completeness. Currently (2023-01-27) seems unused, probably intended for nesting of models in a later release Allows a model to be implicitly cast to the string of its The to cast to a string. Allows a string to be implicitly cast as an with that The id/ to use Represents a Model with the given id/ The id/ to use. Represents a generic Model/model The default model to use in requests if no other model is specified. Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost Capable of straightforward tasks, very fast, and lower cost. Very capable, but faster and lower cost than Davinci. Most capable GPT-3 model. Can do any task the other models can do, often with higher quality, longer output and better instruction-following. Also supports inserting completions within text. Almost as capable as Davinci Codex, but slightly faster. This speed advantage may make it preferable for real-time applications. Most capable Codex model. Particularly good at translating natural language to code. In addition to completing code, also supports inserting completions within code. OpenAI offers one second-generation embedding model for use with the embeddings API endpoint. Most capable GPT-3.5 model, optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with the latest model iteration. Snapshot of gpt-3.5-turbo from March 1st 2023. Unlike gpt-3.5-turbo, this model will not receive updates, and will only be supported for a three month period ending on June 1st 2023. More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with the latest model iteration. Currently in limited beta, so your OpenAI account needs to be whitelisted to use this. Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with the latest model iteration. Currently in limited beta, so your OpenAI account needs to be whitelisted to use this. Stable text moderation model that may provide lower accuracy compared to TextModerationLatest. OpenAI states they will provide advance notice before updating this model. The latest text moderation model. This model will be automatically upgraded over time. Gets more details about this Model from the API, specifically properties such as and permissions. An instance of the API with authentication in order to call the endpoint. Asynchronously returns a Model with all relevant properties filled in
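A short usage sketch for the model listing and detail calls just described; the method and property names follow the descriptions above but should be treated as assumptions.

    using System;
    using System.Threading.Tasks;
    using OpenAI_API;
    using OpenAI_API.Models;

    public static class ModelsUsageSketch
    {
        public static async Task RunAsync()
        {
            var api = new OpenAIAPI();

            // List every model available to this account or organization.
            var models = await api.Models.GetModelsAsync();
            foreach (var m in models)
                Console.WriteLine($"{m.ModelID} (owned by {m.OwnedBy})");

            // Retrieve details and permissions for one model. Because a Model
            // implicitly casts to and from its id string, a plain string works.
            var details = await api.Models.RetrieveModelDetailsAsync("gpt-3.5-turbo");

            // The predefined model fields can be used anywhere a model is expected.
            Model chat = Model.ChatGPTTurbo; // assumed field name for gpt-3.5-turbo
            string id = chat;                // implicit cast back to the id string
        }
    }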
Permissions for using the model Permission Id (not to be confused with ModelId) Object type, should always be 'model_permission' The time when the permission was created Unix timestamp for creation date/time Can the model be created? Does the model support temperature sampling? https://beta.openai.com/docs/api-reference/completions/create#completions/create-temperature Does the model support logprobs? https://beta.openai.com/docs/api-reference/completions/create#completions/create-logprobs Does the model support search indices? Does the model allow fine-tuning? https://beta.openai.com/docs/api-reference/fine-tunes Is the model only allowed for a particular organization? May not be implemented yet. Is the model part of a group? Seems not implemented yet. Always null. The API endpoint for querying available models The name of the endpoint, which is the final path segment in the API URL. For example, "models". Constructor of the api endpoint. Rather than instantiating this yourself, access it through an instance of as . Get details about a particular Model from the API, specifically properties such as and permissions. The id/name of the model to get more details about Asynchronously returns the with all available properties List all models via the API Asynchronously returns the list of all s Get details about a particular Model from the API, specifically properties such as and permissions. The id/name of the model to get more details about Obsolete: IGNORED Asynchronously returns the with all available properties A helper class to deserialize the JSON API responses. This should not be used directly. An interface for , which classifies text against the OpenAI Content Policy This allows you to send requests to the recommended model without needing to specify one. OpenAI recommends using the model Ask the API to classify the text using a custom request. Request to send to the API Asynchronously returns the classification result Ask the API to classify the text using the default model. Text to classify Asynchronously returns the classification result This endpoint classifies text against the OpenAI Content Policy This allows you to send requests to the recommended model without needing to specify one. OpenAI recommends using the model The name of the endpoint, which is the final path segment in the API URL. For example, "moderations". Constructor of the api endpoint. Rather than instantiating this yourself, access it through an instance of as . Ask the API to classify the text using the default model. Text to classify Asynchronously returns the classification result Ask the API to classify the text using a custom request. Request to send to the API Asynchronously returns the classification result Represents a request to the Moderations API. Which Moderation model to use for this request. Two content moderation models are available: and . The default is , which will be automatically upgraded over time. This ensures you are always using the most accurate model. If you use , OpenAI will provide advance notice before updating the model. Accuracy of may be slightly lower than for . The input text to classify An array of inputs to classify Creates a new, empty Creates a new with the specified parameters The prompt to classify The model to use. You can use to see all of your available models, or use a standard model like . Creates a new with the specified parameters An array of prompts to classify The model to use. You can use to see all of your available models, or use a standard model like . Creates a new with the specified input(s) and the model. One or more prompts to classify
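With the request shape covered, a hedged sketch of constructing moderation requests; the namespace and the exact constructor parameter forms are assumptions based on the descriptions above.

    using OpenAI_API.Models;
    using OpenAI_API.Moderation;

    // Single input, classified with the default (latest) moderation model.
    var single = new ModerationRequest("I want to hurt them.");

    // Several inputs in one request, pinned to the stable moderation model so
    // that results do not shift when OpenAI upgrades the latest model.
    var batch = new ModerationRequest(
        new[] { "first text to check", "second text to check" },
        Model.TextModerationStable);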
Represents a moderation result returned by the Moderations API List of results returned from the Moderations API request The unique identifier associated with a moderation request Consists of the prefix "modr-" followed by a randomly generated alphanumeric string Convenience function to return the highest confidence category for which the content was flagged, or null if no content was flagged the highest confidence category for which the content was flagged, or null if no content was flagged The result generated by the Moderations API request A series of categories that the content could be flagged for. Values are booleans, indicating if the text is flagged in that category Confidence scores for the different category flags. Values are between 0 and 1, where 0 indicates low confidence True if the text was flagged in any of the categories Returns a list of all categories for which the content was flagged, sorted from highest confidence to lowest Returns the highest confidence category for which the content was flagged, or null if no content was flagged Returns the highest confidence flag score across all categories Series of boolean values indicating what the text is flagged for If the text contains hate speech If the text contains hate/threatening speech If the text contains content about self-harm If the text contains sexual content If the text contains sexual content featuring minors If the text contains violent content If the text contains violent and graphic content Confidence scores for the different category flags Confidence score indicating "hate" content is detected in the text A value between 0 and 1, where 0 indicates low confidence Confidence score indicating "hate/threatening" content is detected in the text A value between 0 and 1, where 0 indicates low confidence Confidence score indicating "self-harm" content is detected in the text A value between 0 and 1, where 0 indicates low confidence Confidence score indicating "sexual" content is detected in the text A value between 0 and 1, where 0 indicates low confidence Confidence score indicating "sexual/minors" content is detected in the text A value between 0 and 1, where 0 indicates low confidence Confidence score indicating "violence" content is detected in the text A value between 0 and 1, where 0 indicates low confidence Confidence score indicating "violence/graphic" content is detected in the text A value between 0 and 1, where 0 indicates low confidence
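A hedged end-to-end sketch of calling the moderation endpoint and inspecting the result; the MainContentFlag convenience member name is an assumption based on the description above.

    using System;
    using System.Threading.Tasks;
    using OpenAI_API;

    public static class ModerationUsageSketch
    {
        public static async Task RunAsync()
        {
            var api = new OpenAIAPI();

            // Classify one piece of text with the recommended default model.
            var moderation = await api.Moderation.CallModerationAsync("some text to check");

            var result = moderation.Results[0];
            if (result.Flagged)
            {
                // Boolean category flags and 0..1 confidence scores line up.
                Console.WriteLine($"Hate: {result.Categories.Hate} " +
                                  $"(score {result.CategoryScores.Hate:F3})");
                // Assumed convenience member returning the top flagged category.
                Console.WriteLine($"Main flag: {result.MainContentFlag}");
            }
        }
    }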
Entry point to the OpenAI API, handling auth and allowing access to the various API endpoints Base url for OpenAI. For OpenAI, should be "https://api.openai.com/{0}/{1}". For Azure, should be "https://(your-resource-name).openai.azure.com/openai/deployments/(deployment-id)/{1}?api-version={0}". Version of the REST API The API authentication information to use for API calls Optionally provide an IHttpClientFactory to create the client to send requests. Creates a new entry point to the OpenAI API, handling auth and allowing access to the various API endpoints The API authentication information to use for API calls, or to attempt to use the , potentially loading from environment vars or from a config file. Instantiates a version of the API for connecting to the Azure OpenAI endpoint instead of the main OpenAI endpoint. The name of your Azure OpenAI Resource The name of your model deployment. You're required to first deploy a model before you can make calls. The API authentication information to use for API calls, or to attempt to use the , potentially loading from environment vars or from a config file. Currently this library only supports the api-key flow, not the Azure AD flow. Text generation is the core function of the API. You give the API a prompt, and it generates a completion. The way you “program” the API to do a task is by simply describing the task in plain English or providing a few written examples. This simple approach works for a wide range of use cases, including summarization, translation, grammar correction, question answering, chatbots, composing emails, and much more (see the prompt library for inspiration). The API lets you transform text into a vector (list) of floating point numbers. The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness. Text generation in the form of chat messages. This interacts with the ChatGPT API. Classify text against the OpenAI Content Policy. The API endpoint for querying available Engines/models The API lets you do operations with files. You can upload, delete or retrieve files. Files can be used for fine-tuning, search, etc. The API lets you do operations with images. Given a prompt and/or an input image, the model will generate a new image. Usage statistics of how many tokens have been used for this request. How many tokens the prompt consisted of How many tokens the request consumed in total
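Finally, a sketch tying the entry point together: constructing the API for OpenAI or Azure and reading the token usage reported on a result. The resource name, deployment id, and keys are placeholders, and the chat overload and Usage property names are assumptions based on the descriptions in this document.

    using System;
    using System.Threading.Tasks;
    using OpenAI_API;

    public static class EntryPointSketch
    {
        public static async Task RunAsync()
        {
            // Standard OpenAI endpoint; with no argument, auth is loaded from
            // environment variables or a config file as described earlier.
            var api = new OpenAIAPI();

            // Azure OpenAI instead (api-key flow only); placeholders shown.
            var azureApi = OpenAIAPI.ForAzure(
                "your-resource-name", "your-deployment-id", "your-azure-api-key");

            // Every result carries usage statistics for billing and debugging.
            var result = await api.Chat.CreateChatCompletionAsync("Say hello.");
            Console.WriteLine($"Prompt tokens: {result.Usage.PromptTokens}, " +
                              $"total tokens: {result.Usage.TotalTokens}");
        }
    }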