Natural Language

class gretel_client.inference_api.natural_language.NaturalLanguageInferenceAPI(backend_model: str | None = None, *, verify_ssl: bool = True, session: ClientConfig | None = None, skip_configure_session: bool | None = False, **session_kwargs)

Inference API for real-time text generation with Gretel Natural Language.

Parameters:
  • backend_model (str, optional) – The model that is used under the hood. If None, the latest default model will be used. See the backend_model_list property for a list of available models.

  • **session_kwargs – kwargs for your Gretel session.

Raises:

GretelInferenceAPIError – If the specified backend model is not valid.

generate(prompt: str, temperature: float = 0.6, max_tokens: int = 512, top_p: float = 0.9, top_k: int = 43) → str

Generate synthetic text.

Parameters:
  • prompt – The prompt for generating synthetic text.

  • temperature – Sampling temperature. Higher values make output more random.

  • max_tokens – The maximum number of tokens to generate.

  • top_p – The cumulative probability cutoff for sampling tokens.

  • top_k – Number of highest-probability tokens to keep for top-k filtering.
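To see how temperature, top_k, and top_p interact, here is a toy sketch of the usual filtering pipeline over a small token-to-logit mapping. This is an illustration of the general sampling technique only, not Gretel's actual implementation; the distribution and function name are invented for the example.

```python
import math
import random

def sample_token(logits, temperature=0.6, top_k=43, top_p=0.9):
    """Toy temperature / top-k / top-p sampling over a dict of token -> logit."""
    # Temperature scaling: higher temperature flattens the distribution,
    # making low-probability tokens more likely to be chosen.
    scaled = {tok: lg / temperature for tok, lg in logits.items()}
    # Softmax to turn logits into probabilities.
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Top-k: keep only the k highest-probability tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Top-p: keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize the survivors and sample one token.
    z = sum(p for _, p in kept)
    tokens = [t for t, _ in kept]
    weights = [p / z for _, p in kept]
    return random.choices(tokens, weights=weights)[0]

# With top_k=1, only the most likely token survives the filters.
print(sample_token({"data": 3.0, "scientist": 2.0, "joke": 1.0}, top_k=1))
```

Lower temperature and tighter top_k/top_p cutoffs make output more deterministic; the defaults above mirror the defaults in the generate signature.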

Returns:

The generated text as a string.

Example:

from gretel_client.inference_api.natural_language import NaturalLanguageInferenceAPI

llm = NaturalLanguageInferenceAPI(api_key="prompt")

prompt = "Tell me a funny joke about data scientists."
text = llm.generate(prompt=prompt, temperature=0.5, max_tokens=100)

property name: str

Returns the display name for this inference API.