## Authentication
Set your `FIREWORKS_API_KEY` environment variable. You can get your key from the Fireworks AI dashboard.
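A minimal sketch of supplying the key from Python if you prefer not to export it in your shell; the `Fireworks` model class and its `api_key` parameter are taken from the Params table below:

```python
import os

# Option 1: set the environment variable before constructing the model.
# The Fireworks model class reads FIREWORKS_API_KEY automatically.
os.environ["FIREWORKS_API_KEY"] = "your-api-key"

# Option 2: pass the key explicitly via the api_key parameter
# shown in the Params table below.
# model = Fireworks(api_key="your-api-key")
```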
## Prompt caching
Prompt caching happens automatically when using our `Fireworks` model. You can read more about how Fireworks handles caching in their docs.
## Example
Use `Fireworks` with your `Agent`:
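A minimal sketch of this pattern; the import paths and the `print_response` call are assumptions based on typical usage of the library and may differ in your installation. The model `id` is the default from the Params table below, and `FIREWORKS_API_KEY` is read from the environment (see Authentication above):

```python
from agno.agent import Agent                  # assumed import path
from agno.models.fireworks import Fireworks   # assumed import path

# Build an agent backed by the default Fireworks model.
agent = Agent(
    model=Fireworks(id="accounts/fireworks/models/llama-v3p1-405b-instruct"),
    markdown=True,
)

# Stream a simple response to verify the setup.
agent.print_response("Share a two-sentence horror story.", stream=True)
```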
View more examples here.
## Params
| Parameter | Type | Default | Description |
|---|---|---|---|
| `id` | `str` | `"accounts/fireworks/models/llama-v3p1-405b-instruct"` | The specific model ID used for generating responses. |
| `name` | `str` | `"Fireworks: {id}"` | The name identifier for the agent. Defaults to `"Fireworks: "` followed by the model ID. |
| `provider` | `str` | `"Fireworks"` | The provider of the model. |
| `api_key` | `Optional[str]` | - | The API key for authenticating requests to the service. Retrieved from the `FIREWORKS_API_KEY` environment variable. |
| `base_url` | `str` | `"https://api.fireworks.ai/inference/v1"` | The base URL for making API requests to the Fireworks service. |
`Fireworks` also supports the params of `OpenAI`.
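For instance, a sketch of overriding the defaults above and passing an OpenAI-style sampling parameter; the import path and the `temperature` keyword are assumptions based on the OpenAI-compatible param support noted above:

```python
from agno.models.fireworks import Fireworks   # assumed import path

# Override the defaults from the Params table; temperature is an
# OpenAI-style param assumed to be passed through to the API.
model = Fireworks(
    id="accounts/fireworks/models/llama-v3p1-405b-instruct",
    api_key="your-api-key",                            # or rely on FIREWORKS_API_KEY
    base_url="https://api.fireworks.ai/inference/v1",
    temperature=0.2,
)
```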