LLaMA API Request Builder
Build Ollama LLaMA API request payloads and cURL commands.
JSON Request Body
{
  "model": "llama-3.1-70b",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "options": {
    "temperature": 0.7
  },
  "stream": false
}
cURL Command
curl -X POST 'http://localhost:11434/api/chat' \
-H 'Content-Type: application/json' \
  -d '{
    "model": "llama-3.1-70b",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "options": {
      "temperature": 0.7
    },
    "stream": false
  }'
Endpoint
http://localhost:11434/api/chat
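The same request can be assembled programmatically. Below is a minimal Python sketch that builds the /api/chat payload shown above and a helper that would POST it to a locally running Ollama server; the function names and the assumption that a server is listening on localhost:11434 are illustrative, not part of the tool itself.

```python
import json
import urllib.request


def build_chat_payload(user_message: str,
                       system_prompt: str = "You are a helpful assistant.",
                       model: str = "llama-3.1-70b",
                       temperature: float = 0.7) -> dict:
    """Assemble an Ollama /api/chat request body matching the JSON above."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "options": {"temperature": temperature},
        "stream": False,
    }


def send_chat(payload: dict, host: str = "http://localhost:11434") -> dict:
    """POST the payload to an Ollama server (requires Ollama to be running)."""
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Build the payload; calling send_chat(payload) would perform the request.
payload = build_chat_payload("Hello!")
```

Because Ollama runs locally without authentication, no Authorization header is added; only Content-Type is set, matching the cURL command above.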
Related Tools
- AI API Request Builder: Build AI API request payloads and cURL commands for any provider.
- OpenAI Request Builder: Build OpenAI Chat Completions API request payloads and cURL commands.
- Claude API Request Builder: Build Anthropic Messages API request payloads and cURL commands.
- LLaMA Token Counter: Count tokens for LLaMA 3.1 405B, 70B, 8B, and LLaMA 3.2 models.
- AI API Headers Builder: Generate correct authentication headers for AI API providers.
Learn More
FAQ
- What is Ollama and how does its API work?
- Ollama is a local LLM runner. Its API accepts POST requests to http://localhost:11434/api/chat with a model name, a messages array, an options object for model parameters, and a stream flag.
- Does LLaMA / Ollama require API key authentication?
- By default, Ollama running locally does not require authentication. Only Content-Type: application/json is needed.
- How do I set temperature in Ollama?
- Ollama passes model parameters inside an options object in the request body, e.g. { "options": { "temperature": 0.7 } }.
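As the FAQ notes, sampling parameters go inside the options object rather than at the top level of the body. The sketch below shows a body with a couple of additional options; top_p and seed are options Ollama commonly accepts but are assumptions here, so verify them against your Ollama version's documentation.

```python
import json

# Sampling parameters belong under "options", not at the top level.
# "temperature" is from the FAQ above; "top_p" and "seed" are additional
# Ollama options (assumed here; check your server's supported parameters).
body = {
    "model": "llama-3.1-70b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "options": {
        "temperature": 0.7,
        "top_p": 0.9,
        "seed": 42,
    },
    "stream": False,
}

# Serialize for use as the -d argument of the cURL command.
serialized = json.dumps(body)
```

Placing a parameter like temperature at the top level of the body instead of under options has no effect, which is a common source of confusion when results do not change.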
Generate ready-to-use JSON request bodies and cURL commands for the Ollama local LLaMA API. Configure the model, temperature, and message turns. Uses the Ollama /api/chat endpoint format with an options object and stream: false.