Tool/Function Calling
The models listed below now support tool calling.
Overview
Tool/function calling enables large language models (LLMs) to interact with external APIs and services: the model determines when an external action is needed and emits a structured request that your application can execute.
Supported Models
| Model | Tools | Tool Choice |
| --- | --- | --- |
| parasail-llama-33-70b-fp8 | ✅ | ✅ |
| parasail-llama-4-scout-instruct | ✅ | ✅ |
| parasail-llama-4-maverick-instruct-fp8 | ✅ | ✅ |
| parasail-qwen3-30b-a3b | ✅ | ✅ |
| parasail-qwen3-235b-a22b | ✅ | ✅ |
| parasail-qwen3-32b | ✅ | ✅ |
| parasail-mistral-devstral-small | ✅ | ✅ |
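The Tools column indicates support for the tools parameter; Tool Choice indicates support for the tool_choice parameter, which controls whether and which tool the model may call. The accepted values follow the OpenAI chat-completions convention:

# tool_choice controls how the model uses the declared tools:
tool_choice = "auto"   # model decides whether to call a tool (default when tools are present)
tool_choice = "none"   # model must answer in plain text and never call a tool
# Force a specific tool by name:
tool_choice = {"type": "function", "function": {"name": "get_weather"}}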
Quickstart Guide
Step 1: Define Your Tool Schema
Define a clear JSON Schema describing the tool's functionality and parameters:
tool_schema = {
    "name": "get_weather",
    "description": "Retrieve weather information for a given location and date.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "date": {"type": "string", "pattern": "^\\d{4}-\\d{2}-\\d{2}$"},
        },
        "required": ["location", "date"],
    },
}
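Before wiring the schema into an API call, you can sanity-check candidate arguments against it locally. A minimal sketch using the third-party jsonschema package (an assumption; any JSON Schema validator works):

import jsonschema

# Validate a candidate argument object against the tool's parameter schema.
# Raises jsonschema.ValidationError if a field is missing or malformed.
candidate_args = {"location": "Manhattan Beach", "date": "2025-06-03"}
jsonschema.validate(instance=candidate_args, schema=tool_schema["parameters"])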
Step 2: Example User Query
example_sentence = "What's the weather like in Manhattan Beach on 2025-06-03?"
Using Parasail with OpenAI-Compatible REST Endpoint
Example Implementation
import os
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.parasail.io/v1",
    api_key=os.getenv("PARASAIL_API_KEY"),
)

response = client.chat.completions.create(
    model="parasail-llama-4-scout-instruct",
    messages=[{"role": "user", "content": example_sentence}],
    # The chat completions API expects each tool wrapped in a "function" envelope.
    tools=[{"type": "function", "function": tool_schema}],
    tool_choice="auto",
)

# `arguments` is a JSON-encoded string; parse it before use.
args = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
print(json.dumps(args, indent=2))
Expected Output
{
  "location": "Manhattan Beach",
  "date": "2025-06-03"
}
Note: Schema validation occurs automatically via Parasail's API gateway.
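A tool call is usually only half the exchange: after you execute the tool, you send its result back so the model can compose a final answer. A minimal sketch of that round trip, reusing the parsed args and the get_weather_service helper implemented under "Integrating Your Tool" below:

message = response.choices[0].message
tool_call = message.tool_calls[0]

# Run your own implementation of the tool with the parsed arguments.
result = get_weather_service(args)

# Feed the result back via a "tool" role message, keyed by tool_call_id.
followup = client.chat.completions.create(
    model="parasail-llama-4-scout-instruct",
    messages=[
        {"role": "user", "content": example_sentence},
        message,  # the assistant turn containing the tool call
        {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)},
    ],
    tools=[{"type": "function", "function": tool_schema}],
)
print(followup.choices[0].message.content)  # natural-language answer using the weather data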
Using Parasail's vLLM Client
Example Implementation
from vllm import LLM, SamplingParams
import json

# vLLM's LLM class loads and runs the model locally: point `model` at the
# weights backing your deployment (a Hugging Face repo ID or local path,
# e.g. the Llama 4 Scout instruct weights).
llm = LLM(model="meta-llama/Llama-4-Scout-17B-16E-Instruct")

prompt = f"""
You have access to this tool:

{json.dumps(tool_schema, indent=2)}

User Query: "{example_sentence}"

Respond **only** in JSON:
{{
    "tool_name": "get_weather",
    "parameters": {{
        "location": "",
        "date": ""
    }}
}}
"""

params = SamplingParams(temperature=0, max_tokens=150)
raw_output = llm.generate([prompt], params)[0].outputs[0].text

# The reply is nested JSON, so stopping on the first "}" would truncate it;
# instead, extract the outermost JSON object from the completion.
json_text = raw_output[raw_output.index("{") : raw_output.rindex("}") + 1]
tool_call = json.loads(json_text)
print(json.dumps(tool_call, indent=2))
Expected Output
{
  "tool_name": "get_weather",
  "parameters": {
    "location": "Manhattan Beach",
    "date": "2025-06-03"
  }
}
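Prompting alone does not guarantee well-formed JSON. Recent vLLM releases can enforce the output shape at decode time with structured outputs; a sketch, assuming a vLLM version that exposes GuidedDecodingParams:

from vllm.sampling_params import GuidedDecodingParams

# Constrain generation so the completion must conform to the parameter schema.
guided = GuidedDecodingParams(json=tool_schema["parameters"])
params = SamplingParams(temperature=0, max_tokens=150, guided_decoding=guided)

output = llm.generate([prompt], params)[0].outputs[0].text
tool_args = json.loads(output)  # parseable JSON matching the schema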
Integrating Your Tool
Implement your external service (e.g., a weather API) to handle the provided parameters:
Example Weather Service Implementation
def get_weather_service(params):
    location = params["location"]
    date = params["date"]
    # Example logic to fetch weather data.
    # Replace this with actual API calls or lookup logic.
    weather_info = {
        "location": location,
        "date": date,
        "weather": "Sunny",
        "temperature": "75°F",
    }
    return weather_info

weather_data = get_weather_service(tool_call["parameters"])
print(json.dumps(weather_data, indent=2))
Expected Service Output
{
  "location": "Manhattan Beach",
  "date": "2025-06-03",
  "weather": "Sunny",
  "temperature": "75°F"
}
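As you add tools, a small registry keeps dispatch uniform across both integration paths. A sketch (any handler beyond get_weather_service is hypothetical):

# Map tool names to their handler functions.
TOOL_HANDLERS = {
    "get_weather": get_weather_service,
    # "search_flights": search_flights_service,  # hypothetical additional tool
}

def dispatch_tool_call(tool_name, params):
    handler = TOOL_HANDLERS.get(tool_name)
    if handler is None:
        raise ValueError(f"Unknown tool: {tool_name}")
    return handler(params)

result = dispatch_tool_call(tool_call["tool_name"], tool_call["parameters"])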
Switching Between REST and vLLM
Both methods use the same tool schema, so you can switch between the REST API and the vLLM client as your application's needs change without rewriting your tool definitions or handlers.