Data tools are plugin-provided functions the AI invokes during chat conversations. When a user asks a question that requires data from your plugin, the AI calls the appropriate tool automatically — no manual wiring needed.
Tools are defined as JSON files in the tools/ folder of your bundle. Each file describes one tool: what it does, what parameters it accepts, and where the data comes from. You can ship up to 10 tools per plugin.
Tool file format
Each tool is a JSON file at tools/{slug}.json:
```json
{
  "name": "sample_lookup",
  "slug": "sample-lookup",
  "description": "Look up records from the plugin data store. Call with no args for all, or pass an ID.",
  "parameters": [
    { "name": "id", "type": "string", "description": "Record ID to look up", "required": false }
  ],
  "source": {
    "type": "data_store",
    "config": { ... }
  }
}
```
Field reference
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Display name |
| `slug` | string | Yes | Unique identifier. Lowercase letters and hyphens only. |
| `description` | string | Yes | Shown to the AI so it knows when to call the tool |
| `parameters` | array | No | Input parameters the AI can pass |
| `source` | object | Yes | Data source configuration (see below) |
Parameter object
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Parameter name |
| `type` | string | Yes | One of `string`, `number`, or `boolean` |
| `description` | string | Yes | Tells the AI what the parameter does |
| `required` | boolean | No | Whether the AI must provide this parameter |
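A `parameters` array can mix all three types. A hypothetical example (the parameter names and descriptions are illustrative):

```json
"parameters": [
  { "name": "query", "type": "string", "description": "Search text", "required": true },
  { "name": "limit", "type": "number", "description": "Maximum records to return", "required": false },
  { "name": "include_archived", "type": "boolean", "description": "Include archived records", "required": false }
]
```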
Source types
The source object tells CableKnit where to fetch data when the tool is called. Three source types are available.
data_store
Query your plugin’s key-value data store. Useful for storing and retrieving records that persist across conversations.
```json
{
  "type": "data_store",
  "config": {
    "key_prefix": "records:",
    "single_key_template": "records:"
  }
}
```
| Config field | Description |
|---|---|
| `key_prefix` | Prefix for listing all matching keys |
| `single_key_template` | Template for looking up a single key. The template's placeholder is replaced with the parameter value. |
Full example — a tool that looks up records by ID or lists all:
```json
{
  "name": "sample_lookup",
  "slug": "sample-lookup",
  "description": "Look up records from the plugin data store. Call with no args for all, or pass an ID.",
  "parameters": [
    { "name": "id", "type": "string", "description": "Record ID to look up", "required": false }
  ],
  "source": {
    "type": "data_store",
    "config": {
      "key_prefix": "records:",
      "single_key_template": "records:"
    }
  }
}
```
connector
Query connected services like Salesforce, HubSpot, or QuickBooks. When the company has an active connector, the AI can pull live data through your tool.
```json
{
  "type": "connector",
  "config": {
    "connector": "salesforce",
    "resources": "contacts,opportunities",
    "filter_mapping": { "name": "" }
  }
}
```
| Config field | Description |
|---|---|
| `connector` | Connector slug (see Connectors) |
| `resources` | Comma-separated list of resource types to query |
| `filter_mapping` | Maps parameter values to connector query filters |
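Full example: a hypothetical connector tool that searches CRM contacts by name. The tool name, slug, and parameter are illustrative; the config mirrors the snippet above:

```json
{
  "name": "crm_contact_lookup",
  "slug": "crm-contact-lookup",
  "description": "Search CRM contacts and opportunities by name.",
  "parameters": [
    { "name": "name", "type": "string", "description": "Contact name to filter by", "required": true }
  ],
  "source": {
    "type": "connector",
    "config": {
      "connector": "salesforce",
      "resources": "contacts,opportunities",
      "filter_mapping": { "name": "" }
    }
  }
}
```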
static
Ship lookup tables as JSON directly in the bundle. Good for reference data that doesn’t change often — codes, categories, configuration values. Max 64KB per static data payload.
```json
{
  "type": "static",
  "config": {
    "data": [
      { "code": "A", "label": "Category A", "description": "First category" },
      { "code": "B", "label": "Category B", "description": "Second category" }
    ]
  }
}
```
Full example — a reference data tool:
```json
{
  "name": "reference_data",
  "slug": "reference-data",
  "description": "Returns reference data definitions.",
  "source": {
    "type": "static",
    "config": {
      "data": [
        { "code": "A", "label": "Category A", "description": "First category" },
        { "code": "B", "label": "Category B", "description": "Second category" },
        { "code": "C", "label": "Category C", "description": "Third category" }
      ]
    }
  }
}
```
Data store operations
When using the data_store source type, the following operations are available:
| Operation | Behavior |
|---|---|
| get | Retrieve a single value by key |
| set | Store a value at a key. Overwrites existing. |
| delete | Remove a key and its value |
| list | Return all keys matching a prefix |
| increment | Atomically increment a numeric value |
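The semantics of these five operations can be sketched with a minimal in-memory model. This is purely illustrative; the real store is a hosted service with the limits described below, and the `SketchStore` class is not part of any API:

```ruby
# Illustrative in-memory model of the data store operations.
class SketchStore
  def initialize
    @data = {}
  end

  # get: retrieve a single value by key (nil if absent)
  def get(key)
    @data[key]
  end

  # set: store a value at a key, overwriting any existing value
  def set(key, value)
    @data[key] = value
  end

  # delete: remove a key and its value
  def delete(key)
    @data.delete(key)
  end

  # list: return all keys matching a prefix
  def list(prefix)
    @data.keys.select { |k| k.start_with?(prefix) }
  end

  # increment: add to a numeric value, treating a missing key as 0
  def increment(key, amount = 1)
    @data[key] = @data.fetch(key, 0) + amount
  end
end
```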
Limits
| Resource | Limit |
|---|---|
| Keys per plugin per company | 500 |
| Value size | 64KB |
| Total store size | 1MB |
| Key expiration | Optional TTL per key |
Data is scoped per plugin per company — your plugin’s data is isolated from other plugins and other companies.
Sandboxed code execution
When your plugin declares data tools, you can also make sandboxed Ruby available so the AI can write custom queries and transformations against your data sources — beyond what static tool definitions cover.
Opting in
Add execute-code to the platform_tools array in plugin.json. By default the sandbox is read-only — the AI can query data but cannot modify it:
```json
{
  "platform_tools": ["execute-code"]
}
```
To enable write operations (set, delete, increment), use the object format with "writable": true:
```json
{
  "platform_tools": [
    { "slug": "execute-code", "writable": true }
  ]
}
```
The tool only registers when the plugin has at least one active data source. If you remove all tools/ files, the execute-code tool will not appear.
How it works
Each active data source becomes a callable method inside the sandbox. Method names match the data source slug with hyphens converted to underscores. For example, a data source with slug sales-records becomes callable as sales_records.
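The mapping is a simple character substitution, sketched here as a hypothetical helper (not part of the platform API):

```ruby
# Hypothetical helper: derive a sandbox method name from a data source slug.
# Hyphens become underscores; slugs contain only lowercase letters and hyphens.
def sandbox_method_name(slug)
  slug.tr("-", "_")
end
```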
Read methods (always available):
Each data source gets one generated read method that queries data using the source's configured parameters, exactly as if the data tool had been called directly.
Write methods (requires writable: true):
When writable is enabled, additional methods are exposed per data source:
| Method | Description |
|---|---|
| `{source}_set(key, value)` | Store a value at a key. Overwrites existing. |
| `{source}_delete(key)` | Remove a key and its value. |
| `{source}_increment(key, amount)` | Atomically increment a numeric value. |
These methods are not available in the default read-only mode.
Example
Given a plugin with a sales-records data source, the AI might generate:
```ruby
records = sales_records("q1:")
totals = records.map { |r| r["amount"].to_f }
{ count: totals.size, total: totals.sum, average: totals.sum / totals.size }
```
With writable: true, the AI can also modify data:
```ruby
sales_records_set("summary:q1", { total: 142_500, count: 37 })
sales_records_increment("stats:query_count")
```
Limits
| Resource | Limit |
|---|---|
| Timeout | 5 seconds |
| Memory | 10 MB |
| Network access | None |
| Filesystem access | None |
Code runs in an isolated MRuby Enclave. If execution exceeds the timeout or memory cap, it returns an error.
See Built-in Capabilities — execute_code for the parameter reference.