Configure AI agents that don't oscillate, overload, or freeze.
Stop debugging agent failures case-by-case. RPCS-1 Agent Tuner translates your agent's task and environment into specific platform parameters — grounded in the matching principle from cognitive systems research.
Free tier: unlimited tuner usage. SDK on paid plans.
from rpcs1 import recommend_params

config = recommend_params(
    task_description="Customer support agent",
    environment_entropy="dynamic",
    stakes="high",
    commitment_style="cautious",
    target_platform="anthropic",
)
# Grounded in Matching Principle (Pred-09-5: TI ~ 1/H)
print(config.platform_parameters.temperature) # 0.52
print(config.platform_parameters.model_recommendation) # claude-sonnet-4-6
print(config.predicted_regime) # stable
print(config.receiver_profile.TI) # 30
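The recommendation can be passed straight to your model client. Below is a minimal usage sketch with the Anthropic Python SDK; it assumes platform_parameters also exposes max_tokens, which is not shown in the output above.

# Minimal usage sketch: pass the recommended parameters to the Anthropic
# Messages API. Assumes platform_parameters also exposes max_tokens,
# which is not shown in the example output above.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model=config.platform_parameters.model_recommendation,
    temperature=config.platform_parameters.temperature,
    max_tokens=config.platform_parameters.max_tokens,
    messages=[{"role": "user", "content": "My order arrived damaged. What now?"}],
)
print(response.content[0].text)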
The problem every agent builder has

You ship an agent. It works in testing. In production it starts failing in one of three structural ways, and you have no framework for diagnosing why.
Oscillation: the agent revisits the same tool calls and refuses to commit. Diagnosis: high TI + high SG in a fast-changing environment. Fix: lower SG, shorten the context window (TI ↓).

Overload: the agent acts on insufficient information and hallucinates tool calls. Diagnosis: high SG + low FT + a short integration window. Fix: raise FT, lower SG, add a retry strategy.

Freeze: the agent hedges endlessly and never takes action. Diagnosis: low UE + high FT, stuck in the filter. Fix: lower FT, raise UE, adjust the commitment style.

A minimal sketch of this regime-to-fix mapping follows the list.
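The sketch below expresses that triage as a lookup table. The regime labels and adjustment directions are illustrative assumptions, not the SDK's internal logic.

# Hypothetical triage table mapping a predicted failure regime to the
# directional fixes listed above. Regime names and directions are
# illustrative assumptions, not output of the RPCS-1 SDK.
REGIME_FIXES = {
    "oscillating": {"SG": "lower", "TI": "lower"},
    "overloaded":  {"FT": "raise", "SG": "lower", "retries": "add"},
    "frozen":      {"FT": "lower", "UE": "raise"},
}

def triage(predicted_regime: str) -> dict:
    """Look up the directional adjustments for a predicted failure regime."""
    return REGIME_FIXES.get(predicted_regime, {})

print(triage("oscillating"))  # {'SG': 'lower', 'TI': 'lower'}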
Five primitives. One structural framework.
Every recommendation is driven by five receiver primitives from RPCS-1, each mapping to a specific LLM parameter. All outputs are deterministic and traceable: no black-box recommendations. A rough sketch of the mapping follows the list below.
Temporal Integration (TI)
How much history to integrate. Maps to context window strategy and max_tokens.
Signal Gain (SG)
How strongly to amplify signals. Maps inversely to temperature.
Filtering Threshold (FT)
How conservatively to gate action. Drives tool use strategy.
Update Elasticity (UE)
How readily to revise the model. Sets retry and grounding strategy.
Ambiguity Resolution
How aggressively to commit when uncertain.
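As a rough illustration of these mappings, a receiver profile could translate into platform parameters as sketched below. The functional forms, constants, and parameter names are assumptions chosen to show the direction of each mapping, not the RPCS-1 SDK's actual formula.

# Illustrative mapping from receiver primitives to platform parameters.
# The formulas and constants are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ReceiverProfile:
    TI: float  # temporal integration: steps of history to integrate
    SG: float  # signal gain, 0..1
    FT: float  # filtering threshold, 0..1
    UE: float  # update elasticity, 0..1

def to_platform_parameters(p: ReceiverProfile) -> dict:
    return {
        "temperature": round(max(0.0, 1.0 - p.SG), 2),  # SG maps inversely to temperature
        "history_turns": int(p.TI),                     # TI drives context window strategy
        "require_tool_confirmation": p.FT >= 0.5,       # FT gates how conservatively tools fire
        "max_retries": 1 + int(3 * p.UE),               # UE sets the retry strategy
    }

print(to_platform_parameters(ReceiverProfile(TI=30, SG=0.48, FT=0.4, UE=0.6)))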
Pred-09-5 from IMM Paper 9
The Matching Principle: TI ≈ 1 / H
Agents in high-entropy environments need short attention windows. Agents in stable environments benefit from long integration. This single principle drives the core of every parameter recommendation.
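A small numeric sketch of how the principle can be applied is shown below. The per-environment entropy values and the clipping bounds are assumptions, not the tuner's published calibration.

# Matching principle sketch: TI ~ 1 / H, clipped to a usable range.
# Entropy values per environment label and the bounds are assumptions.
ENV_ENTROPY = {"stable": 0.01, "dynamic": 0.03, "adversarial": 0.10}

def temporal_integration(environment: str, lo: int = 5, hi: int = 200) -> int:
    H = ENV_ENTROPY[environment]
    return int(min(hi, max(lo, 1.0 / H)))

print(temporal_integration("dynamic"))  # 33 with these assumed values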
Read the full explanation →

Ready to tune your agent?
Free tier: unlimited web tuner. Paid SDK access starts at $40/month.