Platform Mappings
After computing the receiver profile, the SDK maps each primitive to platform-specific parameters. All mappings are deterministic and config-driven.
Temperature (from SG)
Temperature maps inversely to Signal Gain. A high-gain receiver discriminates sharply between signals — in LLMs that means low temperature (crisp, deterministic outputs). A low-gain receiver is broadly receptive — that means higher temperature (exploratory outputs).
temperature = t_max - (SG / 100) × (t_max - t_min)
# Anthropic range [0.0, 1.0]: SG=80 → temp=0.20, SG=20 → temp=0.80
# OpenAI range [0.0, 2.0]: SG=80 → temp=0.40, SG=20 → temp=1.60
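A minimal sketch of this mapping in Python (the function name map_temperature and its signature are illustrative, not the SDK's public API):

```python
def map_temperature(sg: float, t_min: float, t_max: float) -> float:
    """Map Signal Gain (0-100) inversely onto a platform temperature range."""
    return round(t_max - (sg / 100) * (t_max - t_min), 2)

# Anthropic range [0.0, 1.0]
assert map_temperature(80, 0.0, 1.0) == 0.20
# OpenAI range [0.0, 2.0]
assert map_temperature(20, 0.0, 2.0) == 1.60
```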
Max tokens (from TI)
Temporal Integration maps directly to max_tokens. A receiver with long integration needs more room to reason over more context.
max_tokens = t_min + (TI / 100) × (t_max - t_min), rounded to nearest 256
# Anthropic range [256, 8192]: TI=10 → 1024, TI=77 → 6400
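The token mapping follows the same shape, plus the rounding step (again, map_max_tokens is an illustrative name):

```python
def map_max_tokens(ti: float, t_min: int, t_max: int) -> int:
    """Map Temporal Integration (0-100) directly onto a max_tokens range,
    rounded to the nearest multiple of 256."""
    raw = t_min + (ti / 100) * (t_max - t_min)
    return int(round(raw / 256)) * 256

# Anthropic range [256, 8192]
assert map_max_tokens(10, 256, 8192) == 1024   # raw ≈ 1049.6
assert map_max_tokens(77, 256, 8192) == 6400   # raw ≈ 6366.7
```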
Context strategy (from TI)
TI ≥ 65 → long_window (pass full context)
TI ≥ 35 → rolling_summary (summarize old turns, keep recent full)
TI < 35 → frequent_grounding (re-inject key facts each turn)
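These thresholds reduce to a top-to-bottom cascade. A hypothetical selector:

```python
def context_strategy(ti: float) -> str:
    """Select a context-handling strategy from Temporal Integration."""
    if ti >= 65:
        return "long_window"
    if ti >= 35:
        return "rolling_summary"
    return "frequent_grounding"
```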
Tool use strategy (from AR + FT)
FT ≥ 65 → explicit_confirmation (verify before every tool call)
AR ≤ 35 → cautious_chaining (chain tools conservatively)
AR ≥ 65 → aggressive (act decisively)
else → fail_fast (attempt + retry on failure)
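The rules here are ordered: the FT check wins over both AR checks. A hypothetical selector that preserves that precedence:

```python
def tool_use_strategy(ar: float, ft: float) -> str:
    """Select a tool-use strategy from AR and FT; rules apply top to bottom."""
    if ft >= 65:
        return "explicit_confirmation"
    if ar <= 35:
        return "cautious_chaining"
    if ar >= 65:
        return "aggressive"
    return "fail_fast"
```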
Retry strategy (from UE)
UE ≥ 65 → aggressive (retry on any non-success)
UE ≥ 35 → moderate (retry on transient errors)
UE < 35 → minimal (retry only on network errors)
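The retry cascade follows the same pattern:

```python
def retry_strategy(ue: float) -> str:
    """Select a retry strategy from UE."""
    if ue >= 65:
        return "aggressive"
    if ue >= 35:
        return "moderate"
    return "minimal"
```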
Model selection (from TI + SG + UE)
TI ≥ 65 and SG ≤ 40 → complex_reasoning model
TI ≤ 30 and UE ≥ 65 → speed_priority model
else → default model
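Model selection returns a tier key, which the platform configs below resolve to a concrete model ID. A hypothetical selector:

```python
def model_tier(ti: float, sg: float, ue: float) -> str:
    """Select a model tier; the platform config maps tiers to model IDs."""
    if ti >= 65 and sg <= 40:
        return "complex_reasoning"
    if ti <= 30 and ue >= 65:
        return "speed_priority"
    return "default"
```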
Platform configs
Anthropic
temperature_range: [0.0, 1.0]
max_tokens_range: [256, 8192]
models:
default: claude-sonnet-4-6
complex_reasoning: claude-opus-4-6
speed_priority: claude-sonnet-4-6
cheap_high_volume: claude-haiku-4-5-20251001
OpenAI
temperature_range: [0.0, 2.0]
max_tokens_range: [256, 16384]
models:
default: gpt-4o
complex_reasoning: o1
speed_priority: gpt-4o-mini
Open source
temperature_range: [0.0, 2.0]
max_tokens_range: [256, 8192]
models:
default: llama-3-70b
complex_reasoning: deepseek-r1
speed_priority: llama-3-8b
Generic
temperature_range: [0.0, 1.0]
max_tokens_range: [256, 4096]
model_recommendation: null (platform-neutral)
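Putting it together: assuming the sketch helpers above and a config shaped like the Anthropic block, a hypothetical map_profile could resolve a full profile in one pass. The profile keys and config layout here are assumptions for illustration, not the SDK's actual schema:

```python
# Hypothetical config dict mirroring the Anthropic block above.
ANTHROPIC = {
    "temperature_range": (0.0, 1.0),
    "max_tokens_range": (256, 8192),
    "models": {
        "default": "claude-sonnet-4-6",
        "complex_reasoning": "claude-opus-4-6",
        "speed_priority": "claude-sonnet-4-6",
        "cheap_high_volume": "claude-haiku-4-5-20251001",
    },
}

def map_profile(profile: dict, platform: dict) -> dict:
    """Resolve a receiver profile into platform parameters,
    reusing the mapping helpers sketched above."""
    t_min, t_max = platform["temperature_range"]
    m_min, m_max = platform["max_tokens_range"]
    tier = model_tier(profile["TI"], profile["SG"], profile["UE"])
    return {
        "temperature": map_temperature(profile["SG"], t_min, t_max),
        "max_tokens": map_max_tokens(profile["TI"], m_min, m_max),
        "context_strategy": context_strategy(profile["TI"]),
        "tool_use_strategy": tool_use_strategy(profile["AR"], profile["FT"]),
        "retry_strategy": retry_strategy(profile["UE"]),
        "model": platform["models"][tier],
    }

params = map_profile({"SG": 80, "TI": 77, "AR": 50, "FT": 40, "UE": 30}, ANTHROPIC)
# → temperature 0.2, max_tokens 6400, long_window, fail_fast, minimal,
#   model claude-sonnet-4-6 (TI ≥ 65 but SG > 40, so the default tier)
```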