LexicalMachines Research

LLM API Parameter Reference

A curated snapshot of the parameter surface and provider coverage. Full matrices, filters, and audit notes are available by invitation.

Parameter Surface
| Parameter | Category | Scope | OpenAI | Anthropic | Gemini | Exposure | Gov. Impact | What it controls |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| model | Model | Inference | Yes | Yes | Yes | High | High | Specifies the exact model to run; determines capability, cost, and latency. |
| temperature | Sampling | Inference | Yes | Yes | Yes | High | Medium | Controls randomness of token selection and response variability. |
| top_p | Sampling | Inference | Yes | Yes | Yes | High | Medium | Nucleus sampling: restricts token selection to a cumulative probability mass. |
| max_tokens | Sampling | Inference | Yes | Yes | Yes | High | High | Maximum number of tokens the model may generate in one response. |
| response_format | Structure | Inference | Yes | Partial | Partial | Partial | Medium | Constrains output format, such as strict JSON or a schema mode. |
| tools | Tools | Inference | Yes | Yes | Yes | Partial | High | Declares callable external functions the model may invoke during generation. |
| reasoning_effort | Reasoning | Inference | Partial | Yes | Partial | Contractual | High | Controls depth of internal reasoning and the cost/latency trade-off. |
| safety_settings | Safety | Enterprise | Partial | No | Yes | Contractual | High | Configures harm-category thresholds for content filtering. |
| stream | Streaming | Inference | Yes | Yes | Yes | High | Low | Streams tokens progressively rather than waiting for completion. |
| logprobs | Logging | Observability | Yes | No | No | Rare | Medium | Returns token log-probabilities for uncertainty analysis. |
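To make the inference-scope rows concrete, here is a minimal sketch of a request body in an OpenAI-style chat-completions wire format. The endpoint shape and field names follow OpenAI's public API; the model name and parameter values are illustrative only, and Anthropic and Gemini spell these fields differently (e.g. Anthropic requires `max_tokens` on every request).

```python
import json

def build_request(prompt: str, deterministic: bool = False) -> dict:
    """Assemble a chat-completions request body exercising the core
    inference parameters from the table (sketch, not a client library)."""
    body = {
        "model": "gpt-4o-mini",  # illustrative model name: capability/cost/latency choice
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,       # hard cap on generated tokens
        "stream": False,         # set True to receive tokens progressively
    }
    if deterministic:
        body["temperature"] = 0.0   # near-greedy decoding for repeatable output
    else:
        body["temperature"] = 0.8   # more varied sampling
        body["top_p"] = 0.95        # nucleus sampling: keep top 95% probability mass
    return body

# Serialized payload ready to POST to a chat-completions endpoint.
payload = json.dumps(build_request("Summarize RFC 2119 in one line."))
```

Note that providers generally recommend adjusting `temperature` or `top_p` but not both at once, which is why the sketch sets them together only in the non-deterministic branch.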

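The `tools` and `response_format` rows are easiest to see in wire form. The sketch below uses OpenAI's function-calling shape (a JSON Schema per tool) with a hypothetical `get_weather` function; Anthropic and Gemini express the same ideas under different field names (`input_schema`, `function_declarations`).

```python
# Hypothetical tool declaration: name, description, and a JSON Schema
# describing the arguments the model may generate for the call.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function, for illustration
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Extra request fields layered on top of a normal chat request:
# declare the callable tools and constrain the output to valid JSON.
request_extras = {
    "tools": [weather_tool],
    "tool_choice": "auto",                       # let the model decide whether to call
    "response_format": {"type": "json_object"},  # force syntactically valid JSON output
}
```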
Invite holders receive the full parameter list, filters, categories, and audit notes.