Forum Discussion
its-mirzabaig
Dec 11, 2025 · Copper Contributor
How to Reliably Gauge LLM Confidence?
I’m trying to estimate an LLM’s confidence in its answers in a way that correlates with correctness. Self-reported confidence is often misleading, and raw token probabilities mostly reflect fluency rather than truth.
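For context, the kind of raw signal I can compute today looks like the sketch below: length-normalized statistics over the per-token log-probabilities of the generated answer. This assumes the API already exposes logprobs for the sampled tokens; the numbers in the example call are made up.

```python
import math

def sequence_confidence(token_logprobs):
    """Length-normalized confidence heuristics from per-token log-probabilities.

    `token_logprobs` is assumed to be a list of natural-log probabilities of the
    tokens the model actually generated. Returns the geometric-mean token
    probability and the perplexity; lower perplexity reads as "more confident",
    though, as noted above, this tracks fluency as much as truth.
    """
    if not token_logprobs:
        return {"mean_prob": 0.0, "perplexity": float("inf")}
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return {
        "mean_prob": math.exp(avg_logprob),    # geometric mean of token probabilities
        "perplexity": math.exp(-avg_logprob),  # standard length-normalized perplexity
    }

# Example with made-up logprobs for a short answer:
print(sequence_confidence([-0.05, -0.30, -0.01, -1.20]))
```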
I don’t have grounding options like RAG, human feedback, or online search, so I’m looking for approaches that work under these constraints.
What techniques have you found effective—entropy-based signals, calibration (temperature scaling), self-evaluation, or others? Any best practices for making confidence scores actionable?
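One direction I’m experimenting with is self-consistency: sample the same question several times and treat the agreement rate of the modal answer as the confidence score. A minimal sketch, assuming a hypothetical `generate(prompt, temperature)` wrapper around whatever client is in use, and a deliberately crude exact-match normalizer that would need to be replaced with task-specific answer matching:

```python
from collections import Counter

def self_consistency_confidence(generate, prompt, n_samples=10, temperature=0.8):
    """Sample the prompt several times and use agreement as a confidence proxy.

    `generate(prompt, temperature)` is a hypothetical callable returning one
    answer string per call. Returns the most common normalized answer and the
    fraction of samples that agreed with it, e.g. ("paris", 0.9).
    """
    def normalize(ans: str) -> str:
        # Crude normalization: lowercase and collapse whitespace.
        return " ".join(ans.strip().lower().split())

    samples = [normalize(generate(prompt, temperature)) for _ in range(n_samples)]
    answer, votes = Counter(samples).most_common(1)[0]
    return answer, votes / n_samples
```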
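On the calibration side, temperature scaling itself needs a small labeled validation set of correct/incorrect judgments, which may or may not be feasible under my constraints. If it is, a single-parameter grid-search fit like the sketch below is enough to rescale any raw score (mean token probability, agreement rate, or a self-reported rating) into something closer to a probability of correctness:

```python
import math

def temperature_scale(raw_scores, labels):
    """Fit one temperature T so that sigmoid(logit(s) / T) minimizes binary
    cross-entropy against correctness labels (1 = right, 0 = wrong) on a
    small held-out set. Returns the fitted T and a calibration function.
    """
    def logit(p):
        p = min(max(p, 1e-6), 1 - 1e-6)
        return math.log(p / (1 - p))

    def nll(T):
        total = 0.0
        for s, y in zip(raw_scores, labels):
            p = 1.0 / (1.0 + math.exp(-logit(s) / T))
            p = min(max(p, 1e-6), 1 - 1e-6)
            total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
        return total / len(raw_scores)

    grid = [0.25 * k for k in range(1, 41)]  # T in {0.25, 0.5, ..., 10.0}
    best_T = min(grid, key=nll)

    def calibrate(s):
        return 1.0 / (1.0 + math.exp(-logit(s) / best_T))

    return best_T, calibrate
```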