Reviewed by CalculatorApp.me Math Team
Classical, conditional, and Bayesian probability with permutations, combinations, and distributions.
Probability quantifies the likelihood that an event will occur, expressed as a number between 0 (impossible) and 1 (certain). A probability of 0.5 means the event occurs half the time. Probability is the mathematical foundation of statistics, machine learning, insurance, genetics, quantum physics, and decision-making.
Three interpretations exist: Classical (equally likely outcomes: coin flip = 1/2), Frequentist (long-run relative frequency), and Bayesian (degree of belief updated with evidence). All three produce valid mathematics — the difference is philosophical.
Permutations count ordered arrangements (nPr = n!/(n-r)!), while combinations count unordered selections (nCr = n!/(r!(n-r)!)). The distinction matters: winning a lottery requires the right combination (order doesn't matter), but finishing positions in a race are permutations (order matters).
Classical Probability:
P(A) = favorable outcomes / total outcomes
Complement:
P(A') = 1 − P(A)
Addition Rule:
P(A or B) = P(A) + P(B) − P(A∩B)
If mutually exclusive: P(A∪B) = P(A)+P(B)
Multiplication Rule:
P(A and B) = P(A) × P(B|A)
If independent: P(A∩B) = P(A) × P(B)
Example: Deck of cards
P(King) = 4/52 = 1/13 ≈ 7.69%
P(Heart) = 13/52 = 1/4 = 25%
P(King of Hearts) = 1/52 ≈ 1.92%
P(King OR Heart) = 4/52 + 13/52 − 1/52 = 16/52 ≈ 30.77%

The addition rule subtracts P(A∩B) to avoid double-counting. For mutually exclusive events (can't both happen), P(A∩B) = 0.
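The card probabilities above can be checked directly; a minimal sketch using Python's exact `fractions` arithmetic:

```python
from fractions import Fraction

# Classical probability: favorable / total outcomes in a 52-card deck
p_king = Fraction(4, 52)            # four kings
p_heart = Fraction(13, 52)          # thirteen hearts
p_king_of_hearts = Fraction(1, 52)  # intersection: the one king of hearts

# Addition rule: subtract the overlap to avoid double-counting
p_king_or_heart = p_king + p_heart - p_king_of_hearts

print(p_king)           # 1/13
print(p_king_or_heart)  # 4/13 (= 16/52)
```

Exact fractions avoid the rounding noise of floating point and make the double-counting correction easy to see.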
Permutations (order matters):
nPr = n! / (n−r)!
10P3 = 10! / 7! = 10×9×8 = 720
How many ways to arrange 3 of 10 items
Combinations (order doesn't matter):
nCr = n! / [r!(n−r)!]
10C3 = 10! / (3!×7!)
= 720/6 = 120
How many ways to choose 3 of 10 items
Relationship: nPr = nCr × r!
720 = 120 × 6 ✓
Lottery: Powerball
Main: 69C5 = 11,238,513
Power: ×26
Odds = 1 in 292,201,338

Combinations are always ≤ permutations. Divide permutations by r! to remove the ordering. Lottery odds use combinations because drawn order doesn't matter.
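All of these counts can be verified with Python's standard `math` module:

```python
import math

# Permutations vs. combinations for choosing 3 of 10 items
print(math.perm(10, 3))  # 720 ordered arrangements
print(math.comb(10, 3))  # 120 unordered selections

# Relationship: nPr = nCr × r!
assert math.perm(10, 3) == math.comb(10, 3) * math.factorial(3)

# Powerball odds: choose 5 of 69 main numbers, times 26 Powerball choices
odds = math.comb(69, 5) * 26
print(odds)  # 292201338 → jackpot odds are 1 in 292,201,338
```

`math.comb` and `math.perm` are available in Python 3.8+ and compute these counts without overflow.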
Conditional Probability:
P(A|B) = P(A∩B) / P(B)
'Probability of A given B occurred'
Bayes' Theorem:
P(A|B) = P(B|A) × P(A) / P(B)
Medical test example:
Disease prevalence: P(D) = 0.01
Test sensitivity: P(+|D) = 0.99
Test specificity: P(−|~D) = 0.95
False positive rate: P(+|~D) = 0.05
P(D|+) = P(+|D)×P(D) /
[P(+|D)×P(D) + P(+|~D)×P(~D)]
= 0.99×0.01 / (0.99×0.01 + 0.05×0.99)
= 0.0099 / (0.0099 + 0.0495)
= 0.0099 / 0.0594
= 16.7% ← Only 1 in 6!
A positive test ≠ 'you have the disease'
when the disease is rare.

| Distribution | Type | Parameters | Mean | Use Case |
|---|---|---|---|---|
| Binomial | Discrete | n trials, p success | np | Coin flips, pass/fail |
| Normal | Continuous | μ mean, σ SD | μ | Heights, test scores |
| Poisson | Discrete | λ rate | λ | Arrivals per hour |
| Exponential | Continuous | λ rate | 1/λ | Time between events |
| Uniform | Both | a min, b max | (a+b)/2 | Random number gen |
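As a concrete instance of the binomial row, here is a short sketch computing a binomial pmf from first principles (the helper name `binomial_pmf` is ours, not a library function):

```python
import math

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(exactly k successes in n independent trials, each with success prob p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 10 fair coin flips: C(10,3)/2^10
print(round(binomial_pmf(3, 10, 0.5), 4))  # 0.1172

# Mean of a binomial is n*p, matching the table above
mean = sum(k * binomial_pmf(k, 10, 0.5) for k in range(11))
print(mean)  # ≈ 5.0
```

For heavier use, `scipy.stats.binom` provides the same pmf plus cdf, sampling, and moments.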
| Event | Probability | Odds | Comparison |
|---|---|---|---|
| Coin heads | 50% | 1 in 2 | Fair coin baseline |
| Roll 6 on die | 16.67% | 1 in 6 | Single die |
| Royal flush (poker) | 0.000154% | 1 in 649,740 | 5-card draw |
| Powerball jackpot | 0.00000034% | 1 in 292.2 million | US lottery |
| Lightning strike (year) | 0.00008% | 1 in 1.2 million | US annual risk |
| Identical birthday (23 people) | 50.7% | ~1 in 2 | Birthday paradox |
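The birthday-paradox entry follows from the complement rule; a minimal sketch assuming 365 equally likely birthdays:

```python
import math

def p_shared_birthday(n: int) -> float:
    """Probability that at least two of n people share a birthday."""
    # Complement rule: 1 − P(all n birthdays are distinct)
    p_all_distinct = math.prod((365 - i) / 365 for i in range(n))
    return 1 - p_all_distinct

print(round(p_shared_birthday(23) * 100, 1))  # 50.7 — better than a coin flip
```

The counterintuitive part is that the number of *pairs* grows quadratically: 23 people form 253 pairs, each a chance at a match.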
Astragali (knucklebones) were used for games and divination in Mesopotamia. Players developed intuitive notions of chance, but no formal theory existed. The historian F.N. David identifies these as the earliest known randomizing devices in human culture.
Blaise Pascal and Pierre de Fermat exchanged letters solving the 'Problem of Points' — how to fairly divide stakes in an interrupted game. Their correspondence established the mathematical foundations of probability, including expected value.
Jacob Bernoulli published Ars Conjectandi posthumously, proving the Law of Large Numbers: as trials increase, observed frequency converges to true probability. He also introduced the Bernoulli distribution and binomial probability.
Thomas Bayes' posthumous paper introduced what we now call Bayes' theorem — a method for updating probability based on new evidence. It languished for centuries before becoming central to modern statistics, AI, and machine learning.
Kolmogorov (1933)
Axiomatized probability using three principles: 1) P(A) ≥ 0, 2) P(Ω) = 1, 3) P(∪Aᵢ) = ΣP(Aᵢ) for disjoint events. This framework unified all prior approaches and remains the mathematical standard.
Kahneman & Tversky (1979) — Econometrica
Showed that humans systematically misjudge probabilities: overweighting small probabilities (lottery tickets) and underweighting large ones (insurance). This prospect theory won Kahneman the 2002 Nobel Prize in Economics.
Bernoulli (1713) — Ars Conjectandi
Proved that as the number of identical trials increases, the average of results approaches the expected value. This is why casinos always profit long-term: the law of large numbers eliminates individual luck at scale.
Laplace (1812) — Théorie Analytique des Probabilités
Systematized probability theory: the principle of indifference, early central-limit results, and Bayesian methods that dominated the field for a century.
Past coin flips affect future ones (Gambler's Fallacy).
Each coin flip is independent. After 10 heads in a row, the next flip is still 50/50. The coin has no memory. The law of large numbers applies over thousands of flips, not the next single flip.
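A quick seeded simulation (illustrative only) shows both halves of this: the running average converges over many flips, while any single flip stays 50/50:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
print(heads / n)  # close to 0.5 over many flips

# Independence: the flip after any streak is still 50/50 — the coin has
# no memory, so P(heads | ten previous heads) = P(heads) = 0.5.
```

The law of large numbers constrains the long-run *average*, not the next outcome.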
Rare events don't happen — so low-probability risks can be ignored.
With enough exposure, rare events become likely. A 1-in-million daily risk becomes ~1-in-2,740 over a year. Insurance, safety engineering, and portfolio diversification all exist because rare events DO happen to someone.
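The 1-in-2,740 figure is the complement rule applied over a year of daily exposure:

```python
# Repeated exposure makes rare events likely: complement rule over n trials.
daily_risk = 1e-6                      # 1-in-a-million risk per day
p_year = 1 - (1 - daily_risk) ** 365   # at least one occurrence in a year
print(round(1 / p_year))  # 2740 → roughly 1 in 2,740 over a year
```

For small p, this is approximately n × p (365 in a million ≈ 1 in 2,740), but the exact complement form also works when the cumulative risk is no longer small.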
A positive medical test means you probably have the disease.
Bayes' theorem shows this depends on the disease prevalence. For a rare disease (1% prevalence) with a 95% accurate test, a positive result means only ~17% chance of having it — because most positives are false positives from the 99% healthy population.
Probability is just for gambling and games.
Probability underpins all of science (quantum mechanics is fundamentally probabilistic), AI/ML (neural networks optimize probabilistic models), medicine (clinical trials, diagnostic tests), finance (risk models), insurance, weather forecasting, and genetics — virtually every field involving uncertainty.
Bayes' theorem is perhaps the most important result in probability. The base rate (prevalence) dramatically affects how to interpret test results — a counterintuitive but crucial insight.
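The medical-test numbers can be reproduced directly; a minimal sketch of Bayes' theorem in Python:

```python
# Bayes' theorem for the medical-test example:
# P(D|+) = P(+|D)·P(D) / [P(+|D)·P(D) + P(+|~D)·P(~D)]
p_d = 0.01          # disease prevalence
p_pos_d = 0.99      # sensitivity, P(+|D)
p_pos_not_d = 0.05  # false positive rate, P(+|~D) = 1 − specificity

p_pos = p_pos_d * p_d + p_pos_not_d * (1 - p_d)  # total probability of +
p_d_pos = p_pos_d * p_d / p_pos

print(round(p_d_pos * 100, 1))  # 16.7 — only ~1 in 6 positives is a true case
```

Raising the prevalence changes everything: at 20% prevalence the same test gives a posterior above 80%, which is why base rates must never be ignored.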
Expected Value:
E(X) = Σ xᵢ × P(xᵢ)
Variance:
Var(X) = E(X²) − [E(X)]²
Fair die example:
E(X) = 1(1/6)+2(1/6)+...+6(1/6)
= 21/6 = 3.5
E(X²) = 1(1/6)+4(1/6)+9(1/6)
+16(1/6)+25(1/6)+36(1/6)
= 91/6 ≈ 15.17
Var(X) = 91/6−(21/6)² = 35/12 ≈ 2.92
SD(X) = √2.92 ≈ 1.71
Gambling insight:
Casino edge = E(payout) − bet
Roulette (US): E = −$0.053/dollar
Over 1000 bets: expect to lose $53

Expected value is the long-run average. A 'fair' game has E(X) = 0. All casino games have negative expected value for players — the 'house edge.'
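The fair-die computation above, reproduced with exact fractions:

```python
from fractions import Fraction

# Fair six-sided die: expected value and variance via exact arithmetic
outcomes = range(1, 7)
p = Fraction(1, 6)  # each face equally likely

ev = sum(x * p for x in outcomes)         # E(X)  = 21/6 = 7/2
ev_sq = sum(x * x * p for x in outcomes)  # E(X²) = 91/6
var = ev_sq - ev**2                       # Var(X) = 91/6 − (7/2)² = 35/12

print(ev)                 # 7/2   (= 3.5)
print(var)                # 35/12 (≈ 2.92)
print(float(var) ** 0.5)  # ≈ 1.71 standard deviation
```

Working in `Fraction` keeps Var(X) = 35/12 exact; converting to float only at the end avoids accumulated rounding error.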
More distributions:

| Distribution | Type | Parameters | Mean | Use Case |
|---|---|---|---|---|
| Geometric | Discrete | p success | 1/p | Trials until 1st success |
| Chi-squared | Continuous | k degrees of freedom | k | Goodness-of-fit tests |
| Beta | Continuous | α, β shape | α/(α+β) | Bayesian priors |
Andrey Kolmogorov axiomatized probability theory using measure theory, resolving centuries of foundational debates. His three axioms (non-negativity, normalization, countable additivity) remain the rigorous foundation of all modern probability.
Bayesian methods became central to ML: spam filters (naive Bayes), recommendation systems, A/B testing, and eventually large language models. Bayesian inference — updating beliefs with data — is now the dominant paradigm in AI/ML.
Pierre-Simon Laplace systematized probability theory, introducing the 'principle of indifference' (equally likely outcomes), the central limit theorem precursor, and Bayesian methods. His work dominated probability for a century.