Reviewed by CalculatorApp.me Math Team
Linear, quadratic, polynomial, and systems of equations with step-by-step methods, formulas, and real-world applications.
Linear equation: ax + b = 0
Quadratic formula: x = (−b ± √Δ) / (2a)
Cramer's Rule: systems of equations via determinants
Newton's method: iterative root finding
An equation is a mathematical statement asserting equality: two expressions joined by an equals sign. Solving an equation means finding all values of the unknown variable(s) that make the equation true. These values are called solutions or roots.
The linear equation ax + b = 0 has exactly one solution: x = −b/a (when a ≠ 0). Linear equations model direct proportional relationships — pricing, unit conversions, simple interest, and motion at constant velocity. They're the first step in algebraic problem-solving.
More complex equations — quadratic, cubic, polynomial, transcendental — require specialized methods. The general approach follows three key principles: (1) isolate the unknown, (2) apply inverse operations, and (3) verify solutions by substitution. Understanding these principles unlocks all equation-solving.
Solve: 3x + 7 = 22

Step 1: Subtract 7 from both sides
3x + 7 − 7 = 22 − 7
3x = 15

Step 2: Divide both sides by 3
3x/3 = 15/3
x = 5

Verify: 3(5) + 7 = 15 + 7 = 22 ✓

General formula: for ax + b = c, x = (c − b) / a

Special cases:
• a = 0, b = c → identity (true for all x)
• a = 0, b ≠ c → no solution
• One variable, one equation → unique solution
Linear equations always have exactly one solution (when a≠0). The solution process is reversible: each step applies an inverse operation (addition↔subtraction, multiplication↔division) to both sides.
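As a quick sketch, the procedure above translates directly into code. The function name `solve_linear` is ours, not part of the calculator; it covers the special cases listed in the example:

```python
def solve_linear(a, b, c):
    """Solve ax + b = c, including the degenerate cases a = 0."""
    if a == 0:
        # 0x + b = c: either every x works, or none does
        return "all x" if b == c else "no solution"
    return (c - b) / a

print(solve_linear(3, 7, 22))  # 5.0, matching the worked example
print(solve_linear(0, 4, 4))   # all x
print(solve_linear(0, 1, 2))   # no solution
```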
The Quadratic Formula:

x = (−b ± √(b² − 4ac)) / (2a)
Discriminant Δ = b² − 4ac
Δ > 0 → Two distinct real roots
Δ = 0 → One repeated real root
Δ < 0 → Two complex conjugate roots
Example: 2x² − 7x + 3 = 0
a=2, b=−7, c=3
Δ = 49 − 24 = 25
x = (7 ± 5) / 4
x₁ = 12/4 = 3
x₂ = 2/4 = 0.5
Alternative methods:
• Factoring: (2x−1)(x−3) = 0
• Completing the square
• Graphing (x-intercepts)

The quadratic formula was essentially known to the Babylonians (~2000 BC) and formally derived by al-Khwarizmi (820 AD). It works for ALL quadratics — factoring only works when the roots are rational.
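The three discriminant cases can be collected into one short Python sketch (the name `solve_quadratic` is ours; `cmath` handles the Δ < 0 branch):

```python
import cmath
import math

def solve_quadratic(a, b, c):
    """Return the roots of ax² + bx + c = 0, classified by Δ = b² − 4ac."""
    d = b * b - 4 * a * c
    if d > 0:                 # two distinct real roots
        r = math.sqrt(d)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))
    if d == 0:                # one repeated real root
        return (-b / (2 * a),)
    r = cmath.sqrt(d)         # two complex conjugate roots
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(solve_quadratic(2, -7, 3))  # (3.0, 0.5), matching the example above
print(solve_quadratic(1, 0, 1))   # (1j, -1j): no real roots
```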
System of 2 equations:
2x + 3y = 12
4x − y = 5
Method 1: Substitution
From eq.2: y = 4x − 5
Substitute into eq.1:
2x + 3(4x − 5) = 12
2x + 12x − 15 = 12
14x = 27
x = 27/14 ≈ 1.929
y = 4(27/14) − 5 = 38/14 ≈ 2.714
Method 2: Elimination
Multiply eq.2 by 3:
12x − 3y = 15
Add to eq.1:
2x + 3y = 12
14x = 27 → same result
Method 3: Cramer's Rule
|A| = 2(−1)−3(4) = −14
x = |Ax|/|A| = (12(−1)−3(5))/(−14)
x = −27/(−14) = 27/14 ✓

| Equation Type | General Form | Max Roots | Solution Method | Example Application |
|---|---|---|---|---|
| Linear | ax + b = 0 | 1 | Direct algebra | Pricing, break-even |
| Quadratic | ax² + bx + c = 0 | 2 | Quadratic formula | Projectile motion, area |
| Cubic | ax³ + bx² + cx + d = 0 | 3 | Cardano's formula | Volume optimization |
| Quartic | ax⁴ + ... = 0 | 4 | Ferrari's method | Optics, engineering |
| Polynomial (n) | aₙxⁿ + ... = 0 | n | Numerical methods | Signal processing |
| Rational | P(x)/Q(x) = 0 | Varies | Set P(x)=0, Q(x)≠0 | Economics, rates |
| Radical | √f(x) = g(x) | Varies | Square both sides | Distance, geometry |
| Exponential | aˣ = b | 1 | x = log_a(b) | Growth, decay, finance |
| Logarithmic | log_a(x) = b | 1 | x = aᵇ | pH, sound, earthquakes |
| Trigonometric | sin(x) = a | ∞ | x = (−1)ⁿ arcsin(a) + nπ | Wave mechanics, cycles |
| Method | Convergence | Requires | Pros | Cons |
|---|---|---|---|---|
| Bisection | Linear, O(log(1/ε)) | Bracketing interval | Always converges | Slow; needs sign change |
| Newton-Raphson | Quadratic | f(x), f′(x), initial guess | Very fast near root | May diverge; needs derivative |
| Secant | Superlinear (~1.618) | f(x), two initial points | No derivative needed | Can fail to converge |
| Fixed-Point | Linear | x = g(x) form | Simple to implement | Convergence not guaranteed |
| Brent's Method | Superlinear | Bracketing interval | Robust + fast hybrid | More complex to implement |
| Müller's Method | Order ~1.84 | Three initial points | Finds complex roots | May jump to wrong root |
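To make the trade-offs concrete, here is a minimal sketch of the bisection row: guaranteed convergence given a sign change, at the cost of speed (function name `bisect` is ours):

```python
def bisect(f, lo, hi, tol=1e-10):
    """Bisection: halve a bracketing interval [lo, hi] until it is shorter than tol.
    Requires f(lo) and f(hi) to have opposite signs (the 'sign change' in the table)."""
    if f(lo) * f(hi) >= 0:
        raise ValueError("need a sign change on [lo, hi]")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid   # root lies in the left half
        else:
            lo = mid   # root lies in the right half
    return (lo + hi) / 2

print(bisect(lambda x: x * x - 2, 1.0, 2.0))  # ≈ 1.41421356, i.e. √2
```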
| Field | Equation | Variables | What It Solves |
|---|---|---|---|
| Physics | F = ma | Force, mass, acceleration | Newton's second law of motion |
| Finance | A = P(1+r/n)^(nt) | Compound interest variables | Future value of investment |
| Chemistry | pH = −log₁₀[H⁺] | Hydrogen ion concentration | Acidity/basicity of solution |
| Engineering | V = IR | Voltage, current, resistance | Ohm's law for circuits |
| Statistics | z = (x−μ)/σ | Standard score | How far a value lies from the mean |
| Economics | P = MC | Price, marginal cost | Profit maximization |
| Geometry | A = πr² | Area, radius | Circle area calculation |
| Medicine | C(t) = C₀e^(−kt) | Drug concentration, time | Pharmacokinetics decay |
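To show how such a formula is evaluated in practice, here is a minimal sketch of the compound-interest entry (the function name `future_value` and the figures are illustrative, not from the source):

```python
def future_value(P, r, n, t):
    """A = P(1 + r/n)^(n·t): future value of principal P at annual rate r,
    compounded n times per year for t years."""
    return P * (1 + r / n) ** (n * t)

# $1000 at 5% annual interest, compounded monthly, for 10 years
print(round(future_value(1000, 0.05, 12, 10), 2))  # ≈ 1647.01
```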
Babylonians (~2000 BC) — Quadratics on Clay Tablets
Babylonian mathematicians solved quadratic equations using geometric methods on clay tablets. They found positive roots of x² + bx = c by completing the square — essentially the quadratic formula without algebraic notation. Their methods were algorithmic and remarkably sophisticated.
Diophantus (~250 AD) — Arithmetica
Diophantus of Alexandria wrote Arithmetica, introducing symbolic notation for unknowns and systematically solving polynomial equations. He worked with what we now call Diophantine equations (integer solutions). He's often called 'the father of algebra,' though his work was largely lost until rediscovered in the Renaissance.
Al-Khwarizmi (820 AD) — Algebra Gets Its Name
Muhammad al-Khwarizmi wrote 'Al-Kitab al-Mukhtasar fi Hisab al-Jabr wal-Muqabala' — the book that gave algebra its name. He systematically classified and solved all forms of linear and quadratic equations, providing geometric proofs. The word 'algorithm' also derives from his name.
Cardano (1545) — Ars Magna
Gerolamo Cardano published Ars Magna containing general solutions for cubic (discovered by Tartaglia/del Ferro) and quartic equations (solved by his student Ferrari). These formulas proved that polynomials of degree 3 and 4 always have closed-form solutions. The cubic formula involves complex numbers even for real roots.
Abel (1824) — Impossibility Theorem
Abel proved that polynomial equations of degree ≥ 5 cannot be solved by radicals (using only +, −, ×, ÷, and nth roots). This settled a 300-year quest and motivated the development of group theory, Galois theory, and numerical methods. Specific quintics can still be solved, but no universal formula exists.
Galois (1832) — Group Theory
Galois theory provides a complete criterion for when a polynomial is solvable by radicals: its Galois group must be a solvable group. For degree 5+, the symmetric group S₅ is not solvable, explaining Abel's result. Galois theory is now a cornerstone of abstract algebra and number theory.
Brent (1973) — Algorithms for Minimization
Brent's method combines bisection (guaranteed convergence) with inverse quadratic interpolation (fast convergence) to create a robust root-finding algorithm. It never fails on continuous functions with sign changes and converges superlinearly. It's the default solver in MATLAB's fzero and Python's scipy.optimize.brentq.
Every equation has a solution.
Not all equations have real solutions. x² + 1 = 0 has no real roots (only complex: ±i). |x| = −3 has no solution at all. Some equations are contradictions (0x = 5), while others are identities true for all values (x + x = 2x). The existence and nature of solutions depends entirely on the equation type.
The quadratic formula is the only way to solve quadratics.
Quadratics can also be solved by factoring (fastest when possible), completing the square (always works, reveals vertex form), graphing (visual, approximate), and numerical methods. The quadratic formula is universal but not always the most efficient. Many textbook problems are designed to factor neatly.
Higher-degree polynomials can always be solved with a formula.
Abel (1824) and Galois (1832) proved this is impossible for degree ≥ 5. While specific quintics may have closed-form solutions, no general formula using radicals exists. This was one of the most important results in mathematical history and led to the creation of abstract algebra.
For 2×2 systems, any method works. For larger systems (3+ variables), Gaussian elimination or matrix methods (LU decomposition) are more systematic. Cramer's Rule is elegant but computationally expensive for large systems.
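Cramer's Rule for the 2×2 case is short enough to sketch directly; this reproduces the worked example above exactly, using `fractions.Fraction` to keep 27/14 exact (the function name `cramer_2x2` is ours):

```python
from fractions import Fraction

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1·x + b1·y = c1 and a2·x + b2·y = c2 by Cramer's Rule."""
    det = a1 * b2 - b1 * a2          # |A|
    if det == 0:
        raise ValueError("singular system: no unique solution")
    x = Fraction(c1 * b2 - b1 * c2, det)   # |Ax| / |A|
    y = Fraction(a1 * c2 - c1 * a2, det)   # |Ay| / |A|
    return x, y

x, y = cramer_2x2(2, 3, 12, 4, -1, 5)
print(x, y)  # 27/14 19/7, matching the worked example (38/14 = 19/7)
```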
Iterative root-finding for f(x) = 0:
x_{n+1} = x_n − f(x_n) / f'(x_n)
Example: Find √2 (solve x² − 2 = 0)
f(x) = x² − 2, f'(x) = 2x
Start: x₀ = 1.5
x₁ = 1.5 − (2.25−2)/(3)
= 1.5 − 0.0833 = 1.41667
x₂ = 1.41667 − (2.00694−2)/(2.83333)
= 1.41667 − 0.00245 = 1.41422
x₃ = 1.41421 (6 correct digits!)
Convergence: Quadratic
Digits of accuracy roughly double
each iteration near the root.
Warnings:
• Needs good initial guess
• Fails if f'(x_n) = 0
• May diverge for poor starts
• Multiple roots → different starts

Newton-Raphson converges extremely fast (quadratic convergence) when it works. Most scientific calculators and computer algebra systems use it internally. For guaranteed convergence, the bisection method is slower but always works on continuous functions.
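The iteration above can be sketched in a few lines, including guards for the failure modes just listed (the function name `newton` is ours):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: iterate x_{n+1} = x_n − f(x_n)/f'(x_n) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = df(x)
        if d == 0:
            raise ZeroDivisionError("f'(x) = 0: the iteration breaks down")
        x = x - fx / d
    raise RuntimeError("did not converge; try a different starting point")

# Find √2 by solving x² − 2 = 0, starting from x₀ = 1.5 as in the example
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5))  # ≈ 1.4142135623730951
```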
Abel and Galois (1824–1832) — The End of General Formulas
Niels Henrik Abel proved that no general algebraic formula exists for polynomial equations of degree 5 or higher. Évariste Galois (at age 20, before his death in a duel) developed group theory to explain exactly which equations are solvable by radicals. This was one of the most profound results in mathematics.
Newton (c. 1669) — The Newton-Raphson Method
Isaac Newton developed the Newton-Raphson method for finding approximate roots of equations. Using tangent-line approximations (x_{n+1} = x_n − f(x_n)/f'(x_n)), the method converges quadratically near a root. It remains the most widely used numerical root-finding algorithm in science and engineering.
Wilkinson (1963) — Rounding Errors in Algebraic Processes
James Wilkinson showed that polynomial root-finding is inherently ill-conditioned: tiny changes in coefficients can cause massive changes in roots. His example (Wilkinson's polynomial) has integer roots 1-20, but perturbing one coefficient by 10⁻⁷ causes roots to jump by up to 2.8. This fundamentally influenced numerical analysis.
Numerical methods always give the correct answer.
Numerical methods can fail: Newton's method diverges for bad initial guesses, polynomials are ill-conditioned (Wilkinson's example), and floating-point rounding accumulates errors. Bisection always converges but is slow. No single method is universally optimal — choosing the right algorithm matters.