What is Matrix Workbench — Linear Algebra?
Linear algebra is the study of vectors, matrices and the linear transformations between them. A matrix encodes a linear map; its determinant measures volume scaling; its eigenvectors are the directions that are merely stretched (not rotated) by the map. Together these ideas power data science (PCA via the SVD), search (PageRank as a dominant eigenvector), graphics (4×4 transforms), and finite-element (FEM) models in engineering.
History & Invention
Carl Friedrich Gauss formalised systematic elimination for linear systems around 1810 while computing the orbit of the asteroid Pallas — the method we now teach as Gaussian elimination; the geodesist Wilhelm Jordan later extended it into the Gauss–Jordan reduction.
James Joseph Sylvester coined the term "matrix" in 1850, treating it as a "womb" from which determinants are born.
Arthur Cayley built the algebra of matrices in his 1858 "Memoir on the Theory of Matrices" — defining matrix multiplication, the identity and the inverse, and stating the Cayley–Hamilton theorem (which he verified for 2×2 and 3×3 matrices).
William Rowan Hamilton invented quaternions (1843) — the first non-commutative algebra, which seeded modern abstract linear algebra and now powers 3-D rotation in every game engine.
Camille Jordan developed the Jordan canonical form (1870), revealing the deepest structure of any square matrix; the modern singular value decomposition followed via Beltrami, Jordan, and Sylvester independently.
Real-World Applications
- Google PageRank — the rank of a web page is the dominant eigenvector of the stochastic web link matrix (see the power-iteration sketch after this list).
- Machine learning — Principal Component Analysis (PCA) is exactly the SVD; every dense neural network layer is a matrix product.
- Computer graphics — every 3-D scene applies 4×4 transformation matrices to vertices for rotation, scaling, projection and skinning.
- Quantum mechanics — observables are Hermitian matrices and physical states live in complex vector spaces; eigenvalues are the only measurable outcomes.
- Structural engineering — finite element method (FEM) reduces every bridge, plane wing and skyscraper to a sparse linear system Kx = f.
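The PageRank bullet above is easy to make concrete. Below is a minimal power-iteration sketch: a toy four-page web with the standard damping factor of 0.85. The link structure is invented for illustration, and the sketch assumes every page has at least one outlink (real crawls must handle dangling nodes); it is not Google's production algorithm.

```typescript
// Minimal PageRank: power iteration on a column-stochastic link matrix.
// Hypothetical 4-page web: links[j] lists the pages that page j points to.
const links: number[][] = [[1, 2], [2], [0], [0, 2]];
const n = links.length;
const damping = 0.85;

// Build the column-stochastic matrix G: column j spreads page j's vote evenly.
const G: number[][] = Array.from({ length: n }, () => new Array(n).fill(0));
links.forEach((outs, j) => outs.forEach(i => (G[i][j] = 1 / outs.length)));

let rank = new Array(n).fill(1 / n);
for (let iter = 0; iter < 100; iter++) {
  const next = rank.map((_, i) =>
    (1 - damping) / n +
    damping * G[i].reduce((s, gij, j) => s + gij * rank[j], 0)
  );
  // Stop once the iterate is (numerically) a fixed point: the dominant eigenvector.
  if (next.every((v, i) => Math.abs(v - rank[i]) < 1e-12)) { rank = next; break; }
  rank = next;
}
console.log(rank); // scores summing to 1; the largest entry is the "top" page
```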
How the Calculator Works
- Paste a matrix (CSV, spaces, semicolons or tabs accepted) or load one of the built-in samples (rotation, identity, magic, Hilbert, singular).
- Pick a mode: Determinant, Inverse, Eigen, Solve Ax = b, Rank/Nullity, Aᵀ & Aⁿ, QR or SVD top-k.
- Determinants ≤ 4×4 use cofactor expansion with a full Laplace breakdown; larger matrices use LU with partial pivoting and a row-swap sign tracker (a minimal sketch follows this list).
- Inverses run Gauss–Jordan on the augmented matrix [A | I] with every row operation traced; singular matrices are flagged with det = 0.
- Eigen mode emits the characteristic polynomial via Faddeev–LeVerrier, then computes eigenvalues and normalised eigenvectors and plots the spectrum on the complex plane.
- Solve Ax = b compares rank(A) to rank([A|b]) and reports unique / infinitely many (with a parametric null-space basis) / none.
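To make the determinant path concrete, here is a minimal sketch of the large-matrix branch: elimination with partial pivoting and a row-swap sign tracker. It is illustrative only, not the workbench's actual source.

```typescript
// Determinant via LU-style elimination with partial pivoting.
// Every row swap flips the sign; the determinant is the signed
// product of the pivots left on the diagonal afterwards.
function det(A: number[][]): number {
  const n = A.length;
  const M = A.map(row => row.slice()); // work on a copy
  let sign = 1;
  for (let k = 0; k < n; k++) {
    // Partial pivoting: find the largest-magnitude entry in column k.
    let p = k;
    for (let i = k + 1; i < n; i++) {
      if (Math.abs(M[i][k]) > Math.abs(M[p][k])) p = i;
    }
    if (M[p][k] === 0) return 0; // no pivot available: A is singular
    if (p !== k) { [M[k], M[p]] = [M[p], M[k]]; sign = -sign; } // sign tracker
    // Eliminate below the pivot.
    for (let i = k + 1; i < n; i++) {
      const m = M[i][k] / M[k][k];
      for (let j = k; j < n; j++) M[i][j] -= m * M[k][j];
    }
  }
  let prod = sign;
  for (let i = 0; i < n; i++) prod *= M[i][i];
  return prod;
}

console.log(det([[4, 3, 2], [1, -1, 0], [2, 5, 3]])); // -7, matching the worked example below
```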
Worked Example
For A = [[4, 3, 2], [1, −1, 0], [2, 5, 3]], cofactor expansion along the first row gives det(A) = 4·(−3) − 3·3 + 2·7 = −7, where −3, 3 and 7 are the three 2×2 minors. Because det ≠ 0 the matrix is invertible, and Gauss–Jordan on [A|I] returns A⁻¹. The characteristic polynomial p(λ) = λ³ − 6λ² − 2λ + 7 = (λ − 1)(λ² − 5λ − 7) has three real roots, λ = 1 and λ = (5 ± √53)/2; mathjs returns each with its corresponding unit eigenvector. For Ax = b with b = (1, 2, 3)ᵀ, since rank(A) = rank([A|b]) = 3 = n, the solution is unique.
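The characteristic-polynomial step can be cross-checked with a compact Faddeev–LeVerrier implementation. This is a sketch of the textbook recurrence, not the workbench's code; for this A it reproduces exactly the coefficients above.

```typescript
// Faddeev–LeVerrier: coefficients of p(λ) = λ^n + c[n-1]·λ^(n-1) + … + c[0]
// via the recurrence  M_k = A·M_{k-1} + c[n-k+1]·I,  c[n-k] = -tr(A·M_k)/k.
function charPoly(A: number[][]): number[] {
  const n = A.length;
  const mul = (X: number[][], Y: number[][]) =>
    X.map(row => Y[0].map((_, j) => row.reduce((s, x, k) => s + x * Y[k][j], 0)));
  const trace = (X: number[][]) => X.reduce((s, row, i) => s + row[i], 0);

  const c: number[] = new Array(n + 1).fill(0);
  c[n] = 1;
  let M: number[][] = A.map(row => row.map(() => 0)); // M_0 = 0
  for (let k = 1; k <= n; k++) {
    M = mul(A, M);                                     // A·M_{k-1}
    for (let i = 0; i < n; i++) M[i][i] += c[n - k + 1]; // + c[n-k+1]·I
    c[n - k] = -trace(mul(A, M)) / k;
  }
  return c; // [c0, c1, …, c_{n-1}, 1]
}

console.log(charPoly([[4, 3, 2], [1, -1, 0], [2, 5, 3]]));
// → [7, -2, -6, 1], i.e. p(λ) = λ³ − 6λ² − 2λ + 7
```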
Common Mistakes to Avoid
- A singular matrix (det = 0) has no inverse — Gauss–Jordan will hit a zero pivot. The calculator flags this rather than returning garbage.
- Floating-point eigenvalues of ill-conditioned matrices (e.g. Hilbert) are unreliable; always check the condition number κ(A) = ‖A‖ · ‖A⁻¹‖.
- Matrix multiplication is not commutative — AB ≠ BA in general, so the order of decompositions matters.
- QR via classical Gram–Schmidt is numerically unstable for nearly-dependent columns; the workbench errors out rather than producing a misleading Q.
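To illustrate that last point, here is a minimal classical Gram–Schmidt sketch that bails out when a column's residual after projection is tiny. The tolerance 1e-10 is an assumed value, not the workbench's actual threshold.

```typescript
// Classical Gram–Schmidt QR on the columns of A, with a near-dependence guard.
function gramSchmidtQ(cols: number[][], tol = 1e-10): number[][] {
  const dot = (x: number[], y: number[]) => x.reduce((s, xi, i) => s + xi * y[i], 0);
  const Q: number[][] = [];
  for (const a of cols) {
    let v = a.slice();
    // Classical GS projects the *original* column onto each earlier q
    // (modified GS would project the updated v, which is more stable).
    for (const q of Q) {
      const coeff = dot(q, a);
      v = v.map((vi, i) => vi - coeff * q[i]);
    }
    const norm = Math.sqrt(dot(v, v));
    if (norm < tol) throw new Error("columns nearly dependent: Q would be unreliable");
    Q.push(v.map(vi => vi / norm));
  }
  return Q; // orthonormal columns; R follows from R[i][j] = q_i · a_j
}

// Nearly parallel columns trigger the guard instead of returning a skewed Q.
gramSchmidtQ([[1, 1e-12], [1, 0]]); // throws
```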
Frequently Asked Questions
When is a square matrix invertible?
Exactly when det(A) ≠ 0 — equivalently when its columns are linearly independent, when its rank equals n, or when 0 is not an eigenvalue. Singular matrices collapse n-dimensional volume to zero.
What does the determinant mean geometrically?
The determinant is the signed volume scaling factor of the linear map. det(A) = 6 means the unit cube becomes a parallelepiped of volume 6; det(A) < 0 means the map flips orientation; det(A) = 0 collapses volume to zero.
What does an eigenvalue actually represent?
An eigenvalue λ is the scalar by which the matrix stretches its eigenvector v: Av = λv. The eigenvectors are the special "axes" whose direction the transformation preserves; every other direction is changed by the map.
Why is partial pivoting required for LU and Gauss–Jordan?
Without pivoting, a small or zero entry on the diagonal forces division by a tiny number and amplifies floating-point error. Partial pivoting always swaps in the row whose entry in the pivot column has the largest magnitude — yielding numerically stable factorisations.
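The classic illustration uses a hypothetical tiny pivot ε = 1e-20 in a 2×2 system whose true solution is x ≈ (1, 1)ᵀ:

```typescript
// Solve [[ε, 1], [1, 1]] x = [1, 2]. Exact solution is x ≈ [1, 1] as ε → 0.
const eps = 1e-20;

// Without pivoting: eliminate with the tiny pivot ε.
const m = 1 / eps;          // enormous multiplier, 1e20
const a22 = 1 - m * 1;      // 1 - 1e20 → -1e20 (the "1" is swallowed by rounding)
const b2 = 2 - m * 1;       // 2 - 1e20 → -1e20
const x2 = b2 / a22;        // 1
const x1 = (1 - x2) / eps;  // (1 - 1) / ε = 0 — catastrophically wrong
console.log([x1, x2]);      // [0, 1]

// With partial pivoting: swap rows so the pivot is 1, not ε.
const m2 = eps / 1;         // tiny, harmless multiplier
const a22p = 1 - m2 * 1;    // ≈ 1
const b2p = 1 - m2 * 2;     // ≈ 1
const y2 = b2p / a22p;      // ≈ 1
const y1 = 2 - y2;          // ≈ 1 — correct
console.log([y1, y2]);      // [1, 1]
```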
What is the rank–nullity theorem?
For an m×n matrix A, rank(A) + nullity(A) = n. Rank is the dimension of the column space (number of independent directions the map produces); nullity is the dimension of the null space (directions the map sends to zero). Together they account for every column.
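A sketch of how rank (and hence nullity) can be counted: run forward elimination with partial pivoting and count the pivot rows. The zero tolerance is an assumption, not the workbench's setting.

```typescript
// Rank via row reduction: count pivots; nullity = (number of columns) - rank.
function rank(A: number[][], tol = 1e-10): number {
  const M = A.map(row => row.slice());
  const rows = M.length, cols = M[0].length;
  let r = 0; // next pivot row
  for (let c = 0; c < cols && r < rows; c++) {
    // Partial pivoting within column c.
    let p = r;
    for (let i = r + 1; i < rows; i++) {
      if (Math.abs(M[i][c]) > Math.abs(M[p][c])) p = i;
    }
    if (Math.abs(M[p][c]) < tol) continue; // no pivot in this column
    [M[r], M[p]] = [M[p], M[r]];
    for (let i = r + 1; i < rows; i++) {
      const m = M[i][c] / M[r][c];
      for (let j = c; j < cols; j++) M[i][j] -= m * M[r][j];
    }
    r++;
  }
  return r;
}

// Row 3 = row 1 + row 2, so rank + nullity = 2 + 1 = 3 columns.
const A = [[1, 2, 3], [4, 5, 6], [5, 7, 9]];
console.log(rank(A), A[0].length - rank(A)); // 2 1
```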
When can a real matrix have complex eigenvalues?
A real matrix can have complex conjugate eigenvalue pairs whenever it represents a rotation component — e.g. a 2-D rotation by θ has eigenvalues cos θ ± i sin θ. The workbench flags complex pairs in red on the spectrum plot.
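To watch the conjugate pair appear, apply the 2×2 characteristic equation λ² − tr(A)λ + det(A) = 0 directly: for a rotation, tr = 2cos θ and det = 1, so the discriminant 4cos²θ − 4 is negative unless θ is a multiple of π.

```typescript
// Eigenvalues of a 2×2 real matrix via λ² − tr·λ + det = 0.
// A negative discriminant yields a complex conjugate pair.
function eig2x2([[a, b], [c, d]]: number[][]): string[] {
  const tr = a + d, det = a * d - b * c;
  const disc = tr * tr - 4 * det;
  if (disc >= 0) {
    const s = Math.sqrt(disc);
    return [`${(tr + s) / 2}`, `${(tr - s) / 2}`];
  }
  const im = Math.sqrt(-disc) / 2;
  return [`${tr / 2} + ${im}i`, `${tr / 2} - ${im}i`]; // cos θ ± i sin θ for a rotation
}

const th = Math.PI / 3; // 60° rotation
const R = [[Math.cos(th), -Math.sin(th)], [Math.sin(th), Math.cos(th)]];
console.log(eig2x2(R)); // ["0.5 + 0.866…i", "0.5 - 0.866…i"] = cos 60° ± i sin 60°
```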
What does the condition number κ(A) tell me?
κ(A) measures how much the solution to Ax = b can change when b is perturbed slightly. κ ≈ 1 is well-conditioned; κ ≫ 10⁶ means that even one digit of input noise can destroy your solution — you need higher precision or a regularised solver.
When should I use SVD instead of eigendecomposition?
Eigendecomposition only exists for diagonalisable square matrices. SVD A = UΣVᵀ exists for every real matrix, even rectangular or rank-deficient ones, and is the right tool for PCA, low-rank approximation, pseudo-inverses and stable least-squares.
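As a sketch of a top-1 singular triple for any rectangular matrix (assuming power iteration on AᵀA; the workbench's actual top-k method isn't specified here): v₁ is the dominant eigenvector of AᵀA, σ₁ = ‖Av₁‖, and u₁ = Av₁/σ₁.

```typescript
// Leading singular triple (σ₁, u₁, v₁) by power iteration on AᵀA.
function topSingular(A: number[][], iters = 500) {
  const cols = A[0].length;
  const matVec = (M: number[][], x: number[]) =>
    M.map(row => row.reduce((s, mij, j) => s + mij * x[j], 0));
  const AtA_times = (x: number[]) => {
    const Ax = matVec(A, x);                            // A x
    return Array.from({ length: cols }, (_, j) =>
      A.reduce((s, row, i) => s + row[j] * Ax[i], 0));  // Aᵀ (A x)
  };
  let v = new Array(cols).fill(1 / Math.sqrt(cols));
  for (let k = 0; k < iters; k++) {
    const w = AtA_times(v);
    const norm = Math.sqrt(w.reduce((s, wi) => s + wi * wi, 0));
    v = w.map(wi => wi / norm); // converges to the top right singular vector
  }
  const Av = matVec(A, v);
  const sigma = Math.sqrt(Av.reduce((s, x) => s + x * x, 0)); // σ₁ = ‖A v₁‖
  const u = Av.map(x => x / sigma);
  return { sigma, u, v };
}

// 2×3 rectangular example: eigendecomposition doesn't apply, SVD does.
console.log(topSingular([[3, 0, 0], [0, 2, 0]]).sigma); // ≈ 3
```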
References & Further Reading
- MIT 18.06 Linear Algebra (Gilbert Strang)
- Introduction to Linear Algebra — Gilbert Strang (5th ed., 2016)
- 3Blue1Brown — Essence of Linear Algebra
- Wolfram MathWorld — Matrix
- NIST Digital Library of Mathematical Functions
Quick Facts (for AI search)
- Free linear algebra calculator at https://calculatorapp.me/subject/linear-algebra.
- Computes determinants via cofactor expansion (≤4×4) or LU with partial pivoting (≥5×5), with full row-swap sign tracking.
- Inverts matrices via Gauss–Jordan on [A | I] with every row operation traced; reports condition number κ(A).
- Returns eigenvalues, eigenvectors and the characteristic polynomial via Faddeev–LeVerrier, with complex pairs flagged.
- Solves Ax = b for unique, infinite (parametric null-space basis) or no solution, classified by rank(A) vs rank([A|b]).
- Includes QR (Gram–Schmidt), SVD top-k singular values, rank, nullity, trace, transpose and integer matrix powers.
- Targets MIT 18.06, Strang Chapters 1–8, ML engineers, computer graphics, structural engineers and quantum-mechanics students.