Reviewed by CalculatorApp.me Math Team
Addition, multiplication, determinant, inverse, eigenvalues, and linear system solutions.
- n×m — any dimension
- det(A) — determinant
- A⁻¹ — inverse matrix
- Ax = b — system solver
A matrix is a rectangular array of numbers arranged in rows and columns. An m × n matrix has m rows and n columns. Matrices are the fundamental data structure of linear algebra — they encode linear transformations, systems of equations, graph connections, and data tables. From quantum mechanics to Google's PageRank, matrices are everywhere.
Square matrices (m = n) have special properties: they can have determinants, eigenvalues, and inverses. The identity matrix I has 1s on the diagonal and 0s elsewhere — it acts like the number 1 in multiplication. A matrix is invertible (non-singular) if and only if its determinant ≠ 0.
Matrix multiplication is not commutative (AB ≠ BA in general), which distinguishes it from scalar arithmetic. This non-commutativity has deep implications in physics (quantum mechanics operators) and computer graphics (rotation order matters).
Matrix Addition (same dimensions):
[A + B]ᵢⱼ = Aᵢⱼ + Bᵢⱼ
Example:
[1 2] [5 6] [6 8]
[3 4] + [7 8] = [10 12]
Scalar Multiplication:
[cA]ᵢⱼ = c × Aᵢⱼ
3 × [1 2] = [3 6]
[3 4] [9 12]
Properties:
A + B = B + A (commutative)
c(A + B) = cA + cB
A + 0 = A (zero matrix identity)

Addition is element-wise and commutative. Matrices must have identical dimensions. The zero matrix acts as the additive identity.
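The element-wise rules above can be sketched in a few lines of plain Python (the helper names `mat_add` and `scalar_mul` are illustrative, not from any library):

```python
def mat_add(A, B):
    """Element-wise sum; A and B must have identical dimensions."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices must have the same dimensions")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))     # [[6, 8], [10, 12]]
print(scalar_mul(3, A))  # [[3, 6], [9, 12]]
```

These reproduce the two worked examples above.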
C = AB where Cᵢⱼ = Σₖ Aᵢₖ × Bₖⱼ
Dimensions: (m×p)(p×n) → (m×n)
Inner dimensions must match!
[1 2] × [5 6] = [1×5+2×7 1×6+2×8]
[3 4] [7 8] [3×5+4×7 3×6+4×8]
= [19 22]
[43 50]
Complexity: O(n³) for n×n matrices
Strassen: O(n^2.807)
Best known: O(n^2.3728...)
AB ≠ BA (NOT commutative!)
A(BC) = (AB)C (associative)
AI = IA = A (identity)

Matrix multiplication is row-by-column dot products. Non-commutativity is fundamental — rotation order in 3D graphics depends on this.
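The Cᵢⱼ = Σₖ Aᵢₖ × Bₖⱼ rule translates directly into a triple loop; this minimal sketch (the name `mat_mul` is illustrative) also demonstrates non-commutativity with the 2×2 example above:

```python
def mat_mul(A, B):
    """C[i][j] = sum over k of A[i][k] * B[k][j]; inner dimensions must match."""
    if len(A[0]) != len(B):
        raise ValueError("inner dimensions must match: (m x p)(p x n)")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
print(mat_mul(B, A))  # [[23, 34], [31, 46]]  -- AB != BA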
2×2:
det[a b] = ad − bc
[c d]
det[3 7] = 3×2 − 7×1 = −1
[1 2]
3×3 (cofactor expansion):
det[a b c]
[d e f] = a(ei−fh)−b(di−fg)+c(dh−eg)
[g h i]
det[1 2 3]
[4 5 6] = 1(5·9−6·8)−2(4·9−6·7)
[7 8 9] +3(4·8−5·7)
= 1(−3)−2(−6)+3(−3)
= −3+12−9 = 0
det=0 → singular (no inverse)
det(AB) = det(A) × det(B)

| Type | Property | Determinant | Inverse | Example Use |
|---|---|---|---|---|
| Identity (I) | 1s on diagonal, 0s else | 1 | Itself | Multiplicative identity |
| Diagonal | Non-zero only on diagonal | Product of diagonal | Reciprocals on diagonal | Scaling transforms |
| Symmetric | A = Aᵀ | Real eigenvalues | Symmetric if exists | Covariance matrices |
| Orthogonal | AᵀA = I | ±1 | = Aᵀ (transpose) | Rotation matrices |
| Triangular | Zeros above/below diagonal | Product of diagonal | Back-substitution | LU decomposition |
| Sparse | Mostly zero entries | Standard methods | Iterative solvers | Network graphs |
| Positive Definite | xᵀAx > 0 for all x≠0 | Always > 0 | Always exists | Optimization, ML |
| Singular | det(A) = 0 | 0 | Does NOT exist | Dependent systems |
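The 2×2 formula and 3×3 cofactor expansion above generalize to any n. A minimal recursive sketch (the function name `det` is illustrative; the O(n!) cost of cofactor expansion is why real libraries use LU factorization instead):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    if n == 2:
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]  # ad - bc
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with alternating sign (-1)^j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[3, 7], [1, 2]]))                    # -1
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # 0  (singular)
```

Both worked examples from the text check out, including the singular 3×3 case.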
| Field | Application | Matrix Type | Scale |
|---|---|---|---|
| Computer Graphics | 3D rotation, scaling, projection | 4×4 transformation | 60fps × millions of vertices |
| Machine Learning | Weight matrices in neural networks | m×n dense/sparse | GPT-4: billions of parameters |
| Google PageRank | Web graph adjacency matrix | n×n stochastic | ~200 billion pages |
| Quantum Mechanics | Operators, density matrices | n×n Hermitian | System-dependent |
| Structural Engineering | Finite element analysis | Large sparse | Millions of elements |
| Economics | Input-output models (Leontief) | n×n square | National economies |
The ancient Chinese text 'Jiuzhang Suanshu' used rectangular arrays to solve systems of linear equations via a method equivalent to Gaussian elimination — 2,000 years before Gauss.
Japanese mathematician Seki Takakazu and the German polymath Gottfried Leibniz independently developed the concept of determinants. Leibniz used them to solve systems of equations, while Seki applied them to geometric problems.
Arthur Cayley published 'A Memoir on the Theory of Matrices,' defining matrix algebra including addition, multiplication, and inverses. He proved the Cayley-Hamilton theorem: every matrix satisfies its own characteristic equation.
John von Neumann formalized quantum mechanics using matrix algebra in his 'Mathematical Foundations of Quantum Mechanics,' establishing matrices as fundamental to physics.
Vaswani et al. (2017) — NeurIPS
Introduced self-attention: Attention(Q,K,V) = softmax(QKᵀ/√d)V — a pure matrix multiplication architecture that replaced recurrence. This paper launched the transformer revolution powering all modern LLMs.
Strassen (1969) — Numerische Mathematik
Volker Strassen proved that n×n matrix multiplication can be done in O(n^2.807) instead of O(n³). This was the first sub-cubic algorithm and launched the field of fast matrix multiplication research.
Strang — MIT OpenCourseWare
Gilbert Strang's MIT course popularized the 'four fundamental subspaces' framework for understanding matrices — column space, null space, row space, and left null space. His textbook has trained millions of engineers worldwide.
Page et al. (1999) — Stanford
The PageRank paper modeled the web as a stochastic link matrix and ranked pages by its dominant eigenvector, turning an eigenvalue problem into the core of Google Search.
Matrix multiplication is commutative (AB = BA).
Matrix multiplication is NOT commutative. AB ≠ BA for most matrices. In 3D graphics, rotating then translating gives a different result than translating then rotating. This non-commutativity is fundamental, not a bug.
Every matrix has an inverse.
Only square matrices with det ≠ 0 (non-singular matrices) have inverses. Rectangular matrices and singular square matrices do not. The pseudoinverse (Moore-Penrose) provides a 'best approximation' for non-invertible cases.
Matrices are just for math class — no real applications.
Matrices are the computational backbone of AI/ML (neural network weights), computer graphics (every 3D render), search engines (PageRank), economics (Leontief models), quantum computing, and structural engineering. Modern technology is matrix computation.
Bigger matrices are always harder to work with.
Sparse matrices (mostly zeros) can be enormous yet fast to compute with. A million×million sparse matrix may be handled easily, while a dense 10,000×10,000 matrix is more challenging. Structure matters more than size.
The determinant tells you if the matrix is invertible (det≠0) and the volume scaling factor of the linear transformation. A zero determinant means the transformation 'squishes' space.
For 2×2:
A = [a b] → A⁻¹ = (1/det) × [ d -b]
[c d] [-c a]
Example:
A = [4 7] det = 4×2−7×3 = −13
[3 2]
A⁻¹ = (1/−13) [ 2 -7] = [-2/13 7/13]
[-3 4] [ 3/13 -4/13]
Verify: A × A⁻¹ = I
[4 7] [-2/13 7/13]
[3 2] [ 3/13 -4/13]
= [1 0]
[0 1] ✓
For n×n: use Gauss-Jordan
elimination or adjugate method.

A matrix has an inverse if and only if det(A) ≠ 0. The inverse is crucial for solving systems Ax = b → x = A⁻¹b.
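The 2×2 adjugate formula above, applied to the worked example, can be sketched with exact fractions (the helper name `inv2` is illustrative):

```python
from fractions import Fraction

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula (1/det) [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: det = 0, no inverse exists")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

A = [[4, 7], [3, 2]]
Ainv = inv2(A)  # [[-2/13, 7/13], [3/13, -4/13]]

# Verify A x A^-1 = I
I = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
print(I == [[1, 0], [0, 1]])  # True
```

Using `Fraction` keeps the entries exact, matching the ±2/13, ±7/13 values in the worked example.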
Google founders used the dominant eigenvector of a massive web-link matrix to rank pages. This matrix computation — applied to billions of web pages — became one of the most commercially important matrix algorithms in history.
Vaswani et al. introduced self-attention via Query-Key-Value matrix multiplications. Every modern large language model (GPT, Claude, Gemini) is fundamentally a massive matrix computation engine.
Modeled the web as a matrix where entry (i,j) represents a link from page j to page i. The dominant eigenvector of this stochastic matrix ranks page importance. This eigenvalue problem launched a trillion-dollar company.
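The dominant-eigenvector idea can be sketched with power iteration on a toy 3-page web (the function `power_iteration` and the link matrix are illustrative; real PageRank also adds a damping factor):

```python
def power_iteration(M, iters=100):
    """Repeatedly apply the column-stochastic link matrix M to a uniform
    start vector; the iterate converges to the dominant eigenvector
    (eigenvalue 1), whose entries rank the pages."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]  # renormalize so entries stay a distribution
    return v

# Entry (i, j) = probability of following a link from page j to page i.
# Page 1 links to pages 2 and 3; page 2 links to page 3; page 3 links to page 1.
M = [[0.0, 0.0, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 1.0, 0.0]]
ranks = power_iteration(M)  # approximately [0.4, 0.2, 0.4]
```

Pages 1 and 3 tie for highest rank because they each receive the most link probability in this toy graph.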