|
| 1 | +<div align="center"> |
1 | 2 |
|
2 | | -#### **A comprehensive implementation of various Matrix Decomposition Techniques from the lens of Linear Algebra to produce efficient computing of SVD, PCA, Feature Selection & Data Analysis in Python.** |
3 | | -______________________________________________________________________ |
| 3 | +# Matrix Decompositions Implementations |
4 | 4 |
|
5 | | -To gain a deeper understanding of how Orthogonalization & Matrices Decomposition works in real-life applications, & how they save bunch of time through an approach of vectorization, you'll find such techniques used in; |
| 5 | +**Hands-on, marimo-built, math-first implementations of matrix decomposition functions.** |
| 6 | +**Find the notebooks hosted on molab & HF Spaces.** |
6 | 7 |
|
7 | | -- **📡 Signal Processing** |
8 | | -- **🤖 Control Systems and Robotics** |
9 | | -- **🖼️ Image Processing** |
10 | | -- **➗ Solving Linear Systems i.e. *AX = B*** |
| 8 | +<br/> |
11 | 9 |
|
12 | | -With certain mathematical intuitions (*having visual introspections*),this project simplifies most of the abstract concepts and becomes easier to grasp and connect with practical applications. |
| 10 | +[Open in molab](https://molab.marimo.io/notebooks/nb_TAVLehyiE58b5RDzjxFxSW) |
| 11 | +[HF Spaces](https://huggingface.co/spaces/your-hf-spaces-link-here) |
| 12 | +[Python](https://www.python.org/) |
| 13 | +[NumPy](https://numpy.org/) |
| 14 | +[Project site](https://pragyntiwari.github.io/Matrix-Decompositions-Implementation-for-SVD-PCA) |
13 | 15 |
|
14 | | - |
| 16 | +<img src=".assets/01.gif" alt="Matrix Decompositions Demo" width="620"/> |
15 | 17 |
|
16 | | -> You'll yet to see more implementations—such as **Householder Reflection**, **Bidiagonalization**, **LU Decomposition**, on this repo, and others—*these will be added soon*. |
| 18 | +</div> |
17 | 19 |
|
18 | | -## What's Inside |
| 20 | +## Overview |
19 | 21 |
|
20 | | -*By latest ✨,* |
| 22 | +A curated set of [marimo](https://marimo.io) notebooks on **matrix decomposition**, written in Python. Each pairs a rigorous mathematical derivation with annotated code and an interactive visualization in a single reactive environment. |
21 | 23 |
|
22 | | -The **Gram-Schmidt Orthogonalization** is one of the fundamental process in Linear Algebra to achieve *Orthonormal Vectors* for a given vector space. The Orthonormal Basis are produced by iteratively removing vector projections — also known as the *Vector Projection Elimination method*. |
| 24 | +The series is a progressive build, starting from orthogonalization fundamentals and working toward full matrix factorizations and applications: |
23 | 25 |
|
24 | | -**Terms like Orthogonality, QR Decomposition are being discussed in the — [🗨️Discussion section](https://github.com/PragyanTiwari/Matrix-Decompositions-Implementation-for-SVD-PCA/discussions).** |
| 26 | +`Gram-Schmidt` → `QR` → `LU` → `Householder` → `SVD` → `PCA` |
25 | 27 |
|
26 | | -Here's a snippet; |
| 28 | +> **Main Motive:** Matrix decompositions reduce computationally expensive operations (inversion, least squares, eigensolving) to sequences of simpler, numerically stable factors. This project builds that machinery from the ground up, one factorization at a time. |
27 | 29 |
|
28 | | -```bash |
29 | | -def gs_Orthogonalization(X:np.ndarray)->np.ndarray: |
| 30 | +>> Applications such as **noise reduction, signal processing, and image compression** will be covered as the series progresses. |
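The "simpler, numerically stable factors" claim can be made concrete with a small sketch (illustrative only, not taken from the notebooks): solving a least-squares problem through NumPy's built-in QR factorization rather than by forming and inverting the normal equations. The random data here is purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))   # tall system: 50 equations, 3 unknowns
b = rng.normal(size=50)

# Factor A = QR, then solve the triangular system R x = Q^T b
# instead of forming the worse-conditioned normal equations (A^T A) x = A^T b.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)

# x satisfies the normal equations, i.e. it is the least-squares solution
assert np.allclose(A.T @ A @ x, A.T @ b)
```

The inverse of `A.T @ A` is never formed; the triangular solve is both cheaper and more stable, which is the pattern every decomposition in the series exploits.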
30 | 31 |
|
31 | | - Q = np.copy(X).astype("float64") |
32 | | - n_vecs = Q.shape[1] |
| 32 | +## Marimo Apps |
33 | 33 |
|
34 | | - # defining a function to compute the L2-norm |
35 | | - length = lambda x: np.linalg.norm(x) |
| 34 | +| Notebook | Open in molab | Open in HF Spaces | |
| 35 | +|---|:---:|:---:| |
| 36 | +| **Gram-Schmidt Orthogonalization** | [molab](https://molab.marimo.io/notebooks/nb_TAVLehyiE58b5RDzjxFxSW/app) | [HF Space](https://huggingface.co/spaces/PragyanTiwari/Gram-Schmidt-Orthonormal-Basis) | |
| 37 | +| **QR Decomposition** | 🔜 | 🔜 | |
| 38 | +| **Householder Reflection & Bidiagonalization** | 🔜 | 🔜 | |
36 | 39 |
|
37 | | - # iteration with each vector in the matrix X |
38 | | - for nth_vec in range(n_vecs): |
| 40 | +## Quickstart |
39 | 41 |
|
40 | | - # iteratively removing each preceding projection from nth vector |
41 | | - for k_proj in range(nth_vec): |
| 42 | +Requires Python `>= 3.12` and [`uv`](https://docs.astral.sh/uv/). |
42 | 43 |
|
43 | | - # the dot product would be the scaler coefficient |
44 | | - scaler = Q[:,nth_vec] @ Q[:,k_proj] |
45 | | - projection = scaler * Q[:,k_proj] |
46 | | - Q[:,nth_vec] -= projection # removing the Kth projection |
| 44 | +**1. Clone and install dependencies** |
47 | 45 |
|
48 | | - norm = length(Q[:,nth_vec]) |
| 46 | +```bash |
| 47 | +git clone https://github.com/prgyn8/Matrix-Decomposition-Implementations.git |
| 48 | +cd Matrix-Decomposition-Implementations && uv sync |
| 49 | +``` |
49 | 50 |
|
50 | | - # handling the case if the loop encounters linearly dependent vectors. |
51 | | - # Since, they come already under the span of vector space, hence their value will be 0. |
52 | | - if np.isclose(norm,0, rtol=1e-15, atol=1e-14, equal_nan=False): |
53 | | - Q[:,nth_vec] = 0 |
54 | | - else: |
55 | | - # making orthogonal vectors -> orthonormal |
56 | | - Q[:,nth_vec] = Q[:,nth_vec] / norm |
| 51 | +**2. Run a marimo app** (e.g. the Gram-Schmidt process) |
57 | 52 |
|
58 | | - return Q |
| 53 | +```bash |
| 54 | +uvx marimo run apps/gs_process.py  # the available notebooks are listed in the apps/ directory |
59 | 55 | ``` |
60 | 56 |
|
61 | | -To run the notebook in a sandbox environment; |
| 57 | +**3. Optionally, run any notebook in a sandboxed environment** |
62 | 58 |
|
63 | 59 | ```bash |
64 | | -uvx marimo run --sandbox notebooks/Gram_Schmidt_QR_Decomposition.py |
| 60 | +# No global installs — dependencies are inlined in the notebook |
| 61 | +uvx marimo run --sandbox apps/gs_process.py |
| 62 | + |
| 63 | +# Or open for editing |
| 64 | +uvx marimo edit --sandbox apps/gs_process.py |
65 | 65 | ``` |
66 | 66 |
|
67 | | -## 🧪 Testing |
| 67 | +--- |
68 | 68 |
|
69 | | -The updates made on this project, can be tested for deployment, (and for personal experimentation) by the following; |
| 69 | +## Implementation Notes |
70 | 70 |
|
71 | | -- Fork the repository. |
| 71 | +<details> |
| 72 | +<summary><strong>Gram-Schmidt Orthogonalization</strong></summary> |
72 | 73 |
|
73 | | -- Run uv sync to install dependencies (*uv lockfile will help*) |
| 74 | +<br/> |
74 | 75 |
|
75 | | -```bash |
76 | | -uv sync |
77 | | -``` |
| 76 | +```python |
| 77 | +import numpy as np |
78 | 78 |
|
79 | | -- To test the export process, we'll run `.github/scripts/build.py` from the root directory through a symlink. |
| 79 | +def gram_schmidt(X: np.ndarray) -> np.ndarray: |
| 80 | + """ |
| 81 | + Orthonormalizes the columns of X via classical Gram-Schmidt, so Q.T @ Q = I (verified below). |
| 82 | + Linearly dependent columns are zeroed out. |
| 83 | + """ |
| 84 | + Q = np.copy(X).astype("float64") |
| 85 | + for i in range(Q.shape[1]): |
| 86 | + for j in range(i): |
| 87 | + Q[:, i] -= (Q[:, i] @ Q[:, j]) * Q[:, j] # remove j-th projection |
| 88 | + norm = np.linalg.norm(Q[:, i]) |
| 89 | + Q[:, i] = 0 if np.isclose(norm, 0, atol=1e-14) else Q[:, i] / norm |
| 90 | + return Q |
| 91 | +``` |
80 | 92 |
|
81 | | -```bash |
82 | | -uv run .github/scripts/build.py |
| 93 | +```python |
| 94 | +# Verification: Q.T @ Q ≈ I |
| 95 | +A = np.array([[1, 0, 0], [2, 0, 3], [4, 5, 6]]).T |
| 96 | +assert np.allclose(gram_schmidt(A).T @ gram_schmidt(A), np.eye(3)) # ✓ |
83 | 97 | ``` |
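As a bridge to the upcoming QR notebook, a hedged sketch (not part of the repo) of how the same `Q` is one step away from a full QR factorization: for a full-rank `A`, `R = Q.T @ A` comes out upper triangular and `A = Q @ R`. A compact copy of `gram_schmidt` is included so the snippet runs standalone.

```python
import numpy as np

def gram_schmidt(X: np.ndarray) -> np.ndarray:
    # Compact copy of the function above, so this snippet is self-contained.
    Q = np.copy(X).astype("float64")
    for i in range(Q.shape[1]):
        for j in range(i):
            Q[:, i] -= (Q[:, i] @ Q[:, j]) * Q[:, j]
        norm = np.linalg.norm(Q[:, i])
        Q[:, i] = 0 if np.isclose(norm, 0, atol=1e-14) else Q[:, i] / norm
    return Q

# For full-rank A, the Gram-Schmidt Q yields the QR factorization directly:
A = np.array([[1, 0, 0], [2, 0, 3], [4, 5, 6]], dtype=float).T
Q = gram_schmidt(A)
R = Q.T @ A                                     # upper triangular by construction
assert np.allclose(Q @ R, A)                    # A = QR
assert np.allclose(np.tril(R, -1), 0, atol=1e-12)
```

Each entry below the diagonal of `R` vanishes because column `i` of `A` lies in the span of the first `i` orthonormal vectors, which is exactly what the elimination loop guarantees.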
84 | 98 |
|
85 | | -This will export all notebooks in a folder called `_site/` in the root directory |
| 99 | +> 💬 Questions on implementation or numerical stability? Start a thread in [Discussions](https://github.com/prgyn8/Matrix-Decomposition-Implementations/discussions). |
| 100 | +
|
| 101 | +</details> |
| 102 | + |
| 103 | +*Implementation notes for each technique will be added as notebooks are released.* |
| 104 | + |
| 105 | +--- |
| 106 | + |
| 107 | +## Contributing |
| 108 | + |
| 109 | +Contributions are welcome — whether it's a bug report, a new decomposition technique, or a clearer explanation of the math. |
| 110 | + |
| 111 | +1. **Fork** the repository |
| 112 | +2. **Sync** dependencies: `uv sync` |
| 113 | +3. **Create a branch** for your changes |
| 114 | +4. **Open a Pull Request** — maintainers will review it |
| 115 | + |
| 116 | +For questions, suggestions, or discussion of the mathematics: |
| 117 | + |
| 118 | +- 💬 [Discussion Board](https://github.com/prgyn8/Matrix-Decomposition-Implementations/discussions) |
| 119 | +- 🐛 [Open an Issue](https://github.com/prgyn8/Matrix-Decomposition-Implementations/issues) |
| 120 | + |
| 121 | +--- |
| 122 | + |
| 123 | +## Resources & Acknowledgements |
| 124 | + |
| 125 | +- [**Wikipedia** — Gram-Schmidt Process](https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) — foundational definitions and mathematical references |
| 126 | +- [**DataCamp** — Orthogonal Matrices](https://www.datacamp.com/tutorial/orthogonal-matrix) — accessible article on orthogonality |
| 127 | +- [**MIT OpenCourseWare** — Lecture 17](https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010/resources/lecture-17-orthogonal-matrices-and-gram-schmidt/) — in-depth treatment by *Prof. Gilbert Strang* |
| 128 | +- [**Steve Brunton**](https://www.youtube.com/@Eigensteve) — original spark for this project; exceptional intuition on engineering applications of linear algebra |
| 129 | +- [**Graphical Linear Algebra**](https://graphicallinearalgebra.net/2017/08/09/orthogonality-and-projections/) — visual treatment of orthogonality and projections |
| 130 | + |
| 131 | +--- |
86 | 132 |
|
87 | | -## 🌱 Contribution Guide |
| 133 | +<div align="center"> |
88 | 134 |
|
89 | | -- If you find a bug or have a feature request, please open an [Issue](https://github.com/PragyanTiwari/Matrix-Decompositions-Implementation-for-SVD-PCA/issues). |
| 135 | +**Try it without any setup →** [Open in molab](https://molab.marimo.io/notebooks/nb_TAVLehyiE58b5RDzjxFxSW/app) |
90 | 136 |
|
91 | | -- PR will be reviewed by the maintainers. |
| 137 | +<br/> |
92 | 138 |
|
93 | | -- Questions & Suggestions can be queried on the [Discussion section](https://github.com/PragyanTiwari/Matrix-Decompositions-Implementation-for-SVD-PCA/discussions). |
| 139 | +[⭐ Star this repo](https://github.com/prgyn8/Matrix-Decomposition-Implementations/stargazers) · |
| 140 | +[💬 Join the Discussion](https://github.com/prgyn8/Matrix-Decomposition-Implementations/discussions) · |
| 141 | +[Made by Pragyan →](https://your-personal-link-here) |
94 | 142 |
|
95 | | -______________________________________________________________________ |
| 143 | +</div> |