Word Prediction as Vector Geometry

A Linear Algebra Demo on Word Similarity

In modern AI, words are represented as numerical vectors. This interactive tool demonstrates how linear algebra, specifically cosine similarity, is used to measure how "close" two words are. A high similarity score suggests a strong contextual relationship.
How Does This Work? The Linear Algebra Inside AI

1. Word Embeddings (Vectors): Large Language Models (LLMs) learn a "word embedding," a mapping from every word to a high-dimensional vector. Here, we use simple 3D vectors for demonstration. Words with similar meanings are placed near each other in this vector space.
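
As a minimal sketch of what such a lookup table looks like, here is a toy embedding in plain Python. The words and coordinates below are invented for illustration and are not taken from any real model; actual LLM embeddings have hundreds or thousands of dimensions and are learned from data.

```python
# Toy 3D word embedding: each word maps to a vector (invented values).
# Note that "king" and "queen" point in similar directions, while
# "apple" points somewhere quite different.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.95],
}

print(embeddings["king"])  # the 3D vector standing in for "king"
```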

2. Cosine Similarity: To measure the "similarity" between two word vectors, we calculate the cosine of the angle ($ \theta $) between them. $$ \cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{\|\vec{a}\| \|\vec{b}\|} $$ A smaller angle results in a cosine similarity closer to 1 (highly similar), while nearly perpendicular vectors give a value near 0 (unrelated).
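
The formula above translates directly into a few lines of code. The following sketch computes cosine similarity for the toy vectors from the earlier table; it uses only the standard library, though the demo itself may be implemented differently.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between a and b: (a . b) / (||a|| ||b||)."""
    dot = sum(x * y for x, y in zip(a, b))       # dot product a . b
    norm_a = math.sqrt(sum(x * x for x in a))    # Euclidean norm ||a||
    norm_b = math.sqrt(sum(x * x for x in b))    # Euclidean norm ||b||
    return dot / (norm_a * norm_b)

# Toy 3D vectors (invented values, matching the table above):
king  = [0.90, 0.80, 0.10]
queen = [0.85, 0.82, 0.15]
apple = [0.10, 0.20, 0.95]

print(f"king vs queen: {cosine_similarity(king, queen):.3f}")  # ~0.998
print(f"king vs apple: {cosine_similarity(king, apple):.3f}")  # ~0.293
```

With these toy values, "king" and "queen" score close to 1 because their vectors point in nearly the same direction, while "king" and "apple" score much lower; this is exactly the geometric relationship the demo visualizes.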