PaperGym
โ† Back to Foundations
๐Ÿ“
Module 2 of 6

Vectors & Matrix Operations

Understand the math that powers attention, embeddings, and neural networks.

FREE · ~25 min · Beginner

What you'll learn

  • ✓ Dot product and cosine similarity
  • ✓ Matrix multiplication from scratch
  • ✓ Transpose and QKᵀ computation

Drills

1

Dot Product & Cosine Similarity

Implement the dot product from scratch (no np.dot) and cosine similarity, the comparison used everywhere in embeddings.

Easy · 8m · ⚡ 10 pts
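A minimal sketch of what this drill asks for, using only the standard library (function names are illustrative, not the drill's required API):

```python
import math

def dot(u, v):
    # Sum of elementwise products; assumes u and v have equal length.
    assert len(u) == len(v), "vectors must be the same length"
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    # dot(u, v) / (||u|| * ||v||); ranges from -1 (opposite) to 1 (same direction).
    norm_u = math.sqrt(dot(u, u))
    norm_v = math.sqrt(dot(v, v))
    return dot(u, v) / (norm_u * norm_v)
```

Cosine similarity ignores vector magnitude, which is why it is the usual choice for comparing embeddings: two documents of very different lengths can still point in the same direction.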
2

Matrix Multiplication

Implement matmul using loops, then apply a weight matrix to a batch of vectors: the core operation of neural networks.

Medium · 10m · ⚡ 15 pts
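A loop-based sketch of the drill's core idea (list-of-lists matrices; shapes and names are illustrative):

```python
def matmul(A, B):
    # (m x k) @ (k x n) -> (m x n), classic triple loop.
    m, k = len(A), len(B)
    n = len(B[0])
    assert all(len(row) == k for row in A), "inner dimensions must match"
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]
    return C

# Applying a weight matrix to a batch: each row of X is one input vector,
# so X @ W transforms the whole batch in a single matmul.
def apply_weights(X, W):
    return matmul(X, W)
```

Stacking the batch as rows of one matrix is exactly how a linear layer processes many inputs at once; the per-example loop disappears into the matrix product.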
3

Transpose & QKᵀ

Implement transpose for 2D and batched 3D tensors, then compute QKᵀ, the exact operation at the heart of attention.

Medium · 8m · ⚡ 15 pts
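A sketch of how the pieces of this drill fit together, again with plain lists (names are illustrative):

```python
def transpose(M):
    # 2D transpose: M[i][j] becomes M[j][i].
    return [list(row) for row in zip(*M)]

def batched_transpose(T):
    # 3D (batch, rows, cols): transpose the last two dims of each batch element.
    return [transpose(M) for M in T]

def matmul(A, B):
    # Row-of-A dotted with column-of-B, via the transpose of B.
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def attention_scores(Q, K):
    # QKᵀ: scores[i][j] = dot(query i, key j), a similarity between every
    # query and every key -- the raw score matrix inside attention.
    return [matmul(Q, transpose(K))][0]
```

Note that QKᵀ is just a matmul against a transposed operand: transposing K turns "dot each query with each key" into a single matrix product.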

📚 Resources