"Attention is All You Need" — Add Positional Encodings
Complete Module: Scaled Dot-Product Attention + Positional Encoding + Visualization
"Attention is All You Need" — Add Positional Encodings
Complete Module: Scaled Dot-Product Attention + Positional Encoding + Visualization
"Attention is All You Need" — Add Positional Encodings
Complete Module: Scaled Dot-Product Attention + Positional Encoding + Visualization
Updated Objective
Implement full Transformer attention with positional encodings — from scratch, with math, code, graphs, and intuition.
1. Why Positional Encoding?
Attention is permutation-invariant
→ Without position info, ["I", "love", "AI"] and ["AI", "love", "I"] are indistinguishable to attention.
Solution: Inject position-aware signals into token embeddings.
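A quick numerical check of that claim — a minimal sketch using plain self-attention with Q = K = V and no projections: permuting the input rows permutes the output rows identically, so attention by itself carries no position information.
import torch
import torch.nn.functional as F
def self_attn(x):
    # Plain scaled dot-product self-attention, Q = K = V = x
    scores = x @ x.transpose(-2, -1) / x.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ x
x = torch.randn(5, 8)                  # 5 tokens, d = 8
perm = torch.randperm(5)
out, out_perm = self_attn(x), self_attn(x[perm])
print(torch.allclose(out[perm], out_perm, atol=1e-6))  # True → permutation-equivariant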
2. Two Types of Positional Encodings
| Type | Formula | Learnable? |
|---|---|---|
| Fixed (Sinusoidal) | $PE(pos, 2i) = \sin(pos / 10000^{2i/d})$, $PE(pos, 2i+1) = \cos(pos / 10000^{2i/d})$ | No |
| Learned | $PE \in \mathbb{R}^{\text{max\_seq} \times d}$ | Yes |
We’ll implement both — sinusoidal is the default in the original paper.
3. Sinusoidal Positional Encoding — Math
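The formulas the code below implements, for position $pos$ and dimension pair index $i$:
$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)$$
The div_term in the code evaluates the frequency in log space for numerical stability, using the identity $10000^{-2i/d_{\text{model}}} = \exp\!\left(-\tfrac{2i}{d_{\text{model}}}\ln 10000\right)$.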
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import seaborn as sns
class SinusoidalPositionalEncoding(nn.Module):
def __init__(self, d_model, max_seq_len=5000):
super().__init__()
pe = torch.zeros(max_seq_len, d_model)
position = torch.arange(0, max_seq_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-torch.log(torch.tensor(10000.0)) / d_model))
pe[:, 0::2] = torch.sin(position * div_term) # even
pe[:, 1::2] = torch.cos(position * div_term) # odd
self.register_buffer('pe', pe.unsqueeze(0)) # (1, max_seq, d_model)
def forward(self, x):
"""
x: (batch, seq_len, d_model)
"""
return x + self.pe[:, :x.size(1), :]
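A quick shape check of the module above (a minimal sketch; the sizes are arbitrary):
pe = SinusoidalPositionalEncoding(d_model=16)
x = torch.randn(2, 10, 16)   # (batch, seq_len, d_model)
print(pe(x).shape)           # torch.Size([2, 10, 16]) — PE is added element-wise, shape unchanged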
4. Visualize Positional Encoding
d_model = 128
max_seq = 100
pe_layer = SinusoidalPositionalEncoding(d_model, max_seq)
pe = pe_layer.pe[0].cpu().numpy() # (seq, d_model)
plt.figure(figsize=(12, 8))
sns.heatmap(pe, cmap="PRGn", center=0)
plt.title("Sinusoidal Positional Encoding (d_model=128)")
plt.xlabel("Embedding Dimension")
plt.ylabel("Position in Sequence")
plt.show()
Pattern:
- Low-frequency dimensions change slowly across positions
- High-frequency dimensions oscillate quickly
→ Together they give every position a unique signature, and the model can read off relative distances — see the sketch below.
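The relative-distance claim can be made concrete: for each sin/cos pair with frequency w, shifting the position by a fixed offset k applies a fixed 2×2 rotation that does not depend on the absolute position. A minimal numerical check (w and k are arbitrary values chosen for the demo):
import math
w, k = 0.1, 3.0  # arbitrary frequency and offset
R = torch.tensor([[math.cos(w * k),  math.sin(w * k)],
                  [-math.sin(w * k), math.cos(w * k)]])
for p in [0.0, 5.0, 17.0]:
    v = torch.tensor([math.sin(w * p), math.cos(w * p)])            # PE pair at position p
    v_shift = torch.tensor([math.sin(w * (p + k)), math.cos(w * (p + k))])
    print(torch.allclose(R @ v, v_shift, atol=1e-6))                # True for every p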
5. Learned Positional Encoding
class LearnedPositionalEncoding(nn.Module):
    def __init__(self, d_model, max_seq_len=5000):
        super().__init__()
        self.pe = nn.Embedding(max_seq_len, d_model)
    def forward(self, x):
        seq_len = x.size(1)
        positions = torch.arange(seq_len, device=x.device).unsqueeze(0)  # (1, seq_len)
        return x + self.pe(positions)  # (1, seq_len, d_model) broadcasts over batch
6. Full Attention with Positional Encoding
class TransformerBlock(nn.Module):
def __init__(self, d_model, num_heads, use_learned_pe=False):
super().__init__()
self.d_model = d_model
self.num_heads = num_heads
# Positional Encoding
if use_learned_pe:
self.pos_encoding = LearnedPositionalEncoding(d_model)
else:
self.pos_encoding = SinusoidalPositionalEncoding(d_model)
# Multi-Head Attention
self.mha = MultiHeadAttention(d_model, num_heads)
# Feed Forward
self.ffn = nn.Sequential(
nn.Linear(d_model, d_model * 4),
nn.GELU(),
nn.Linear(d_model * 4, d_model)
)
# Layer Norm
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
def forward(self, x, mask=None):
# 1. Add positional encoding
x = self.pos_encoding(x)
# 2. Multi-Head Attention + Residual
attn_out, attn_weights = self.mha(x, x, x, mask)
x = self.norm1(x + attn_out)
# 3. Feed Forward + Residual
ffn_out = self.ffn(x)
x = self.norm2(x + ffn_out)
return x, attn_weights
Note: MultiHeadAttention comes from the previous module (its full definition is repeated in section 11 below).
7. Full Working Example: Positional Sensitivity Test
# Test: Can the model distinguish order?
torch.manual_seed(0)
vocab_size = 10
d_model = 16
# One shared embedding table — two separate tables would make the outputs
# differ for the wrong reason (different random weights).
emb = nn.Embedding(vocab_size, d_model)
input1 = torch.tensor([[1, 2, 3, 4]])  # "A B C D"
input2 = torch.tensor([[4, 3, 2, 1]])  # "D C B A"
x1, x2 = emb(input1), emb(input2)
model = TransformerBlock(d_model=d_model, num_heads=4, use_learned_pe=False)
out1, _ = model(x1)
out2, _ = model(x2)
# Token 1 sits at position 0 in input1 and position 3 in input2.
# Without PE, attention is permutation-equivariant and these rows would match;
# with PE they differ.
print("Token 1 output depends on its position?",
      not torch.allclose(out1[0, 0], out2[0, 3], atol=1e-5))
Output:
True → the model sees order!
8. Compare Fixed vs Learned PE
model_fixed = TransformerBlock(d_model=16, num_heads=4, use_learned_pe=False)
model_learned = TransformerBlock(d_model=16, num_heads=4, use_learned_pe=True)
# Train learned PE on a copy task (reconstruct the input embeddings)
embedding = nn.Embedding(10, 16)  # created once, outside the loop
optimizer = torch.optim.Adam(model_learned.parameters(), lr=0.01)
criterion = nn.MSELoss()
for epoch in range(200):
    x = torch.randint(0, 5, (32, 6))
    emb = embedding(x).detach()  # optimize the block, not the embedding table
    out, _ = model_learned(emb)
    loss = criterion(out, emb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 50 == 0:
        print(f"Learned PE - Epoch {epoch}, Loss: {loss.item():.6f}")
- Learned PE adapts to the data
- Sinusoidal PE generalizes to any sequence length — see the sketch below
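A minimal sketch of that trade-off, reusing the two classes defined above (the max_seq_len values here are arbitrary):
sin_pe = SinusoidalPositionalEncoding(d_model=16, max_seq_len=5000)
print(sin_pe(torch.randn(1, 512, 16)).shape)  # fine: torch.Size([1, 512, 16])
learned_pe = LearnedPositionalEncoding(d_model=16, max_seq_len=64)
try:
    learned_pe(torch.randn(1, 512, 16))       # positions 64+ have no embedding row
except IndexError:
    print("Learned PE cannot go past max_seq_len")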
9. Graph: Sinusoidal Wavelengths
dims = torch.arange(0, d_model, 2)  # even dimension indices (2i)
wavelengths = 2 * torch.pi * 10000 ** (dims.float() / d_model)  # λ = 2π · 10000^{2i/d}
plt.figure(figsize=(10, 5))
plt.plot(dims, wavelengths, 'o-')
plt.yscale('log')
plt.xlabel("Dimension Index (2i)")
plt.ylabel("Wavelength (period in positions)")
plt.title("Sinusoidal PE: Wavelength per Dimension")
plt.grid(True, alpha=0.3)
plt.show()
Insight:
- Dim 0: λ ≈ 2π ≈ 6.28 → fastest oscillation
- Dim 126: λ ≈ 2π · 10000^{126/128} ≈ 5.4e4 → slowest
→ Wavelengths form a geometric progression from 2π up to ≈ 2π · 10000, as in the original paper.
10. Updated Summary Table
| Component | Purpose | Implementation |
|---|---|---|
| Embedding | Token → vector | nn.Embedding |
| Positional Encoding | Order → signal | sin/cos or Embedding |
| Q, K, V Projection | Role split | Linear(d_model, d_model) |
| Scaled Dot-Product | Relevance | softmax(QK^T / √d_k)V |
| Multi-Head | Parallel views | Split → Attend → Concat |
| LayerNorm + Residual | Stable training | norm(x + attn(x)) |
11. Final Full Code (Copy-Paste Ready)
import torch
import torch.nn as nn
import torch.nn.functional as F
# === 1. Scaled Dot-Product Attention ===
def scaled_dot_product_attention(Q, K, V, mask=None):
d_k = Q.size(-1)
scores = torch.matmul(Q, K.transpose(-2, -1)) / (d_k ** 0.5)
if mask is not None:
scores = scores.masked_fill(mask == 0, float('-inf'))
attn = F.softmax(scores, dim=-1)
return torch.matmul(attn, V), attn
# === 2. Multi-Head Attention ===
class MultiHeadAttention(nn.Module):
def __init__(self, d_model, num_heads):
super().__init__()
self.d_model = d_model
self.num_heads = num_heads
self.d_k = d_model // num_heads
self.W_q = nn.Linear(d_model, d_model)
self.W_k = nn.Linear(d_model, d_model)
self.W_v = nn.Linear(d_model, d_model)
self.W_o = nn.Linear(d_model, d_model)
def split_heads(self, x):
batch, seq, _ = x.shape
return x.view(batch, seq, self.num_heads, self.d_k).transpose(1, 2)
def combine_heads(self, x):
batch, _, seq, d_k = x.shape
return x.transpose(1, 2).contiguous().view(batch, seq, self.d_model)
def forward(self, Q, K, V, mask=None):
Q = self.split_heads(self.W_q(Q))
K = self.split_heads(self.W_k(K))
V = self.split_heads(self.W_v(V))
attn, weights = scaled_dot_product_attention(Q, K, V, mask)
return self.W_o(self.combine_heads(attn)), weights
# === 3. Positional Encoding ===
class SinusoidalPositionalEncoding(nn.Module):
def __init__(self, d_model, max_seq_len=5000):
super().__init__()
pe = torch.zeros(max_seq_len, d_model)
pos = torch.arange(0, max_seq_len, dtype=torch.float).unsqueeze(1)
div = torch.exp(torch.arange(0, d_model, 2).float() * (-torch.log(torch.tensor(10000.0)) / d_model))
pe[:, 0::2] = torch.sin(pos * div)
pe[:, 1::2] = torch.cos(pos * div)
self.register_buffer('pe', pe.unsqueeze(0))
def forward(self, x):
return x + self.pe[:, :x.size(1)]
# === 4. Full Transformer Block ===
class TransformerBlock(nn.Module):
def __init__(self, d_model=16, num_heads=4):
super().__init__()
self.pos_enc = SinusoidalPositionalEncoding(d_model)
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = nn.Sequential(nn.Linear(d_model, d_model*4), nn.GELU(), nn.Linear(d_model*4, d_model))
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
def forward(self, x, mask=None):
x = self.pos_enc(x)
attn_out, attn_weights = self.mha(x, x, x, mask)
x = self.norm1(x + attn_out)
x = self.norm2(x + self.ffn(x))
return x, attn_weights
# === Test ===
model = TransformerBlock(d_model=16, num_heads=4)
x = torch.randn(1, 5, 16)
out, attn = model(x)
print("Input:", x.shape, "→ Output:", out.shape)
Practice Exercises
- Train a small model to reverse sequences using learned PE.
- Visualize attention maps with and without PE.
- Extrapolate: Test sinusoidal PE on sequences longer than training.
- Ablate: Remove PE → does model fail on order?
- RoPE: Implement Rotary Positional Embeddings (used in LLaMA) — starter sketch below.
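As a starting point for the RoPE exercise, here is a minimal sketch of one common formulation — rotate interleaved (even, odd) feature pairs of Q and K by a position-dependent angle before the dot product. This is an illustrative outline under those assumptions, not LLaMA's exact implementation:
import torch
def rope(x):
    # x: (batch, seq_len, d) with d even
    b, s, d = x.shape
    pos = torch.arange(s, dtype=torch.float).unsqueeze(1)   # (s, 1)
    freq = 10000 ** (-torch.arange(0, d, 2).float() / d)    # (d/2,) frequencies
    angle = pos * freq                                      # (s, d/2) rotation angles
    cos, sin = angle.cos(), angle.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]                     # interleaved pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin                    # rotate each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
q = torch.randn(1, 8, 16)
print(rope(q).shape)  # torch.Size([1, 8, 16])
Apply it to Q and K (not V) just before scaled_dot_product_attention; the dot product then depends only on relative positions.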
Key Takeaways
- ✓ Positional encoding = order signal
- ✓ Sinusoidal = fixed, extends to any length
- ✓ Learned = flexible, capped at max_seq_len
- ✓ Add, don’t concat → same dimension
- ✓ Essential for non-recurrent models
Final Words
You now have the full Transformer input pipeline:
Token IDs → Embedding → + Positional Encoding → Multi-Head Attention
Next: Stack 6 layers → train a mini-GPT!
End of Module
Attention + Position = Transformer
You’re ready to build LLMs.
"Attention is All You Need" — Add Positional Encodings
Complete Module: Scaled Dot-Product Attention + Positional Encoding + Visualization
"Attention is All You Need" — Add Positional Encodings
Complete Module: Scaled Dot-Product Attention + Positional Encoding + Visualization
"Attention is All You Need" — Add Positional Encodings
Complete Module: Scaled Dot-Product Attention + Positional Encoding + Visualization
Updated Objective
Implement full Transformer attention with positional encodings — from scratch, with math, code, graphs, and intuition.
1. Why Positional Encoding?
Attention is permutation-invariant
→ Without position info,["I", "love", "AI"]=["AI", "love", "I"]
Solution: Inject position-aware signals into token embeddings.
2. Two Types of Positional Encodings
| Type | Formula | Learnable? |
|---|---|---|
| Fixed (Sinusoidal) | $ PE(pos, 2i) = \sin(pos / 10000^{2i/d}) $ $ PE(pos, 2i+1) = \cos(pos / 10000^{2i/d}) $ |
No |
| Learned | $ PE \in \mathbb{R}^{max_seq \times d} $ | Yes |
We’ll implement both — sinusoidal is default in original paper.
3. Sinusoidal Positional Encoding — Math
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import seaborn as sns
class SinusoidalPositionalEncoding(nn.Module):
def __init__(self, d_model, max_seq_len=5000):
super().__init__()
pe = torch.zeros(max_seq_len, d_model)
position = torch.arange(0, max_seq_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-torch.log(torch.tensor(10000.0)) / d_model))
pe[:, 0::2] = torch.sin(position * div_term) # even
pe[:, 1::2] = torch.cos(position * div_term) # odd
self.register_buffer('pe', pe.unsqueeze(0)) # (1, max_seq, d_model)
def forward(self, x):
"""
x: (batch, seq_len, d_model)
"""
return x + self.pe[:, :x.size(1), :]
4. Visualize Positional Encoding
d_model = 128
max_seq = 100
pe_layer = SinusoidalPositionalEncoding(d_model, max_seq)
pe = pe_layer.pe[0].cpu().numpy() # (seq, d_model)
plt.figure(figsize=(12, 8))
sns.heatmap(pe, cmap="PRGn", center=0)
plt.title("Sinusoidal Positional Encoding (d_model=128)")
plt.xlabel("Embedding Dimension")
plt.ylabel("Position in Sequence")
plt.show()
Pattern:
- Low freq → slow change across positions
- High freq → fast oscillation
→ Model learns relative distances
5. Learned Positional Encoding
class LearnedPositionalEncoding(nn.Module):
def __init__(self, d_model, max_seq_len=5000):
super().__init__()
self.pe = nn.Embedding(max_seq_len, d_model)
def forward(self, x):
seq_len = x.size(1)
positions = torch.arange(seq_len, device=x.device).unsqueeze(0)
return x + self.pe(positions).transpose(0, 1)
6. Full Attention with Positional Encoding
class TransformerBlock(nn.Module):
def __init__(self, d_model, num_heads, use_learned_pe=False):
super().__init__()
self.d_model = d_model
self.num_heads = num_heads
# Positional Encoding
if use_learned_pe:
self.pos_encoding = LearnedPositionalEncoding(d_model)
else:
self.pos_encoding = SinusoidalPositionalEncoding(d_model)
# Multi-Head Attention
self.mha = MultiHeadAttention(d_model, num_heads)
# Feed Forward
self.ffn = nn.Sequential(
nn.Linear(d_model, d_model * 4),
nn.GELU(),
nn.Linear(d_model * 4, d_model)
)
# Layer Norm
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
def forward(self, x, mask=None):
# 1. Add positional encoding
x = self.pos_encoding(x)
# 2. Multi-Head Attention + Residual
attn_out, attn_weights = self.mha(x, x, x, mask)
x = self.norm1(x + attn_out)
# 3. Feed Forward + Residual
ffn_out = self.ffn(x)
x = self.norm2(x + ffn_out)
return x, attn_weights
Note:
MultiHeadAttentionfrom previous module
7. Full Working Example: Positional Sensitivity Test
# Test: Can model distinguish order?
vocab_size = 10
d_model = 16
seq_len = 4
batch_size = 2
# Input: two sequences with swapped tokens
input1 = torch.tensor([[1, 2, 3, 4]]) # "A B C D"
input2 = torch.tensor([[4, 3, 2, 1]]) # "D C B A"
x1 = nn.Embedding(vocab_size, d_model)(input1)
x2 = nn.Embedding(vocab_size, d_model)(input2)
model = TransformerBlock(d_model=d_model, num_heads=4, use_learned_pe=False)
out1, _ = model(x1)
out2, _ = model(x2)
print("Output 1 (A B C D):", out1[0, 0].detach().numpy()[:5])
print("Output 2 (D C B A):", out2[0, 0].detach().numpy()[:5])
print("Are they different?", not torch.allclose(out1, out2))
Output:
True→ Model sees order!
8. Compare Fixed vs Learned PE
model_fixed = TransformerBlock(d_model=16, num_heads=4, use_learned_pe=False)
model_learned = TransformerBlock(d_model=16, num_heads=4, use_learned_pe=True)
# Train learned PE on copy task
optimizer = torch.optim.Adam(model_learned.parameters(), lr=0.01)
criterion = nn.MSELoss()
for epoch in range(200):
x = torch.randint(0, 5, (32, 6))
emb = nn.Embedding(10, 16)(x)
out, _ = model_learned(emb)
loss = criterion(out, emb)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % 50 == 0:
print(f"Learned PE - Epoch {epoch}, Loss: {loss.item():.6f}")
Learned PE adapts to data
Sinusoidal generalizes to any length
9. Graph: Sinusoidal Wavelengths
pos = 0
dims = torch.arange(0, d_model, 2)
wavelengths = 10000 ** (2 * dims / d_model)
plt.figure(figsize=(10, 5))
plt.plot(dims, wavelengths, 'o-')
plt.yscale('log')
plt.xlabel("Dimension Index (i)")
plt.ylabel("Wavelength (period)")
plt.title("Sinusoidal PE: Wavelength per Dimension")
plt.grid(True, alpha=0.3)
plt.show()
Insight:
- Dim 0: ~6.28 (slow)
- Dim 126: ~1e8 (fast)
10. Updated Summary Table
| Component | Purpose | Implementation |
|---|---|---|
| Embedding | Token → vector | nn.Embedding |
| Positional Encoding | Order → signal | sin/cos or Embedding |
| Q, K, V Projection | Role split | Linear(d_model, d_model) |
| Scaled Dot-Product | Relevance | softmax(QK^T / √d_k)V |
| Multi-Head | Parallel views | Split → Attend → Concat |
| LayerNorm + Residual | Stable training | x + norm(attn(x)) |
11. Final Full Code (Copy-Paste Ready)
import torch
import torch.nn as nn
import torch.nn.functional as F
# === 1. Scaled Dot-Product Attention ===
def scaled_dot_product_attention(Q, K, V, mask=None):
d_k = Q.size(-1)
scores = torch.matmul(Q, K.transpose(-2, -1)) / (d_k ** 0.5)
if mask is not None:
scores = scores.masked_fill(mask == 0, float('-inf'))
attn = F.softmax(scores, dim=-1)
return torch.matmul(attn, V), attn
# === 2. Multi-Head Attention ===
class MultiHeadAttention(nn.Module):
def __init__(self, d_model, num_heads):
super().__init__()
self.d_model = d_model
self.num_heads = num_heads
self.d_k = d_model // num_heads
self.W_q = nn.Linear(d_model, d_model)
self.W_k = nn.Linear(d_model, d_model)
self.W_v = nn.Linear(d_model, d_model)
self.W_o = nn.Linear(d_model, d_model)
def split_heads(self, x):
batch, seq, _ = x.shape
return x.view(batch, seq, self.num_heads, self.d_k).transpose(1, 2)
def combine_heads(self, x):
batch, _, seq, d_k = x.shape
return x.transpose(1, 2).contiguous().view(batch, seq, self.d_model)
def forward(self, Q, K, V, mask=None):
Q = self.split_heads(self.W_q(Q))
K = self.split_heads(self.W_k(K))
V = self.split_heads(self.W_v(V))
attn, weights = scaled_dot_product_attention(Q, K, V, mask)
return self.W_o(self.combine_heads(attn)), weights
# === 3. Positional Encoding ===
class SinusoidalPositionalEncoding(nn.Module):
def __init__(self, d_model, max_seq_len=5000):
super().__init__()
pe = torch.zeros(max_seq_len, d_model)
pos = torch.arange(0, max_seq_len, dtype=torch.float).unsqueeze(1)
div = torch.exp(torch.arange(0, d_model, 2).float() * (-torch.log(torch.tensor(10000.0)) / d_model))
pe[:, 0::2] = torch.sin(pos * div)
pe[:, 1::2] = torch.cos(pos * div)
self.register_buffer('pe', pe.unsqueeze(0))
def forward(self, x):
return x + self.pe[:, :x.size(1)]
# === 4. Full Transformer Block ===
class TransformerBlock(nn.Module):
def __init__(self, d_model=16, num_heads=4):
super().__init__()
self.pos_enc = SinusoidalPositionalEncoding(d_model)
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = nn.Sequential(nn.Linear(d_model, d_model*4), nn.GELU(), nn.Linear(d_model*4, d_model))
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
def forward(self, x, mask=None):
x = self.pos_enc(x)
attn_out, attn_weights = self.mha(x, x, x, mask)
x = self.norm1(x + attn_out)
x = self.norm2(x + self.ffn(x))
return x, attn_weights
# === Test ===
model = TransformerBlock(d_model=16, num_heads=4)
x = torch.randn(1, 5, 16)
out, attn = model(x)
print("Input:", x.shape, "→ Output:", out.shape)
Practice Exercises
- Train a small model to reverse sequences using learned PE.
- Visualize attention maps with and without PE.
- Extrapolate: Test sinusoidal PE on sequences longer than training.
- Ablate: Remove PE → does model fail on order?
- RoPE: Implement Rotary Positional Embeddings (used in LLaMA).
Key Takeaways
| Check | Insight |
|---|---|
| Check | Positional encoding = order signal |
| Check | Sinusoidal = fixed, infinite length |
| Check | Learned = flexible, limited length |
| Check | Add, don’t concat → same dimension |
| Check | Essential for non-recurrent models |
Final Words
You now have the full Transformer input pipeline:
Token IDs → Embedding → + Positional Encoding → Multi-Head Attention
Next: Stack 6 layers → train a mini-GPT!
End of Module
Attention + Position = Transformer
You’re ready to build LLMs.