NumPy → PyTorch: Math, Tensors, Arrays, Matrices & Vector Operations

A Complete One-Module Learning Tutorial with Examples


Module Objective

Master the transition from NumPy to PyTorch by understanding how core mathematical operations on arrays, matrices, and vectors map between the two libraries — with practical, runnable examples.


1. Setup & Installation

pip install numpy torch
import numpy as np
import torch

print(f"NumPy version: {np.__version__}")
print(f"PyTorch version: {torch.__version__}")

Key Concept:
- np.array → torch.tensor
- Both are N-dimensional arrays, but PyTorch adds autograd & GPU support.
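
A quick sanity check of the parallel APIs (a minimal sketch — any recent NumPy/PyTorch versions behave the same):

a_np = np.array([1, 2, 3])
a_t  = torch.tensor([1, 2, 3])

print(a_np.ndim, a_np.shape)   # 1 (3,)
print(a_t.ndim, a_t.shape)     # 1 torch.Size([3])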


2. Creating Tensors (Arrays)

Operation   NumPy                  PyTorch
From list   np.array([1,2,3])      torch.tensor([1,2,3])
Zeros       np.zeros((2,3))        torch.zeros(2,3)
Ones        np.ones((2,3))         torch.ones(2,3)
Range       np.arange(0,10,2)      torch.arange(0,10,2)
Linspace    np.linspace(0,1,5)     torch.linspace(0,1,5)
Random      np.random.rand(2,3)    torch.rand(2,3)

Example:

# NumPy
np_arr = np.array([[1, 2], [3, 4]])
np_zeros = np.zeros((2, 2))
np_rand = np.random.randn(2, 2)

# PyTorch
torch_arr = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
torch_zeros = torch.zeros(2, 2)
torch_rand = torch.randn(2, 2)

print("NumPy Array:\n", np_arr)
print("PyTorch Tensor:\n", torch_arr)

Note: PyTorch infers the dtype from the data (Python ints become int64); pass dtype=torch.float32 explicitly, since most ML code expects 32-bit floats.
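
The remaining rows of the creation table behave the same way; a quick check (torch.eye is the identity-matrix counterpart of np.eye, handy for Exercise 1 below):

print(np.arange(0, 10, 2))       # [0 2 4 6 8]
print(torch.arange(0, 10, 2))    # tensor([0, 2, 4, 6, 8])
print(np.linspace(0, 1, 5))      # [0.   0.25 0.5  0.75 1.  ]
print(torch.linspace(0, 1, 5))   # tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
print(torch.eye(3))              # 3×3 identity matrix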


3. Data Types (dtype)

# NumPy
np_int = np.array([1, 2], dtype=np.int32)
np_float = np.array([1.0, 2.0], dtype=np.float64)

# PyTorch
torch_int = torch.tensor([1, 2], dtype=torch.int32)
torch_float = torch.tensor([1.0, 2.0], dtype=torch.float32)  # Default for ML

Common PyTorch dtypes:

torch.float32  # float
torch.float64  # double
torch.int32    # int
torch.int64    # long
torch.bool
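
Converting between dtypes follows the same idea in both libraries: NumPy uses .astype(), PyTorch uses .to() (or shorthands like .float() and .long()):

x_np = np.array([1, 2, 3], dtype=np.int32)
x_t  = torch.tensor([1, 2, 3], dtype=torch.int32)

print(x_np.astype(np.float64))   # NumPy conversion
print(x_t.to(torch.float64))     # PyTorch equivalent
print(x_t.float())               # shorthand for .to(torch.float32)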

4. Shape, Size, Reshape

Operation   NumPy             PyTorch
Shape       .shape            .shape or .size()
Reshape     .reshape(2,3)     .reshape(2,3) or .view(2,3)
Flatten     .flatten()        .flatten() or .view(-1)

Example:

x_np = np.arange(6)
x_torch = torch.arange(6)

print("Original:", x_torch)
print("Reshaped (view):", x_torch.view(2, 3))
print("Reshaped (reshape):", x_torch.reshape(2, 3))
print("Flattened:", x_torch.flatten())

- .view() requires contiguous memory and never copies.
- .reshape() is more flexible: it returns a view when possible and copies otherwise.
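
To see the contiguity rule in action — transposing produces a non-contiguous tensor, so .view() fails where .reshape() succeeds:

t = torch.arange(6).reshape(2, 3)
t_t = t.T                        # transpose: a non-contiguous view

# t_t.view(6)                    # would raise a RuntimeError
print(t_t.reshape(6))            # works: copies because it must
print(t_t.contiguous().view(6))  # or make it contiguous first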


5. Indexing & Slicing

arr = np.arange(10)
tensor = torch.arange(10)

# Same syntax!
print(arr[2:5])        # [2 3 4]
print(tensor[2:5])     # tensor([2, 3, 4])

# 2D indexing
mat_np = np.array([[1,2,3], [4,5,6]])
mat_torch = torch.tensor([[1,2,3], [4,5,6]])

print(mat_np[1, 2])    # 6
print(mat_torch[1, 2]) # 6

Advanced: Boolean masking

mask = tensor > 5
print(tensor[mask])  # tensor([6, 7, 8, 9])
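
Integer (fancy) indexing also carries over directly from NumPy:

idx = torch.tensor([0, 3, 7])
print(tensor[idx])       # tensor([0, 3, 7])
print(arr[[0, 3, 7]])    # [0 3 7] — same idea in NumPy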

6. Math Operations (Element-wise)

Operation        NumPy                  PyTorch
Add              + or np.add()          + or torch.add()
Multiply         *                      *
Power            ** or np.power()       ** or torch.pow()
sqrt, exp, log   np.sqrt(), etc.        torch.sqrt(), etc.

Example:

a = torch.tensor([1., 2., 3.])
b = torch.tensor([4., 5., 6.])

print(a + b)         # tensor([5., 7., 9.])
print(a * b)         # tensor([ 4., 10., 18.])
print(a ** 2)        # tensor([1., 4., 9.])
print(torch.sqrt(a)) # tensor([1.0000, 1.4142, 1.7321])
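
One PyTorch-specific convention worth knowing: operations with a trailing underscore modify the tensor in place (where NumPy would use out= or augmented assignment):

c = a.clone()    # keep a intact
c.add_(b)        # in-place add
print(c)         # tensor([5., 7., 9.])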

7. Broadcasting (Same as NumPy!)

A = torch.randn(3, 4)
b = torch.tensor([10., 20., 30., 40.])  # shape (4,)

print(A + b)
# Works: b (4,) is broadcast across every row of A (3, 4)
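
Broadcasting can also expand both operands at once; a column vector plus a row vector yields a full grid, exactly as in NumPy:

col = torch.tensor([[1.], [2.], [3.]])    # shape (3, 1)
row = torch.tensor([10., 20., 30., 40.])  # shape (4,)

print((col + row).shape)  # torch.Size([3, 4])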

8. Matrix Operations

Operation   NumPy                  PyTorch
Transpose   .T                     .T or .transpose(0,1)
MatMul      @ or np.matmul()       @ or torch.matmul()
Dot         np.dot()               torch.dot() (1D only)

Example:

A = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
B = torch.tensor([[5, 6], [7, 8]], dtype=torch.float32)

print("A @ B:\n", A @ B)
print("torch.matmul(A, B):\n", torch.matmul(A, B))
print("A.T:\n", A.T)

9. Aggregation (Sum, Mean, Max, etc.)

x = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)

print(x.sum())          # tensor(10.)
print(x.sum(dim=0))     # tensor([4., 6.]) → column sum
print(x.sum(dim=1))     # tensor([3., 7.]) → row sum
print(x.mean(), x.std())
print(x.max(), x.argmax())
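
One difference to watch: with a dim argument, PyTorch's max returns both values and indices (NumPy's np.max returns only values; indices come from np.argmax):

values, indices = x.max(dim=0)
print(values)   # tensor([3., 4.]) → column maxima
print(indices)  # tensor([1, 1]) → row index of each maximum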

10. NumPy ↔ PyTorch Conversion

# NumPy → PyTorch
np_arr = np.array([[1, 2], [3, 4]])
torch_tensor = torch.from_numpy(np_arr)  # Shares memory!

# PyTorch → NumPy
torch_tensor2 = torch.tensor([[5, 6], [7, 8]])
np_arr2 = torch_tensor2.numpy()          # Shares memory (if on CPU)

Warning: Shared memory → changes in one affect the other!

np_arr[0,0] = 99
print(torch_tensor)  # Changed too!
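
When you want an independent copy instead of shared memory, clone on the PyTorch side or copy on the NumPy side:

safe_tensor = torch.from_numpy(np_arr).clone()  # independent copy
safe_array  = torch_tensor2.numpy().copy()      # likewise for NumPy

np_arr[0, 0] = -1
print(safe_tensor[0, 0])  # still 99 — unaffected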

11. GPU Acceleration (PyTorch Only!)

if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

x = torch.randn(1000, 1000, device=device)
y = torch.randn(1000, 1000, device=device)

%timeit x @ y  # IPython/Jupyter magic — much faster on a GPU for large matrices

Move tensor to GPU:

x_cpu = torch.randn(3, 3)
x_gpu = x_cpu.to('cuda')  # or .cuda()
x_back = x_gpu.to('cpu')  # or .cpu()
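
Note that .numpy() only works on CPU tensors; a CUDA tensor must be moved back first (a sketch, assuming a CUDA device is available):

# x_gpu.numpy()              # raises TypeError: tensor is on CUDA
x_np = x_gpu.cpu().numpy()   # move to CPU, then convert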

12. Autograd: The Magic of PyTorch

x = torch.tensor([2.0], requires_grad=True)
y = x ** 2 + 3 * x + 1

y.backward()  # Compute gradient
print(x.grad)  # tensor([7.]) → dy/dx = 2x + 3

NumPy has no autograd → PyTorch enables deep learning.
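
The same mechanism works for tensors of any shape — gradients flow to every element (a small sketch):

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
f = (x ** 2).sum()   # f = x1² + x2² + x3²

f.backward()
print(x.grad)        # tensor([2., 4., 6.]) → ∂f/∂xᵢ = 2xᵢ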


13. Full Working Example: Linear Regression Step

# Data
X_np = np.array([[1], [2], [3], [4]], dtype=np.float32)
y_np = np.array([[2], [4], [6], [8]], dtype=np.float32)

# To PyTorch
X = torch.from_numpy(X_np)
y = torch.from_numpy(y_np)

# Model: y = w * x + b
w = torch.randn(1, 1, requires_grad=True)  # (1, 1) so X @ w keeps shape (4, 1)
b = torch.randn(1, requires_grad=True)
lr = 0.01

for epoch in range(1000):
    y_pred = X @ w + b   # (4, 1) @ (1, 1) + (1,) → (4, 1), matching y
    loss = ((y_pred - y) ** 2).mean()

    loss.backward()

    with torch.no_grad():
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(f"Learned: y = {w.item():.2f}x + {b.item():.2f}")

Typical output after 1000 epochs: y ≈ 2.00x + 0.00 — effectively a perfect fit (exact digits depend on the random initialization).
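
As a preview of the torch.nn tools mentioned at the end, here is a sketch of the same regression using the high-level API — nn.Linear, MSELoss, and the SGD optimizer replace the manual parameters and update loop:

model = torch.nn.Linear(1, 1)   # y = w * x + b, parameters managed for us
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(1000):
    optimizer.zero_grad()           # clear old gradients
    loss = loss_fn(model(X), y)     # forward pass + loss
    loss.backward()                 # backprop
    optimizer.step()                # parameter update

w, b = model.weight.item(), model.bias.item()
print(f"Learned: y = {w:.2f}x + {b:.2f}")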


Summary Cheat Sheet

Concept     NumPy            PyTorch
Array       np.array()       torch.tensor()
Zeros       np.zeros()       torch.zeros()
Shape       .shape           .shape
Reshape     .reshape()       .reshape() / .view()
Transpose   .T               .T
MatMul      @                @
Sum         .sum(axis=)      .sum(dim=)
GPU         —                tensor.to('cuda')
Autograd    —                requires_grad=True
Convert     —                torch.from_numpy(), .numpy()

Practice Exercises

  1. Create a 3×3 identity matrix in both NumPy and PyTorch.
  2. Compute element-wise sin(x) + cos(x)^2 for x = [0, π/2, π].
  3. Perform matrix multiplication of two random 4×4 matrices on GPU.
  4. Convert a NumPy array to PyTorch, modify it, and see shared memory effect.
  5. Write a PyTorch version of np.linalg.norm() using only basic ops.
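
A hint for Exercise 5 — the Euclidean norm reduces to basic ops you have already seen (a sketch for the 2-norm of a vector):

v = torch.tensor([3.0, 4.0])
norm = torch.sqrt((v ** 2).sum())
print(norm)                                   # tensor(5.)
print(np.linalg.norm(np.array([3.0, 4.0])))   # 5.0 — matches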

Final Words

NumPy is great for scientific computing.
PyTorch is NumPy + autograd + GPU + deep learning.

You now have full fluency to move from NumPy → PyTorch!


Keep Learning:
Try torch.nn, DataLoader, and build your first neural net next!


End of Module
Mastered: Arrays → Tensors → Math → GPU → Autograd
Now go build something awesome! 🚀

Last updated: Nov 13, 2025
