End-to-End ML Project: Fraud Detection System

Goal: Build a production-ready fraud detection system in under 2 hours — your capstone portfolio project.

Pipeline: data → clean → model → API → Streamlit dashboard

Dataset: Credit Card Fraud (284k rows)
Tech Stack: Python, Pandas, Scikit-learn, FastAPI, Streamlit, Docker (optional)
Outcome: Live dashboard + API → "Fraud Score: 98.7%"


Project Structure

fraud-detection-system/
├── data/
│   └── creditcard.csv
├── notebooks/
│   └── 01_eda.ipynb
├── src/
│   ├── data_loader.py
│   ├── data_cleaner.py
│   ├── model.py
│   ├── api.py
│   └── app.py
├── models/
│   └── fraud_model.pkl
├── requirements.txt
├── Dockerfile
└── README.md

Step 1: Data → Load & Explore

# src/data_loader.py
import pandas as pd

def load_data(path="data/creditcard.csv"):
    df = pd.read_csv(path)
    print(f"Loaded {df.shape[0]:,} rows × {df.shape[1]} cols")
    print(f"Fraud rate: {df['Class'].mean():.4%}")
    return df

Key Insight:

Only 0.17% fraud → highly imbalanced → need SMOTE + class weights
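
To see that imbalance for yourself, a quick check (a small sketch that assumes `load_data` from Step 1 and the standard Kaggle class counts):

```python
# quick imbalance check
from src.data_loader import load_data

df = load_data()
print(df['Class'].value_counts())                 # roughly 284,315 legit vs. 492 fraud
print(df.groupby('Class')['Amount'].describe())   # compare amount distributions by class
```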


Step 2: Clean → Preprocess Pipeline

# src/data_cleaner.py
from sklearn.preprocessing import StandardScaler

def clean_and_scale(df):
    X = df.drop('Class', axis=1)
    y = df['Class']

    # Scale Amount and Time with one scaler so it can be saved and reused at inference
    scaler = StandardScaler()
    X[['Amount', 'Time']] = scaler.fit_transform(X[['Amount', 'Time']])

    return X, y, scaler
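
One thing to watch: the fitted scaler has to travel with the model, or the API in Step 4 will score raw, unscaled Amount/Time values. A sketch of an alternative design (not the code above, just an option) that bundles preprocessing and model into a single scikit-learn Pipeline:

```python
# Alternative sketch: one saved artifact that scales and predicts
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

preprocess = ColumnTransformer(
    [("scale", StandardScaler(), ["Amount", "Time"])],
    remainder="passthrough"            # V1–V28 are PCA components, leave them as-is
)
pipeline = Pipeline([("prep", preprocess), ("clf", XGBClassifier(eval_metric="auc"))])
# pipeline.fit(X_train, y_train); joblib.dump(pipeline, "models/fraud_pipeline.pkl")
```

If you want SMOTE inside that pipeline, swap in `imblearn.pipeline.Pipeline`, which resamples only during `fit`.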

Step 3: Model → XGBoost with SMOTE

# src/model.py
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, roc_auc_score
from imblearn.over_sampling import SMOTE
import joblib

def train_model(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )

    # Oversample fraud cases on the training split only — never on the test set
    smote = SMOTE(random_state=42)
    X_train_res, y_train_res = smote.fit_resample(X_train, y_train)

    # negatives / positives; ≈1 after SMOTE, >1 if you skip resampling
    scale_pos_weight = (y_train_res == 0).sum() / (y_train_res == 1).sum()

    model = xgb.XGBClassifier(
        scale_pos_weight=scale_pos_weight,
        eval_metric='auc',
        random_state=42
    )
    model.fit(X_train_res, y_train_res)

    # Evaluate on the untouched test split
    y_pred = model.predict(X_test)
    y_prob = model.predict_proba(X_test)[:, 1]
    print("AUC:", roc_auc_score(y_test, y_prob))
    print(classification_report(y_test, y_pred))

    # Save
    joblib.dump(model, "models/fraud_model.pkl")
    return model
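
A small driver script ties Steps 1–3 together. This is a sketch: `train.py`, the `models/scaler.pkl` dump, and the import paths are assumptions, not files from the tree above.

```python
# train.py — load → clean → train → save
import joblib
from src.data_loader import load_data
from src.data_cleaner import clean_and_scale
from src.model import train_model

if __name__ == "__main__":
    df = load_data()
    X, y, scaler = clean_and_scale(df)
    train_model(X, y)                          # writes models/fraud_model.pkl
    joblib.dump(scaler, "models/scaler.pkl")   # keep the scaler so the API can reuse it
```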

Result:

AUC: 0.9987
              precision    recall  f1-score   support
           0       1.00      1.00      1.00     56863
           1       0.95      0.86      0.90        98
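
The 0.5 cutoff used in Step 4 is a default, not a decision. A short sketch (assuming the `model`, `X_test`, and `y_test` from `train_model` above) for picking an operating threshold off the precision-recall curve:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def pick_threshold(model, X_test, y_test, min_precision=0.90):
    """Lowest probability cutoff whose test-set precision is still >= min_precision."""
    probs = model.predict_proba(X_test)[:, 1]
    precision, recall, thresholds = precision_recall_curve(y_test, probs)
    ok = np.where(precision[:-1] >= min_precision)[0]   # drop last point to align with thresholds
    idx = ok[0] if len(ok) else len(thresholds) - 1
    print(f"threshold={thresholds[idx]:.3f}  precision={precision[idx]:.2f}  recall={recall[idx]:.2f}")
    return thresholds[idx]
```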

Step 4: API → FastAPI Endpoint

# src/api.py
from fastapi import FastAPI
from pydantic import BaseModel
import joblib
import pandas as pd
import uvicorn

app = FastAPI(title="Fraud Detection API")

model = joblib.load("models/fraud_model.pkl")

class Transaction(BaseModel):
    Time: float
    V1: float
    V2: float
    # ... V3–V28 declared the same way
    Amount: float

@app.post("/predict")
def predict_fraud(transaction: Transaction):
    # Column order must match training; in a real deployment, apply the same
    # Amount/Time scaler fitted in clean_and_scale before scoring
    data = pd.DataFrame([transaction.dict()])
    prob = float(model.predict_proba(data)[0, 1])  # cast so the JSON response serializes cleanly
    return {
        "fraud_score": round(prob, 4),
        "is_fraud": prob > 0.5,
        "risk_level": "HIGH" if prob > 0.8 else "MEDIUM" if prob > 0.5 else "LOW"
    }

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)

Test API:

curl -X POST "http://localhost:8000/predict" \
     -H "Content-Type: application/json" \
     -d '{"Time": 0, "V1": -1.3, ..., "Amount": 100}'

Step 5: Dashboard → Streamlit App

# src/app.py
import streamlit as st
import requests
import matplotlib.pyplot as plt

st.title("Real-Time Fraud Detection System")
st.sidebar.header("Input Transaction")

# Input form
with st.sidebar.form("transaction"):
    time = st.number_input("Time", value=0.0)
    amount = st.number_input("Amount", value=100.0)
    v1 = st.number_input("V1", value=-1.359)
    # ... add inputs for V2–V28 the same way
    submitted = st.form_submit_button("Check Fraud")

if submitted:
    payload = {"Time": time, "Amount": amount, "V1": v1}  # ... plus V2–V28
    response = requests.post("http://localhost:8000/predict", json=payload).json()

    col1, col2, col3 = st.columns(3)
    col1.metric("Fraud Score", f"{response['fraud_score']:.4f}")
    col2.metric("Risk Level", response['risk_level'])
    col3.metric("Is Fraud", "YES" if response['is_fraud'] else "NO")

    # Donut-style gauge: red slice = fraud probability
    fig, ax = plt.subplots()
    ax.pie([response['fraud_score'], 1 - response['fraud_score']],
           colors=['red', 'green'], startangle=90)
    ax.text(0, 0, f"{response['fraud_score']:.1%}", ha='center', fontsize=20)
    st.pyplot(fig)

Run:

# Terminal 1
uvicorn src.api:app --reload

# Terminal 2
streamlit run src/app.py

Step 6: Dockerize (Optional but Impressive)

# Dockerfile
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD ["uvicorn", "src.api:app", "--host", "0.0.0.0", "--port", "8000"]
# docker-compose.yml
version: '3'
services:
  api:
    build: .
    ports:
      - "8000:8000"
  dashboard:
    build: .    # reuse the same image; just override the command
    command: streamlit run src/app.py --server.port 8501 --server.address 0.0.0.0
    ports:
      - "8501:8501"
    depends_on:
      - api
    # note: inside Compose, point the dashboard at http://api:8000 instead of localhost
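
A quick smoke test once both services are defined (assuming the Docker Compose v2 `docker compose` CLI; use `docker-compose` on v1):

```bash
docker compose up --build -d      # build the image and start api + dashboard
curl http://localhost:8000/docs   # FastAPI's auto-generated Swagger UI
docker compose logs -f api        # tail the API logs
```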

requirements.txt

pandas
scikit-learn
xgboost
imbalanced-learn
fastapi
uvicorn
streamlit
requests
matplotlib
joblib
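
The unpinned list above will drift over time. Before pushing, one quick way to make the Docker build reproducible (assuming you're inside the project's virtualenv) is to pin the versions you actually tested with:

```bash
pip freeze > requirements.txt   # pin exact versions
```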

README.md (Portfolio Gold)

# Real-Time Fraud Detection System

**Live Demo**: [streamlit.app/fraud-detect](https://yourname-fraud-detection.streamlit.app)  
**API Docs**: [localhost:8000/docs](http://localhost:8000/docs)

## Features
- **99.87% AUC** on imbalanced data
- **SMOTE + XGBoost** with class weighting
- **FastAPI** backend with Pydantic validation
- **Streamlit** real-time dashboard
- **Docker** ready

## How to Run
```bash
docker-compose up
# API: http://localhost:8000
# Dashboard: http://localhost:8501
```

## Results

| Metric | Value |
|--------|-------|
| AUC | 0.9987 |
| Precision (fraud class) | 0.95 |
| Recall (fraud class) | 0.86 |
| F1 (fraud class) | 0.90 |

"Caught 86% of fraud cases; only 5% of flagged transactions were false alarms."

---

## Deploy to Cloud (Bonus)

| Platform | What you get |
|----------|--------------|
| **Streamlit Cloud** | Free dashboard hosting |
| **Render / Railway** | Free FastAPI hosting |
| **Hugging Face Spaces** | Free hosting + Git-based deploys |

---

## Interview Talking Points

| Question | Your Answer |
|--------|------------|
| "How did you handle imbalance?" | **SMOTE + `scale_pos_weight` + AUC focus** |
| "Why XGBoost?" | **Handles non-linearity, missing values, fast** |
| "How is it deployed?" | **FastAPI + Docker + Streamlit** |
| "What would you improve?" | **Drift monitoring, SHAP explainer, A/B test threshold** |

---

## Final Checklist

| Task | Done? |
|------|-------|
| Load & explore data |  |
| Clean + scale |  |
| Train XGBoost + SMOTE |  |
| Save model |  |
| FastAPI `/predict` |  |
| Streamlit dashboard |  |
| Docker compose |  |
| Push to GitHub |  |

**All done?** **You just built a production ML system!**

---

## Next: MLOps & Monitoring
> Add **MLflow**, **Evidently AI**, and **Prometheus** to turn this into a senior-level project

---

**Start Now**:
```bash
mkdir fraud-detection-system && cd fraud-detection-system
wget https://github.com/nsethi31/Kaggle-Data-Credit-Card-Fraud-Detection/archive/master.zip
unzip master.zip
```

Tag me when you deploy live!
This is the project that gets you hired.

Last updated: Nov 09, 2025
