51 changes: 51 additions & 0 deletions .github/workflows/docker-build.yml
@@ -0,0 +1,51 @@
name: Docker Build and Test

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build Backend
        run: |
          cd Backend
          docker build -t inpactai-backend:test .

      - name: Build Frontend
        run: |
          cd Frontend
          docker build -t inpactai-frontend:test .

      - name: Start services
        run: |
          docker compose up -d
          sleep 30
Comment on lines +30 to +33

Copilot AI commented on Dec 13, 2025:

The workflow will fail in CI because it tries to start services without providing the required environment variables. The docker-compose.yml depends on Backend/.env and Frontend/.env files which won't exist in CI. Consider either creating mock .env files in CI or providing environment variables through GitHub Secrets.
      - name: Check backend health
        run: |
          curl -f http://localhost:8000/ || exit 1

      - name: Check frontend health
        run: |
          curl -f http://localhost:5173/ || exit 1
Comment on lines +30 to +41

Contributor

🛠️ Refactor suggestion | 🟠 Major

Replace fixed sleep with proper health check polling.

Using sleep 30 is unreliable: services may take longer to start, or be ready sooner. The workflow should use Docker Compose's built-in health check waiting or implement retry logic.

Apply this diff to use Docker Compose's wait functionality:
     - name: Start services
       run: |
-        docker compose up -d
-        sleep 30
+        docker compose up -d --wait --wait-timeout 120
         
     - name: Check backend health
       run: |
-        curl -f http://localhost:8000/ || exit 1
+        for i in {1..30}; do
+          if curl -f http://localhost:8000/; then
+            echo "Backend is healthy"
+            break
+          fi
+          if [ $i -eq 30 ]; then
+            echo "Backend health check failed after 30 attempts"
+            exit 1
+          fi
+          echo "Waiting for backend... attempt $i/30"
+          sleep 2
+        done
         
     - name: Check frontend health
       run: |
-        curl -f http://localhost:5173/ || exit 1
+        for i in {1..30}; do
+          if curl -f http://localhost:5173/; then
+            echo "Frontend is healthy"
+            break
+          fi
+          if [ $i -eq 30 ]; then
+            echo "Frontend health check failed after 30 attempts"
+            exit 1
+          fi
+          echo "Waiting for frontend... attempt $i/30"
+          sleep 2
+        done

Alternatively, if your Docker Compose file defines health checks, the --wait flag alone should be sufficient.

🤖 Prompt for AI Agents
.github/workflows/docker-build.yml around lines 30 to 41: replace the fixed
"sleep 30" with deterministic readiness checks — either use Docker Compose's
built-in wait by running "docker compose up -d --wait" (if services define
healthchecks) or implement polling loops that retry the service endpoints until
healthy (e.g., loop with curl -f and exponential/backoff retries and a timeout)
for both backend (http://localhost:8000/) and frontend (http://localhost:5173/);
ensure the step fails if services do not become healthy within a reasonable
timeout.
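The same retry-until-healthy pattern can also live in a small helper script instead of inline shell, which is easier to test and reuse. A minimal stdlib-only sketch; the `wait_for` name and the script itself are illustrative, not part of this PR:

```python
import time
import urllib.error
import urllib.request

def wait_for(url: str, attempts: int = 30, delay: float = 2.0) -> bool:
    """Poll `url` until it answers with HTTP 2xx, or give up after `attempts` tries."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if 200 <= resp.status < 300:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after a short pause
        if attempt < attempts:
            time.sleep(delay)
    return False
```

A CI step could then run `wait_for("http://localhost:8000/")` and `wait_for("http://localhost:5173/")` and exit non-zero if either returns `False`.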

      - name: Show logs on failure
        if: failure()
        run: |
          docker compose logs

      - name: Cleanup
        if: always()
        run: |
          docker compose down -v
21 changes: 21 additions & 0 deletions Backend/.dockerignore
@@ -0,0 +1,21 @@
__pycache__
*.pyc
*.pyo
*.pyd
.Python
*.so
.env
.venv
env/
venv/
ENV/
.git
.gitignore
.pytest_cache
.coverage
htmlcov/
dist/
build/
*.egg-info/
.DS_Store
*.log
12 changes: 12 additions & 0 deletions Backend/.env.example
@@ -0,0 +1,12 @@
user=postgres
password=your_postgres_password
host=your_postgres_host
port=5432
dbname=postgres
GROQ_API_KEY=your_groq_api_key
SUPABASE_URL=your_supabase_url
SUPABASE_KEY=your_supabase_key
GEMINI_API_KEY=your_gemini_api_key
YOUTUBE_API_KEY=your_youtube_api_key
REDIS_HOST=redis
REDIS_PORT=6379
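A startup check against this variable list makes missing configuration fail loudly instead of surfacing later as opaque connection errors. A sketch, stdlib only; the `missing_env` helper and the exact `REQUIRED` list are illustrative and not defined by this PR:

```python
import os

# Keys mirroring Backend/.env.example; extend as the file grows.
REQUIRED = ["SUPABASE_URL", "SUPABASE_KEY", "GROQ_API_KEY", "GEMINI_API_KEY"]

def missing_env(required=REQUIRED, environ=None):
    """Return the names that are unset, empty, or still placeholder values."""
    env = os.environ if environ is None else environ
    missing = []
    for name in required:
        value = env.get(name, "")
        if not value or value.startswith(("your_", "your-")):
            missing.append(name)
    return missing
```

Calling this at application startup and logging the result gives a single, clear error per missing key.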
18 changes: 18 additions & 0 deletions Backend/Dockerfile
@@ -0,0 +1,18 @@
FROM python:3.10-slim

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
Contributor
🛠️ Refactor suggestion | 🟠 Major

Ensure .dockerignore excludes unnecessary files.

Line 14 uses COPY . . which copies all files from the build context. Without a proper .dockerignore file, this could include:

  • .env files with secrets
  • __pycache__ directories
  • .git directory
  • Virtual environment folders
  • Test files

Verify that Backend/.dockerignore exists and includes:

#!/bin/bash
# Check if .dockerignore exists and review its contents
if [ -f "Backend/.dockerignore" ]; then
  echo "=== Backend/.dockerignore contents ==="
  cat Backend/.dockerignore
else
  echo "WARNING: Backend/.dockerignore not found"
fi
🤖 Prompt for AI Agents
Backend/Dockerfile around lines 14 to 14, the Dockerfile uses COPY . . which
will include everything in the build context; create or update
Backend/.dockerignore to exclude secrets and unnecessary files (e.g., .env,
*.pem, .git, __pycache__, *.pyc, venv/ or .venv/, node_modules/, tests/,
Dockerfile, .dockerignore, build artifacts) and commit it so sensitive and bulky
files aren’t sent to the daemon; alternatively, narrow the COPY to only required
files (or use a dedicated build context) after verifying the .dockerignore is
present and contains the above exclusions.


EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
33 changes: 33 additions & 0 deletions Backend/Dockerfile.prod
@@ -0,0 +1,33 @@
FROM python:3.10-slim AS builder

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt

FROM python:3.10-slim

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    && rm -rf /var/lib/apt/lists/* \
    && groupadd -r appuser && useradd -r -g appuser appuser

COPY --from=builder /root/.local /root/.local
COPY . .

RUN chown -R appuser:appuser /app

USER appuser

ENV PATH=/root/.local/bin:$PATH
Comment on lines +22 to +29

Contributor
⚠️ Potential issue | 🔴 Critical

Critical: Permission mismatch prevents appuser from accessing installed packages.

The Dockerfile installs Python packages to /root/.local (line 11 with --user flag), copies them to the runtime stage (line 22), then switches to the non-root appuser (line 27). However, appuser cannot access /root/.local due to directory permissions, causing the application to fail at runtime when trying to import packages.

Apply this diff to fix the permission issue:

-COPY --from=builder /root/.local /root/.local
+COPY --from=builder /root/.local /home/appuser/.local
 COPY . .
 
 RUN chown -R appuser:appuser /app
 
 USER appuser
 
-ENV PATH=/root/.local/bin:$PATH
+ENV PATH=/home/appuser/.local/bin:$PATH

Alternatively, you can copy to a shared location accessible to all users:

-COPY --from=builder /root/.local /root/.local
+COPY --from=builder /root/.local /usr/local
 COPY . .
 
 RUN chown -R appuser:appuser /app
 
 USER appuser
 
-ENV PATH=/root/.local/bin:$PATH
+ENV PATH=/usr/local/bin:$PATH
Suggested change:

-COPY --from=builder /root/.local /root/.local
+COPY --from=builder /root/.local /home/appuser/.local
 COPY . .
-RUN chown -R appuser:appuser /app
+RUN chown -R appuser:appuser /app && \
+    chown -R appuser:appuser /home/appuser/.local
 USER appuser
-ENV PATH=/root/.local/bin:$PATH
+ENV PATH=/home/appuser/.local/bin:$PATH
🤖 Prompt for AI Agents
In Backend/Dockerfile.prod around lines 22-29, the runtime stage copies packages
into /root/.local and then switches to non-root appuser, causing permission
errors; to fix, copy the builder's .local into a shared or appuser-owned
directory (e.g., /opt/.local or /home/appuser/.local) instead of /root/.local,
chown that directory recursively to appuser:appuser after copying, and update
ENV PATH to point at the chosen location (e.g., /opt/.local/bin or
/home/appuser/.local/bin) so appuser can access installed packages at runtime.


EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
6 changes: 5 additions & 1 deletion Backend/app/main.py
@@ -44,7 +44,11 @@ async def lifespan(app: FastAPI):
 # Add CORS middleware
 app.add_middleware(
     CORSMiddleware,
-    allow_origins=["http://localhost:5173"],
+    allow_origins=[
+        "http://localhost:5173",
+        "http://frontend:5173",
+        "http://127.0.0.1:5173"
+    ],
     allow_credentials=True,
     allow_methods=["*"],
     allow_headers=["*"],
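Hard-coding the origin list means another code edit for each new environment. One alternative is to read it from an environment variable; the `ALLOWED_ORIGINS` variable and `parse_origins` helper below are assumptions for illustration, not something this PR defines:

```python
import os

def parse_origins(raw: str, default=("http://localhost:5173",)):
    """Split a comma-separated origins string into a clean list, falling back to a default."""
    items = [o.strip() for o in raw.split(",") if o.strip()]
    return items if items else list(default)

# Sketch of use inside app setup:
# app.add_middleware(
#     CORSMiddleware,
#     allow_origins=parse_origins(os.getenv("ALLOWED_ORIGINS", "")),
#     allow_credentials=True,
#     allow_methods=["*"],
#     allow_headers=["*"],
# )
```

With this, docker-compose.yml and production configs can each set their own origin list without touching main.py.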
24 changes: 18 additions & 6 deletions Backend/app/routes/post.py
@@ -18,25 +18,37 @@
import uuid
from datetime import datetime, timezone

 # Load environment variables
 load_dotenv()
-url: str = os.getenv("SUPABASE_URL")
-key: str = os.getenv("SUPABASE_KEY")
-supabase: Client = create_client(url, key)
 
+url: str = os.getenv("SUPABASE_URL", "")
+key: str = os.getenv("SUPABASE_KEY", "")
+
+if not url or not key or "your-" in url:
+    print("⚠️ Supabase credentials not configured. Some features will be limited.")
Copilot AI commented on Dec 13, 2025:

The warning message "Some features will be limited" is vague and doesn't inform the user which features will be affected or what action they should take. Consider providing a more specific message such as "User management features require Supabase configuration. Please set SUPABASE_URL and SUPABASE_KEY in your .env file."

Suggested change:
-    print("⚠️ Supabase credentials not configured. Some features will be limited.")
+    print("⚠️ Supabase credentials not configured. User management features require Supabase configuration. Please set SUPABASE_URL and SUPABASE_KEY in your .env file.")
Copilot AI commented on Dec 13, 2025:

Print statement may execute during import.
+    supabase = None
+else:
+    try:
+        supabase: Client = create_client(url, key)
+    except Exception as e:
+        print(f"❌ Supabase connection failed: {e}")
Copilot AI commented on Dec 13, 2025:

Print statement may execute during import.
+        supabase = None
 
 # Define Router
 router = APIRouter()
 
 # Helper Functions
 def generate_uuid():
     return str(uuid.uuid4())
 
 def current_timestamp():
     return datetime.now(timezone.utc).isoformat()

 # ========== USER ROUTES ==========
+def check_supabase():
+    if not supabase:
+        raise HTTPException(status_code=503, detail="Database service unavailable. Please configure Supabase credentials.")
 
 @router.post("/users/")
 async def create_user(user: UserCreate):
+    check_supabase()
     user_id = generate_uuid()
     t = current_timestamp()
175 changes: 175 additions & 0 deletions DOCKER-ARCHITECTURE.md
@@ -0,0 +1,175 @@
# Docker Architecture Diagram

```
┌─────────────────────────────────────────────────────────────────────┐
│ Docker Host Machine │
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ Docker Network: inpactai-network │ │
│ │ │ │
│  │   ┌──────────────────┐    ┌──────────────────┐    ┌───────────┐ │  │
│  │   │     Frontend     │    │     Backend      │    │   Redis   │ │  │
│  │   │    Container     │    │    Container     │    │ Container │ │  │
│  │   │                  │    │                  │    │           │ │  │
│  │   │  Node 18-alpine  │    │ Python 3.10-slim │    │  Redis 7  │ │  │
│  │   │ Vite Dev Server  │◄───┤ FastAPI + uvicorn│    │  Alpine   │ │  │
│  │   │    Port: 5173    │    │    Port: 8000    │◄───┤Port: 6379 │ │  │
│  │   └──────────────────┘    └──────────────────┘    └───────────┘ │  │
│ │ │ │ │ │ │
│ │ │ Volume Mount │ Volume Mount │ │ │
│ │ │ (Hot Reload) │ (Hot Reload) │ │ │
│ │ ▼ ▼ ▼ │ │
│ │ ┌──────────────┐ ┌─────────────┐ ┌──────────┐│ │
│ │ │ ./Frontend │ │ ./Backend │ │redis_data││ │
│ │ │ /app │ │ /app │ │ Volume ││ │
│ │ └──────────────┘ └─────────────┘ └──────────┘│ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ Port Mappings: │
│ ┌─────────────┬──────────────┬────────────────────────────────┐ │
│ │ Host:5173 │ ──────────► │ frontend:5173 (React + Vite) │ │
│ │ Host:8000 │ ──────────► │ backend:8000 (FastAPI) │ │
│ │ Host:6379 │ ──────────► │ redis:6379 (Cache) │ │
│ └─────────────┴──────────────┴────────────────────────────────┘ │
│ │
│ Environment Files: │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Backend/.env → Backend Container │ │
│ │ Frontend/.env → Frontend Container │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
└───────────────────────────────────────────────────────────────────────┘

User Browser
http://localhost:5173 ──► Frontend Container ──► React UI
│ API Calls
http://backend:8000 ──► Backend Container ──► FastAPI
│ Cache/PubSub
redis:6379 ──► Redis Container


Communication Flow:
──────────────────────

1. User accesses http://localhost:5173
└─► Docker routes to Frontend Container

2. Frontend makes API call to /api/*
└─► Vite proxy forwards to http://backend:8000
└─► Docker network resolves 'backend' to Backend Container

3. Backend connects to Redis
└─► Uses REDIS_HOST=redis environment variable
└─► Docker network resolves 'redis' to Redis Container

4. Backend connects to Supabase
└─► Uses credentials from Backend/.env
└─► External connection via internet


Service Dependencies:
─────────────────────

redis (no dependencies)
└─► backend (depends on redis)
└─► frontend (depends on backend)


Health Checks:
──────────────

Redis: redis-cli ping
Backend: curl http://localhost:8000/
Frontend: No health check (depends on backend health)


Volume Mounts:
──────────────

Development:
./Backend:/app (Hot reload for Python)
./Frontend:/app (Hot reload for Vite)
/app/__pycache__ (Excluded)
/app/node_modules (Excluded)

Production:
redis_data:/data (Persistent Redis storage only)


Build Process:
──────────────

Development:
1. Copy package files
2. Install dependencies
3. Copy source code
4. Start dev server with hot reload

Production:
Stage 1: Build
1. Copy package files
2. Install dependencies
3. Copy source code
4. Build optimized bundle

Stage 2: Serve
1. Copy built artifacts
2. Use minimal runtime (nginx for frontend)
3. Serve optimized files


Network Isolation:
──────────────────

Internal Network (inpactai-network):
- frontend ←→ backend (HTTP)
- backend ←→ redis (TCP)

External Access:
- Host machine → All containers (via port mapping)
- Backend → Supabase (via internet)
- Backend → External APIs (via internet)


Security Model:
───────────────

Development:
- Root user in containers (for hot reload)
- Source code mounted as volumes
- Debug logging enabled

Production:
- Non-root user in containers
- No volume mounts (except data)
- Production logging
- Resource limits enforced
- Optimized images
```

## Quick Command Reference

```bash
docker compose up --build    # start (and build) all services
docker compose down          # stop services
docker compose logs -f       # follow logs
docker compose up --build    # rebuild after changes
docker compose down -v       # clean up, including volumes
```

## Service URLs

| Service | Internal | External |
|---------|----------|----------|
| Frontend | frontend:5173 | http://localhost:5173 |
| Backend | backend:8000 | http://localhost:8000 |
| Redis | redis:6379 | localhost:6379 |