Conversation

kavyamali commented Nov 7, 2025

Description

Simplified my previous PR: it now dynamically installs the cu126 PyTorch build for Maxwell GPUs, since cu128 does not support them. No other code is touched.

--no-cache-dir is used in my install command to bypass memory restrictions during installation.


w-e-w (Collaborator) commented Nov 7, 2025

I made a PR as well


> --no-cache-dir is used in my install command to bypass memory restrictions.

What memory restrictions? Your system RAM? How much do you have?

kavyamali (Author)

> I made a PR as well
>
> What memory restrictions? Your system RAM? How much do you have?

16GB DDR3

kavyamali (Author) commented Nov 7, 2025

> I made a PR as well
>
> What memory restrictions? Your system RAM? How much do you have?

I can edit my code to include Pascal as well:

    def cu126():
        """Use cu126 wheels for Maxwell and Pascal GPUs (compute capability 5.x and 6.x)."""
        cc = get_cuda_comp_cap()
        if 5.0 <= cc < 7.0:
            # --no-cache-dir skips pip's download cache to reduce memory/disk pressure
            return os.environ.get('TORCH_COMMAND', "pip install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126")
        return None

    def prepare_environment():
        torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu128")
        # Fall back to the default cu128 command when cu126() returns None
        torch_command = cu126() or os.environ.get('TORCH_COMMAND', f"pip install torch==2.7.0 torchvision==0.22.0 --extra-index-url {torch_index_url}")
