Reference:
ps. The official uv documentation is really considerate: it provides detailed configurations for the various backends (CPU, cu11, cu12, Intel GPU, etc.).
I remember seeing Poetry users consider pinning a specific whl path during installation (see the sketch below), but that approach has significant limitations, because a wheel locks in the Python version and the target system.
Installing a specific PyTorch build (f/e CPU-only) with Poetry
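For context, that wheel-path approach looks roughly like this in Poetry. This is a hypothetical pyproject.toml fragment and the exact wheel URL is only illustrative; note the cp310 and linux_x86_64 tags baked into the filename, which is precisely why it locks the Python version and platform:

```toml
[tool.poetry.dependencies]
python = "^3.10"
# Direct wheel URL: works, but only for CPython 3.10 on x86_64 Linux
torch = { url = "https://download.pytorch.org/whl/cpu/torch-2.1.0%2Bcpu-cp310-cp310-linux_x86_64.whl" }
```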
Problem Description:
Under normal circumstances, when we use `uv add torch==2.1.0`, the CPU+CUDA build of torch gets installed:
```
xnne@xnne-PC:~/code/Auto_Caption_Generated_Offline$ uv add torch==2.1.0 torchaudio==2.1.0
⠴ nvidia-cusparse-cu12==12.1.0.106 ^C
```
To start, consider the following (default) configuration, which would be generated by running `uv init --python 3.12` followed by `uv add torch torchvision`.
In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.4):
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "torch>=2.6.0",
    "torchvision>=0.21.0",
]
```
This is sometimes not what we want. For example, I do not have the NVIDIA driver installed on my deepin machine, so I ended up downloading 3~4 GB of packages that I cannot use at all.
I tried using `uv add torch==2.1.0+cpu -f https://download.pytorch.org/whl/torch_stable.html`:
```
xnne@xnne-PC:~/code/Auto_Caption_Generated_Offline$ uv add torch==2.1.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
Resolved 63 packages in 10.37s
Built auto-caption-generate-offline @ fil
⠹ Preparing packages... (4/5)
torch ------------------------------ 19.06 MiB/176.29 MiB
```
It did download the CPU version of torch, but when I tried to install the project from my remote repository, I ran into a problem:
```
xnne@xnne-PC:~/code/test/Auto_Caption_Generated_Offline$ uv pip install git+https://github.com/MrXnneHang/[email protected]
 Updated https://github.com/MrXnneHang/Auto_Caption_Generated_Offline (12065e01ec1dc11f8f224fbb132cfd1c18ec3ac1)
  × No solution found when resolving dependencies:
  ╰─▶ Because there is no version of torch==2.1.0+cpu and auto-caption-generate-offline==2.4.0 depends on torch==2.1.0+cpu, we can conclude that auto-caption-generate-offline==2.4.0 cannot be used.
      And because only auto-caption-generate-offline==2.4.0 is available and you require auto-caption-generate-offline, we can conclude that your requirements are unsatisfiable.
```
Reason:
The reason is that the PyTorch builds in question (the +cpu wheels) are not uploaded to the PyPI index.
From a packaging perspective, PyTorch has a few uncommon characteristics:

- Many PyTorch wheels are hosted on a dedicated index, rather than the Python Package Index (PyPI). As such, installing PyTorch often requires configuring a project to use the PyTorch index.
- PyTorch produces distinct builds for each accelerator (e.g., CPU-only, CUDA). Since there's no standardized mechanism for specifying these accelerators when publishing or installing, PyTorch encodes them in the local version specifier. As such, PyTorch versions will often look like 2.5.1+cpu, 2.5.1+cu121, etc.
- Builds for different accelerators are published to different indexes. For example, the +cpu builds are published on https://download.pytorch.org/whl/cpu, while the +cu121 builds are published on https://download.pytorch.org/whl/cu121.
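As a quick illustration of the last point (the command pattern mirrors PyTorch's own install instructions; the version is pinned to match this post), the +cpu build can only be reached by pointing the installer at the dedicated index:

```bash
# torch==2.1.0+cpu lives on the CPU index, not on PyPI:
uv pip install "torch==2.1.0+cpu" --index-url https://download.pytorch.org/whl/cpu
```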
Solution:
Finally, the relevant configuration I settled on is as follows:
```toml
dependencies = [
    "funasr==1.2.4",
    "pyaml==25.1.0",
    "torch==2.1.0",
    "torchaudio==2.1.0",
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu" },
]
torchaudio = [
    { index = "pytorch-cpu" },
]
```
Then we run `uv lock` and push it up.
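Concretely, that step is just (a sketch; the commit message is only an example):

```bash
uv lock
git add pyproject.toml uv.lock
git commit -m "pin torch to the CPU-only build via the pytorch-cpu index"
git push
```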
ps: When configuring a CUDA version, consider targeting an older CUDA release such as 11.8 rather than the latest 12.x (or even 13.x), because users' drivers are not always up to date, and newer drivers remain compatible with older CUDA builds. Unless there is a significant performance improvement, there is generally not much difference.
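For example, a CUDA 11.8 variant of the configuration above might look like this (a sketch mirroring the CPU config; make sure the pinned package versions actually publish +cu118 wheels):

```toml
[[tool.uv.index]]
name = "pytorch-cu118"
url = "https://download.pytorch.org/whl/cu118"
explicit = true

[tool.uv.sources]
torch = [
    { index = "pytorch-cu118" },
]
torchaudio = [
    { index = "pytorch-cu118" },
]
```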
Install from GitHub:
Finally succeeded =-=.
```
xnne@xnne-PC:~/code/test/Auto_Caption_Generated_Offline$ uv venv -p 3.10 --seed
Using CPython 3.10.16
Creating virtual environment with seed packages at: .venv
 + pip==25.0.1
 + setuptools==75.8.2
 + wheel==0.45.1
xnne@xnne-PC:~/code/test/Auto_Caption_Generated_Offline$ uv pip install git+https://github.com/MrXnneHang/[email protected]
Resolved 63 packages in 5.85s
Prepared 2 packages in 11m 45s
...
 + torch==2.1.0+cpu
 + torch-complex==0.4.4
 + torchaudio==2.1.0+cpu
 + tqdm==4.67.1
...
```
However, at runtime there was a conflict between the NumPy version torch was compiled against and the one pulled in by funasr:
```
xnne@xnne-PC:~/code/test/Auto_Caption_Generated_Offline$ uv run test-ACGO
Built auto-caption-generate-offline @ file:///home/xnne/code/test/Auto_Caption_Generated_Offline
Uninstalled 1 package in 0.57ms
Installed 1 package in 0.95ms

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.1.3 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):
  File "/home/xnne/code/test/Auto_Caption_Generated_Offline/.venv/bin/test-ACGO", line 4, in <module>
    from uiya.test import main
  File "/home/xnne/code/test/Auto_Caption_Generated_Offline/src/uiya/test.py", line 1, in <module>
    import funasr
```
Manually downgrading NumPy to 1.26.4 resolved the issue:

```
uv add numpy==1.26.4
uv lock
```
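After that, the dependencies block in pyproject.toml presumably ends up like this (hypothetical; it is just the solution config from above plus the extra pin):

```toml
dependencies = [
    "funasr==1.2.4",
    "numpy==1.26.4",  # pinned below 2.0: the compiled modules above were built against NumPy 1.x
    "pyaml==25.1.0",
    "torch==2.1.0",
    "torchaudio==2.1.0",
]
```

Reinstalling from GitHub then picks up the pin: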
```
xnne@xnne-PC:~/code/test$ uv venv -p 3.10 --seed
Using CPython 3.10.16
Creating virtual environment with seed packages at: .venv
 + pip==25.0.1
 + setuptools==75.8.2
 + wheel==0.45.1
Activate with: source .venv/bin/activate
xnne@xnne-PC:~/code/test$ uv pip install git+https://github.com/MrXnneHang/[email protected]
Resolved 63 packages in 7.90s
Prepared 2 packages in 603ms
Installed 63 packages in 259ms
 + aliyun-python-sdk-core==2.16.0
 + aliyun-python-sdk-kms==2.16.5
 + antlr4-python3-runtime==4.9.3
 + audioread==3.0.1
 + auto-caption-generate-offline==2.4.0 (from git+https://github.com/MrXnneHang/Auto_Caption_Generated_Offline@5f03a04ebdbe4b7a1329302b551e58092e8af9ee)
...
 + torch==2.1.0+cpu
 + torch-complex==0.4.4
 + torchaudio==2.1.0+cpu
 + tqdm==4.67.1
...
xnne@xnne-PC:~/code/test$ uv run test-ACGO
funasr:1.2.4
torch:2.1.0+cpu
torchaudio:2.1.0+cpu
```
Oops:
There was a small mishap along the way. When I ran `uv pip install` followed by `uv run` in the project directory, with pyproject.toml and uv.lock present, the version that actually ran was not the one I had installed, but the one built from the pyproject.toml in my local project. As a result, my local code was out of date, and `uv run` kept reproducing the original bug. This was resolved with `git pull`.
So, if you have done a `git pull`, you don't need `uv pip install git+...`; just run `uv run` directly.
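In other words, the two workflows look like this (a sketch; test-ACGO is the entry point from the logs above):

```bash
# Inside the project checkout (pyproject.toml and uv.lock present):
git pull           # update the source; uv run rebuilds from the local project
uv run test-ACGO   # no `uv pip install git+...` needed here

# Outside a checkout, install straight from GitHub instead:
uv pip install git+https://github.com/MrXnneHang/[email protected]
```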
If I run into issues with the CUDA version later, I may add more information here.