I'm working on installing Python packages on an offline PC and need some advice. I've been following a method I found previously: I create a virtual environment with `python -m venv .`, activate it with the activation script (`.\Scripts\activate` on Windows, `source ./bin/activate` on Linux), and install the package I want with `pip install <package>`. After that, I run `pip freeze > requirements.txt` to record the installed packages, and I use that file to download the necessary wheels with `pip download -r requirements.txt`. I then transfer the downloaded wheels folder to my offline PC, set up a new virtual environment there, and install the packages from that folder.
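In command form, the whole process looks roughly like this (`<package>` is a placeholder; I point pip at the transferred folder with `--no-index --find-links` so it never tries to reach the network):

```
# On the online PC
python -m venv .
.\Scripts\activate                 # Windows; on Linux: source ./bin/activate
pip install <package>
pip freeze > requirements.txt
pip download -r requirements.txt -d wheels

# On the offline PC, after copying the wheels folder over
python -m venv .
.\Scripts\activate
pip install --no-index --find-links wheels -r requirements.txt
```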
This method works fine as long as the packages are available as wheel files. However, I've run into packages whose dependencies are only published as source distributions, so `pip download` fetches `tar.gz` files instead of wheels. When I try to build these on the offline PC, the build fails due to missing dependencies, missing build tools, or mismatched setuptools versions. I'm looking for solutions or workarounds for this issue!
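For what it's worth, one way I can spot the offending packages ahead of time is to force wheels-only downloads; pip then errors out on anything that only exists as an sdist:

```
pip download -r requirements.txt -d wheels --only-binary=:all:
```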
3 Answers
Have you thought about using Docker? It's really useful for creating consistent environments. You could build everything into a Docker image on a machine with internet access, export it with `docker save`, and load it on the offline PC with `docker load`, sidestepping the dependency issues entirely.
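A rough sketch of what I mean (base image, image name, and entry point are just examples):

```
# Dockerfile — built on a machine with internet access
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

Then move the image to the offline PC as a tar archive:

```
docker build -t myapp .           # online machine
docker save -o myapp.tar myapp    # copy this file across
docker load -i myapp.tar          # offline PC (needs Docker installed)
docker run myapp
```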
You might want to consider using Nuitka for this. It helps with compiling Python code into standalone executables, which can include those tricky dependencies you’re having trouble with. It’s powerful and might save you some headaches!
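If you go that route, a minimal invocation might look like this (assuming `main.py` is your entry point; you compile on the online machine and just copy the resulting binary across):

```
pip install nuitka
python -m nuitka --standalone --onefile main.py
```

`--standalone` bundles the interpreter and all imported modules; `--onefile` packs everything into a single executable.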
I usually package my scripts into standalone executables. Sure, the file sizes are bigger, but it eliminates dependency issues and makes distribution much simpler! Just curious though, what do you use for packaging?
I typically use PyInstaller for that. It handles a lot of the heavy lifting.
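For example (again assuming a `main.py` entry point), building on the online machine is just:

```
pip install pyinstaller
pyinstaller --onefile main.py
# the executable ends up in the dist/ folder
```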
Totally agree! Nuitka is a great tool. It's worked wonders for us.