Is there a way (a list of manual steps) to migrate third-party modules installed in one Python installation on one machine to another machine?
This would be of great help to me, because I have installed a set of third-party modules on one of my machines (using pip) and I want to migrate this setup to another machine where I cannot install with pip (due to network restrictions).
As schlamar said here:
Here is a completely different suggestion. It is recommended if you want to keep the packages on the two PCs synchronized, rather than just clone everything once.
It only works for packages that you install with pip; it does not work for packages that are not installable or installed with pip.
Set up the pip cache on a network storage location / USB stick that is accessible from both PCs (see https://stackoverflow.com/a/4806458/851737 for instructions)
Freeze your current package environment from the source PC into a requirements file:
$ pip freeze > req.txt
Copy the req file to the target PC and install the packages:
$ pip install -r req.txt
If you keep req.txt under version control (VCS), you can automate and synchronize this process very smoothly.
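If the target PC has no network access at all (as in the original question), another common approach - separate from the quoted answer above, so treat it as an additional suggestion - is to download the packages on the source PC with a reasonably recent pip and install them offline on the target:
$ pip download -r req.txt -d packages/   # on the source PC: download wheels/sdists into packages/
Copy req.txt and the packages/ folder to the target PC (e.g. on the USB stick mentioned above), then:
$ pip install --no-index --find-links=packages/ -r req.txt   # on the target PC: install only from that folder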
Related
I have all the required Python module libraries for my standalone software on my main laptop, but if I run the software on another laptop I get errors. Do I have to reinstall those Python modules (e.g. PySide2, PyQt)? Or should I copy or install the modules into my software folder using this command?
pip install --target=d:\somewhere\other\than\the\default package_name
Do you know what I mean? I don't see any solution for this kind of problem, so is this question weird, or how do software developers handle this?
In order to work on more than one laptop, I would recommend working in a virtual environment. This will allow you to install all the dependencies at once.
First, you should create the virtual environment.
Then, to get the list of all the libraries on your PC, use pip freeze. Once you have it, create a requirements.txt in the root folder and put the list in there.
Once you have activated the virtual environment, use the command pip install -r requirements.txt to install the libraries that you were using on the other PC, as in the sketch below.
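A minimal sketch of that workflow, assuming Python 3's built-in venv module and a Unix-like shell (on Windows the activation script is venv\Scripts\activate):
python -m venv venv                # create the virtual environment
. venv/bin/activate                # activate it
pip freeze > requirements.txt      # on the source PC: record the installed libraries
pip install -r requirements.txt    # on the target PC: install the same libraries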
I would like to easily export a Python project from one PC to another. When I created the project, I used a virtual environment in order to avoid problems with different package versions.
What I did was to just copy the project folder and paste it onto the destination PC. Once I opened the project with PyCharm, I activated the virtual environment with project_path/venv/Scripts/activate, but when I tried to execute any script, it said it didn't find the modules.
What workflow should I follow in order to create projects and be able to run them from multiple PCs without needing to install all the dependencies?
Since you did not specify your Python version, I will provide a solution that works for both Python 2.x and 3.x.
My suggestion is to create a requirements.txt file containing all your requirements.
This file can be easily prepared using the output from the command:
pip freeze
Then paste the output into your requirements.txt file. When you want to install your Python code on another PC, you can simply run:
pip install -r requirements.txt
to install your requirements again.
Depending on your project, it might also be possible, for example, to create a single EXE file (if you are using Windows machines), but more detail is needed if that is the case.
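One common tool for building such a single-file executable (it is not named in the answer above, so this is only an assumption about what was meant) is PyInstaller; a minimal sketch, with main.py standing in for a hypothetical entry script:
pip install pyinstaller
pyinstaller --onefile main.py      # produces a single executable in the dist/ folder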
In case you are using Python 3 the method that is at the moment arguably more popular in the Python community is Pipenv.
Here's its relevant documentation.
And here you can read a simple example of a workflow.
If you are using Python 3, then use Pipenv. It will automatically create a Pipfile and Pipfile.lock, which ensure that reinstalling the dependencies on a different machine gives you the same packages.
Basic and helpful commands:
pipenv shell            # activate the virtualenv
pipenv install          # install the dependencies listed in the Pipfile
pipenv install requests # install the requests lib and automatically update Pipfile and Pipfile.lock
When I need to work on one of my pet projects, I simply clone the repository as usual (git clone <url>), edit what I need, run the tests, update the setup.py version, commit, push, build the packages and upload them to PyPI.
What is the advantage of using pip install -e? Should I be using it? How would it improve my workflow?
I find pip install -e extremely useful when simultaneously developing a product and a dependency, which I do a lot.
Example:
You build websites using Django for numerous clients, and have also developed an in-house Django app called locations which you reuse across many projects, so you make it pip-installable and version it.
When you work on a project, you install the requirements as usual, which installs locations into site-packages.
But you soon discover that locations could do with some improvements.
So you grab a copy of the locations repository and start making changes. Of course, you need to test these changes in the context of a Django project.
Simply go into your project and type:
pip install -e /path/to/locations/repo
This will replace the installed copy in site-packages with a link to the locations repository, meaning any changes to the code in there will automatically be reflected - just reload the page (so long as you're using the development server).
The link points at the current files in the repository, meaning you can switch branches to see changes, try different things, etc.
The alternative would be to create a new version, push it to PyPI, and hope you've not forgotten anything. If you have many such in-house apps, this quickly becomes untenable.
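A minimal sketch of the editable-install workflow described above, assuming a hypothetical in-house app named locations and a hypothetical repository URL:
git clone https://example.com/locations.git /path/to/locations/repo   # grab a working copy of the dependency
cd /path/to/your/django-project
. venv/bin/activate                       # the project's own virtualenv
pip install -e /path/to/locations/repo    # editable install: edits in the repo are picked up without reinstalling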
For those who don't have time:
If you install your project with the -e flag (e.g. pip install -e mynumpy) and use it in your code (e.g. from mynumpy import some_function), then when you make any change to some_function, you can use the updated function without reinstalling the package.
pip install -e is pip's "editable" (development-mode) install; it is how setuptools-based projects and their dependencies are installed for development via pip.
What you typically do is to install the dependencies:
git clone URL
cd project
run pip install -e . or pip install -e .[dev]*
And now all the dependencies should be installed.
*[dev] is the name of an extras group (extras_require) declared in setup.py; a sketch follows below.
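A minimal sketch of how such a [dev] group might be declared in setup.py, with hypothetical package and dependency names:
# setup.py - hypothetical names, for illustration only
from setuptools import setup, find_packages

setup(
    name="project",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["requests"],      # regular runtime dependencies
    extras_require={
        "dev": ["pytest", "flake8"],    # extras group installed by: pip install -e .[dev]
    },
)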
Other than the setuptools egg format, there is also the wheel format for installing Python packages.
Both are pre-built distribution formats, based on the promise that no building or compilation needs to be performed at install time.
I run Vagrant on Mac OS X. I am coding inside a virtual machine with CentOS 6, and I have the same versions of Python and Ruby in my development and production environment. I have these restrictions:
I cannot manually install. Everything must come through RPM.
I cannot use pip install and gem install to install the libraries I want as the system is managed through Puppet, and everything I add will be removed.
yum has old packages. I usually cannot find the latest versions of the libraries.
I would like to put my libraries locally in a lib directory near my scripts, and create an RPM that includes those frozen versions of dependencies. I cannot find an easy way to bundle my libraries for my scripts and push everything into my production server. I would like to know the easiest way to gather my dependencies in Python and Ruby.
I tried:
virtualenv (with --relocatable option)
PYTHONPATH
sys.path.append("lib path")
I don't know which is the right way to go. Also, for Ruby, is there any way to solve my problems with Bundler? I see that Bundler is for Rails. Does it work for custom small scripts?
I like the approach of Node.js and npm: all packages are stored locally in node_modules. I have the Node.js RPM installed, and I deploy a folder with my application to the production server. I would like to do it this way in Ruby and Python.
I don't know Node, but what you describe for npm seems to be exactly what a virtualenv is. Once the virtualenv is activated, pip installs only within that virtualenv - so Puppet won't interfere. You can write out your current list of packages to a requirements.txt file with pip freeze, and recreate the whole thing again with pip install -r requirements.txt. Ideally you would then deploy with Puppet, and the deploy step would involve creating or updating the virtualenv, activating it, and then running that pip command.
Maybe take a look at Docker?
With Docker you could create an image of your specific environment and deploy that, as sketched below.
https://www.docker.com/whatisdocker/
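A minimal sketch of such an image, assuming a hypothetical project with a requirements.txt and an entry point main.py:
# Dockerfile - hypothetical file names, adjust to your project
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]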
Is there any easy way to export the libs my script needs, so that I can put all of the files into a git repo and run the script from Jenkins without needing to install anything?
Context:
a remote Jenkins that lacks some Python libs (read-only - no access to a terminal)
I need to run my script, which needs external libs such as paramiko, requests, etc.
I have tried freeze.py, but it fails at the make stage
I have found some articles here regarding freeze.py, py2exe, and py2app, but none of those helped me.
You can use a virtual environment to install the required Python dependencies in the workspace. In short, this sets up a local copy of Python and pip into which you can install packages without affecting the system installation. Using virtual environments is also a great way to ensure that dependencies from one job do not impact other jobs. This solution does require pip and virtualenv to be installed on the build machine.
Your build step should do something like:
virtualenv venv
. venv/bin/activate
pip install -r requirements.txt
# ... perform build, tests ...
If you separate your build into several steps, the environment variables set in the activate script will not be available in subsequent steps. You will need to either source the activate script in each step, or adjust the PATH (e.g. via EnvInject) so that the virtualenv python is run.
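A minimal sketch of both options for a follow-up build step, assuming the virtualenv created above lives in venv/ and a hypothetical test command:
# Option 1: re-source the activate script at the start of every step
. venv/bin/activate
python -m pytest          # hypothetical test command

# Option 2: skip activation and call the virtualenv's interpreter directly
venv/bin/python -m pytest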