Moving project from one computer to another - python

I have a project I built in Python 2.7 on one PC. It uses Django 1.4.5 and some other modules that are kept in site-packages. I tried to copy the contents of Lib\site-packages to the Python installation on the new PC, but I get missing-module errors when I try to run manage.py runserver. Do I have to install everything again on the new PC and just transfer the project files?

Usually, you keep a list of your project's requirements in a requirements.txt file at the root of your project. Example requirements.txt content:
Django==1.6.5
lxml==3.3
On the new computer, you clone the repository containing the project source code (or transfer it some other way), then install the requirements with the pip package manager:
pip install -r requirements.txt
Also, having a separate virtual environment for every project is essentially a must.
In order to create the list of requirements from your current Python environment (virtual or system-wide), run:
pip freeze > requirements.txt
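Putting the two halves together, a minimal end-to-end sketch might look like this (the repository URL and paths are placeholders, not taken from the question; on Python 2.7 you would use virtualenv instead of python -m venv):
# on the old machine, inside the project's environment
pip freeze > requirements.txt
# on the new machine
git clone https://github.com/you/yourproject
cd yourproject
python -m venv venv                # create a fresh virtual environment
venv\Scripts\activate              # Windows; use `source venv/bin/activate` on Linux/macOS
pip install -r requirements.txt
python manage.py runserver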

Related

How to install all my Python modules in a single shot

I am a beginner in Python. I have built a website using Django, Flask, XML, and WTForms, and I have also used some Python API modules. The website was created successfully and works well on my local machine.
But if I want to run it on another machine that has Python available, I need to install all of the above modules manually.
Do we have something similar to Gradle, Maven, or Ant that will download/install the required modules during my first run?
Kindly help me.
One way is to freeze your current local Python packages into a requirements.txt file and then install everything in one go on another machine.
$ pip freeze > requirements.txt
Copy the requirements file to the other machine, install Python, and then run:
$ pip install -r requirements.txt
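For the modules mentioned in the question, the frozen file might look something like this (the version numbers are purely illustrative):
Django==1.11.4
Flask==0.12.2
WTForms==2.1
Each line pins one package to an exact version, which is what makes the install on the second machine reproducible.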

Transferring a Python project to different computers

I am working on a Python project on both my work computer and my home computer. GitHub has made the experience pretty seamless.
But I'm having a problem with the pyvenv.cfg file in my venv folder. Because my Python SDK has a different file path on my work computer than on my home computer, I have to manually edit pyvenv.cfg and change the home = C:\Users\myName\... path each time I pull the updated version of my project from my other computer; otherwise the interpreter doesn't work.
Does anyone know a solution to this problem?
As confirmed in the comments, you've added the virtual environment folder to your project and included it in the files that you put on GitHub.
That's generally a bad idea, since it defeats part of the purpose of having a virtual environment in the first place. Your virtual environment will contain packages specific to the platform and configuration of the machine it is on - you could be developing on Linux on one machine and Windows on another and you'd have bigger problems than just one line in a configuration file.
What you should do:
create a virtual environment in a folder outside your project / source folder.
assuming you're using pip, you can run pip freeze > requirements.txt to create a requirements.txt file, which you can then use on the other system with pip install -r requirements.txt to recreate a virtual environment with exactly the same packages.
All you have to do is keep that requirements.txt up to date and update the virtual environments on either computer whenever it changes, instead of pulling it through GitHub.
In more detail, a simple example (for Windows, very similar for Linux):
create a project folder, e.g. C:\projects\my_project
create a virtual environment for the project, e.g. python -m venv C:\projects\venv\my_project
activate the environment, i.e. C:\projects\venv\my_project\Scripts\activate.bat
install packages, e.g. pip install numpy
save the list of installed packages to a file called requirements.txt in C:\projects\my_project with pip freeze > requirements.txt
store the project in a Git repo, including that file
on another development machine, clone or pull the project, e.g. git clone https://github.com/my_project D:\workwork\projects\my_project
on that machine, create a new virtual environment, e.g. python -m venv D:\workwork\venv\my_project
activate the environment, i.e. D:\workwork\venv\my_project\Scripts\activate.bat
install the packages that are required with pip install -r D:\workwork\projects\my_project\requirements.txt
Since you say you're using PyCharm, it's easier still: just make sure that the environment created by PyCharm sits outside your project folder. I like to keep all my virtual environments in one folder, with venv names that match the project names.
You can still create a requirements.txt in your project and when you pull the project to another PC with PyCharm, just do the same: create a venv outside the project folder. PyCharm will recognise that it's missing packages from the requirements file and offer to install them for you.
You shouldn't keep the full virtualenv in source control, since more often than not it's much larger than your code, and there may be platform-and-interpreter-version specific bits and bobs in there.
Instead, you should save the packages required -- the tried and tested way is a requirements.txt file (but there are plenty of alternatives such as Pyenv, Poetry, Dephell, ...) -- and recreate the virtualenv on each machine you need to run the project on.
To save a requirements file for a pre-existing project, you can
pip freeze > requirements.txt
when the virtualenv is active.
Then, you can use
pip install -r requirements.txt
to install exactly those packages.
In general, I like to use pip-tools, so I only manage a requirements.in file with package requirement names, and the pip-compile utility then locks those requirements with exact versions into requirements.txt.
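A minimal pip-tools sketch of that workflow (the two package names in requirements.in are only examples):
pip install pip-tools
echo django > requirements.in      # requirements.in lists direct, unpinned dependencies
echo requests >> requirements.in
pip-compile requirements.in        # resolves and pins exact versions into requirements.txt
pip-sync requirements.txt          # makes the active virtualenv match the compiled file exactly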

Export Python project from one PC to another

I would like to easily export one Python project from one PC to other. When I created the project, I used a virtual environment in order to avoid problems with different package versions.
What I did was just copy the project folder and paste it on the destination PC. Once I opened the project with PyCharm, I activated the virtual environment with project_path/venv/Scripts/activate, but when I tried to execute any script, it said it didn't find the modules.
What workflow should I follow in order to create projects and be able to run them from multiple PCs without needing to install all the dependencies?
Since you did not specify your Python version, I will provide a solution that works for both Python 2.x and 3.x.
My suggestion is to create a requirements.txt file containing all your requirements.
This file can be easily prepared using the output from the command:
pip freeze
Then you can paste the output into your requirements.txt file, and when you are going to install your Python code on another PC you can simply run:
pip install -r requirements.txt
to install your requirements again.
Depending on your project it could be possible, for example, to create a single EXE file (if you are targeting Windows machines), but more detail is needed if this is the case.
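To illustrate (the answer doesn't name a tool, so take this as one common option, with main.py as a placeholder for your entry script), PyInstaller can bundle a script and its dependencies into a single executable:
pip install pyinstaller
pyinstaller --onefile main.py
The bundled executable ends up in the dist/ folder and runs without a separate Python installation.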
In case you are using Python 3 the method that is at the moment arguably more popular in the Python community is Pipenv.
Pipenv's documentation walks through a simple example of this workflow.
If you are using Python 3, use Pipenv. It will automatically create a Pipfile and a Pipfile.lock, which ensure that reinstalling the dependencies on a different machine gives you the same packages.
Basic and helpful commands:
pipenv shell # activate the virtualenv
pipenv install # install the dependencies listed in the Pipfile
pipenv install requests # install the requests lib and auto-update Pipfile and Pipfile.lock
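For reference, the Pipfile generated by those commands might look roughly like this (the Python version is just an example):
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
requests = "*"

[dev-packages]

[requires]
python_version = "3.8"
On the other machine, pipenv install reads this file (and the lock file) and recreates the same environment.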

Bundling python dependencies in the project folder itself so as it can be easily shared and start application without downloading any dependencies

Whenever we download any Python dependencies (e.g. Django, MySQL) using pip, they get installed in the root folder, which would be the C drive on Windows (where Python is installed).
How can I install these dependencies in my project folder itself, so that when I start my application it reads the dependencies from the project folder rather than from the C drive?
My goal is that whenever I give my application to a user or a third party, they can start the application directly instead of downloading all the dependencies.
Go install virtualenvwrapper first. After that, create a new virtualenv, activate it, and run pip freeze. You should see nothing in there, because nothing is installed yet. This env will hold all your pip installs and will not use the packages you have installed on your C drive. Anything you pip install while this env is activated will live in this env and this env alone. You can also tie a project to the env using commands from virtualenvwrapper. Your env and project directories should be separate. Deactivate the env to go back to your base environment and run pip freeze again: you will see all the installs you have done on your C drive. A sketch of this flow is shown below.
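A minimal session using virtualenvwrapper's commands (my_project is a placeholder name):
pip install virtualenvwrapper      # on Windows: pip install virtualenvwrapper-win
# on Linux/macOS, source virtualenvwrapper.sh in your shell before using the commands below
mkvirtualenv my_project            # create and activate a new, empty env
pip freeze                         # prints nothing: the env is isolated
pip install django                 # installs into this env only
setvirtualenvproject               # run from the project dir to tie it to this env
deactivate                         # back to the base environment
workon my_project                  # re-activate the env (and jump to its project) later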
A best practice is to create a requirements.txt file and version control it so everyone can use the same versions of the same packages. If you don't want to do this, simply activate your new virtual env and pip install everything you want.

When would the -e, --editable option be useful with pip install?

When would the -e, or --editable option be useful with pip install?
For some projects the last line in requirements.txt is -e .. What does it do exactly?
As the man page says:
-e, --editable <path/url>
Install a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url.
So you would use this when trying to install a package locally, most often in the case when you are developing it on your system. It will just link the package to the original location, basically meaning any changes to the original package would reflect directly in your environment.
An example run can be:
pip install -e .
or
pip install -e ~/ultimate-utils/ultimate-utils-proj-src/
note that the second is the full path to the directory where the setup.py lives.
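For either form to work, the target directory needs packaging metadata; a minimal, hypothetical setup.py (name and version are placeholders) would be enough:
# setup.py -- minimal sketch
from setuptools import setup, find_packages

setup(
    name="ultimate-utils",
    version="0.1.0",
    packages=find_packages(),
)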
Concrete example of using --editable in development
If you play with this test package as in:
cd ~
git clone https://github.com/cirosantilli/vcdvcd
cd vcdvcd
git checkout 5dd4205c37ed0244ecaf443d8106fadb2f9cfbb8
python -m pip install --editable . --user
it outputs:
Obtaining file:///home/ciro/bak/git/vcdvcd
Installing collected packages: vcdvcd
Attempting uninstall: vcdvcd
Found existing installation: vcdvcd 1.0.6
Can't uninstall 'vcdvcd'. No files were found to uninstall.
Running setup.py develop for vcdvcd
Successfully installed vcdvcd-1.0.6
The Can't uninstall 'vcdvcd' message is normal: pip tried to uninstall any existing vcdvcd so it could replace it with the "symlink-like mechanism" produced in the following steps, but failed because there was no previous installation.
Then it generates a file:
~/.local/lib/python3.8/site-packages/vcdvcd.egg-link
which contains:
/home/ciro/vcdvcd
.
and acts as a "symlink" for the Python interpreter.
So now, if I make any changes to the git source code under /home/ciro/vcdvcd, it reflects automatically on importers who can from any directory do:
python -c 'import vcdvcd'
Note, however, that with my pip version at least, executable scripts installed with --editable, such as the vcdcat script provided by that package via scripts= in setup.py, do not get symlinked, just copied to:
~/.local/bin/vcdcat
just like for regular installs, and therefore updates to the git repository won't directly affect them.
By comparison, a regular non --editable install from the git source:
python -m pip uninstall vcdvcd
python -m pip install --user .
produces a copy of the installed files under:
~/.local/lib/python3.8/site-packages/vcdvcd
Uninstall of an editable package as done above requires a new enough pip as mentioned at: How to uninstall editable packages with pip (installed with -e)
Tested in Python 3.8, pip 20.0.2, Ubuntu 20.04.
Recommendation: develop directly in-tree whenever possible
The editable setup is useful when you are testing your patch to a package through another project.
If however you can fully test your change in-tree, just do that instead of generating an editable install which is more complex.
E.g., the vcdvcd package above is set up in a way that you can just cd into the source and run ./vcdcat without pip installing the package itself (in general, you might need to install dependencies from requirements.txt though), and the import vcdvcd that that executable does (or possibly your own custom test) finds the package correctly in the same directory it lives in; see the sketch below.
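Such an in-tree session might look like this (no pip install of the package itself; the requirements step only applies if the project has a requirements.txt):
cd ~/vcdvcd
python -m pip install -r requirements.txt   # dependencies only, if any
python -c 'import vcdvcd'                   # works: with -c, the current directory is on sys.path
./vcdcat                                    # the script imports the package from its own directory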
From Working in "development" mode:
Although not required, it’s common to locally install your project in
“editable” or “develop” mode while you’re working on it. This allows
your project to be both installed and editable in project form.
Assuming you’re in the root of your project directory, then run:
pip install -e .
Although somewhat cryptic, -e is short for
--editable, and . refers to the current working directory, so together, it means to install the current directory (i.e. your
project) in editable mode.
Some additional insights into the internals of setuptools and distutils from “Development Mode”:
Under normal circumstances, the distutils assume that you are going to
build a distribution of your project, not use it in its “raw” or
“unbuilt” form. If you were to use the distutils that way, you would
have to rebuild and reinstall your project every time you made a
change to it during development.
Another problem that sometimes comes up with the distutils is that you
may need to do development on two related projects at the same time.
You may need to put both projects’ packages in the same directory to
run them, but need to keep them separate for revision control
purposes. How can you do this?
Setuptools allows you to deploy your projects for use in a common
directory or staging area, but without copying any files. Thus, you
can edit each project’s code in its checkout directory, and only need
to run build commands when you change a project’s C extensions or
similarly compiled files. You can even deploy a project into another
project’s checkout directory, if that’s your preferred way of working
(as opposed to using a common independent staging area or the
site-packages directory).
To do this, use the setup.py develop command. It works very similarly
to setup.py install, except that it doesn’t actually install anything.
Instead, it creates a special .egg-link file in the deployment
directory, that links to your project’s source code. And, if your
deployment directory is Python’s site-packages directory, it will also
update the easy-install.pth file to include your project’s source
code, thereby making it available on sys.path for all programs using
that Python installation.
It is important to note that pip uninstall cannot uninstall a module that has been installed with pip install -e. So if you go down this route, be prepared for things to get very messy if you ever need to uninstall. A partial solution is to (1) reinstall, keeping a record of the files created, as in sudo python3 -m setup.py install --record installed_files.txt, and then (2) manually delete all the files listed, as in e.g. sudo rm -r /usr/local/lib/python3.7/dist-packages/tdc7201-0.1a2-py3.7.egg/ (for release 0.1a2 of module tdc7201). Even after you've done this, things are not 100% cleaned up: importing the (removed!) local library may still succeed, and attempting to install the same version from a remote server may fail to do anything (because pip thinks your (deleted!) local version is already up to date).
As hinted at in previous answers, no actual symlinks are created.
How does the -e option work? It just updates the file PYTHONDIR/site-packages/easy-install.pth with the project path given to the pip install -e command.
So each time Python searches for a package, it checks this directory as well, which means any changes to the files in this directory are reflected instantly.
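As an illustration (the path is a placeholder), after pip install -e /home/me/myproj the .pth file gains a line like:
/home/me/myproj
and you can confirm the project directory ended up on the interpreter's search path with:
python -c "import sys; print('/home/me/myproj' in sys.path)"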
