Is there an easy way to export the libraries my script needs, so that I can put all of the files into a git repo and run the script from Jenkins without needing to install anything?
context:
a remote Jenkins machine missing some Python libs (RO - no access to a terminal)
I need to run my script, which needs external libs such as paramiko, requests, etc.
I have tried freeze.py, but it fails at the make stage.
I have found some articles here regarding freeze.py, py2exe, and py2app, but none of those helped me.
You can use a virtual environment to install your required python dependencies in the workspace. In short, this sets up a local version of python and pip for which you can install packages without affecting the system installation. Using virtual environments is also a great way to ensure dependencies from one job do not impact other jobs. This solution does require pip and virtualenv to be installed on the build machine.
Your build step should do something like:
virtualenv venv
. venv/bin/activate
pip install -r requirements.txt
# ... perform build, tests ...
If you separate your build into several steps, the environment variables set in the activate script will not be available in subsequent steps. You will need to either source the activate script in each step, or adjust the PATH (e.g. via EnvInject) so that the virtualenv python is run.
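For example, if the build is split into separate shell steps, each later step could re-activate the same virtualenv (or call its interpreter directly); a minimal sketch, where run_tests.py stands in for your own script:
# later build step: re-activate the virtualenv created earlier
. venv/bin/activate
python run_tests.py
# alternatively, call the virtualenv's interpreter without activating it
./venv/bin/python run_tests.py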
Related
I created an application using py2app and have completed it. It runs on my machine with no problems, but I am concerned it might have problems such as missing modules or other errors when run on someone else's machine with no programs installed. Is there a way to test this?
(sorry, I'm sure this is on the Internet but I'm not sure how to search for it)
You really can address your concern about "module not found" and other errors on your current system by testing the app in an isolated virtual environment.
You can create a virtual environment, install the necessary packages in it, and see if the app works.
To do so:
I think the fastest, easiest, and most comfortable way is to use Miniconda to create a backup of the current (or a fresh, working) virtual environment and re-create it on another (or the same) machine.
Miniconda is a minimal, lightweight installer for conda (only ~60 MB).
Just install miniconda from the instructions here.
Then create a new environment as below:
conda create --name newtestenvironment python=3.9
This command will create an environment with only Python in it. You can then test your app, see which "no module" errors you get, and install the missing packages.
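For example, you could try the app inside that environment along these lines (your_app.py is just a placeholder for your own entry point):
conda activate newtestenvironment
python your_app.py      # watch for "No module named ..." errors
conda install requests  # install whatever turned out to be missing, then re-run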
Whenever you reach a point where everything is working, you can export the necessary packages with:
conda env export > environment.yml
You can also export a requirements.txt file for the user; note that conda list -e writes conda's own format, so the user would recreate the environment with conda create --name <env> --file requirements.txt rather than with pip:
conda list -e > requirements.txt
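On the other machine, the exported environment.yml can then be used to rebuild an identical environment, roughly like this:
conda env create -f environment.yml   # recreates the environment under its recorded name
conda activate newtestenvironment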
I would like to easily move a Python project from one PC to another. When I created the project, I used a virtual environment in order to avoid problems with different package versions.
What I did was to just copy the project folder and paste it onto the destination PC. Once I opened the project with PyCharm, I activated the virtual environment with project_path/venv/Scripts/activate, but when I tried to execute any script, it said it didn't find the modules.
Which workflow should I follow in order to create projects and be able to run them from multiple PCs without needing to install all the dependencies?
Since you did not specify your Python version, I will provide a solution that works for both Python 2.x and 3.x.
My suggestion is to create a requirements.txt file containing all your requirements.
This file can be easily prepared using the output from the command:
pip freeze
Then you can paste the output into your requirements.txt file. When you are going to install your Python code on another PC, you can simply run:
pip install -r requirements.txt
to install your requirements again.
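The resulting requirements.txt is simply a list of pinned packages, one per line; a hypothetical example (the version numbers are illustrative only):
paramiko==2.12.0
requests==2.28.1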
Depending on your project it could be possible, for example, to create a single EXE file (if you are using Windows machines), but more detail is needed if this is the case.
In case you are using Python 3 the method that is at the moment arguably more popular in the Python community is Pipenv.
Here's its relevant documentation.
And here you can read a simple example of a workflow.
If you are using Python 3, then use Pipenv. It will automatically create a Pipfile and a Pipfile.lock, which ensure that reinstalling the dependencies on a different machine gives you the same packages.
Basic and helpful commands:
pipenv shell # activate virtualenv
pipenv install # will install dependencies in Pipfile
pipenv install requests # will install requests lib. and will auto update Pipfile and Pipfile.lock
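On another machine, the lock file can then be used to reproduce exactly the same environment; for example:
pipenv sync              # install exactly what Pipfile.lock records
pipenv install --deploy  # alternative: install and abort if Pipfile.lock is out of date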
I'm used to using pip to install Python packages into my Django projects' virtual environments.
When I am working with a Divio Docker project locally, this does not work.
There are two things you need to be aware of when installing Python packages into a Docker project:
the package must be installed in the correct environment
if you want to use the installed package in the future, it needs to be installed in a more permanent way
The details below describe using a Divio project, but the principle will be similar for other Docker installations.
Installation in the correct environment
To use pip on the command line to install a Python package into a Dockerised project, you need to be using pip inside the Docker environment, that is, inside the container.
It's not enough to be in the directory where you have access to the project's files. In this respect, it's similar to using a virtual environment - you need to have the virtualenv activated. (Otherwise, your package will be installed not in the virtual environment, but on your own host environment.)
To activate a virtual environment, you'd run something like source <env>/bin/activate on it.
To install a package within a Divio web container:
# start a bash prompt inside the project
docker-compose run --rm web bash
# install the package in the usual way
pip install rsa
rsa is now installed and available to use.
More permanent installation
So far however, the package will only be installed and available in that particular container. As soon as you exit the bash shell, the container will disappear. The next time you launch a web container, you will not find the rsa package there. That's because the container is launched each time from its image.
In order to have the package remain installed, you will need to include it in the image.
A Divio project includes a requirements.in file, listing Python packages that will be included in the image.
Add a new line containing rsa to the end of that file. Then run:
docker-compose build web
This will rebuild the Docker image. Next time you launch a container with (for example) docker-compose run --rm web bash, it will include that Python package.
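Putting the whole sequence together (assuming requirements.in sits at the project root), it looks roughly like this:
echo "rsa" >> requirements.in                         # record the dependency
docker-compose build web                              # rebuild the image so it includes the package
docker-compose run --rm web python -c "import rsa"    # confirm it is baked in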
(The Divio Developer Handbook has some additional guidance on using pip.)
Note: I am a member of the Divio team. This question is one that we see quite regularly via our support channels.
I run Vagrant on Mac OS X. I am coding inside a virtual machine with CentOS 6, and I have the same versions of Python and Ruby in my development and production environment. I have these restrictions:
I cannot manually install. Everything must come through RPM.
I cannot use pip install and gem install to install the libraries I want as the system is managed through Puppet, and everything I add will be removed.
yum has old packages. I usually cannot find the latest versions of the libraries.
I would like to put my libraries locally in a lib directory near my scripts, and create an RPM that includes those frozen versions of dependencies. I cannot find an easy way to bundle my libraries for my scripts and push everything into my production server. I would like to know the easiest way to gather my dependencies in Python and Ruby.
I tried:
virtualenv (with --relocatable option)
PYTHONPATH
sys.path.append("lib path")
I don't know which is the right way to go. Also, for Ruby, is there any way to solve my problems with Bundler? I see that Bundler is for Rails. Does it work for small custom scripts?
I like the approach of Node.js and npm: all packages are stored locally in node_modules. I have the nodejs RPM installed, and I deploy a folder with my application on the production server. I would like to do it this way in Ruby and Python.
I don't know Node, but what you describe for NPM seems to be exactly what a virtualenv is. Once the virtualenv is activated, pip installs only within that virtualenv - so puppet won't interfere. You can write out your current list of packages to a requirements.txt file with pip freeze, and recreate the whole thing again with pip install -r requirements.txt. Ideally you would then deploy with puppet, and the deploy step would involve creating or updating the virtualenv, activating it, then running that pip command.
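A deploy step along those lines might look roughly like this (the /srv/myapp path is only an example); calling the virtualenv's own pip avoids having to activate it in a non-interactive deploy:
virtualenv /srv/myapp/venv                                      # create or update the virtualenv
/srv/myapp/venv/bin/pip install -r /srv/myapp/requirements.txt  # install the pinned packages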
Maybe take a look at Docker?
With Docker you could create an image of your specific environment and deploy that.
https://www.docker.com/whatisdocker/
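A minimal sketch of such an image could be a Dockerfile like the following (the file names and base image are only examples):
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "myscript.py"]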
I'm new to virtualenv, but I'm writing a Django app, and eventually I will have to deploy it somehow.
So let's assume I have my app working in my local virtualenv, where I installed all the required libraries. What I want to do now is run some kind of script that will take my virtualenv, check what's installed inside, and produce a script that will install all these libraries in a fresh virtualenv on another machine. How can this be done? Please help.
You don't copy-paste your virtualenv. You export the list of all the installed packages, like so -
pip freeze > requirements.txt
Then push the requirements.txt file to wherever you want to deploy the code, and then just do what you did on the dev machine -
$ virtualenv <env_name>
$ source <env_name>/bin/activate
(<env_name>)$ pip install -r path/to/requirements.txt
And there you have all your packages installed with the exact version.
You can also look into Fabric to automate this task, with a function like this -
# Fabric 1.x task: activate the remote virtualenv and install the requirements
from fabric.api import cd, env, prefix, run

def pip_install():
    # env.path is assumed to point at the project directory on the remote host
    with cd(env.path):
        with prefix('source venv/bin/activate'):
            run('pip install -r requirements.txt')
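With a task like that in your fabfile (and env.path set appropriately), the install can be triggered from your own machine with something like the following, where the hostname is a placeholder:
fab -H deploy@example.com pip_install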
You can install virtualenvwrapper and try cpvirtualenv, but the developers advise caution here:
Warning
Copying virtual environments is not well supported. Each virtualenv has path information hard-coded into it, and there may be cases where the copy code does not know it needs to update a particular file. Use with caution.
If it is going to live at the same path, you can tar it and extract it on another machine. If all of the same dependencies, libraries, etc. are available on the target machine, it will work.
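For example, assuming the virtualenv lives at /opt/myapp/venv on both machines (the path is only an example):
# on the source machine
tar czf venv.tar.gz -C /opt/myapp venv
# on the target machine, extract to the identical path
tar xzf venv.tar.gz -C /opt/myapp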