I am pretty new to Python and Pyramid. I am working on a Pyramid application that I run with the following command:
pserve development.ini
When I make some changes and restart the server (kill it and run it again), it keeps serving the old versions of the files from a cache.
I have noticed that I can clear the cache by re-installing the application with
python setup.py install
but surely there is a nicer way to do this?
I have noticed that the cache files are kept in the build folder:
build/lib.linux-x86_64-2.7/*
Instead of using python setup.py install, use python setup.py develop. This links your application's directory into site-packages without creating a separate "installed" copy of the source tree, so your edits take effect as soon as you restart the server.
My team is enjoying using python to solve problems for our business. We write many small independent scripty applications.
However, we have to have a central Windows box that runs these alongside legacy applications.
Our challenge is going through a build and deploy process.
We want to have Bamboo check the script out of git, install requirements and run tests, then if all is green, just deploy to our production box.
We'd like libraries to be isolated from script to script so we don't have dependency issues.
We've tried to make virtualenvs portable, but that seems to be a no-go.
Pex looked promising, but it doesn't work on Windows.
Ideally you'd see a folder like so:
AppOne/
    Script.py
    Libs/
        bar.egg
        foo.egg
AppTwo/
    Script2.py
    Libs/
        fnord.egg
        fleebly.py
Are we thinking about this wrong? What's the pythonic way to distribute scripts within an enterprise?
You may be able to do that with a neat if relatively unknown feature that was sneaked into Python 2.6 without much ado: executing zip files as Python applications. It got a little more publicity after PEP 441 (the PEP that PEX is inspired by), although I think most people are still unaware of it. The idea is that you create a zip file (the recommended extension is .pyz, or .pyzw for windowed applications, but that's obviously not important) with all the code and modules that you want, and then you simply run it with Python. The interpreter will add the contents of the zip file to sys.path, look for a top-level module named __main__ and run it. Python 3.5 even introduced the convenience module zipapp to create such packaged applications, but there is really no magic in it and you may as well create one by hand or with a script.
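As a rough illustration, the whole round trip can be driven from Python with the stdlib zipapp module on 3.5+ (the AppOne directory name and the printed message here are made up):

```python
import os
import subprocess
import sys
import zipapp

# Build a throwaway application directory with a top-level __main__.py.
os.makedirs("AppOne", exist_ok=True)
with open(os.path.join("AppOne", "__main__.py"), "w") as f:
    f.write("print('hello from AppOne')\n")

# Bundle the directory into a single runnable .pyz archive.
zipapp.create_archive("AppOne", "AppOne.pyz")

# The archive runs like any other script: the interpreter adds it to
# sys.path and executes the __main__ module inside it.
result = subprocess.run([sys.executable, "AppOne.pyz"],
                        capture_output=True, text=True)
print(result.stdout.strip())  # → hello from AppOne
```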
In your case, I guess Bamboo could do the check out, dependency install and tests in virtualenvs and then package the application along with the environment libraries. It's not a one-click solution but it may do the trick without additional tools.
TL;DR:
Use Docker
A short story long:
You can use Docker to create an independent image for every script that you want to deploy.
You can use a Python image (slim is the lightest) as a base environment for each script or group of scripts/applications, and use it like a "virtualenv" in which you install all the dependencies for that script.
There is also an integration for Bamboo and Docker which you may find useful.
Here is the Docker documentation for reference.
You can test each script individually in a separate container, and if it passes you can use the same container to deploy it to your main server.
It is not exactly what you are asking for, but you can use this solution on every platform (Windows, Linux, etc.), deploy all your scripts to the enterprise server (or anywhere, for that matter) and use them across your company.
Disclaimer: This is not THE solution; it is a solution I am aware of, and it applies at the time of this answer (2017).
Another possibility is PyInstaller. It creates an executable that can be deployed; Python does not even need to be installed on the production box. The downsides are that it is harder to debug problems that occur only on the deployed box, and that you can't modify the scripts on the deployed box, which, depending on how much you trust the owners of the machine, is either a positive or a negative. See http://www.pyinstaller.org/
As I understand it, you want to create self-contained application directories on a build server, then copy them over to a production server and run scripts directly from them. In particular, you want all dependencies (your own and external packages) installed within a Libs subdirectory in each application directory. Here's a fairly robust way to do that:
Create the top-level application directory (AppOne) and the Libs subdirectory inside it.
Use pip install --ignore-installed --target=Libs package_name to install dependencies into the Libs subdirectory.
Copy your own packages and modules into the Libs subdirectory (or install them there with pip).
Copy Script.py into the top-level directory.
Include code at the top of Script.py to add the Libs directory to sys.path:
import os, sys
app_path = os.path.dirname(__file__)
lib_path = os.path.abspath(os.path.join(app_path, 'Libs'))
sys.path.insert(0, lib_path)
This will make packages like Libs\bar.egg and modules like Libs\fleebly.py available to your script via import bar or import fleebly. Without code like this, there is no way for your script to find those packages and modules.
If you want to streamline this part of your script, there are a couple of options: (1) Put these lines in a separate fix_path.py module in the top-level directory and just call import fix_path at the start of your script. (2) Create a Libs\__init__.py file with the line sys.path.insert(0, os.path.dirname(__file__)), and then call import Libs from your script. After that, Libs\x can be imported via import x. This is neat, but it's a nonstandard use of the package and path mechanisms (it uses Libs as both a library directory and a package), so it could create some confusion about how importing works.
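For reference, option (1) might look like this hypothetical fix_path.py (the file name and the Libs directory are just the ones used in this answer):

```python
# fix_path.py - put this next to Script.py and make "import fix_path"
# the first import in your script (option 1 above; the name is just
# a suggestion).
import os
import sys

# Prepend the adjacent Libs directory to the module search path so
# packages and modules inside it become importable.
_lib_path = os.path.abspath(os.path.join(os.path.dirname(__file__), 'Libs'))
if _lib_path not in sys.path:
    sys.path.insert(0, _lib_path)
```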
Once these directories and files are in place, you can copy this whole structure over to any Windows system with Python installed, and then run it using cd AppOne; python Script.py or python AppOne\Script.py. If you name your top-level script __main__.py instead of Script.py, then you can run your app just by executing python AppOne.
Further, as @jdehesa pointed out, if your script is named __main__.py, you can compress the contents of the AppOne directory (but not the AppOne directory itself) into a file called AppOne.zip, copy that to your production server, and run it by calling python AppOne.zip. (On Python 3.5 or later, you can also create the zip file via python -m zipapp AppOne if your script is called __main__.py, or via python -m zipapp AppOne -m Script:main if your script is Script.py and defines a main() function. See https://docs.python.org/3/library/zipapp.html.)
This kind of thing can be dealt with easily using python setup.py.
Sample setup.py
from setuptools import setup

setup(
    name='mydistribution',        # name for the distribution
    version='0.1.0',              # version number
    py_modules=['mymodule'],      # your top-level .py files (without .py)
    install_requires=[
        'requests',               # Python packages that need to be installed
    ],
)
Create a virtual environment, activate it and run:
python setup.py install
I feel this is the most pythonic way to distribute and package your project.
Reading links:
https://pythonhosted.org/an_example_pypi_project/setuptools.html
https://docs.python.org/2/distutils/setupscript.html
I have a Python-based web app that I'm trying to package as a setuptools package so that it can be installed using pip and/or python setup.py xxxxx. This web app also contains static files for a React front end. I use webpack (and therefore node.js) to generate the JavaScript bundle for the website. I'm trying to figure out the most pythonic way to package this. From googling around a bit, I found nodeenv which seems relevant.
Ideally, I would like this package to have the following traits:
When installed with pip install or python setup.py install it should not install node and webpack, but the installed package should include the webpack output.
The webpack-generated output should not need to be checked into the source repo. (i.e. it will need to be generated at some point or another in the packaging process.)
When the package is set up for development via pip install -e or python setup.py develop, it should install node and webpack (I suspect the aforementioned nodeenv will be useful in this regard.) It should also run webpack at this time, so that afterwards, the webpack-generated content exists.
If it were easy, it would also be cool if webpack could be started in "watch" mode when the virtualenv is activated, and stopped when it's deactivated (but this is totally a stretch goal.)
My hunch, given these requirements, is that I will need to subclass the sdist command to cause the webpack output to be generated at source distribution generation time. I'm also guessing I'll need to subclass the develop command to inject the development-only requirements.
It seems like this is a bridge that someone must have crossed before. Anyone have any pointers?
I think you're better off splitting these concerns into different build steps. If we dissect your process a bit, these steps come up (assuming that node, npm and the virtualenv are already installed on your box):
Install the required python modules in the local virtualenv.
Install webpack and the npm modules needed to run the webpack script.
Run the webpack config so your static assets will be compiled locally.
Each of these steps represents a command that can end up in a Makefile or just a simple shell script, for example (or use Fabric if you want to stick with Python), so you would end up with the following commands:
python-requirements
node-requirements
build-static
build -> python-requirements, node-requirements, build-static
Now you can run these commands at will! If you're deploying you would run build for example, which will run each step in succession.
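Those commands could be sketched, for example, as a small Python driver in place of a Makefile; the individual shell commands and paths here are assumptions to adapt to your project:

```python
import subprocess

# Each named build step maps to the shell command it runs; the exact
# commands below are assumptions, adjust them to match your project.
STEPS = {
    "python-requirements": ["pip", "install", "-r", "requirements.txt"],
    "node-requirements": ["npm", "install"],
    "build-static": ["./node_modules/.bin/webpack"],
}

def build():
    # "build" runs each step in succession, stopping on the first failure
    # (check_call raises CalledProcessError on a non-zero exit code).
    for name, cmd in STEPS.items():
        print("==> " + name)
        subprocess.check_call(cmd)
```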
We're not using the same deployment system, but we're after the same sort of thing: no need for node on production, but building with webpack for the final deployment. We're using docker to run up a temporary build machine...
The builder installs all the distribution packages it needs, then checks out the code, calls setup.py to build itself, runs myriad tests, and finally deploys the build dir to prod.
So I've left it up to the docker's config to ensure that nodejs and npm are installed by adding curl... && apt-get etc. to the Dockerfile.
I've subclassed the sdist and modified the run command to just run npm install and webpack on the commandline when it runs.
So in setup.py
setup(
    name='myapp',
    ...
    cmdclass={'sdist': MySdistCommand},
    ...
)
Then MySdistCommand is
from setuptools.command.sdist import sdist
import subprocess

class MySdistCommand(sdist):
    def run(self):
        subprocess.check_call(['npm', 'install'])
        subprocess.check_call(['./node_modules/.bin/webpack', '-p'])
        sdist.run(self)
Which seems to work so far. I'll let you know if quirks appear when we try to deploy it to prod (via a rather contorted docker+puppet system). I'm not sure what directory it will find itself in when it tries to run for real, but it works in dev. :-D
Sorry for the (incredibly late) answer. But I ran into this same problem, and solved it with an entry point, something like this. Adding an entry point allows us to add a script that will be in the same folder as a webpack config that you might have.
As a result, you can check in the entry point whether a build has occurred, and build only if it hasn't :) Alternatively, if you must build when you run setup.py, you can include two functions in your entry point: run the setup function on the setup.py install/develop step, and then use a custom build step to perform an npm install or something similar.
The main part you need is:
entry_points={
    "console_scripts": [
        "mywebpack=script_build:main",
    ]
},
My script looks something like
from os.path import dirname, join
from subprocess import Popen
import sys

def main():
    # Find the directory this script lives in
    path = dirname(__file__)
    # Get the arguments that should be passed on to webpack
    args = sys.argv[1:]
    # Call the locally installed webpack with the arguments
    # passed to this program
    webpack_invocation = join(path, 'node_modules', '.bin', 'webpack')
    webpack_command = [webpack_invocation] + args
    process = Popen(webpack_command, cwd=path)
    process.wait()

if __name__ == "__main__":
    main()
Then you can use this from the console with
mywebpack <options>
This will guarantee that it uses a locally installed version of webpack :)
Hope that helps!
I am currently writing a command line application in Python, which needs to be made available to end users in such a way that it is very easy to download and run. For those on Windows, who may not have Python (2.7) installed, I intend to use PyInstaller to generate a self-contained Windows executable. Users will then be able to simply download "myapp.exe" and run myapp.exe [ARGUMENTS].
I would also like to provide a (smaller) download for users (on various platforms) who already have Python installed. One option is to put all of my code into a single .py file, "myapp.py" (beginning with #! /usr/bin/env python), and make this available. This could be downloaded, then run using myapp.py [ARGUMENTS] or python myapp.py [ARGUMENTS]. However, restricting my application to a single .py file has several downsides, including limiting my ability to organize the code and making it difficult to use third-party dependencies.
Instead I would like to distribute the contents of several files of my own code, plus some (pure Python) dependencies. Are there any tools which can package all of this into a single file, which can easily be downloaded and run using an existing Python installation?
Edit: Note that I need these applications to be easy for end users to run. They are not likely to have pip installed, nor anything else which is outside the Python core. Using PyInstaller, I can generate a file which these users can download from the web and run with one command (or, if there are no arguments, simply by double-clicking). Is there a way to achieve this ease-of-use without using PyInstaller (i.e. without redundantly bundling the Python runtime)?
I don't like the single file idea because it becomes a maintenance burden. I would explore an approach like the one below.
I've become a big fan of Python's virtual environments because it allows you to silo your application dependencies from the OS's installation. Imagine a scenario where the application you are currently looking to distribute uses a Python package requests v1.0. Some time later you create another application you want to distribute that uses requests v2.3. You may end up with version conflicts on a system where you want to install both applications side-by-side. Virtual environments solve this problem as each application would have its own package location.
Creating a virtual environment is easy. Once you have virtualenv installed, it's simply a matter of running, for example, virtualenv /opt/application/env. Now you have an isolated python environment for your application. Additionally, virtual environments are very easy to clean up, simply remove the env directory and you're done.
You'll need a setup.py file to install your application into the environment. Say your application uses requests v2.3.0, your custom code is in a package called acme, and your script is called phone_home. Your directory structure looks like this:
acme/
    __init__.py
    models.py
    actions.py
scripts/
    phone_home
setup.py
The setup.py would look something like this:
from distutils.core import setup

install_requires = [
    'requests==2.3.0',
]

setup(name='phone_home',
      version='0.0.1',
      description='Sample application to phone home',
      author='John Doe',
      author_email='john@doe.com',
      packages=['acme'],
      scripts=['scripts/phone_home'],
      url='http://acme.com/phone_home',
      install_requires=install_requires,
      )
You can now make a tarball out of your project and host it however you wish (your own web server, S3, etc.):
tar cvzf phone_home-0.0.1.tar.gz .
Finally, you can use pip to install your package into the virtual environment you created:
/opt/application/env/bin/pip install http://acme.com/phone_home-0.0.1.tar.gz
You can then run phone_home with:
/opt/application/env/bin/phone_home
Or create a symlink in /usr/local/bin so the script can be called simply as phone_home:
ln -s /opt/application/env/bin/phone_home /usr/local/bin/phone_home
All of the steps above can be put in a shell script, which would make the process a single-command install.
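For illustration, here is a sketch of those install steps collected into a single Python function instead of a shell script; the environment path and package URL are the hypothetical ones from this answer, and the symlink step would need appropriate privileges:

```python
import os
import subprocess

# Hypothetical locations taken from the steps above.
ENV = "/opt/application/env"
PKG_URL = "http://acme.com/phone_home-0.0.1.tar.gz"

def install():
    # 1. Create the isolated environment for the application.
    subprocess.check_call(["virtualenv", ENV])
    # 2. Install the packaged application (and its pinned dependencies)
    #    into that environment using its own pip.
    subprocess.check_call([os.path.join(ENV, "bin", "pip"),
                           "install", PKG_URL])
    # 3. Expose the entry script on the default PATH.
    os.symlink(os.path.join(ENV, "bin", "phone_home"),
               "/usr/local/bin/phone_home")
```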
And with a slight modification this approach works really well for development environments, i.e. using pip to install/reference your development directory: pip install -e ., where . refers to the current directory (you should be in your project directory, alongside setup.py).
Hope this helps!
You could use pip as suggested in the comments. You need to create a MANIFEST.in and setup.py in your project to make it installable. You can also add modules as prerequisites. More info can be found in this question (not specific to Django):
How do I package a python application to make it pip-installable?
This will make your module available in Python. You can then have users run a file that runs your module, either with python path/run.py, ./path/run.py (with +x permission), or python -c "some code here" (e.g. for an alias).
You can even have users install from a public git repository, like this:
pip install git+https://bitbucket.org/yourname/projectname.git
...in which case they also need git.
I have created my program using virtualenv, and it works fine in my project folder. Now I need to release it to the production environment, where it is supposed to be accessible by everybody. So this program should be runnable as-is, or it might be incorporated into other programs as a step. How am I supposed to deploy it? Zip the whole project folder? Is it possible to do this without requiring clients to copy it, unzip it and then run it? Or is the only way to create a commonly accessible script that automates unzipping, configuring the virtualenv and running it? Or is there a smarter way?
More complicated is the scenario where it is supposed to be used as a library. How do I deploy it so others can specify it as a dependency and pick it up? It seems like the only way is to create your own PyPI-like local repository - is that correct?
Thanks!
So here is what I have found.
If we have a project A as an API:
create a folder where you will store the wheels (~/wheelhouse)
using pip config, specify this folder as one to find links in: http://www.pip-installer.org/en/latest/configuration.html
I have:
[global]
[install]
no-index = yes
find-links = /home/users/me/wheelhouse
Make sure the wheel package is installed.
In your project, create a setup.py file that will allow for wheel creation, and execute:
python setup.py bdist_wheel
Copy the generated wheel to the wheelhouse, so you have:
~/wheelhouse/projectA-0.1-py33-none-any.whl
Now we want to create a project that uses the projectA API - project B.
We create a separate folder for this project and then a virtual environment for it:
mkdir projectB; cd projectB
virtualenv projectB_env
source projectB_env/bin/activate
pip install projectA
Now if you run a Python console in this folder, you will be able to import the classes from projectA! One problem solved!
Now you have finished the development of projectB and you need to run it.
For that purpose I'd recommend using the Pex (twitter.common.python) library. Pex (as of v0.5.1) supports looking up wheels as dependencies. I'm feeding it the contents of a requirements.txt file to resolve dependencies. As a result you get a lightweight, executable, archived virtualenv that has everything necessary for the project to run.
This should get you started:
http://docs.python.org/2/distutils/
http://guide.python-distribute.org/
http://pythonhosted.org/setuptools/
On Windows 7 I have Python 2.6.6 installed at C:\Python26
I wanted to install Django, so I:
downloaded and untarred the files into C:\Python26\Django-1.4
ran python setup.py install
verified it was installed by opening IDLE and typing import django
The next part is the problem... the tutorial says to now run django-admin.py startproject mysite; however, django-admin.py wasn't found, and while looking for it I discovered that there is a duplication in the directories:
C:\Python26\Django-1.4\build\lib\django
C:\Python26\Django-1.4\django
I didn't see anything in setup.cfg that would allow me to make sure that didn't happen or to pick a different setup folder, etc... but in the file C:\Python26\Django-1.4\INSTALL, it is stated that "AS AN ALTERNATIVE, you can just copy the entire "django" directory to python's site-packages directory"
So, for my question: besides avoiding this duplication of code in the Django directories, what else is different between using the setup.py install command and just copying the directory? Are there other pros/cons?
Try adding "C:\Python26\;C:\Python26\Scripts;" to your PATH environment variable and then running django-admin.py startproject mysite.