I have a simple application (just one .py file) that uses CherryPy and Flask-RESTful to present a web service. My development environment is Windows. I use Python 3.5.2 and also create and use a virtualenv to work on my project.
I need to deploy this on Linux systems. I was asked to create an "RPM" for this so that it can be installed and run on Linux machines.
I have googled and read several pieces of documentation:
https://docs.python.org/3.5/distutils/builtdist.html
http://docs.python-guide.org/en/latest/shipping/packaging/
But I'm very unclear on the steps that need to be done to deploy this on a Linux system. Thanks in advance for all your help.
This is a mini demo structure as output by the tree command; color_print is the package name and directory:
.
├── color_print
│   ├── color_print.py
│   └── __init__.py
├── __init__.py
└── setup.py
Here is an example setup.py for the demo:
from setuptools import setup

setup(name='color_print',
      version='0.1',
      description='Color String',
      url='http://github/xxxx/color_print/',
      author='Joe Bob',
      author_email='joe.bob@gmail.com',
      license='MIT',
      packages=['color_print'],
      zip_safe=False)
There is no need to change directory; run this one command to build the RPMs:
python setup.py bdist_rpm
Here is the output; it's that easy:
-bash-4.1$ find . -name "*.spec"
./build/bdist.linux-x86_64/rpm/SPECS/color_print.spec
-bash-4.1$ find . -name "*.rpm"
./dist/color_print-0.1-1.noarch.rpm
./dist/color_print-0.1-1.src.rpm
In reality, you will definitely need to modify the spec file manually and then run:
rpmbuild -ba ./build/bdist.linux-x86_64/rpm/SPECS/color_print.spec
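For the web service described in the question, the setup.py would presumably also declare its dependencies and a console entry point so the installed RPM provides a runnable command. The sketch below is only an illustration — the mywebservice package, app module, and main() function are assumed names, not taken from the question:

from setuptools import setup

setup(name='mywebservice',                               # hypothetical name for the one-file app
      version='0.1',
      packages=['mywebservice'],
      install_requires=['cherrypy', 'flask-restful'],    # the dependencies mentioned in the question
      entry_points={
          'console_scripts': [
              # assumes mywebservice/app.py defines a main() that starts the server
              'mywebservice=mywebservice.app:main'
          ]
      })

The bdist_rpm step itself stays the same; RPM-level requirements can still be adjusted in the generated spec file as noted above.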
I'm trying to use Poetry and the scripts option to run a script, like so:
pyproject.toml
[tool.poetry.scripts]
xyz = "src.cli:main"
Folder layout
.
├── poetry.lock
├── pyproject.toml
├── run-book.txt
└── src
    ├── __init__.py
    └── cli.py
I then perform an install like so:
❯ poetry install
Installing dependencies from lock file
No dependencies to install or update
If I then try to run the command, it's not found:
❯ xyz
zsh: command not found: xyz
Am I missing something here? Thanks.
Poetry is likely installing the script in your user-local directory. On Ubuntu, for example, this is $HOME/.local/bin. If that directory isn't in your PATH, your shell will not find the script.
A side note: It is generally a good idea to put a subdirectory with your package name in the src directory. It's generally better to not have an __init__.py in your src directory. Also consider renaming cli.py to __main__.py. This will allow your package to be run as a script using python -m package_name.
You did everything right besides not activating the virtual environment or running that alias (xyz) via poetry run xyz. One can activate the virtualenv via poetry shell; afterwards, xyz should run from your shell.
PS: @jisrael18's answer is totally right. Normally one would have another folder (which is your main Python module) inside the src folder.
.
├── src
│   └── pyproj
│       ├── __init__.py
│       └── __main__.py
...
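To illustrate the __main__.py suggestion, here is a minimal sketch — the pyproj name comes from the layout above, while the body of main() is purely a placeholder:

# src/pyproj/__main__.py
def main():
    # real CLI logic would go here
    print("pyproj CLI")

if __name__ == "__main__":
    main()

With that layout, the Poetry script entry would point at pyproj.__main__:main, and python -m pyproj would run the same code.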
The Setup
OS: Ubuntu 20.04
Python: 3.8.5 | pip: 20.0.2 | venv
Repo
.
├── build
├── dist
├── source.egg-info
├── source
├── readme.md
├── requirements.txt
├── setup.py
└── venv
source dir
.
├── config
├── examples
├── script.py
├── __init__.py
├── tests
└── utils
The important directories within the source directory are config, which contains a few .env and .json files; and utils, which is a package that contains a sub-package called config.
Running script.py, which references config and imports modules from utils, is how the CLI app is started. Ideally when it is run, it should load a bunch of environment variables, create some command aliases and display the application's prompt. (After which the user can start working within that shell.)
I created a wheel to install this application. The setup.py contains an entry point as follows:
entry_points={
    'console_scripts': [
        'script=source.script:main'
    ]
}
The Problem
I pip-installed the wheel in a test directory with its own virtual environment. When I go to the corresponding site-packages directory and run python script.py, the CLI loads properly with the information about the aliases etc. However, when I simply run script (the entry point) from the root directory of the environment, the shell loads but I don't see any of the messages about the aliases etc., and some of the functionality that depends on the utils package isn't available either.
What could I be doing wrong? How can I make the command work as if it was running with all the necessary packages available?
Other information that may be useful
site-packages has copies of config and utils
config is included in the package as part of the package_data parameter in setup.py as ['./config/*.env', './config/*.json']
All import statements begin from source, i.e. from source.utils.config import etc.
which script gives me the location as venv/bin/script, but that bin directory does not have the packages. (Which is expected, I think.)
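One thing worth checking — this is an assumption about the cause, not something stated above: an entry-point script runs with whatever working directory it is invoked from, so any code that opens config/... through a relative path will only work when the current directory happens to be site-packages/source. A minimal sketch of resolving packaged files relative to the package itself instead:

from pathlib import Path

import source  # the installed package from the layout above

# directory of the installed package, independent of the caller's CWD
CONFIG_DIR = Path(source.__file__).resolve().parent / "config"

def config_path(name):
    # e.g. config_path("settings.env") -> .../site-packages/source/config/settings.env
    return CONFIG_DIR / name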
I've been searching the net for quite some time now, but I can't seem to wrap my head around how I can distribute my Python scripts to my end users.
I've been running my scripts from the command line with this command: python samplemodule.py "args1"
This is also the way I want my users to use it on their end from their command line. But my worry is that certain modules have dependencies on other libraries or modules.
My scripts work when they are all in the project's root directory, but everything crumbles when I try to package them and put them in subdirectories.
For example, I can no longer run my scripts, since I get an error when importing a module from the data subdirectory.
This is my project structure.
MyProject
\formatter
__init__.py
__main__.py
formatter.py
addfilename.py
addscrapertype.py
...\data
__init__.py
helper.py
csv_formatter.py
setup.py
The csv_formatter.py file is just a wrapper that calls formatter.main.
Update: I was now able to generate a tar.gz package, but the package wasn't callable when installed on my machine.
This is the setup.py:
import setuptools

with open("README.md", "r") as fh:
    long_description = fh.read()

setuptools.setup(
    name="formatter",
    version="1.0.1",
    author="My Name",
    author_email="sample@email.com",
    description="A package for cleaning and reformatting csv data",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/RhaEL012/Python-Scripts",
    packages=["formatter"],
    include_package_data=True,
    package_data={
        # If any package contains *.csv, *.rst or *.txt files, include them:
        "": ["*.csv", "*.rst", "*.txt"],
    },
    entry_points={
        "console_scripts": [
            "formatter=formatter.formatter:main"
        ]
    },
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires='>=3.6',
    install_requires=[
        "pandas"
    ]
)
Now, after installing the package on the machine, I wasn't able to call the module, and it results in an error:
Z:\>addfilename "C:\Users\Username\Desktop\Python Scripts\"
Update: I tried installing with setup.py in a virtual environment just to see where the error was coming from.
I installed it and got the following error: FileNotFoundError: [Errno 2] no such file or directory: 'README.md'
I tried including README.md in MANIFEST.in, but still no luck.
So I made the long description a plain string just to see if the install would proceed.
The install proceeded, but then I encountered an error saying that the package directory 'formatter' does not exist.
As I am not able to look into your specific files, I will just explain how I usually tackle this issue.
This is the way I usually set up command-line interface (CLI) tools. The project folder looks like:
Projectname
├── modulename
│   ├── __init__.py                  # this one is empty in this example
│   ├── cli
│   │   └── __init__.py              # this is the __init__.py that I refer to hereafter
│   └── other_subfolder_with_scripts
└── setup.py
All functionality lives within the modulename folder and its subfolders.
In my __init__.py I have:
def main():
    # perform the things that need to be done
    # also, all imports are within the function call
    print('doing the stuff I should be doing')
but I think you can also import whatever you want into the __init__.py and still reference it in the manner I do in setup.py.
In setup.py we have:
import setuptools

setuptools.setup(
    name='modulename',
    version='0.0.0',
    author='author_name',
    packages=setuptools.find_packages(),
    entry_points={
        'console_scripts': ['do_main_thing=modulename.cli:main']  # this refers directly to a function available in __init__.py
    },
)
Now install the package with pip install "path to where setup.py is". Then, once it is installed, you can call:
do_main_thing
>>> doing the stuff I should be doing
For the documentation I use: https://setuptools.readthedocs.io/en/latest/.
My recommendation is to start with this and slowly add the functionality that you want, then solve your problems step by step, like adding a README.md, etc.
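For instance, for the README.md error mentioned in the question, one common pattern is to read the file relative to setup.py and fall back to an empty description when it is missing — a sketch under that assumption, not a required fix:

import os
import setuptools

here = os.path.abspath(os.path.dirname(__file__))
readme = os.path.join(here, "README.md")

long_description = ""
if os.path.exists(readme):
    with open(readme, "r") as fh:
        long_description = fh.read()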
I disagree with the other answer. You shouldn't run scripts in __init__.py but in __main__.py instead.
Projectfolder
├── formatter
│   ├── __init__.py
│   └── cli
│       ├── __init__.py              # Import your class module here
│       ├── __main__.py              # Call your class module here, using __name__ == "__main__"
│       └── your_class_module.py
└── setup.py
If you don't want to supply a readme, just remove that code and enter a description manually.
I use https://setuptools.readthedocs.io/en/latest/setuptools.html#find-namespace-packages instead of manually setting the packages.
You can now install your package by just running pip install ./ like you have been doing before.
After you've done that, run python -m formatter.cli arguments. This runs the __main__.py file you've created in the cli folder (or whatever you've called it).
An important note about packaging modules is that you need to use relative imports. You'd use from .your_class_module import YourClassModule in that __init__.py for example. If you want to import something from an adjacent folder you need two dots, from ..helpers import HelperClass.
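Putting those pieces together, a minimal sketch of the two cli files might look like this — your_class_module and its run() method are placeholders, not code from the question:

# formatter/cli/__init__.py
from .your_class_module import YourClassModule  # relative import within the cli package

# formatter/cli/__main__.py
from formatter.cli import YourClassModule

def main():
    YourClassModule().run()  # assumes the class exposes a run() method

if __name__ == "__main__":
    main()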
I'm not sure if this is helpful, but usually I package my python scripts using the wheel package:
pip install wheel
python setup.py sdist bdist_wheel
After those two commands, a .whl package is created in the dist folder, which you can then either upload to PyPI and download/install from there, or install offline with pip install ${PackageName}.whl.
Here's a useful user guide in case there is something else that I didn't explain:
https://packaging.python.org/tutorials/packaging-projects/
I have a GUI app that another developer wrote, which I am trying to turn into a conda package that installs a desktop icon users can then launch seamlessly.
Below is the folder structure and the code that I can share:
Documents/
└── project/
    ├── bld.bat
    ├── meta.yaml
    ├── setup.py
    ├── setup.cfg
    └── mygui/
        ├── MainGUI.py
        ├── __init__.py
        ├── __main__.py
        └── images/
            └── icon.ico
Documents\project\bld.bat:
python setup.py install install_shortcuts
if errorlevel 1 exit 1
Documents\project\meta.yaml:
package:
  name: mygui
  version: 1.2.3

source:
  path: ./

build:
  number: 1
  string: py{{ CONDA_PY }}_{{ ARCH }}

requirements:
  build:
    - python 2.7.13
    - pyvisa 1.4
    - setuptools
    - setuptools-shortcut
    - pydaqmx
    - pmw
    - matplotlib
    - pyserial
    - pil
  run:
    - python 2.7.13
    - pyvisa 1.4
    - pydaqmx
    - pmw
    - matplotlib
    - pyserial
    - pil

about:
  license:
  summary: My GUI application
Documents\project\setup.py:
from setuptools import setup, find_packages

setup(
    name='mygui',
    version='1.2.3',
    author='Me',
    author_email='me@myemail.com',
    description=(
        "An App I wrote."
    ),
    long_description="Actually, someone else wrote it but I'm making the conda package.",
    packages=find_packages(),
    package_data={
        'mygui': ['images/*ico'],
    },
    entry_points={
        'gui_scripts': [
            'MyApp = mygui.__main__:main'
        ],
    },
    install_requires=['pyvisa==1.4', 'pmw', 'pydaqmx', 'matplotlib', 'pyserial', 'pil']
)
Documents\project\setup.cfg:
[install]
single-version-externally-managed=1
record=out.txt
[install_shortcuts]
iconfile=mygui/images/icon.ico
name=MyApp
group=My Custom Apps
desktop=1
Documents\project\mygui\__main__.py:
from MainGUI import main

if __name__ == '__main__':
    main()
The original GUI developer had a code block that went like:
if __name__ == '__main__':
    <code here>
so I took all the code where <code here> would be and cut/pasted it into:
def main():
    <code here>

if __name__ == '__main__':
    main()
all inside the MainGUI.py file. I cannot share the specifics of the code. But it works as I'll describe below.
When I open up my code in PyCharm and hit run or debug in a conda environment with all the packages listed in the meta.yaml file the application works just fine with no warnings or errors. However, when I run conda build, upload to the anaconda channel, and then install on the machine, the desktop icon gets created but the application won't run when I click on it.
Is there something wrong in my setup files? How can I debug the reason why the application fails? I don't see any command window or output of any kind and PyCharm doesn't complain so it must be something after the application gets made.
Update: This is my first time creating a conda package that installs itself as an app like this, and I used a colleague's setup.py files as a template. I was curious whether the conda package he created on one of his projects was structurally different from the conda package coming out of my conda-build, and it is. If I take that tar.bz2 file and unzip it, this is the structure that I get:
mygui-1.2.3-py27_32/
├── info/
│   ├── about.json
│   ├── files
│   ├── has_prefix
│   ├── index.json
│   └── paths.json
├── Lib/
│   └── site-packages/
│       └── mygui-1.2.3-py2.7.egg-info/
│           ├── dependency_links.txt
│           ├── entry_points.txt
│           ├── PKG-INFO
│           ├── requires.txt
│           ├── SOURCES.txt
│           └── top_level.txt
├── Menu/
│   ├── mygui.ico
│   └── mygui_menu.json
└── Scripts/
    ├── MyApp.exe
    ├── MyApp.manifest
    └── MyApp.pyw
My colleague gets the same structure, but he also gets a directory called Lib/site-packages/mygui/, which contains the source code as .py and .pyc files and subdirectories. Why is my package not getting these source files, and could this be the reason my application won't launch? I also don't see any of the data files I've indicated in my setup.py file (the *.ico files).
I was finally able to get this app built so that it installs the shortcuts on the desktop and includes the source code.
The problem was with the imports. Since the original source code was written YEARS ago, it didn't use absolute_import.
I had to go through and make sure
from __future__ import (
    unicode_literals,
    print_function,
    division,
    absolute_import
)
was at the top of every file that made imports, and then also change the relative imports to absolute imports. In the root __init__.py file, however, I left relative imports. Another thing I was doing wrong: in one version of my setup.py I was including these four __future__ imports. Don't do that, or Python will complain about unicode_literals. I just left them out of setup.py and it was fine.
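For illustration only, using the MainGUI module from the project tree above as an example, the kind of change involved looks like this:

# before: implicit relative import that stops working once absolute_import is in effect
# import MainGUI
# after: explicit absolute import
from mygui import MainGUI
# or, in the root __init__.py, where relative imports were kept:
from . import MainGUI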
To debug the conda package and find more import errors I would do the following:
1. Test the code in PyCharm by running __main__.py.
2. If that worked, I would build the conda package.
3. Install the conda package.
4. In a command window I would run python "C:\Miniconda2\envs\myenv\Scripts\MyApp-script.pyw". This would give me the next error that PyCharm did not.
I would return to the source code, make the necessary change, and repeat steps 1-4 until the program launched from the desktop icon.
Python newbie here. I'm trying to package a console app following this doc. To this end I created the following directory structure:
.
├── bin
│   └── txts
├── setup.py
└── txtstyle
    ├── __init__.py
    ├── ...
    └── [snip]
My app has one executable script which I placed under bin. I could successfully run
python setup.py sdist
and create a tar.gz. However, I can't execute the script under bin due to import errors.
So my question is how can the script access the main module from under bin?
You need to install the package. This puts all modules on the global module path and thus allows you to import them. For development, use python setup.py develop, which links the modules into the module path instead of copying them. This way you don't need to reinstall the package each time you change a module.
There is a tool called virtualenv which creates virtual Python environments. You can install modules into such an environment without touching the global Python interpreter.
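For completeness, a sketch of how the bin/txts script could be declared in setup.py so that, once the package is installed, the script can import txtstyle from site-packages — the metadata values here are placeholders:

from setuptools import setup, find_packages

setup(
    name='txtstyle',
    version='0.1',              # placeholder version
    packages=find_packages(),
    scripts=['bin/txts'],       # installs the script alongside the interpreter's other scripts
)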