Python Poetry and Script Entrypoints

I'm trying to use Poetry and the scripts option to run a script, like so:
pyproject.toml
[tool.poetry.scripts]
xyz = "src.cli:main"
Folder layout
.
├── poetry.lock
├── pyproject.toml
├── run-book.txt
└── src
    ├── __init__.py
    └── cli.py
I then perform an install like so:
❯ poetry install
Installing dependencies from lock file
No dependencies to install or update
If I then try to run the command, it's not found:
❯ xyz
zsh: command not found: xyz
Am I missing something here? Thanks!

Poetry is likely installing the script in your user local directory. On Ubuntu, for example, this is $HOME/.local/bin. If that directory isn't in your PATH, your shell will not find the script.
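You can check and fix this from the shell (zsh/bash shown; add the export line to your ~/.zshrc to make it permanent):
❯ echo $PATH
❯ export PATH="$HOME/.local/bin:$PATH"
❯ xyz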
A side note: it is generally a good idea to put a subdirectory with your package name in the src directory, and generally better not to have an __init__.py directly in src. Also consider renaming cli.py to __main__.py; this will allow your package to be run as a script using python -m package_name.
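For illustration, a minimal src/<package_name>/__main__.py could look like this (hypothetical contents, since the original cli.py isn't shown):
def main():
    # placeholder for the real CLI logic
    print("Hello from xyz")

if __name__ == "__main__":
    main()
It can then be run either as python -m package_name or via the xyz script.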

You did everything right, except that you neither activated the virtual environment nor ran that alias (xyz) via poetry run xyz. One can activate the virtualenv via poetry shell; afterwards, xyz should run from your shell.
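For example, either of these should work:
❯ poetry run xyz
or, after activating the environment:
❯ poetry shell
❯ xyz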
PS: @jisrael18's answer is totally right. Normally one would have another folder (which is your main Python module) inside the src folder.
.
├── src
│   └── pyproj
│       ├── __init__.py
│       └── __main__.py
...
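With that layout, the pyproject.toml entries would look roughly like this (a sketch; it assumes the package is named pyproj and that __main__.py defines a main() function):
[tool.poetry]
packages = [{ include = "pyproj", from = "src" }]

[tool.poetry.scripts]
xyz = "pyproj.__main__:main"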

Related

Python CLI entry point doesn't work as expected

The Setup
OS: Ubuntu 20.04
Python: 3.8.5 | pip: 20.0.2 | venv
Repo
.
├── build
├── dist
├── source.egg-info
├── source
├── readme.md
├── requirements.txt
├── setup.py
└── venv
source dir
.
├── config
├── examples
├── script.py
├── __init__.py
├── tests
└── utils
The important directories within the source directory are config, which contains a few .env and .json files; and utils, which is a package that contains a sub-package called config.
The CLI app is started by running script.py, which references config and imports modules from utils. Ideally, when run, it should load a bunch of environment variables, create some command aliases, and display the application's prompt. (After which the user can start working within that shell.)
I created a wheel to install this application. The setup.py contains an entry point as follows:
entry_points={
    'console_scripts': [
        'script=source.script:main'
    ]
}
The Problem
I pip installed the wheel in a test directory with its own virtual environment. When I go to the corresponding site-packages directory and run python script.py, the CLI loads properly with the information about the aliases etc. However, when I simply run script (the entry point) from the root directory of the environment, the shell loads but I don't see any of the messages about the aliases, and some of the functionality that depends on the utils package isn't available either.
What could I be doing wrong? How can I make the command work as if it was running with all the necessary packages available?
Other information that may be useful
site-packages has copies of config and utils
config is included in the package as part of the package_data parameter in setup.py as ['./config/*.env', './config/*.json']
All import statements begin from source, i.e. from source.utils.config import etc.
which script gives me the location as venv/bin/script, but that bin directory does not have the packages. (Which is expected, I think.)
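One thing worth checking: a console-script entry point runs from whatever directory your shell is in, so a relative path like config/foo.env resolves against the current working directory rather than against site-packages. A minimal sketch of package-relative resolution, assuming the config files ship inside the source package (as the package_data setting suggests):
from pathlib import Path

# Resolve against this module's location, not the current working directory.
PACKAGE_DIR = Path(__file__).resolve().parent
CONFIG_DIR = PACKAGE_DIR / "config"

def read_env(name):
    # Read a config file shipped with the package (hypothetical helper).
    return (CONFIG_DIR / name).read_text()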

Using git submodules with python

I've read a lot of blog posts and questions on this site about the usage of git submodules and still have no idea how to better use them with python.
I mean, what is the easier way to manage dependencies if I have such a package:
├── mypkg
│   └── __init__.py
├── setup.py
└── submodules
    ├── subm1
    └── subm2
Then, what if I need to use "mypkg" as a submodule for "top_level_pkg":
├── setup.py
├── submodules
│   └── mypkg
└── top_level_package
    └── __init__.py
In that case, I want to run pip install . and have everything resolve correctly (each submodule installed into the venv in the correct order).
What I've tried:
Installing each submodule using pip run in a subprocess. But this seems hacky and hard to manage (Unexpected installation of GIT submodule).
Using install_requires with setuptools.find_packages(), but without success.
Using a requirements.txt file for each submodule, but I can't find a way to automate it so that pip could automatically install all requirements for all submodules.
Ideally, I imagine a separate setup.py file for each submodule with install_requires=['submodules/subm1', 'submodules/submn'], but setuptools does not support it.
I'm not saying it's impossible, but it is very hard and very tricky. A safer way is to turn each submodule into an installable Python package (with its own setup.py) and install the submodules from Git.
This link describes how to install packages from Git with setup.py: https://stackoverflow.com/a/32689886/2952185
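With each submodule made installable, the dependency entries in the top-level setup.py would look something like this (the repository URLs are hypothetical placeholders):
install_requires=[
    # PEP 508 direct references; pip installs these straight from Git
    'subm1 @ git+https://github.com/example/subm1.git',
    'subm2 @ git+https://github.com/example/subm2.git',
],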
Thanks to Gijs Wobben and sinoroc I came up with a solution that works for my case:
install_requires=['subm1 @ file://localhost/<CURRENT_DIR>/path/to/subm1']
I have managed to install a Python package from a git submodule together with a main package. These are proprietary and are never published to PyPI. And both pip and tox seem to work just fine.
To set the context, I have a git repo with a single Python package and a single git submodule; the git submodule also contains a single Python package. I think this structure is as generic and simple as it can possibly be, here's a visualization:
main-git-repo-name
├── mainpkg
│   └── __init__.py
├── setup.py
├── tests
└── util-git-repo-name (this is a git submodule)
    ├── setup.py
    ├── test
    └── utilpkg
        └── __init__.py
I wanted to have pip install everything in a single invocation, and utilpkg should be importable from mainpkg via just import utilpkg (not nested oddly).
The answer for me was all in setup.py:
First, specify the packages to install and their locations:
packages=find_packages(exclude=["tests"])
    + find_packages(where="util-git-repo-name/utilpkg", exclude=["test"]),
package_dir={
    "mainpkg": "mainpkg",
    "utilpkg": "util-git-repo-name/utilpkg"
},
Second, copy all the install_requires items from the git submodule package's setup.py file into the top level. In my case the utility package is an API client generated by swagger-codegen, so I had to add:
install_requires=[
    "urllib3 >= 1.15", "six >= 1.10", "certifi", "python-dateutil",
    ...],
Anyhow, when running pip3 install . this config results in exactly what I want in the site-packages area: a directory mainpkg/ and a directory utilpkg/
HTH

Failure to import names when custom project is installed in virtual environment

Problem
I have read this post, which provides a way to permanently avoid the sys.path hack when importing names between sibling directories. However, I followed the procedure listed in that post and found that I could not import the installed package (i.e. test).
Here is what I have already done.
Step 1: create a project that looks like the following. The __init__.py files are empty.
test
├── __init__.py
├── setup.py
├── subfolder1
│   ├── __init__.py
│   └── program1.py
└── subfolder2
    ├── __init__.py
    └── program2.py
# setup.py
from setuptools import setup, find_packages

setup(name="test", version="0.1", packages=find_packages())

# program1.py
def func1():
    print("I am from func1 in subfolder1/func1")

# program2.py
from test.subfolder1 import program1

program1.func1()
Step 2: create a virtual environment in the project root directory (i.e. the test directory):
conda create -n test --clone base
launch a new terminal and conda activate test
pip install -e .
Running conda list shows the following, which means my test project is indeed installed in the virtual environment:
...
test 0.1 dev_0 <develop>
...
Step 3: go to subfolder2 and run python program2.py, but unexpectedly it returned:
ModuleNotFoundError: No module named 'test.subfolder1'
The issue is that I think test should be available as long as I am in the virtual environment. However, that does not seem to be the case here.
Could someone help me? Thank you in advance!
You need to create an empty __init__.py file in subfolder1 to make it a package.
Edit:
You should change the import in program2.py:
from subfolder1 import program1
Or you can move setup.py a level up.
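If you move setup.py a level up instead, the layout becomes something like this (project-root is a placeholder name):
project-root
├── setup.py
└── test
    ├── __init__.py
    ├── subfolder1
    │   ├── __init__.py
    │   └── program1.py
    └── subfolder2
        ├── __init__.py
        └── program2.py
With setup.py above the test package, find_packages() discovers test, test.subfolder1 and test.subfolder2, so the original from test.subfolder1 import program1 works unchanged.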

FileNotFoundError when using python -m pytest vs. pytest

I recently changed the IDE I am using to VSCode. For the most part I like it, but there is one particular problem that I can't seem to resolve. I didn't realize it was a problem either, until I moved IDEs.
I have a directory structure like this:
my_app
├── env
│   ├── bin
│   ├── include
│   ├── lib
│   ├── lib64 -> lib
│   ├── pyvenv.cfg
│   └── share
├── my_app
│   ├── expected_results
│   ├── __init__.py
│   ├── test_data
│   └── tests
├── pytest.ini
├── README.rst
├── setup.cfg
└── setup.py
When I launch my virtual environment I am sitting at the root of this directory structure.
I run my tests by issuing this command (or providing additional options). This currently works:
pytest
But, when VSCode launches, it spits out an error saying it can't find an expected file:
E FileNotFoundError: [Errno 2] No such file or directory: 'my_app/expected_results/expected_available_items.yml'
After some investigation, I figured out that this is because when VSCode launches it issues the following command:
python -m pytest
I am setting that path by doing this:
import pathlib
EXPECTED_RESULTS_BASE = pathlib.Path("my_app/expected_results")
expected_results = EXPECTED_RESULTS_BASE.joinpath('expected_available_items.yml')
What do I need to modify so that my tests will continue to operate when I just issue a pytest command AND will operate if I (or my IDE, apparently) issue python -m pytest?
I hope it's safe to assume that VSCode is launching this from the root of my_app like I am?
Probably not enough information to answer this for you straight out, but let's try some things:
In your test code, above the line where you are getting the error, insert some lines like these and see if they print out what you're expecting:
import os
print(os.getcwd())
print(EXPECTED_RESULTS_BASE.absolute())
Since you're using a venv and the error results from calling pytest with a different command, try using which to see if you're actually calling different things, both before and after activating your venv:
which pytest
which python
python -m pytest will call the pytest module installed with the version of python you've just called. If python calls a different version than you're getting from pytest inside your venv, then that could be the problem.
You should be able to check which python version pytest is calling by looking at the hashbang at the top of the file
head -1 $(which pytest)
On my system, macOS with anaconda Python installed, I get the following from those commands
$ which pytest
/Users/Shannon/anaconda/bin/pytest
$ which python
/Users/Shannon/anaconda/bin/python
$ head -1 $(which pytest)
#!/Users/Shannon/anaconda/bin/python
This tells me that pytest is calling the same version of python I get when I call python. As such, pytest and python -m pytest should result in the same thing for me, inside my default environment.
Are you sure VSCode is loading your venv correctly before running the test?
Assuming you have a virtual environment selected for your Python interpreter in VS Code:
Open the terminal in VS Code
Let the virtual environment activate
Run python -m pip install pytest
That will install pytest into the virtual environment. The python -m pytest invocation fails because pytest is currently only installed in some global Python install on your PATH, which the virtual environment doesn't have access to.
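Separately, if you want the expected-results path to be immune to where (and how) pytest is launched, you could resolve it relative to the file itself instead of the current working directory. A minimal sketch, assuming the path-setting code lives in a module directly under my_app/:
import pathlib

# Anchor to this file's location rather than the current working directory.
HERE = pathlib.Path(__file__).resolve().parent
EXPECTED_RESULTS_BASE = HERE / "expected_results"
expected_results = EXPECTED_RESULTS_BASE.joinpath('expected_available_items.yml')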

App made with Python setuptools installs but won't launch from Desktop shortcut

I have a GUI app that another developer wrote that I am trying to turn into a conda package that will install a desktop icon on the desktop that users can then launch seamlessly.
Below is the folder structure and the code that I can share:
Documents/
└── project/
    ├── bld.bat
    ├── meta.yaml
    ├── setup.py
    ├── setup.cfg
    └── mygui/
        ├── MainGUI.py
        ├── __init__.py
        ├── __main__.py
        └── images/
            └── icon.ico
Documents\project\bld.bat:
python setup.py install install_shortcuts
if errorlevel 1 exit 1
Documents\project\meta.yaml:
package:
  name: mygui
  version: 1.2.3

source:
  path: ./

build:
  number: 1
  string: py{{ CONDA_PY }}_{{ ARCH }}

requirements:
  build:
    - python 2.7.13
    - pyvisa 1.4
    - setuptools
    - setuptools-shortcut
    - pydaqmx
    - pmw
    - matplotlib
    - pyserial
    - pil
  run:
    - python 2.7.13
    - pyvisa 1.4
    - pydaqmx
    - pmw
    - matplotlib
    - pyserial
    - pil

about:
  license:
  summary: My GUI application
Documents\project\setup.py:
from setuptools import setup, find_packages

setup(
    name='mygui',
    version='1.2.3',
    author='Me',
    author_email='me@myemail.com',
    description=(
        "An App I wrote."
    ),
    long_description="Actually, someone else wrote it but I'm making the conda package.",
    packages=find_packages(),
    package_data={
        'mygui': ['images/*ico'],
    },
    entry_points={
        'gui_scripts': [
            'MyApp = mygui.__main__:main'
        ],
    },
    install_requires=['pyvisa==1.4', 'pmw', 'pydaqmx', 'matplotlib', 'pyserial', 'pil']
)
Documents\project\setup.cfg:
[install]
single-version-externally-managed=1
record=out.txt
[install_shortcuts]
iconfile=mygui/images/icon.ico
name=MyApp
group=My Custom Apps
desktop=1
Documents\project\mygui\__main__.py:
from MainGUI import main

if __name__ == '__main__':
    main()
The original GUI developer had a code block that went like:
if __name__ == '__main__':
    <code here>
so I took all the code where <code here> would be and moved it into:
def main():
    <code here>

if __name__ == '__main__':
    main()
all inside the MainGUI.py file. I cannot share the specifics of the code. But it works as I'll describe below.
When I open up my code in PyCharm and hit Run or Debug in a conda environment with all the packages listed in the meta.yaml file, the application works just fine with no warnings or errors. However, when I run conda build, upload to the Anaconda channel, and then install on the machine, the desktop icon gets created but the application won't run when I click on it.
Is there something wrong in my setup files? How can I debug the reason why the application fails? I don't see any command window or output of any kind, and PyCharm doesn't complain, so it must be something after the application gets made.
Update: This is my first time creating a conda package that installs itself as an app like this, and I used a colleague's setup.py files as a template. I was curious whether the conda package he created on one of his projects was structurally different from the conda package coming out of my conda-build, and it is. If I take that tar.bz2 file and unzip it, this is the structure that I get:
mygui-1.2.3-py27_32/
├── info/
│   ├── about.json
│   ├── files
│   ├── has_prefix
│   ├── index.json
│   └── paths.json
├── Lib/
│   └── site-packages/
│       └── mygui-1.2.3-py2.7.egg-info/
│           ├── dependency_links.txt
│           ├── entry_points.txt
│           ├── PKG-INFO
│           ├── requires.txt
│           ├── SOURCES.txt
│           └── top_level.txt
├── Menu/
│   ├── mygui.ico
│   └── mygui_menu.json
└── Scripts/
    ├── MyApp.exe
    ├── MyApp.manifest
    └── MyApp.pyw
But my colleague gets the same structure, except that he also gets a directory called Lib/site-packages/mygui/, for example, which contains the source code as .py and .pyc files and directories. Why is my package not getting these source files, and could this be the reason my application won't launch? I also don't see any of the data files I indicated in my setup.py file (the *.ico files).
I was finally able to get this app made where it would install the shortcuts on the desktop and include the source code.
The problem was with the imports. Since the original source code was written years ago, it didn't use absolute imports.
I had to go through and make sure
from __future__ import (
    unicode_literals,
    print_function,
    division,
    absolute_import
)
was at the top of every file that made imports, and then also change the relative imports to absolute imports. In the root __init__.py file, however, I left the relative imports. Another thing I was doing wrong: in one version of my setup.py I was including these four imports as well. Don't do that, or Python will complain about unicode_literals. I just left them out of setup.py and it was fine.
To debug the conda package and find more import errors, I would do the following:
1. Test the code in PyCharm by running __main__.py.
2. If that worked, build the conda package.
3. Install the conda package.
4. In a command window, run python "C:\Miniconda2\envs\myenv\Scripts\MyApp-script.pyw". This would give me the next error that PyCharm did not.
5. Return to the source code, make the necessary change, and repeat steps 1-4 until the program launched from the desktop icon.
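As an aside, one more debugging trick that can help with invisible failures like this (my own suggestion, not part of the original fix): temporarily register the entry point under console_scripts instead of gui_scripts, so launching the app opens a console window that shows the traceback:
entry_points={
    'console_scripts': [
        # Temporary, for debugging only; switch back to gui_scripts for release.
        'MyApp = mygui.__main__:main'
    ],
},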
