Python venv shim shebang absolute path not working

EDIT:
This appears to be a shebang character-limit issue. The workaround appears to be to create a wrapper script (sketched just below). If anyone has a better solution I'm all ears!
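For context on why a wrapper helps: on Linux the kernel only reads a limited number of bytes from the shebang line (roughly 127 characters on older kernels such as RHEL 7's 3.10), so a very long absolute path to the venv interpreter gets truncated and the shell reports Command not found. A minimal sketch of the wrapper idea, with placeholder paths you would swap for your own:

#!/usr/bin/env python
# Hedged sketch of the wrapper workaround; both paths below are placeholders.
# The wrapper's own shebang stays short, and it re-executes the generated
# shim through the venv interpreter, so the shim's long shebang is never parsed.
import os
import sys

VENV_PYTHON = "/very/long/path/to/venv/bin/python"  # assumption: your venv interpreter
REAL_CLI = "/very/long/path/to/venv/bin/cli"        # assumption: the generated shim

os.execv(VENV_PYTHON, [VENV_PYTHON, REAL_CLI] + sys.argv[1:])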
I'm building a CLI tool. I'm using a virtual environment to build my package and create a command I can execute from the command line. I followed the Click setuptools integration guide here:
https://click.palletsprojects.com/en/8.1.x/setuptools/
I'm using pip install -e . to install the package.
I can see the ./venv/bin directory in my $PATH.
I can see the shim for the cli in the ./venv/bin directory (see tree below).
Running where cli gives me <full path to venv>/venv/bin/cli, so it's on my path.
If I try to run cli then I get cli: Command not found.
If I replace the shebang in the generated cli shim with #!/usr/bin/env python then it works!
I have tried setting the permissions to 777; this doesn't resolve the issue.
My first question is why the default shebang doesn't work; my second is how to resolve that issue, or how to make the shim default to #!/usr/bin/env python as the shebang.
I've put some more info about my environment etc below:
My environment:
shell is tcsh (can't switch)
Python 3.8.2
rhel 7
simple test dir structure is as follows:
.
├── cli_test
│   ├── __init__.py
│   └── cli.py
├── cli_test.egg-info
│   ├── dependency_links.txt
│   ├── entry_points.txt
│   ├── PKG-INFO
│   ├── requires.txt
│   ├── SOURCES.txt
│   └── top_level.txt
├── venv
│   ├── bin
│   │   ├── activate
│   │   ├── activate.csh
│   │   ├── activate.fish
│   │   ├── Activate.ps1
│   │   ├── cli
│   │   ├── easy_install
│   │   ├── easy_install-3.8
│   │   ├── pip
│   │   ├── pip3
│   │   ├── pip3.8
│   │   ├── python
│   │   └── python3
│   └── pyvenv.cfg
└── setup.py
cli.py:
def cli():
    print("Hello world")
setup.py:
#!/usr/bin/env python
from setuptools import setup, find_packages

setup(
    name="cli_test",
    version=0.1,
    packages=find_packages(),
    python_requires='>=3.8',
    install_requires=[
        "Click"
    ],
    entry_points={
        'console_scripts': [
            'cli=cli_test.cli:cli'
        ],
    },
)
The generated cli shim:
#!<full path to my venv>/venv/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'cli-test','console_scripts','cli'
__requires__ = 'cli-test'
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('cli-test', 'console_scripts', 'cli')()
    )
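A quick way to confirm the shebang length is the problem is to measure it; a hedged diagnostic sketch (adjust the path to wherever the shim lives):

# Hedged diagnostic: print the length of the shim's shebang line.
# On older Linux kernels (including RHEL 7's 3.10), a shebang longer than
# roughly 127 characters gets truncated, leaving an invalid interpreter path.
with open("venv/bin/cli") as f:
    shebang = f.readline().rstrip("\n")
print(len(shebang), shebang)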

Related

Getting "ModuleNotFoundError: No module named" trying to run a unit test in Python

I can run test_authentication.py no problem in PyCharm (Shift + F10). However, when I try to run python test_authentication.py in Anaconda Prompt, I am getting this error:
(etradebot) PS H:\My Drive\etradebot\tests> python test_authentication.py
Traceback (most recent call last):
  File "H:\My Drive\etradebot\tests\test_authentication.py", line 4, in <module>
    from authentication.authentication import Authentication
ModuleNotFoundError: No module named 'authentication'
This is my project's structure.
etradebot/
├── authentication/
│   ├── __init__.py
│   ├── authentication.py
│
├── etrade/
│   ├── __init__.py
│   ├── etrade.py
│
├── model/
│   ├── __init__.py
│   ├── model.py
│
├── tests/
│   ├── __init__.py
│   ├── test_authentication.py
│   ├── test_etrade.py
│   ├── test_model.py
│   ├── test_utils.py
│
├── utils/
│   ├── __init__.py
│   ├── fake_data.py
│   ├── logging_config.py
│
├── .gitignore
├── chromedriver.exe
├── msedgedriver.exe
├── main.py
├── strategy.py
Everywhere else in my project where I import the authentication.py module, it imports fine using the line:
from authentication.authentication import Authentication
Both my Anaconda Prompt and PyCharm IDE are using the same environment etradebot. Why does my test work in PyCharm but not Anaconda Prompt? Any suggestions are greatly appreciated.
I tried running python test_authentication.py in Anaconda Prompt and expected the unit test to run normally. Running test_authentication.py in PyCharm (Shift + F10) did work. I double-checked that both my Anaconda Prompt and PyCharm were using the same Anaconda environment etradebot, and they are. So I am confused.
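A plausible explanation (hedged): PyCharm puts the project root on sys.path when it runs a test configuration, whereas python test_authentication.py only puts the tests directory itself on sys.path, so the top-level authentication package is never visible. Two illustrative ways around that, assuming the layout shown above:

# Option 1: run the test as a module from the project root, so the root
# directory ends up on sys.path:
#   (etradebot) PS H:\My Drive\etradebot> python -m unittest tests.test_authentication
#
# Option 2 (hypothetical edit to tests/test_authentication.py): prepend the
# project root before importing the package under test.
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from authentication.authentication import Authentication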

Having trouble opening files with GitHub Actions

I created a Python tool for processing point clouds, and I also wrote some unit tests to make sure newly implemented features don't break the others. I want to automate these test runs with GitHub Actions: when I push changes to my repo, I want the tests to start.
So I wrote a python-test.yml file that sets up Python, installs my dependencies, and executes my test.py file. The problem is that when python test.py is executed, I get this error:
Traceback (most recent call last):
  File "src/test.py", line 68, in <module>
    class TestDo(unittest.TestCase):
  File "src/test.py", line 69, in TestDo
    with open("../test/data/pipeline.json", "r") as p:
FileNotFoundError: [Errno 2] No such file or directory: '../test/data/pipeline.json'
I don't understand that, because my pipeline.json file is indeed present in my /test/data folder:
.
├── LICENSE.txt
├── README.rst
├── environment.yml
├── pyproject.toml
├── setup.cfg
├── setup.py
├── src
│   ├── pdal_parallelizer
│   │   ├── __init__.py
│   │   ├── __main__.py
│   │   ├── bounds.py
│   │   ├── cloud.py
│   │   ├── do.py
│   │   ├── file_manager.py
│   │   └── tile.py
│   └── test.py
└── test
    └── data
        ├── input
        │   ├── echantillon_10pts.laz
        │   ├── echantillon_10pts2.laz
        │   └── echantillon_10pts3.laz
        ├── output
        │   └── output.laz
        ├── pipeline.json
        └── temp
            ├── temp__echantillon_10pts.pickle
            ├── temp__temp_name.pickle
            └── temp_name.pickle
Does anyone have an idea of what I'm doing wrong?
Regards.
EDIT:
Here is my python-test.yml file:
name: Use Setup-Miniconda From Marketplace To Run Tests
on: [push]
jobs:
  miniconda:
    name: Miniconda
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - uses: conda-incubator/setup-miniconda@v2
        with:
          activate-environment: test
          environment-file: environment.yml
          python-version: 3.8
          auto-activate-base: false
      - shell: bash -l {0}
        run: |
          conda info
          conda list
      - name: Run tests
        shell: bash -l {0}
        run: |
          conda install pdal
          python -m pip install --upgrade pip
          pip install dask distributed pdal
          python src/test.py
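A likely cause (hedged): on the GitHub runner, python src/test.py is launched from the repository root, so the relative path ../test/data/pipeline.json points outside the checkout, whereas running locally from inside src/ it resolves correctly. One sketch of a fix is to build the path from the test file's own location instead of the current working directory (assuming the layout shown above):

# Hypothetical change inside src/test.py: resolve the data file relative to
# this file so the test works no matter which directory launches it.
from pathlib import Path

DATA_DIR = Path(__file__).resolve().parent.parent / "test" / "data"

with open(DATA_DIR / "pipeline.json", "r") as p:
    ...  # existing pipeline-loading code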

Proper setup.py creation

So I have a project which is structured as shown below:
project
├── src
│   ├── api
│   │   ├── __init__.py
│   │   └── api.py
│   ├── instance
│   │   ├── __init__.py
│   │   └── config.py
│   ├── packages
│   │   ├── __init__.py
│   │   └── app.py
├── tests
│   └── __init__.py
├── requirements.txt
├── README.md
├── .gitignore
└── setup.py
I am trying to create the setup.py so that I can import instance/config.py from packages/app.py. So I created the following setup.py:
from distutils.core import setup
from setuptools import setup, find_packages

setup(
    name='project',
    version='0.1dev0',
    author='Author name',
    author_email='my_email',
    packages=['api', 'instance', 'packages'],
    long_description=open('README.md').read()
)
But I get the following error:
...
warnings.warn(tmpl.format(**locals()))
package init file 'src\__init__.py' not found (or not a regular file)
error: package directory 'api' does not exist
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
So whenever I try to import instance/config.py from packages/app.py I get the following error:
from instance import config
ModuleNotFoundError: No module named 'instance'
What do I need to do to the setup.py file to make it work? Should the structure of the project be altered somehow?
Thanks a lot in advance!
In a nutshell, you have to create a root package named project in the project directory. See https://packaging.python.org/tutorials/packaging-projects/ for more details. Then:
from project.instance import config
Rename src to project and put an __init__.py in it with the imports:
from . import api, instance, packages
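For illustration, a minimal sketch of what setup.py could look like under that layout (assuming src/ has been renamed to project/ and project/__init__.py exists; the metadata is copied from the question, the rest is hypothetical):

# Hedged sketch: let setuptools discover the root package instead of
# listing 'api', 'instance', 'packages', which don't exist at the repo root.
from setuptools import setup, find_packages

setup(
    name='project',
    version='0.1dev0',
    author='Author name',
    author_email='my_email',
    packages=find_packages(include=['project', 'project.*']),
    long_description=open('README.md').read(),
)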

Nose2 fails to import modules, but only when run in docker

I have a python Celery project of the following structure:
├── celeryconfig.py
├── Dockerfile
│   └── _templates
├── framework
│   ├── celery
│   │   ├── celery.py
│   │   ├── __init__.py
│   │   └── __init__.cpython-36.pyc
│   ├── tasks
│   │   ├── tasks.py
│   │   ├── __init__.py
│   ├── utilities
│   │   ├── utilities.py
│   │   ├── __init__.py
│   ├── tests
│   │   ├── __init__.py
│   │   └── test_celery.py
I run this from the top-level directory using the command celery framework.celery.celery worker.
I have tests in the tests directory that I run using nose2. When I run it from that directory the tests pass without issue. However, when the tests are run as part of a Docker build, the nose2 process fails because it can't make sense of the imports. For example:
# test_celery.py
from framework.tasks.tasks import my_function

def test_my_function():
    # Do some testing.
The import of the function succeeds, but fails on the imports of the imported file, which look like:
# tasks.py
from framework.utilities.utilities import some_other_function
Part of the issue I'm having is that Celery is particular about how it itself is structured, so attempts to restructure the directory have just resulted in celery not being able to start. I'm not quite sure why the tests only fail in a docker build, which looks like this:
FROM google/cloud-sdk:198.0.0-alpine
COPY . /project
WORKDIR /project
RUN apk add --no-cache python3 python3-dev && pip3 install -r requirements.txt
RUN nose2
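One hedged way to make the imports independent of how the test runner sets up sys.path is to package the project and install it into the image, so framework.* resolves from site-packages regardless of the working directory. The setup.py below is hypothetical (it is not in the original repo, and it assumes framework/ has an __init__.py so find_packages can see it):

# Hypothetical setup.py at the repository root.
from setuptools import setup, find_packages

setup(
    name="framework",
    version="0.0.1",
    packages=find_packages(include=["framework", "framework.*"]),
)

With that in place, the Dockerfile's install step could add pip3 install . before RUN nose2.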

Python/Django "Import Error" in project directory

I've been having trouble with this and cannot find the solution... I've browsed all the SO questions about this but nothing worked in my case... So here's the problem:
I clone a project from a git repository
Set up a virtualenv called env in the project base dir with the proper Python version (2.7) & install all requirements successfully
Here's where it gets fun... I navigate to the folder that has my manage.py and run python manage.py makemigrations, which results in:
from foo.bar.data import CoolioClass
ImportError: No module named foo.bar.data
My directories look like this (just for illustration):
project_root/
├── env/
├── django/
│   ├── app
│   │   └── models.py (from foo.bar.data import CoolioClass)
│   ├── django
│   │   └── settings.py
│   └── manage.py
└── foo/
    ├── bar
    │   ├── __init__.py
    │   ├── data.py
    │   └── test.py
    ├── baz
    │   ├── __init__.py
    │   ├── data.py
    │   └── test.py
    └── __init__.py
When I print sys.path in a Python shell it shows:
/home/johnny/project_root/env/lib/python2.7
/home/johnny/project_root/env/lib/python2.7/plat-x86_64-linux-gnu
/home/johnny/project_root/env/lib/python2.7/lib-tk
/home/johnny/project_root/env/lib/python2.7/lib-old
/home/johnny/project_root/env/lib/python2.7/lib-dynload
/usr/lib/python2.7
/usr/lib/python2.7/plat-x86_64-linux-gnu
/usr/lib/python2.7/lib-tk
/home/johnny/project_root/env/local/lib/python2.7/site-packages
/home/johnny/project_root/env/lib/python2.7/site-packages
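Worth noting (hedged): the sys.path dump above never includes /home/johnny/project_root itself, so the top-level foo package has nowhere to be imported from. One illustrative workaround, assuming the layout above, is to prepend the project root near the top of django/manage.py before anything imports foo.bar.data:

# Hypothetical addition to django/manage.py: put the repository root on
# sys.path so the sibling `foo` package becomes importable.
import os
import sys

PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, PROJECT_ROOT)

Alternatively, packaging foo and pip-installing it into the virtualenv would have the same effect without path manipulation.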
