I'm posting here a problem I'm having with Poetry. I've filed it as a bug report on their GitHub, but now I'm not sure if it's a bug or something I'm not understanding.
I'm trying to use my project package from notebooks or some scripts placed outside of the source code folder. My project structure is the following:
├── notebooks
│   └── Untitled.ipynb
├── poetry.lock
├── pyproject.toml
├── scripts
│   ├── __init__.py
│   └── hello.py
├── source
│   ├── __init__.py
│   └── utils.py
├── tests
│   └── __init__.py
└── zip_project.py
If I run poetry install and try to import anything from source, whether from the notebook or from that hello.py script, the source module can't be found. It works if I do poetry run python scripts/hello.py, but that shouldn't be the solution, as it still wouldn't work for the notebooks. I use pyenv for Python version management, in case that's relevant.
After checking out this update at poetry's docs, I've tried to run "poetry add ./source/", but got the following error:
TypeError
expected string or bytes-like object
at ~/.poetry/lib/poetry/vendor/py3.7/poetry/core/utils/helpers.py:27 in canonicalize_name
23│ canonicalize_regex = re.compile(r"[-]+")
24│
25│
26│ def canonicalize_name(name): # type: (str) -> str
→ 27│ return canonicalize_regex.sub("-", name).lower()
28│
29│
30│ def module_name(name): # type: (str) -> str
31│ return canonicalize_name(name).replace(".", "").replace("-", "")
EDIT: I've updated pyproject.toml (gist) trying to follow finswimmer's advice, but I get a "Directory {} does not seem to be a Python package" error every time I run poetry update.
EDIT 2: I've found out that if my package is named after the pyproject.toml project name, and I remove any other reference to it, it gets installed (that is, the standard way per Poetry's docs). However, if I try to add a package with a different name, either by editing pyproject.toml or via poetry add "./my_package/", I get the "Directory {} does not seem to be a Python package" error. Why does this happen?
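For reference, Poetry's docs cover the case where the importable package is not named after the project: it has to be declared explicitly in the packages list. A hedged sketch of the relevant pyproject.toml section (the project name and metadata are placeholders, not taken from the question):

```toml
[tool.poetry]
name = "my-project"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
# Declare the importable package explicitly when its directory name
# differs from the project name:
packages = [
    { include = "source" },
]
```

With this in place, poetry install should install the source package into the environment regardless of the project name.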
Related
I am working on a project that involves remotely running code on compute clusters. My project directory structure is as follows:
Project
├── analysis
│   ├── __init__.py
│   └── B.py
├── data
│   ├── __init__.py
│   └── A.py
├── __init__.py
Inside B.py I have a print statement:
print("foo")
Inside A.py I have:
from analysis import B
When I run A.py on my local machine everything runs smoothly and I see the print statement. When I upload the project to the remote cluster and run it there I get a ModuleNotFoundError.
My question is:
What's the specific difference between the Python environment/installation on my local machine and the one on the remote cluster that is causing this behaviour? From looking at other answers I understand there are ways to work around this, for example by modifying sys.path at the start of my code, but I would prefer to actually understand why this is happening under the hood.
My local machine is running Python 3.8 in a venv and the remote cluster is running Python 3.10 in a conda environment.
Using from ..analysis import B I get ImportError: attempted relative import with no known parent package.
SOLUTION
I solved this by installing my code as a package (thanks to @ShadowRanger). To do this I navigated to my project folder, installed conda-build with
conda install conda-build
and ran
conda develop .
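Under the hood, the difference comes down to sys.path. When you run python data/A.py, Python puts the script's own directory (Project/data), not Project/, at the front of sys.path, so import analysis fails unless Project/ is reachable some other way (it worked locally presumably because the project root was on the path via the IDE, the CWD, or PYTHONPATH). The helper below is an illustration of the lookup, not the real import machinery:

```python
import os

# Rough sketch of how the import system locates a package: a package
# `pkg_name` is importable if some sys.path entry contains a directory
# pkg_name/ holding an __init__.py. `conda develop .` makes Project/
# such an entry by writing it into a .pth file in site-packages.
def can_import(pkg_name, search_paths):
    """Return True if pkg_name/ with an __init__.py exists under any path."""
    for entry in search_paths:
        if os.path.isfile(os.path.join(entry, pkg_name, "__init__.py")):
            return True
    return False
```

For the structure above, can_import("analysis", [".../Project"]) succeeds, while can_import("analysis", [".../Project/data"]), which is what running data/A.py directly gives you, does not.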
I have the following folder structure for a package project (simplified):
projectname
├── docs
├── packagename
│   ├── somemodule
│   │   ├── __init__.py (second)
│   │   └── somescript.py
│   └── __init__.py (first)
├── setup.cfg
└── pyproject.toml
In the first __init__.py I do from . import somemodule. In the second __init__.py I do from .somescript import *. And somescript.py contains some function my_sum. The problem is that Sphinx doesn't see the function: it only sees the module packagename.somemodule; there is no docstring for my_sum in the generated documentation. The function works well if I install the library using pip install . from the projectname folder.
I'm sure the problem is in the folder structure or imports because there was no problem when somescript.py was placed directly in packagename folder.
Additional information (maybe useful, maybe not):
I use Read The Docs.
Part of .readthedocs.yaml:
python:
  install:
    - method: pip
      path: .
UPD:
Read The Docs generates the following warnings:
WARNING: autodoc: failed to import module 'somescript' from module 'packagename'; the following exception was raised:
No module named 'packagename.somescripts'
WARNING: html_static_path entry '_static' does not exist
UPD2:
I fixed the warning from autodoc by fixing docs/source/packagename.rst but I still have the problem
I once had a similar problem and it could be solved by manually adding all directories of the module to the path. See also this answer.
I recently changed the IDE I am using to VSCode. For the most part I like it, but there is one particular problem that I can't seem to resolve. I didn't realize it was a problem either, until I moved IDEs.
I have a directory structure like this:
my_app
├── env
│   ├── bin
│   ├── include
│   ├── lib
│   ├── lib64 -> lib
│   ├── pyvenv.cfg
│   └── share
├── my_app
│   ├── expected_results
│   ├── __init__.py
│   ├── test_data
│   └── tests
├── pytest.ini
├── README.rst
├── setup.cfg
└── setup.py
When I launch my virtual environment I am sitting at the root of this directory structure.
I run my tests by issuing this command (or providing additional options). This currently works:
pytest
But, when VSCode launches, it spits out an error saying it can't find an expected file:
E FileNotFoundError: [Errno 2] No such file or directory: 'my_app/expected_results/expected_available_items.yml'
After some investigation, I figured out that this is because when VSCode launches it issues the following command:
python -m pytest
I am setting that path by doing this:
import pathlib
EXPECTED_RESULTS_BASE = pathlib.Path("my_app/expected_results")
expected_results = EXPECTED_RESULTS_BASE.joinpath('expected_available_items.yml')
What do I need to modify so that my tests will continue to operate when I just issue a pytest command AND will operate if I (or my IDE, apparently) issues python -m pytest?
I hope it's safe to assume that VSCode is launching this from the root of my_app like I am?
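One common way to make the lookup independent of the launch directory (a sketch; whether it matches the asker's eventual fix is unknown) is to anchor the path to the test module itself rather than to the current working directory:

```python
import pathlib

# Resolve data paths relative to this file, not the CWD, so that
# `pytest` and `python -m pytest` find the same file no matter where
# they were launched from. The number of .parent hops depends on where
# this code lives relative to expected_results/.
THIS_DIR = pathlib.Path(__file__).resolve().parent
EXPECTED_RESULTS_BASE = THIS_DIR / "expected_results"
expected_results = EXPECTED_RESULTS_BASE / "expected_available_items.yml"
```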
Probably not enough information to answer this for you straight out, but let's try some things:
In your test code, above the line where you are getting the error insert some lines like these and see if they print out what you're expecting
import os
print(os.getcwd())
print(EXPECTED_RESULTS_BASE.absolute())
Since you're using a venv and the error is a result of calling pytest with a different command, try using which to see if you're actually calling different things. Both before and after activating your venv:
which pytest
which python
python -m pytest will call the pytest module installed with the version of python you've just called. If python calls a different version than you're getting from pytest inside your venv, then that could be the problem.
You should be able to check which Python version pytest is calling by looking at the shebang line at the top of the file
head -1 $(which pytest)
On my system, macOS with anaconda Python installed, I get the following from those commands
$ which pytest
/Users/Shannon/anaconda/bin/pytest
$ which python
/Users/Shannon/anaconda/bin/python
$ head -1 $(which pytest)
#!/Users/Shannon/anaconda/bin/python
This tells me that pytest is calling the same version of python I get when I call python. As such, pytest and python -m pytest should result in the same thing for me, inside my default environment.
Are you sure VSCode is loading your venv correctly before running the test?
Assuming you have a virtual environment selected for your Python interpreter in VS Code:
Open the terminal in VS Code
Let the virtual environment activate
Run python -m pip install pytest
That will install pytest into the virtual environment which is why python -m fails (pytest globally on your PATH is installed in some global Python install which a virtual environment won't have access to).
I created my custom package called 'dto' in my project folder.
But VS Code does not recognize this package and module.
How can I make Visual Studio Code find it?
In PyCharm, if I create a new package, it is detected automatically.
I executed my simulator.py script in my simulation package.
I have encountered the same problem. It seems Visual Studio Code cannot automatically detect new Python packages; it has something to do with the $PYTHONPATH configuration. I found an official reference in the Visual Studio Code documentation. Please have a look at this doc.
Add a dev.env file inside your project:
PYTHONPATH=${workspaceFolder}:${PYTHONPATH}
Then add the following to your workspace settings.json config file:
"python.envFile": "${workspaceFolder}/dev.env"
This works for me. The debugger can find modules in the new package. Hopefully, this will help you.
From what I can see from the directory tree, you need to use a relative import (Python >= 2.5):
from ..dto import price
Here .. specifies that the import should be resolved from the parent package, i.e. one level above the package that contains the module being invoked.
In your case, relative imports cannot be used, as the files are in different packages. Please find the relevant post here: beyond top level package error in relative import
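To see concretely what .. resolves to, here is a self-contained demo (the pkg/sub/dto layout is hypothetical). Building a tiny package on disk and importing through it shows that .. points at the parent package, one level up, and that the import only works when the module is loaded as part of the package:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package: pkg/sub/mod.py does `from ..dto import price`,
# where `..` resolves to pkg, the parent package of pkg.sub.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pkg", "sub"))
os.makedirs(os.path.join(root, "pkg", "dto"))
for parts in (("pkg",), ("pkg", "sub"), ("pkg", "dto")):
    open(os.path.join(root, *parts, "__init__.py"), "w").close()
with open(os.path.join(root, "pkg", "dto", "price.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(root, "pkg", "sub", "mod.py"), "w") as f:
    f.write("from ..dto import price\n")  # `..` -> pkg (parent package)

sys.path.insert(0, root)
mod = importlib.import_module("pkg.sub.mod")
print(mod.price.VALUE)  # 42
```

Note that running pkg/sub/mod.py directly as a script would instead raise the "attempted relative import with no known parent package" error seen above, because a script has no package context.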
@Yossarian42's answer will work if you have a folder with your package name in the root of your project. However, if your project follows the structure mentioned in Py-Pkgs and uses a src dir like:
mypkg
├── CHANGELOG.md          ┐
...
├── pyproject.toml        ┐
├── src                   │
│   └── mypkg             │ Package source code, metadata,
│       ├── __init__.py   │ and build instructions
│       └── mypkg.py      ┘
├── tests                 ┐
│   └── ...               ┘ Package tests
└── examples              ┐
    ├── mypkg_examples.py │
    └── ...               ┘ Package examples
In this case, you use the following for dev.env in your workspaceFolder (root):
PYTHONPATH=./src:${PYTHONPATH}
Create or edit {workspaceFolder}/.vscode/settings.json with:
"python.envFile": "${workspaceFolder}/.env"
Full settings.json example:
{
"python.envFile": "${workspaceFolder}/dev.env"
}
For debugging the python.envFile setting, you can print out the Python path with the following code:
import sys; print(f"sys.path: {sys.path}")
I have a GUI app that another developer wrote that I am trying to turn into a conda package that will install a desktop icon on the desktop that users can then launch seamlessly.
Below is the folder structure and the code that I can share:
Documents/
└── project/
    ├── bld.bat
    ├── meta.yaml
    ├── setup.py
    ├── setup.cfg
    └── mygui/
        ├── MainGUI.py
        ├── __init__.py
        ├── __main__.py
        └── images/
            └── icon.ico
Documents\project\bld.bat:
python setup.py install install_shortcuts
if errorlevel 1 exit 1
Documents\project\meta.yaml:
package:
  name: mygui
  version: 1.2.3

source:
  path: ./

build:
  number: 1
  string: py{{ CONDA_PY }}_{{ ARCH }}

requirements:
  build:
    - python 2.7.13
    - pyvisa 1.4
    - setuptools
    - setuptools-shortcut
    - pydaqmx
    - pmw
    - matplotlib
    - pyserial
    - pil
  run:
    - python 2.7.13
    - pyvisa 1.4
    - pydaqmx
    - pmw
    - matplotlib
    - pyserial
    - pil

about:
  license:
  summary: My GUI application
Documents\project\setup.py:
from setuptools import setup, find_packages

setup(
    name='mygui',
    version='1.2.3',
    author='Me',
    author_email='me@myemail.com',
    description=(
        "An App I wrote."
    ),
    long_description="Actually, someone else wrote it but I'm making the conda package.",
    packages=find_packages(),
    package_data={
        'mygui': ['images/*ico'],
    },
    entry_points={
        'gui_scripts': [
            'MyApp = mygui.__main__:main',
        ],
    },
    install_requires=['pyvisa==1.4', 'pmw', 'pydaqmx', 'matplotlib', 'pyserial', 'pil'],
)
Documents\project\setup.cfg:
[install]
single-version-externally-managed=1
record=out.txt
[install_shortcuts]
iconfile=mygui/images/icon.ico
name=MyApp
group=My Custom Apps
desktop=1
Documents\project\mygui\__main__.py:
from MainGUI import main

if __name__ == '__main__':
    main()
The original GUI developer had a code block that went like:
if __name__ == '__main__':
    <code here>
so I took all the code where <code here> would be and cut/pasted it into:
def main():
    <code here>

if __name__ == '__main__':
    main()
all inside the MainGUI.py file. I cannot share the specifics of the code, but it works as I'll describe below.
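The refactor described above can be sketched like this (the GUI body is private, so a placeholder return value stands in for <code here>):

```python
# MainGUI.py (sketch of the refactor; the real GUI startup code is not
# shown in the question, so main()'s body is a stand-in)
def main():
    # everything that previously lived under the
    # `if __name__ == '__main__':` guard goes here
    return "gui started"

if __name__ == '__main__':
    main()
```

Wrapping the guard's contents in main() is what lets the gui_scripts entry point (MyApp = mygui.__main__:main) call into the application.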
When I open my code in PyCharm and hit Run or Debug in a conda environment with all the packages listed in the meta.yaml file, the application works just fine with no warnings or errors. However, when I run conda build, upload to the Anaconda channel, and then install on the machine, the desktop icon gets created but the application won't run when I click on it.
Is there something wrong in my setup files? How can I debug the reason the application fails? I don't see any command window or output of any kind, and PyCharm doesn't complain, so it must be something that goes wrong after the application gets made.
Update: This is my first time creating a conda package that installs itself as an app like this, and I used a colleague's setup.py files as a template. I was curious whether the conda package he created on one of his projects was structurally different from the conda package coming out of my conda-build, and it is. If I take that tar.bz2 file and unzip it, this is the structure I get:
mygui-1.2.3-py27_32/
├── info/
│   ├── about.json
│   ├── files
│   ├── has_prefix
│   ├── index.json
│   └── paths.json
├── Lib/
│   └── site-packages/
│       └── mygui-1.2.3-py2.7.egg-info/
│           ├── dependency_links.txt
│           ├── entry_points.txt
│           ├── PKG-INFO
│           ├── requires.txt
│           ├── SOURCES.txt
│           └── top_level.txt
├── Menu/
│   ├── mygui.ico
│   └── mygui_menu.json
└── Scripts/
    ├── MyApp.exe
    ├── MyApp.manifest
    └── MyApp.pyw
But while my colleague gets the same structure, he also gets a directory called Lib/site-packages/mygui/, which contains the source code as .py and .pyc files and subdirectories. Why is my package not getting these source files, and could this be the reason my application won't launch? I also don't see any of the data files I indicated in my setup.py file (the *.ico files).
I was finally able to get this app built so that it installs the shortcuts on the desktop and includes the source code.
The problem was with the imports. Since the original source code was written years ago, it didn't use absolute_import.
I had to go through and make sure
from __future__ import (
    unicode_literals,
    print_function,
    division,
    absolute_import,
)
was at the top of every file that made imports, and then also change the relative imports to absolute imports. In the root __init__.py file, however, I left the relative imports. Another thing I was doing wrong: in one version of my setup.py I had included these four future imports. Don't do that, or Python will complain about unicode_literals; I left them out of setup.py and it was fine.
To debug the conda package and find more import errors I would do the following:
1. Test the code in PyCharm by running __main__.py.
2. If that worked, build the conda package.
3. Install the conda package.
4. In a command window, run python "C:\Miniconda2\envs\myenv\Scripts\MyApp-script.pyw". This would give me the next error that PyCharm did not.
I would return to the source code, make the necessary change and repeat steps 1-4 until the program launched from the desktop icon.