Context: The package being considered is an application that was developed by another team, and I want to use the functionality it exposes as part of an API call in my Django project.
Directory layout:
<repo>
├── org_wide_django_code
│ ├── my_django_project
│ │ ├── my_django_project
│ │ ├── manage.py
│ │ ├── requirements.txt
│ │ └── my_application
├── frontend
│ └── application
├── other_team_work
│ ├── README.md
│ ├── __init__.py
│ ├── implemented_logic
What is the best way for me to use other_team_work in my Django project my_django_project?
Prior reading:
Manipulating PYTHONPATH or sys.path is an option
Setting up a .whl or .egg to install other_team_work (also need to add a setup.py there)
I am not sure there is a best way, since this depends a lot on the internal tooling of your organisation. However, the main thing to pay attention to, IMO, is release and development cycles. Ideally you want both teams to be able to release new features and functionality without impacting the other (for instance, if other_team makes a change to their code for a third team, you would like this to have no possible impact on your application; similarly, other_team should not need to know how and when you use their module in order to make changes to their application).
One way to do this would be to have other_team_work packaged and installable (using a setup.py) and then pushed to a package registry (for instance GitLab's package registry or GitHub Packages). This way you can use other_team_work as though it were just another Python package with pip (using the --extra-index-url argument) and pin specific versions in a requirements.txt.
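As an illustration, a minimal setup.py placed next to the other_team_work folder (for example at the repository root) could look like the sketch below; the distribution name and version are placeholders, not something the other team has actually defined:

# setup.py, a minimal sketch (distribution name and version are placeholders)
from setuptools import setup

setup(
    name="other-team-work",
    version="1.0.0",
    packages=["other_team_work"],
)

Consumers would then pin the version in their requirements.txt (for example other-team-work==1.0.0) and point pip at the registry with --extra-index-url https://<your-registry>/simple.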
The simplest way (but it can be dangerous) is to add other_team_work to your PYTHONPATH. You can do this in your Django settings.py file, after the BASE_DIR definition.
Example:
import sys
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent  # default Django base dir: <repo>/org_wide_django_code/my_django_project
OTHER_TEAM_WORK_PARENT_DIR = BASE_DIR.parent.parent  # <repo>, the directory that contains other_team_work
sys.path.append(str(OTHER_TEAM_WORK_PARENT_DIR))
This adds the repository root (the directory that contains other_team_work) to sys.path, and you can then import other_team_work in your code:
from other_team_work import implemented_logic
Related
I cannot find a way to set up the code so that both pylint and the execution of the code (within VSCode or from the command line) work.
There are some similar questions but none seems to apply to my project structure with a src directory under which there will be multiple packages. Here's the simplified project structure:
.
├── README.md
├── src
│ ├── rssita
│ │ ├── __init__.py
│ │ ├── feeds.py
│ │ ├── rssita.py
│ │ └── termcolors.py
│ └── zanotherpackage
│ ├── __init__.py
│ └── anothermodule.py
└── tests
├── __init__.py
└── test_feeds.py
From what I understand, rssita is one of my packages (because of the __init__.py file) with some modules under it, amongst which the rssita.py file contains the following imports:
from feeds import RSS_FEEDS
from termcolors import PC
The rssita.py code as shown above runs fine both from within VSCode and from the command line (python src/rssita/rssita.py) when launched from the project root, but at the same time pylint (both within VSCode and from the command line, as pylint src or pylint src/rssita) flags the two imports as not found.
If I modify the code as follows:
from rssita.feeds import RSS_FEEDS
from rssita.termcolors import PC
pylint will then be happy but the code will not run anymore since it would not find the imports.
What's the cleanest fix for this?
As far as I'm concerned pylint is right; your setup / PYTHONPATH is screwed up: in Python 3, all imports are absolute by default, so
from feeds import RSS_FEEDS
from termcolors import PC
should look for top-level packages called feeds and termcolors which I don't think exist.
python src/rssita/rssita.py
That really ain't the correct invocation; it's going to set up a really weird PYTHONPATH in order to run a random script.
The correct imports should be package-relative:
from .feeds import RSS_FEEDS
from .termcolors import PC
Furthermore, if you intend to run a package, it should either be a runnable package with a __main__.py:
python -m rssita
or you should run the submodule as a module:
python -m rssita.rssita
Because you're using an src layout, you'll either need to create a pyproject.toml so you can use an editable install, or you'll have to set PYTHONPATH=src before you run the command. This ensures the packages are visible at the top level of the PYTHONPATH, and thus correctly importable. Though I'm not a specialist in the interaction of src layouts and runnable packages, so there may be better solutions.
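For illustration, a minimal pyproject.toml for this src layout might look like the sketch below (the project name and version are assumptions):

# pyproject.toml, a minimal sketch for the src layout above
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "rssita"
version = "0.1.0"

[tool.setuptools.packages.find]
where = ["src"]

Then either run pip install -e . once, or prefix the invocation with PYTHONPATH=src; after that python -m rssita.rssita works from anywhere and pylint resolves the package-qualified imports.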
The initial situation is that I have two FastAPI servers that access the same database. One is the real service for my application and the other is a service for loading data from different sources. Both services have their own GitHub repository, but use the same ORM data model.
My question is: what are the best practices for managing this ORM data model without always editing it in both code bases?
The goal would be to have no duplicated code and to adapt changes to the database only in one place.
Current structure of the projects:
Service 1 (actual backend):
.
├── app
│ ├── __init__.py
│ ├── main.py
│ └── database
│ ├── __init__.py
│ ├── ModelItem.py # e.g. model item
│ ├── ... # other models
│ ├── SchemaItem.py # e.g. schema item
│ └── ... # other schemas
│ ├── routers
│ └── ...
Service 2 (data loader):
.
├── app
│ ├── __init__.py
│ ├── main.py
│ └── database
│ ├── __init__.py
│ ├── ModelItem.py # e.g. model item
│ ├── ... # other models
│ ├── SchemaItem.py # e.g. schema item
│ └── ... # other schemas
│ ├── routers
│ └── ...
The sqlalchemy (sqlalchemy.orm) library is used for the ORM models and schemas.
I had thought about turning it into another repository or a library of my own, but then I would not know how best to import it, and whether I would just end up producing more work.
Based on the answer, I created a private repository that I install via pip in the two servers.
Structure of the package: Python packaging projects
Installing the package from a private GitHub repository: Installing Private Python Packages
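For reference, such a private package can be installed with pip straight from the Git repository; the package name, organisation and tag below are placeholders:

# requirements.txt (placeholder name, URL and tag)
shared-db-models @ git+ssh://git@github.com/your-org/shared-db-models.git@v0.1.0

The same git+https://... form also works on the pip command line if you prefer token-based authentication.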
Modularisation (avoiding code repetition) is a best practice, and so the ideal thing to do would indeed be to extract the model definition to a single file and import it where needed.
The problem is, when you deploy your two services both of them need to be able to 'see' the model file ... otherwise they can't import it. Because they are deployed separately, this isn't going to be as easy as before (they presumably aren't sharing a filesystem; if they were, it would be just as simple as before).
The usual solution to this is to make your model file available as a Python module and provide it via some server both services can reach over the network (for example, the public python package index, PyPI). You then install the module and import it in both deployed services.
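For instance, the shared package might expose the SQLAlchemy models roughly like this (the package and class names are hypothetical), and both services then import from it instead of keeping their own copy:

# shared_db_models/items.py, a sketch of the hypothetical shared package
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

Each service would then use from shared_db_models.items import Base, Item instead of its local ModelItem.py copy.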
This is inevitably going to be (a lot!) more work than just maintaining two copies. However, if you go to more than two copies that calculus could quickly change.
I am very new to Conan and I am quite lost in the documentation at the moment and I cannot really find a way to do what I want.
Maybe I am not using the right tool for this, I am very open to suggestions.
Let's say my C++ project needs opencv and nlohmann_json to be built.
I want to be able to create an archive of all my project's dependencies to be used as is, so it can be redistributed on our GitLab instance.
I want to have a repository which roughly looks like this:
├── conanfile.py
├── json
│ └── 3.10.4
│ └── conanfile.py
└── opencv
├── 4.4.0
│ └── conanfile.py
└── 5.1.0
└── conanfile.py
Where invoking the root conanfile.py would automatically build the required libs and copy their files so as to end up with something like this:
├── conanfile.py
├── json
│ └── 3.10.4
│ └── conanfile.py
├── opencv
│ ├── 4.4.0
│ │ └── conanfile.py
│ └── 5.1.0
│ └── conanfile.py
└── win64_x86
├── include
│ ├── nlohmann
│ └── opencv2
└── lib
└── <multiple files>
and the CI/CD would archive the directory and make it available for download.
I have managed to build and package a very simple repo (https://github.com/jameskbride/cmake-hello-world) independently using the git tools and doing something like this:
def config_options(self):
    # fPIC is not a valid option on Windows
    if self.settings.os == "Windows":
        del self.options.fPIC

def source(self):
    # Clone the sources instead of fetching a prebuilt package
    git = tools.Git(folder="repo")
    git.clone("https://github.com/jameskbride/cmake-hello-world", "master")

def build(self):
    cmake = CMake(self)
    cmake.configure(source_folder="repo")
    cmake.build()
However, I cannot figure out how to get the files that I want to package from my main conanfile.py. All of the approaches I have tried so far needed the package to be built separately (i.e. going to the folder and using conan source ., conan build . and conan export-pkg .). I am 100% sure I am not doing this properly, but I can't really find a way to do this. I found something about deploy() in the docs but can't figure out how to use it...
NB: I would really, really prefer not using a remote with prebuilt binaries for dependencies. I want to build everything from source with the CMake configurations I have defined.
I also explicitly don't want a Conan package in the end. I just want an aggregation of all the files needed, shipped directly to the machines.
This question has been answered by Conan's co-founder James here: https://github.com/conan-io/conan/issues/9874
Basically:
Use the CMakeDeps generator instead of cmake / cmake_find_package
Patch the resulting CMake config files at install time inside the generate() method of the main conanfile.py
Edit recipes accordingly (but they should be fine most of the time)
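For reference, a rough sketch of what the root conanfile.py could look like with that approach, assuming Conan 2.x style tools; the dependency versions, the output folder name and the copy patterns are assumptions, and the config-file patching suggested in the issue would also live in generate():

# conanfile.py, a minimal sketch assuming Conan 2.x tools
import os
from conan import ConanFile
from conan.tools.cmake import CMakeDeps, CMakeToolchain
from conan.tools.files import copy

class DependencyBundle(ConanFile):
    settings = "os", "arch", "compiler", "build_type"
    requires = "nlohmann_json/3.10.4", "opencv/4.4.0"

    def generate(self):
        # Generate the toolchain and the *-config.cmake files for the project build.
        CMakeToolchain(self).generate()
        CMakeDeps(self).generate()
        # Aggregate headers and libs of every dependency into one folder
        # so the CI/CD job can simply archive it.
        out = os.path.join(os.getcwd(), "win64_x86")  # assumption: archive folder next to where conan install runs
        for dep in self.dependencies.values():
            for inc in dep.cpp_info.includedirs:
                copy(self, "*", inc, os.path.join(out, "include"))
            for lib in dep.cpp_info.libdirs:
                copy(self, "*", lib, os.path.join(out, "lib"))

Building everything from source is then a matter of running conan install . --build="*" with the desired profile.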
I'm in the process of restructuring my PyQt5 application according to more established conventions. Now it looks like this:
MyProj
├── my_qt_tool
│ ├── __init__.py
│ ├── class1.py
│ ├── my_qt_tool.py
│ ├── wizard1.py
│ ├── resources
│ │ └── templates
│ │ └── tool.conf.template
│ └── ui
│ ├── __init__.py
│ ├── mainwindow.py
│ ├── mainwindow.ui
│ ├── wizard_01_start.py
│ ├── wizard_01_start.ui
│ ├── ...
├── my_qt_tool.spec # for PyInstaller
├── bin
│ └── generate_ui_code.py # for compiling Qt *.ui to *.py
├── dist
│ └── my_qt_tool
├── environment.yml # conda environment requirements.
├── LICENSE
└── README.md
So MyProj is the top-level Git repo and my_qt_tool is the package of my application, with a subpackage for UI-specific code; my_qt_tool.py contains the "main" code which runs the GUI, class1.py handles the business logic, and wizard1.py is just some extra class for a GUI wizard.
Q1: Is this project structure canonical? Is the main function where it should be? Should *.ui files be separated to resources?
Now, after some haggling with imports, I added my_qt_tool as a source directory in PyCharm to make the imports work, and created a run configuration for my_qt_tool.py with working dir MyProj/my_qt_tool.
Q2: Technically, I want the working dir to be MyProj, but then I would have to reference resources/templates/tool.conf.template with my_qt_tool/resources.., which seems yucky... or is this the way to do it?
Now the imports in my_qt_tool look like this:
from class1 import DataModel
from ui.mainwindow import Ui_MainWindow
...
so no relative imports or the like, because everything is in the same package, right? (Again: to make this work, I had to add my_qt_tool as source directory in my PyCharm project settings...)
Q3: Okay, now the thing that doesn't work. Running PyInstaller on the spec file, which is pretty much stock with Analysis(['my_qt_tool/my_qt_tool.py'], ..., the resulting binary fails to start with the error message: ModuleNotFoundError: No Module named 'class1'. How can I fix this up?
Q1
If the project is going to get larger, you may create module-specific folders, with each module keeping its Python and GUI files inside. Structure it as an MVC project.
For the MVC folder structure see https://www.tutorialsteacher.com/mvc/mvc-folder-structure, and here is how a model-view architecture can be implemented: https://www.learnpyqt.com/courses/model-views/modelview-architecture/.
Q2
Read resources/templates/tool.conf.template when the application is bootstrapped instead of referencing it statically. This could be done in generate_ui_code.py to load all configs as part of app startup.
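One simple way to do that (a sketch, not part of the original project) is to resolve the template relative to the package itself at startup, so the working directory no longer matters:

# my_qt_tool/config.py, a hypothetical helper (sketch)
from pathlib import Path

TEMPLATE_PATH = Path(__file__).resolve().parent / "resources" / "templates" / "tool.conf.template"

def load_template() -> str:
    # Works regardless of the current working directory.
    return TEMPLATE_PATH.read_text()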
so no relative imports or the like, because everything is in the same package, right? (Again: to make this work, I had to add my_qt_tool as source directory in my PyCharm project settings...)
There is no need to add my_qt_tool as a source directory if the application is bootstrapped properly.
Q3
Add these two lines at the top of the spec file:
import sys
sys.setrecursionlimit(5000)
If you still encounter the class1-related issue, try importing it in the script PyInstaller is called on, which in your case is my_qt_tool.py.
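If that alone does not help, the spec file can also point PyInstaller at the package directory and list the modules it misses explicitly; pathex and hiddenimports are standard Analysis arguments, but the exact values below are assumptions based on the layout in the question:

# my_qt_tool.spec, relevant excerpt only (sketch)
import sys
sys.setrecursionlimit(5000)

a = Analysis(
    ['my_qt_tool/my_qt_tool.py'],
    pathex=['my_qt_tool'],  # so 'from class1 import ...' resolves at analysis time
    hiddenimports=['class1', 'wizard1', 'ui.mainwindow'],  # modules PyInstaller may otherwise miss
)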
Fix the PyInstaller issue first and then consider refactoring the folder structure with model-view conventions.
Here are some fairly large project examples:
https://wiki.python.org/moin/PyQt/SomeExistingApplications
https://github.com/topics/pyqt5-desktop-application
Consider this application:
.
├── LICENSE
├── MANIFEST.in
├── program
│ ├── apple.py
│ ├── __init__.py
│ ├── __main__.py
│ ├── nonfruit.py
│ ├── pear.py
│ ├── strawberry.py
│ └── vegetables
│ ├── carrot.py
│ ├── __init__.py
│ └── lettuce.py
├── README.md
├── setup.cfg
└── setup.py
__main__.py is the file that users should run to use my program. I am distributing my program via PyPI and so I want to be able to install it via pip as well. Because of that, I created a setup.py file with an entry point:
entry_points = {
    'console_scripts': ['pg=program.__main__:main']
}
The problem I'm facing is that there are several imports in my program, and these result in the situation that my program runs 'locally' (by executing python ./__main__.py) but not from the installation (by running pg), or, depending on the way I import things, the other way around.
__main__.py imports nonfruit.py:
from nonfruit import Nonfruit
nonfruit.py imports vegetables/carrot.py:
import vegetables.carrot
ca = vegetables.carrot.Carrot()
I would like to hear some advice on structuring my project's imports, so that it runs both locally and from the setuptools installation. For example, should I use absolute imports or relative imports? And should I use from X import Y or import X.Y?
I found a solution on Jan-Philip Gehrcke's website.
The instructions are written for Python 3, but I applied them to Python 2.7 with success. The problem I was having originated from my directory becoming a package. Jan advises creating one file to run it from source (bootstrap-runner.py) and one file to run it from installation (bootstrap/__main__.py). Furthermore, he advises using explicit relative imports:
from .X import Y
This will probably be a good guideline in the next applications I'm writing.
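For example, applied to the program package above, both the installed pg entry point and a module-style local run can share the same explicit relative imports (a sketch; the bodies are placeholders):

# program/__main__.py
from .nonfruit import Nonfruit  # explicit relative import

def main():
    nf = Nonfruit()
    print(nf)

if __name__ == '__main__':
    main()

# program/nonfruit.py
from .vegetables import carrot  # explicit relative import

class Nonfruit:
    def __init__(self):
        self.carrot = carrot.Carrot()

Run it locally as python -m program from the project root (rather than python ./program/__main__.py), and the installed pg script resolves the same imports.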