I have a dockerized project that uses Poetry for dependency management. I'm developing a Python library that I use within that dockerized project, and I'd like to be able to make a change to the library and then use it within the project, preferably without doing more than saving those changes.
Right now, this works:
Make a change
poetry build
Copy the *.tar.gz file into the dockerized project's root directory
Run docker-compose up --build
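In shell terms, that loop looks roughly like this (a sketch; the project directory name is made up):
poetry build
cp dist/*.tar.gz ../my-project/
cd ../my-project && docker-compose up --build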
Since the dockerized project and the lib both live at the same level on my filesystem, I've tried changing the dependency's path field to something like:
my-lib = {path="../my-lib", develop=true}
Poetry can't find it (the relative path doesn't exist inside the image), so I added a COPY command to my Dockerfile. Docker didn't like that, since COPY can't reach files outside the build context. I started finding a workaround for that, but thought, "Maybe somebody knows a better way."
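The workaround I started on widens the build context to the shared parent directory so the library falls inside it. A rough sketch (the service and directory names are made up):
# docker-compose.yml (excerpt)
services:
  app:
    build:
      context: ..                        # parent dir holding both projects
      dockerfile: my-project/Dockerfile
# Dockerfile (excerpt); COPY paths are relative to the context
COPY my-project /app
COPY my-lib /my-lib                      # so path = "../my-lib" resolves from /app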
Is there something I'm missing?
Is there a better way to do what I'm trying to do?
I made a website in Node.js, and I use child_process to run Python scripts and use their results in the website. But when I host the application on Heroku, Heroku is unable to run the Python scripts.
I noticed that Heroku could run Python scripts that used only built-in packages like sys and json, but it failed for scripts that used external packages like requests and beautifulsoup.
Attempt-1
I made a requirements.txt file inside the project folder and pushed my code to Heroku again, but it still didn't work. I noticed that Heroku uses the requirements.txt file only for Python-based web applications, not for Node.js applications.
Attempt-2
I made a Python virtual env inside the project folder and installed the required Python packages into it. I then removed the .gitignore from the venv and pushed the whole venv folder to Heroku. It still didn't work.
Please let me know if anyone has come across a way to handle this.
You can use a Procfile to specify a command that installs the Python dependencies and then starts the server.
Procfile:
web: pip install -r requirements.txt && npm start
Place Procfile in your project's root directory and deploy to heroku.
Solution:
Have a requirements.txt file in your project folder.
After you create a Heroku app using "heroku create", Heroku will identify your app as Python-based and will not download any of the Node dependencies.
Now, open your app's settings in the Heroku dashboard. There is an option named "Add Buildpack" there; click it and add Node.js as one of the buildpacks.
Go back to your project and push it to Heroku again. This time Heroku will identify your app as both Python- and Node.js-based and will download all the required files.
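For reference, the same buildpack setup can be done from the Heroku CLI instead of the dashboard (run inside the directory of an existing app):
heroku buildpacks:add heroku/python
heroku buildpacks:add heroku/nodejs   # buildpacks run in the order listed
heroku buildpacks                     # verify both are configured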
What I have
I have 2 Python apps that share a few bits of code, enough that I am trying to isolate the shared parts into modules/packages/libraries (I'm keeping the term vague on purpose, as I am not sure what the solution is). All my code is in a monorepo, because I am hoping to avoid the annoyance of managing more repos than we have team members.
Currently my file layout looks like:
+ myproject
  + appA
  |  + python backend A
  |  + js frontend
  + appB
  |  + B stuff
  + libs
    + lib1
    + lib2
Both appA and appB use lib1 and lib2 (they are essentially data models that abstract away the shared database). appA is a webapp with several components, not all of which are Python. It is deployed as a docker stack that involves a bunch of containers.
I manage my dependencies with Poetry to ensure reproducible builds, etc. Each Python component (appA, appB, ...) has its own pyproject.toml file, virtual env, etc.
appB is deployed separately.
All development is on Linux, if it makes any difference.
What I need
I am looking for a clean way to deal with the libs:
development for appA is done in a local docker-compose setup. The backend auto-reloads on file changes (via a docker volume), and I would like the same to happen for changes in the libs too.
development for appB is simpler, but it is moving to docker, so the problem will be the same.
What I've tried
My initial "solution" was to copy the libs folder over to a temporary location for development in appA. It works for imports, but it's messy as soon as I want to change the libs code (which is still quite often), as I need to change the original file, copy over, rebuild the container.
I tried symlinking the libs into the backend's docker environment, but symlinks don't seem to work well with docker (it did not seem to follow the link, so the files don't end up in the docker image, unless I essentially copy the files inside the docker build context, which defeats the purpose of the link.)
I have tried packaging each lib into a python package, and install them via poetry add ../../libs/lib1 which doesn't really work inside docker because the paths don't match, and then I'm back to the symlink issue.
I am sure there is a clean way to do this, but I can't figure it out. I know I could break the repo up into smaller ones and install the libs as dependencies, but development would still be painful inside docker, as I would still need to rebuild the container each time I change the lib files. So I would rather keep the monorepo.
If you are using docker-compose anyway, you could use volumes to mount the local libs into your container, so you can edit them from the host system and the container alike. Not super fancy, but that should work, right?
#ckaserer your suggestion does indeed work. In short, in the Dockerfiles I do COPY ../libs/lib1 /app/lib1, and then for local development I mount ../libs/lib1 onto /app/lib1. That gives me the behavior I was looking for. I use a split docker-compose file for this. The setup causes a few issues with various tools needing extra config to know that the libs are part of the code base, but nothing impossible. Thanks for the idea!
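Concretely, the split setup looks something like this (a sketch; service names and paths are illustrative, and the build context has to be the repo root so the Dockerfile can reach libs/):
# docker-compose.yml (base file, used everywhere)
services:
  backend:
    build:
      context: .                     # repo root, so libs/ is inside the context
      dockerfile: appA/Dockerfile
# docker-compose.override.yml (local development only)
services:
  backend:
    volumes:
      - ./appA/backend:/app          # live-edit the app code
      - ./libs/lib1:/app/lib1        # live-edit the shared lib
# appA/Dockerfile (excerpt); COPY paths are relative to the context, not the Dockerfile
COPY appA/backend /app
COPY libs/lib1 /app/lib1
docker-compose layers the override file onto the base file by default, so local runs get the bind mounts while a deploy that passes only -f docker-compose.yml does not.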
So even though it's not an ideal solution, locally mounting over the app and lib directories works on Linux systems.
FYI: on Windows hosts you might run into trouble if you want to watch for file changes, as change events are not propagated from a Windows host to a Linux container.
I'm trying to run a hobby project on Bluemix that is a combination of Node.js and Python and that expects the two runtimes to be co-located.
At first, I didn't even know there was a Python dependency, so I deployed based on the Node.js SDK starter app.
There is a "requirements.txt" for the Python dependencies, but I can see it's not really being used. Is there something I can do to get the deployment to recognize that the app is a hybrid like this, in other words to process requirements.txt so that my dependencies are there when Python is invoked?
In general, I would recommend splitting the application up so the parts do not have this dependency. But if you can't, I can think of two options:
Use multi-buildpack. Create a .buildpacks file in the application root directory and push your application with the -b https://github.com/ddollar/heroku-buildpack-multi.git option (see the sketch after this list). During staging, each buildpack specified in that file will be run against your application.
Write your own custom buildpack. It's not that difficult and you can only do the minimum your application needs.
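For option 1, a minimal sketch of the pieces (the buildpack URLs are the common public ones, the app name is made up, and Bluemix apps are pushed with the Cloud Foundry CLI):
# .buildpacks — one buildpack per line, applied in order
https://github.com/heroku/heroku-buildpack-nodejs.git
https://github.com/heroku/heroku-buildpack-python.git
cf push my-hybrid-app -b https://github.com/ddollar/heroku-buildpack-multi.git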
Expanding on this question, I am trying to deploy Django on OpenShift, but I'm having some problems understanding OpenShift.
I have managed to get as far as setting up a quick app with the git repo https://github.com/openshift/django-example but have the following questions:
Can I develop locally after git cloning to my local machine? (virtualenv, adding packages)
Packages, what's the deal? Local, remote, adding, syncing, virtualenv, git, ...
I came across this line in Nate Aune's PaaS Bakeoff (slide 42) for setup.py and it looks quite useful:
install_requires=open('%s/project.txt' % \
os.environ.get('OPENSHIFT_REPO_DIR', PROJECT_ROOT)).readlines(),
(because I know I can pip freeze > requirements.txt in my virtualenv)
... Is %s/project.txt in wsgi or the directory below wsgi? Do I have to set PROJECT_ROOT with some funky os stuff?
EDIT
Basically:
Is it best to ssh into your OpenShift application (let's say you have a dev one) and work there, or to work off a local copy?
How do you install Python packages after you have ssh'ed into your OpenShift app? (virtualenv)
If you ssh'ed into your OpenShift app, do you have to do anything after creating a project, creating an app (manage.py startapp ...), or changing code in your Django app?
If local is the best option:
How do I use the example locally?
Do I need to set up a virtualenv to work locally?
How do I make sure the Python packages Django needs are on OpenShift?
How do I add Python packages to my OpenShift version? (I'm presuming git doesn't do that.)
I would suggest developing in a local copy.
In my experience there is no reason to ssh in, unless you need to perform a one-off operation (migrating databases, for example).
The requirements for the OpenShift app are specified in the setup.py file, for example:
install_requires=['Django>=1.6.0', 'redis>=2.0']
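That keyword lives inside the setup() call; a minimal sketch with made-up project metadata:
from setuptools import setup
setup(
    name='myapp',      # made-up project name
    version='1.0',
    install_requires=['Django>=1.6.0', 'redis>=2.0'],
)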
Locally you can, of course, work in a virtualenv.
I was facing a similar situation and found the answer provided by Luis Masuelli to the question posted here very informative. I hope it helps you.
I am trying to create my first site in Django, and as I look for example apps out there to draw inspiration from, I constantly stumble upon the term "reusable apps".
I understand the concept of a reusable app easily enough, but the means of reusing an app in Django are quite lost on me. A few questions that are bugging me in the whole business are:
What is the preferred way to re-use an existing Django app? Where do I put it and how do I reference it?
From what I understand, the recommendation is to put it on your "PYTHONPATH", but that breaks as soon as I need to deploy my app to a remote location that I have limited access to (e.g. on a hosting service).
So, if I develop my site on my local computer and intend to deploy it on an ISP where I only have ftp access, how do I re-use 3rd party Django apps so that if I deploy my site, the site keeps working (e.g. the only thing I can count on is that the service provider has Python 2.5 and Django 1.x installed)?
How do I organize my Django project so that I could easily deploy it along with all of the reusable apps I want to use?
In general, the only thing required to use a reusable app is to make sure it's on sys.path, so that you can import it from Python code. In most cases (if the author follows best practice), the reusable app tarball or bundle will contain a top-level directory with docs, a README, a setup.py, and then a subdirectory containing the actual app (see django-voting for an example; the app itself is in the "voting" subdirectory). This subdirectory is what needs to be placed in your Python path. Possible methods for doing that include:
running pip install appname, if the app has been uploaded to PyPI (these days most are)
installing the app with setup.py install (this has the same result as pip install appname, but requires that you first download and unpack the code yourself; pip will do that for you)
manually symlinking the code directory to your Python site-packages directory
using software like virtualenv to create a "virtual Python environment" that has its own site-packages directory, and then running setup.py install or pip install appname with that virtualenv active, or placing or symlinking the app in the virtualenv's site-packages (highly recommended over all the "global installation" options, if you value your future sanity)
placing the application in some directory where you intend to place various apps, and then adding that directory to the PYTHONPATH environment variable
You'll know you've got it in the right place if you can fire up a Python interpreter and "import voting" (for example) without getting an ImportError.
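Sticking with the django-voting example, the virtualenv route from the list above might look like this (a sketch; the environment name is arbitrary):
virtualenv env               # create an isolated Python environment
source env/bin/activate     # activate it in this shell
pip install django-voting   # install the app from PyPI
python -c "import voting"   # no ImportError means it's on sys.path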
On a server where you have FTP access only, your only option is really the last one, and the host has to set it up for you. If they claim to support Django, they must provide some place where you can upload packages and have them be importable from Python. Without knowing the details of your webhost, it's impossible to say how they structure that for you.
An old question, but here's what I do:
If you're using a version control system (VCS), I suggest putting all of the reusable apps and libraries (including Django) that your software needs into the VCS. If you don't want to put them directly under your project root, you can modify settings.py to add their location to sys.path, as sketched below.
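For example, assuming the vendored apps sit in a libs/ directory next to settings.py (the directory name is just an example):
# settings.py
import os
import sys
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(PROJECT_ROOT, 'libs'))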
After that deployment is as simple as cloning or checking out the VCS repository to wherever you want to use it.
This has two added benefits:
No version mismatches: your software always uses the version that you tested it with, not whatever version was available at the time of deployment.
If multiple people work on the project, nobody else has to deal with installing the dependencies.
When it's time to update a component's version, update it in your VCS and then propagate the update to your deployments through the VCS.