I am running a dockerized Django app and I am looking for a way to run a directive (or several) every time before I build a Docker container. More concretely, I would like to run docker-compose -f production.yml run --rm django python manage.py check --deploy each time before I either build or up the production.yml file, and stop the build process if any errors occur. Like a pre-hook.
I know I could achieve this with a bash script, yet I was wondering if there is a way of doing this inside the docker-compose file. I can't find anything about it in the Docker documentation (except events, but I don't understand whether they serve what I want to achieve), and I assume that this is not possible. Yet maybe it is in fact possible, or maybe there is a hacky workaround?
Thanks in advance for any tips.
Currently, this is not possible. There have been multiple requests to add such functionality, but the maintainers do not consider this a good idea.
See:
https://github.com/docker/compose/issues/468
https://github.com/docker/compose/issues/1341
https://github.com/docker/compose/issues/6736
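If you do fall back on the script workaround you mentioned, a minimal sketch might look like this (just an illustration, not a compose feature; the wrapper's file name and the choice of build as the guarded step are assumptions):
# build_production.py -- hypothetical wrapper: run the deploy check, then build only if it passes
import subprocess
import sys

CHECK = [
    "docker-compose", "-f", "production.yml",
    "run", "--rm", "django", "python", "manage.py", "check", "--deploy",
]
BUILD = ["docker-compose", "-f", "production.yml", "build"]

# A non-zero exit code from the check aborts everything, like a pre-hook would.
if subprocess.call(CHECK) != 0:
    sys.exit("Deploy check failed; aborting build.")

# Swap "build" for "up" in BUILD if that is the step you want to guard.
sys.exit(subprocess.call(BUILD))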
I've been working on my VPS lately with the goal of having nginx handle all traffic as a reverse proxy to the containers that hold my projects, which most importantly are Flask apps run with the Gunicorn WSGI server.
I've made multiple attempts using a Dockerfile, docker-compose, etc.
The closest thing I got was using this post.
And those are the steps I followed to install Docker.
Even then it did not work as the example said: I wasn't able to see the message displayed by Flask, only the nginx installation-successful page.
Even if it had worked, people on the Python server suggested I use Gunicorn (which makes sense), and I wasn't able to switch it in. Most importantly, they suggested having nginx OUTSIDE the container, so that nginx just routes traffic into the other containers without being in one itself, which I also couldn't pull off.
I come from a "Heroku" background, so it's still a little complicated for me to pull this off. I've watched countless videos and explanations online, but I couldn't find a good tutorial for my specific case of "gunicorn, nginx, flask, docker".
This is a lot to ask, so I'm not really expecting a detailed explanation of how to do this (even though I would REALLY appreciate that), nor a step-by-step guide of course, but pointing me in the right direction would help. Is there something I'm doing that could be done in a more efficient or just better manner? If not, is there anything you suggest I look into? Articles, doc links, anything, hit me with it!
I would show the files I ended up with, but they were made for nginx to be inside a container, which isn't what I want my end result to be, and they didn't work anyway, so I'm hoping to start from scratch.
These are the steps I was advised to figure out (I'm still working on them because I'm restarting and taking this slowly, unlike last time):
Figure out the command to run your app using gunicorn (see the sketch after this list)
Add that command to your Dockerfile, and make sure you can build the container
Expose the container's ports and make sure you can make your requests correctly
Use NGINX to forward those requests
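For the first two steps, here is a minimal sketch of the Flask side (the module name app.py and the instance name app are just placeholders, not something your project has to use):
# app.py -- minimal Flask app that Gunicorn can serve (names are placeholders)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A visible message so you can tell a request reached Flask and not just nginx
    return "Hello from Flask behind Gunicorn"

if __name__ == "__main__":
    # Local development only; inside the container you would run something like:
    #   gunicorn --bind 0.0.0.0:8000 app:app
    app.run(host="0.0.0.0", port=8000)
That gunicorn --bind 0.0.0.0:8000 app:app line is the command that would go into the Dockerfile, and 8000 is the container port you would publish and point nginx at.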
I don't know if this will help you, but for Flask with Gunicorn inside a container, I suggest you follow this tutorial: https://mark.douthwaite.io/getting-production-ready-a-minimal-flask-app/
with its source code: https://github.com/markdouthwaite/minimal-flask-api
Personally, it helped me a lot.
Maybe next you want to build the image:
docker build -t image_name .
from the directory where the Dockerfile is located, and then run the container with something like:
docker run -p 8080:8080 image_name
And then use nginx to set up a proxy that passes requests to localhost:8080/.
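Before adding nginx, it can help to confirm the container really answers on that port; a quick sketch (it assumes the app serves something at / on localhost:8080):
# check_container.py -- quick sanity check before putting nginx in front
import urllib.request

# Assumes the container was started with -p 8080:8080 and serves a page at /
with urllib.request.urlopen("http://localhost:8080/", timeout=5) as response:
    print(response.status, response.read()[:100])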
I am using the Docker Python SDK (docker-py) to create a script that can start one or multiple containers (depending on a program argument, something like script.py --all or script.py --specific_container). It has to be possible to start each container with its own configuration (image, container_name, etc.), just like in typical docker-compose.yml files.
So basically, I'm trying to do the same thing docker-compose does, just with the Python Docker SDK.
I've read that some people stick with docker-compose by calling it via subprocess, but that is not recommended and I would like to avoid it.
I have been searching for existing libraries for this, but I haven't found anything yet. Do you know of anything I could use?
Another option would be to store the configuration for the "specific_container" profiles and for the "all" profile as JSON (?), then parse it and pass the options to the Docker SDK's containers.run method, which lets you give the same options you can also give in a docker-compose file.
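Roughly, I imagine something like this (just a sketch; the containers.json file name and layout are made up, only client.containers.run is the actual SDK call):
# run_containers.py -- rough sketch of a JSON-driven wrapper around docker-py
import json
import sys

import docker

def start(profiles, which):
    client = docker.from_env()
    for name, options in profiles.items():
        if which not in ("all", name):
            continue
        # Each JSON key maps onto a containers.run keyword argument
        # (image, name, ports, environment, detach, ...), much like a compose service.
        client.containers.run(**options)

if __name__ == "__main__":
    with open("containers.json") as fh:
        profiles = json.load(fh)
    # Real --all / --specific_container argument parsing is left out of this sketch.
    start(profiles, sys.argv[1].lstrip("-") if len(sys.argv) > 1 else "all")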
Maybe someone knows another, better solution?
Thanks in advance guys.
I have been writing a pretty simple Python quiz system (called game.py) and I am working to deploy it on Heroku. The app functions exclusively within the confines of a Python console, with no interface of any kind other than a terminal.
As such, I would like to be able to have the application on Heroku simply be akin to what you obtain with a one-off dyno, available on the dashboard (or in a terminal with the CLI) with:
heroku run python game.py
The application works perfectly well in its deployed form (exclusively from the Heroku git) and locally, but for the app to be available to a larger public, I would need such a console to appear at the "https://[appname].herokuapp.com/" URL that you are given on deployment of the app.
Naively, I would think this to be unspeakably simple to pull off, but I have yet to find a way to do it.
The only reasonable option I have found is to create a Procfile, but lacking any documentation on the available commands, I have only been able to try variations of:
web: run python game.py
Which doesn't create a web console. And:
web: bash
Which simply crashes with error code H10, with no other information given.
Any help, any suggestion, any workaround you can think of would be extremely appreciated.
I am doing a project on fog computing. I would like to use one Docker container to simulate a fog node that can process data, store data in the database, and send data to the cloud. I need Ubuntu, Python, and Redis to develop my application.
I was wondering whether it is possible to install them all in a single container? So far I have only been able to install them separately in different containers using the 'docker pull' command.
Can anyone help me out here?
Thanks!
This is bad practice and very time consuming. Anyway, if you really want to go down that road, you have to create your own image: follow the instructions to install Redis and put each step inside your new Dockerfile (or you can try to adapt the official Redis Dockerfile - https://github.com/docker-library/redis/blob/99a06c057297421f9ea46934c342a2fc00644c4f/3.2/Dockerfile).
Once that is done, you simply add the commands to install Python, and build the image.
Good luck.
Problem: to run one.py from a server.
Error
When I try to do it on a Mac, I get errors:
$python http://cs.edu.com/u/user/TEST/one.py ~
/Library/Frameworks/Python.framework/Versions/2.5/Resources/Python.app/Contents/MacOS/Python: can't open file 'http://cs.edu.com/u/user/TEST/one.py': [Errno 2] No such file or directory
one.py is like:
print 1
When I do it on Ubuntu, I get "file is not found".
Question: How can I run Python code from a server?
So far as I know, the standard Python shell doesn't know how to execute remote scripts. Try using curl or wget to retrieve the script and run it from the local copy.
$ wget http://cs.edu.com/u/user/TEST/one.py
$ python one.py
UPDATE: Based on the question referenced in the comment to this answer, you need to execute one.py based on incoming HTTP requests from end users. The simplest solution is probably CGI, but depending on what else you need to do, a more robust solution might involve a framework of some sort. They each have their strengths and weaknesses, so you should probably consider your requirements carefully before jumping in.
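If CGI turns out to be enough, a minimal sketch would be something along these lines (the file name is arbitrary, and it assumes a web server already configured to run CGI scripts; Python 2 syntax to match your one.py):
#!/usr/bin/env python
# one.cgi -- minimal CGI sketch: the server runs this on each request and sends
# everything printed after the blank line back to the browser.
print "Content-Type: text/plain"
print
execfile("one.py")  # runs the existing script, so its "print 1" becomes the response body
The script has to be executable and live wherever your web server looks for CGI programs (typically a cgi-bin directory), with one.py next to it.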
You can't do this. If you have SSH access to the server, you can run the Python script located on the server over your SSH connection. If you want to write websites in Python, google "Python web frameworks" for examples of how to set up and run websites with Python.
wget http://cs.edu.com/u/user/TEST/one.py
python one.py
You can mount the remote server's directory with some sort of network file system, like NFS. That way it becomes local.
But a better idea is to explain why you are trying to do this, so we can solve the real use case. There are most likely tons of better solutions, depending on what the real problem is.
The Python interpreter doesn't know how to read from a URL. The file needs to be local.
However, if you are trying to get the server to execute the Python code, you can use mod_python or various kinds of CGI.
You can't do what you are trying to do the way you are trying to do it.
Maybe something like this (exec rather than eval, since print 1 is a statement, not an expression)?
python -c "import urllib; exec(urllib.urlopen('http://cs.edu.com/u/user/TEST/one.py').read())"
OK, now that you have explained, here is a new answer.
You run that script with
python one.py
It's a server-side script. It runs on the server. It's also located on the server. Why you're trying to access it via HTTP is beyond me. Run it from the file system.
Although you should probably look into running Grok or Django or something; otherwise you'll just end up writing your own Python web framework, and you may just as well use one that exists instead. ;)