I'm reading through the Docker Compose docs and have a question about the first code example under the heading:
Create a Django project
To create a new Django project, it says you should run the following command:
docker-compose run web django-admin.py startproject composeexample .
What I don't understand is why we should run this command through docker-compose run. It still creates the folder on our local machine, so why are we going through docker-compose to do this?
The point of Docker here is repeatability. Note that it is not the django-admin.py on your local machine that is executed (nor the Python version on your local machine, for that matter); what runs are the binaries inside the image that was built in the preceding steps.
By executing the command through the 'web' service, anyone who runs that container gets exactly the same versions of the binaries and libraries, which removes the "it-works-on-my-machine" problem.
Of course, in this example (for simplicity) the image is built on your machine just before it gets used; in a real-world situation you'd share the resulting image through a registry so that everyone on your team can use it.
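For reference, the Compose file that tutorial has you create looks roughly like the sketch below (written from memory, so names and details may differ from the current docs). The bind mount under volumes is why the project files generated inside the container also appear on your local machine:

# docker-compose.yml (rough sketch of the tutorial's file; check the docs for the exact version)
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code        # bind-mounts your project directory into the container,
                       # so files created by django-admin show up on the host
    ports:
      - "8000:8000"
    depends_on:
      - db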
Related
I'm new to coding and have been fiddling around with Docker containers and services.
I have installed a temporary VS Code server on my Raspberry Pi and deployed it on my local LAN so I can access it from various machines.
Now I've been trying to create a Flask app and run it from the container, and I'm trying to figure out how to publish and run the Flask web server, since I can't figure out which IP I should host it on (the default I always used was host=127.0.0.1, port=8080, but that would bring me to the local machine I'm visiting it from).
So while I was troubleshooting to understand what to do with exposed ports etc., I stopped the container and changed the docker-compose file (I have a path set for the config's permanent storage, so my VS Code settings are actually saved and persistent between deployments).
But I'm having the problem that every time I stop and redeploy the container I lose my Python 3 installation, and have to rerun apt update, apt upgrade, apt install python3-pip, and every Python package I need for the project.
Where am I going wrong?
Silly question, but where does Python get installed, and why isn't it persistent since I have my config path set?
I read that Python gets installed in /usr/local/lib; should I also map those directories to the persistent storage folder? How should I do that?
Thanks
I am attempting to set up a Docker container with php-apache and Python. This is primarily for a PHP web application. For part of the functionality I wrote a Python script that uses a Python library which provides functionality I couldn't find in PHP; otherwise I'd have just tried to stick to PHP for everything. I run the Python script with PHP's shell_exec command. Everything works in my local development environment; however, when I attempt to push to production, problems arise. Anyway, I have been trying for hours (tons of research on the topic) and I cannot figure out how to get Python installed in the same Docker container as php-apache. Here is an example of a Dockerfile I've been using:
FROM python:3.7
RUN apt-get update
RUN apt-get install python3.7
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
FROM php:7.4.13-apache
RUN docker-php-ext-install mysqli pdo pdo_mysql
With this set-up I get "sh: 1: python: not found".
If I remove the last two lines (the php-apache part), the container keeps restarting continuously (though Python is installed in that case). I've tried many other example Dockerfiles for Python, combined them with php-apache, and none have worked.
I ended up going with the suggestion from @DavidMaze and setting up two separate containers, one for php:apache and one for Python. For the Python container I built a simple Flask application with endpoints that, when a GET request is made, run a specific Python function. I used PHP curl to communicate with this Flask API from the php:apache container.
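Roughly, the two-container layout looks something like the sketch below (the service names, ports, and build paths here are illustrative, not my exact files):

# docker-compose.yml (illustrative sketch; names, ports, and paths are assumptions)
services:
  web:
    build: ./php          # image based on php:7.4-apache for the PHP app
    ports:
      - "80:80"
  api:
    build: ./python       # image running the Flask app, e.g. on port 5000
    expose:
      - "5000"            # reachable from other services (web), not published to the host

From the PHP container, curl can then reach the Flask API by service name, e.g. http://api:5000/some-endpoint, where the endpoint name is whatever you define in Flask.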
I have a Python application that I've created an executable of using PyInstaller. The entire Python interpreter is packaged into the executable along with all its pip dependencies.
So now my application can run in environments where python or python modules may not be installed, but there are still some dependencies:
1) MongoDB - This is the database my application uses, and it needs to be installed on a system for it to work of course.
2) Mosquitto - This service is required because the application uses MQTT to receive/send commands.
My current method of handling this is to use a shell script which installs MongoDB and Mosquitto the first time my application is deployed somewhere. I just discovered Docker, and I was wondering whether it is capable of packaging these 'external' dependencies into a Docker image.
Is it possible for me to have one standalone "thing" which will run in any environment regardless of whether MongoDB or Mosquitto are installed there?
And how exactly would I go about doing this?
(Unrelated, but this application is meant to run on a Raspberry Pi.)
If you adopted Docker here:
You'd still have to "separately" run the external services; they couldn't be packaged into a single artifact per se. There's a standard tool called Docker Compose that provides this capability, though, and you'd generally distribute a docker-compose.yml file that describes how to run the set of related containers.
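As a rough illustration only (the application service, environment variables, and image tags below are assumptions, not something from your project), such a docker-compose.yml could look like:

# docker-compose.yml (rough sketch; service names, env vars, and image tags are assumptions)
services:
  app:
    build: .                  # the image containing your Python application
    depends_on:
      - mongo
      - mqtt
    environment:
      - MONGO_URL=mongodb://mongo:27017   # hypothetical setting your app might read
      - MQTT_HOST=mqtt                    # hypothetical setting for the MQTT client
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db   # keep the database outside the container's writable layer
  mqtt:
    image: eclipse-mosquitto
    ports:
      - "1883:1883"
volumes:
  mongo-data:

Whoever runs this only needs Docker and Compose installed; docker-compose up pulls or builds the images and starts all three containers together.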
It's unusual to distribute a Docker image as files; instead you'd push your built image to a registry (Docker Hub is the best known, the major public-cloud providers offer hosted registries, there are a couple of independent services, or you can run your own). Docker can then retrieve the image over HTTP.
Docker containers can only be run by root-equivalent users; anyone who can run any Docker command has unrestricted root-level access on the host. Since you're talking about installing databases as part of your bring-up process this probably isn't a concern for you, but you could run a plain-Python or PyInstaller-packaged application as an ordinary user.
I'm using Jupyter notebooks to prototype, and I write the majority of my code as Python packages using VS Code, installed like so:
pip install -e .
This works well, as I can rapidly prototype in Jupyter but still maintain reusable/testable code by keeping most of the heavy lifting in the package(s).
I'd like to move my Python/Jupyter environment to Docker. Is there any way to configure VS Code to work well with a "remote" development environment running in a Docker container?
Since May 2019 (version 1.35), the VS Code Remote Development feature has been available in the stable release. It splits the VS Code program in two:
a server part that can be run on a remote computer, in a container, or in a WSL environment
a client part, mainly the GUI, that runs locally
When properly configured, debugging/linting/... operations are executed inside the container. To answer your specific question, you can get a debugging experience identical to that of an uncontainerized setup.
See here for a quick overview of this feature. You can find a tutorial from the VS Code team on how to set up VS Code with Docker here.
If you expose the Jupyter instance running in the container to your machine, you may be able to specify it as a remote Jupyter server.
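For the Jupyter route, a minimal sketch of publishing a notebook server from a container might look like this (the image, paths, and port are assumptions, not a specific recommendation):

# docker-compose.yml (minimal sketch; image, paths, and port are assumptions)
services:
  notebook:
    image: jupyter/base-notebook     # any image with Jupyter installed would do
    ports:
      - "8888:8888"                  # publish the notebook server to the host
    volumes:
      - .:/home/jovyan/work          # mount your package source so pip install -e . works inside

You could then point a local browser, or a Jupyter client configured with a remote server URL, at http://localhost:8888 and run your notebooks against the container's kernel.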
I'm getting started with Docker. I installed Docker Toolbox on Windows 10 and downloaded the desired container. I need full access to the container's filesystem, with the ability to add and edit files. Can I transfer the contents of the container into a virtual Python environment in the Windows filesystem? How do I do that?
Transferring files between Windows and Linux might be a little annoying because of different line endings.
Putting that aside, it sounds like you are looking to create a Docker-based development environment. There are good tutorials online that walk you through setting one up; I would start with one of these:
Running a Rails Development Environment in Docker. This one is about Rails, but the principles will be the same. Section 3 specifically talks about sharing code between your host machine and the Docker container.
How To Work with Docker Data Volumes on Ubuntu 14.04 includes a brief introduction to Docker containers, different use cases for data volumes, and how to get each one working. The "Sharing Data Between the Host and the Docker Container" section talks about what you are trying to do. That example is about reading log files created inside the container, but the principle is the same for adding/updating files in the container; see the sketch after this list for the general idea.
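For the "add and edit files" part specifically, a minimal bind-mount sketch could look like this (the image, paths, and command are assumptions; adapt them to the container you downloaded):

# docker-compose.yml (minimal sketch; image, paths, and command are assumptions)
services:
  dev:
    image: python:3           # stand-in for whatever container you downloaded
    command: sleep infinity   # keep the container running so you can exec into it
    working_dir: /app
    volumes:
      - ./src:/app            # files you add or edit in .\src on Windows appear in /app

Note that with Docker Toolbox, bind mounts generally need to live under C:\Users (that folder is shared into the VirtualBox VM by default); other paths require extra shared-folder configuration.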