Installing Python in a Docker container - python

I'm new to coding and have been fiddling around with Docker containers and services.
I installed a temporary VS Code server on my Raspberry Pi and deployed it on my local LAN so I can access it from various machines.
Now I'm trying to create a Flask app and run it from the container, but I can't figure out which IP address to bind the web server to (the default I always used was host=127.0.0.1, port=8080, but that resolves to whichever local machine I'm visiting it from).
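For what it's worth, the usual fix for the binding question is to have Flask listen on 0.0.0.0 inside the container and then publish the port; a minimal sketch (the file name, route, and port are assumptions, not from the question):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the container!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the server accepts connections arriving from outside
    # the container; 127.0.0.1 would only accept connections originating
    # inside the container itself.
    app.run(host="0.0.0.0", port=8080)
```

With a port mapping like 8080:8080 in docker-compose, the app is then reachable at the Raspberry Pi's LAN address on port 8080.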
While troubleshooting to understand what to do with exposed ports and so on, I stopped the container and changed the docker-compose file. (I have a path mapped for the config's persistent storage, so my VS Code settings are saved and persist between deployments.)
But every time I stop and redeploy the container I lose my Python 3 installation, and I have to re-run apt update, apt upgrade, apt install python3-pip, and every Python package I need for the project.
Where am i going wrong?
Silly question, but where does Python get installed, and why isn't it persistent when I have my config path set?
I read that Python gets installed in /usr/local/lib; should I also map those directories to the persistent storage folder?
How should I do that?
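(For context on where this usually goes wrong: a container's filesystem is recreated from its image on every deployment, so only paths mapped to volumes persist; anything apt or pip installed into a running container is thrown away. Rather than mapping /usr/local/lib to a volume, the standard approach is to bake the installs into a custom image. A minimal sketch, where the base image, file names, and requirements.txt are assumptions:)

```dockerfile
# Hypothetical Dockerfile: bake Python and your packages into the image.
FROM python:3.11-slim

WORKDIR /app

# Installing at build time puts the packages into an image layer,
# so they survive every stop/redeploy of the container.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

In docker-compose you would then use build: . instead of a stock image:, while the existing config volume keeps handling the persistent settings.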
thanks

Related

Python: Question about packaging applications docker vs pyinstaller

I have a python application that I've created an executable of, using pyinstaller. The entire python interpreter is packaged into the executable with all its pip dependencies.
So now my application can run in environments where python or python modules may not be installed, but there are still some dependencies:
1) MongoDB - This is the database my application uses, and it needs to be installed on a system for it to work of course.
2) Mosquitto - This service is required because the application uses MQTT to receive/send commands.
My current method of handling this is a shell script that installs MongoDB and Mosquitto the first time my application is deployed somewhere. I just discovered Docker, and I was wondering whether it is capable of packaging these 'external' dependencies into a Docker image?
Is it possible for me to have one standalone "thing" which will run in any environment regardless of whether mongoDB or mosquitto are installed there?
And how exactly would I go about doing this?
(Unrelated but this application is meant to run on a raspberry pi)
If you adopted Docker here:
You'd still have to "separately" run the external services; they couldn't be packaged into a single artifact per se. There's a standard tool called Docker Compose that provides this capability, though, and you'd generally distribute a docker-compose.yml file that describes how to run the set of related containers.
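A sketch of such a compose file, where the service names, images, and environment variables are illustrative (also note that on a 32-bit Raspberry Pi OS the official mongo images may not be available, so check for an ARM-compatible image):

```yaml
# Hypothetical docker-compose.yml describing the three related containers.
version: "3"
services:
  app:
    build: .                             # your Python application, built from its own Dockerfile
    depends_on:
      - mongo
      - mosquitto
    environment:
      MONGO_URL: mongodb://mongo:27017   # services reach each other by service name
      MQTT_HOST: mosquitto
  mongo:
    image: mongo:4
    volumes:
      - mongo-data:/data/db              # keep the database outside the container lifecycle
  mosquitto:
    image: eclipse-mosquitto
volumes:
  mongo-data:
```

A user then just runs docker-compose up to bring up all three pieces; the compose file, not a single image, is the distributable artifact.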
It's unusual to distribute a Docker image as files; instead you'd push your built image to a registry (Docker Hub is the best-known, the major public-cloud providers offer hosted registries, there are a couple of independent services, and you can run your own). Docker can then retrieve the image over HTTP.
Note that Docker containers can effectively only be run by root-equivalent users: anyone who can run any Docker command has unrestricted root-level access on the host. Since you're already installing databases as part of your bringup process this probably isn't a concern for you, but a plain-Python or pyinstallered application could be run as an ordinary user.

Beginner Docker-Compose & Django

I'm reading through the Docker Compose docs and have a question about the first code example under the heading:
Create a Django project
To create a new Django project, it states that you should run the following command:
docker-compose run web django-admin.py startproject composeexample .
What I'm not understanding is why we should run this command in the context of docker-compose run. It's still creating the folder on our local machine. So why are we going through docker-compose to do this?
The point of Docker here is repeatability. Note that it is not the django-admin.py on your local machine that is executed (nor your local machine's Python version, for that matter), but the binaries in the container that was built in the preceding steps.
By executing the command through the 'web' container, anyone with that container runs exactly the same versions of the binaries and libraries, removing the "it works on my machine" problem.
Of course, in this example the container is built on your machine just before it gets used (for simplicity); in a real-world situation you'd share the resulting image through a registry so that everyone on your team can use it.
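As for why the folder still appears on your local machine: the tutorial's compose file bind-mounts the current host directory into the container, so anything the containerized django-admin.py writes lands on the host. A rough sketch of that compose file (paraphrased from the tutorial, not the exact text):

```yaml
# Sketch of the Compose/Django quickstart setup (details paraphrased).
version: "3"
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code        # the current host directory is mounted at /code in the container
    ports:
      - "8000:8000"
    depends_on:
      - db
```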

Django in Docker using PyCharm

I'm trying to develop a database manager in Django and want to develop and deploy it in Docker. As my IDE, I'd like to continue using PyCharm, but I'm having trouble understanding how it interacts with Docker.
I am new to Docker and its integration in PyCharm. My system runs Windows 10 and Docker for Windows.
I already tried using PyCharm's remote interpreter, but I have to activate the port forwarding manually (using Kitematic), since PyCharm somehow does not forward the exposed port automatically.
I also tried using a "Docker Deployment" run configuration. However, I can't get requests to localhost:8000 to get through to the Django server. All I get are empty response errors.
(Note: the issues above were addressed in the accepted answer.)
It would really help me to have an explanation of how PyCharm's two options (remote interpreter and Docker deployment) actually work, and ideally an up-to-date tutorial for setting up Django with them. Unfortunately I could only find outdated tutorials, and JetBrains' help pages are either outdated or not detailed enough.
Could someone help me out and guide me through this or point me to good resources?
Assuming you have the latest Docker (for Mac or for Windows) along with an updated version of PyCharm, you could achieve the port forwarding (binding) this way:
Create a new run configuration
Select your Docker server in the Deployment tab. If nothing shows, create a new one. Test that it actually works by clicking View > Tool Windows > Docker and connecting to the Docker server. You should see the existing images and running containers.
In the Container tab, make sure to add the right Ports bindings.
An important note
Make sure that you are running your Django server on 0.0.0.0:8000 and not localhost:8000
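As a sketch of how those two pieces fit together (the port number is just the Django default): the Ports binding in the run configuration maps a host port to the container port, but that only helps if the server inside the container listens on all interfaces, e.g. in the container's Dockerfile:

```dockerfile
# Hypothetical Dockerfile excerpt for the Django container.
EXPOSE 8000
# 0.0.0.0 makes the dev server accept connections arriving via the forwarded port;
# localhost would only accept connections originating inside the container itself.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```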

Docker container and virtual python environment

I'm getting started with Docker. I installed Docker Toolbox on Windows 10 and downloaded the desired container. I need full access to the container's filesystem, with the ability to add and edit files. Can I transfer the contents of the container into a virtual Python environment in the Windows filesystem? How would I do that?
Transferring files between Windows and Linux might be a little annoying because of different line endings.
Putting that aside, it sounds like you are looking to create a Docker-based development environment. There are good tutorials online that walk you through setting one up; I would start with one of these:
Running a Rails Development Environment in Docker. This one is about Rails, but the principles are the same. Section 3 specifically talks about sharing code between your host machine and the Docker container.
How To Work with Docker Data Volumes on Ubuntu 14.04 includes a brief introduction to Docker containers, different use cases for data volumes, and how to get each one working. The Sharing Data Between the Host and the Docker Container section covers what you are trying to do. Its example is about reading log files created inside the container, but the principle is the same for adding or updating files in the container.
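Rather than copying files out of the container, the usual approach is a bind mount, so the project lives in the Windows filesystem and is merely mounted into the container. A sketch (the host path is illustrative; Docker Toolbox can normally only share paths under C:\Users by default):

```yaml
# Hypothetical docker-compose.yml: edit files on Windows, run them in the container.
services:
  dev:
    image: python:3
    working_dir: /app
    volumes:
      - /c/Users/you/project:/app   # host folder mounted into the container
    command: python app.py
```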

Vagrant and Google App Engine are not syncing files

I am currently using Vagrant to spin up a VM to run GAE's dev_appserver in the Virtual Machine.
The sync folder works and I can see all the files.
But after I run the dev appserver, changes made to Python files on the host machine are not picked up dynamically.
To see updates to my python files, I have to relaunch dev appserver in my Virtual Machine.
Also, I have grunt tasks that watch html/css files. These also do not sync properly when updated by editors outside the Virtual Machine.
I suspect that it's something to do with the way Vagrant syncs files changed on the host machine.
Has anyone found a solution to this problem?
Finally found the answer!
In the latest version of Google App Engine, there is a new flag you can pass to dev_appserver.py:
dev_appserver.py --use_mtime_file_watcher=True works!
This is presumably because Vagrant's default shared folders don't deliver filesystem-change (inotify) events for edits made on the host, so the default file watcher never notices them; the mtime watcher polls modification times instead. Detecting a change takes 1-2 seconds, but it still works!
