Spring Boot application which triggers another app via CLI - Python

I am currently developing a Spring Boot application which triggers a Python program via the CLI. I've used ProcessBuilder to do that and it's been working OK so far.
Now I'm trying to get the Spring Boot application and the Python program into a Docker container. Since I'm new to Docker, I don't know the best way to do this. I've tried using COPY to copy the whole folder when creating the image, but for some reason the pythonapp folder in the container is always empty.
Am I missing something or is there a better way to do this?
FROM openjdk:8u151-jdk-slim
EXPOSE 8080
# copy the Spring Boot jar into the image as /app.jar
ADD springbootapp-0.0.1.jar app.jar
# copies the whole build context (the directory docker build runs in) into /root/pythonapp
COPY . /root/pythonapp
RUN sh -c 'touch /app.jar'
# install Python plus the GObject/GTK bindings the script needs
RUN apt-get update && apt-get install -y python \
    python-gi \
    gir1.2-gtk-3.0
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

Normally the idea of Docker is that one container does one thing and does it well, so it's usually not a good idea to put two things in one Docker container. Think about two containers :-) (one possible split is sketched below).
Other than that, it might be a good idea to add the files separately, or as a tar/zip archive that you extract in the image. Also check where you run docker build from: COPY . copies the build context, so if pythonapp isn't inside that directory (or is excluded by a .dockerignore file), the target folder in the image will be empty.
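As a sketch of what the two-container split could look like: run the Python program as its own small HTTP service and let the Spring Boot container call it over a shared Docker network instead of via ProcessBuilder. Everything below (the /run route, do_work) is an illustrative assumption, not something from your project:

# app.py - a minimal Flask wrapper around the Python program (hypothetical names)
from flask import Flask, jsonify, request

app = Flask(__name__)

def do_work(params):
    # placeholder for your program's actual entry point
    return {"status": "ok", "params": params}

@app.route("/run", methods=["POST"])
def run():
    params = request.get_json(silent=True) or {}
    return jsonify(do_work(params))

if __name__ == "__main__":
    # bind to all interfaces so the Spring Boot container can reach it
    app.run(host="0.0.0.0", port=5000)

The Spring Boot app would then POST to http://pythonapp:5000/run (using the service name on the shared network) instead of shelling out.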

Related

Is there a way to modify files inside docker via PyCharm?

I want to modify files inside a Docker container with PyCharm. Is there a possibility of doing such a thing?
What you want to obtain is called bind mounting, and it can be achieved by adding the -v parameter to your run command. Here's an example with an nginx image:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
The specific parameter doing the work here is -v.
-v ~/nginxlogs:/var/log/nginx sets up a bind-mount volume that links the /var/log/nginx directory from inside the nginx container to the ~/nginxlogs directory on the host machine.
Docker uses a : to split the host's path from the container path, and the host path always comes first.
In other words, the files that you edit on your local filesystem will be reflected inside the container immediately.
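If you'd rather script this from Python than the shell, the Docker SDK for Python (pip install docker) exposes the same bind-mount option. A minimal sketch of the nginx example above; the host path is an assumption you'd replace (the SDK wants it absolute):

# run nginx with a bind mount via the Docker SDK for Python
import docker

client = docker.from_env()
container = client.containers.run(
    "nginx",
    name="nginx",
    detach=True,
    ports={"80/tcp": 5000},  # map container port 80 to host port 5000
    volumes={
        "/home/me/nginxlogs": {"bind": "/var/log/nginx", "mode": "rw"},
    },
)
print(container.short_id)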
Yes. There are multiple ways to do this, and you will need to have PyCharm installed inside the container.
The following set of instructions should work:
docker ps - this will show you details of the running containers
docker exec -it <name of container> /bin/bash
At this point you will have a shell inside the container. If PyCharm is not installed, you will need to install it. The following should work:
sudo apt-get install pycharm-community
Good to go!
Note: The installation is not persistent across Docker image builds. You should add the PyCharm installation step to the Dockerfile if you need to access it regularly.

Run .sh script from Python as sudo

I'm working on a project in Python where I want to automate Docker container creation. I already have the project folder, which includes all the files required to create the image.
One of these is create_image.sh:
docker build -t my_container:latest .
Currently I do:
sudo bash create_image.sh
But now I need to automate this process from python.
I have tried:
import os
import subprocess
subprocess.check_call("bash -c '. create_image.sh'", shell=True)
But I get this error:
CalledProcessError: Command 'bash -c '. create_image.sh'' returned non-zero exit status 1.
EDIT:
The use case is to automate container creation through an API. I have the code in Flask and Python up to this point, where I got stuck on creating the image from the Dockerfile. The rest is automated from templates.
You can try:
subprocess.call(['sudo', 'bash', 'create_image.sh' ])
which is the equivalent of:
sudo bash create_image.sh
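If it still fails, it helps to see why; a short sketch with subprocess.run (capture_output needs Python 3.7+) that prints the script's output along with the exit status:

import subprocess

# run the build script and surface its output instead of just the exit code
result = subprocess.run(
    ["sudo", "bash", "create_image.sh"],
    capture_output=True,
    text=True,
)
print("exit code:", result.returncode)
print(result.stdout)
print(result.stderr)  # docker build errors usually end up here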
Note: Let me say that there are better ways of automating Docker container creation - please check docker-compose, which can build and start containers easily. If you can elaborate more on the use case, we could help you with a more elegant Docker solution. It might not be a Python problem.
EDIT:
Following the comments, it would be better to create a docker-compose file, with a Makefile used to issue the docker commands. Inspiration: https://medium.com/@daniel.carlier/how-to-build-a-simple-flask-restful-api-with-docker-compose-2d849d738137
In case the problem is that your user can't run docker without sudo, it's probably better to grant them Docker API access by adding them to the docker group: https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo
Simply add the user to the docker group:
sudo gpasswd -a $USER docker
Also, if you want to automate Docker operations from Python, I'd recommend using the Python library for Docker: How to build an Image using Docker API Python Client?
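As a minimal sketch of that approach, assuming the Dockerfile sits in the current directory and your user can already reach the Docker daemon (see the docker group note above):

# build the image through the Docker SDK for Python instead of shelling out
import docker

client = docker.from_env()
image, build_logs = client.images.build(path=".", tag="my_container:latest")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")  # the same output docker build prints
print("built:", image.tags)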

How to persist my notebooks and data in my Docker image/container

I am new to Docker and I am somewhat confused about containers and images. I want to use Docker for TensorFlow development. All I need is an easy way to write Jupyter notebooks and use GPU-powered TensorFlow.
I already have the latest TensorFlow Jupyter Python 3 image. I run the image with:
docker run --rm --runtime=nvidia -v -it -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
How can I make it so that the data I work on in that image, and the Jupyter notebooks I add and edit, won't get lost after I exit the process? I know that Docker images aren't meant to persist state, but I'm so new to this that I just want something to work in with persistent data. Can someone guide me through this or point me to a resource which will answer all my prayers?
I would also like to move some stuff into the container that is going to be run, so that I can access some custom Python libs, because they contain things that my notebooks need to import!
Side questions:
--rm removes the container (or whatever) by default; I ran it without this flag and still my data was lost
-v is for volumes? I tried -v Bachelor:/app to mount a volume like so, but it apparently doesn't make any difference. I don't know how to use the volume Bachelor that I created. Instead, a multitude of unnamed, unusable volumes get created whenever I run this
-it also does something, no idea what
-p is the port number, right?
Use Docker volumes:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers
Example:
docker run --runtime=nvidia -v ${SOURCE_FOLDER}:${DEST_FOLDER} -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
Change SOURCE_FOLDER and DEST_FOLDER accordingly (use absolute paths!).
Now if you navigate to localhost:8888 and create a notebook in DEST_FOLDER, it should also be available in SOURCE_FOLDER.
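For the custom Python libs from the question, one option is to keep them inside the mounted SOURCE_FOLDER and add the mount point to sys.path from a notebook cell. A small sketch; the /tf/notebooks/libs path and mylib module are assumptions standing in for your DEST_FOLDER and library:

# first notebook cell: make modules in the bind-mounted folder importable
import sys
sys.path.append("/tf/notebooks/libs")  # hypothetical: a libs folder under your DEST_FOLDER

import mylib  # hypothetical module that now resolves from the mounted host folder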
As for your side questions:
-it runs a container in interactive mode (-i keeps STDIN open, -t allocates a terminal). You generally add /bin/bash after the run command, so you can start an interactive bash session inside the container.
--rm cleans up the container after it exits.
Those options aren't really necessary for your use case. Just remember to use docker ps and docker rm <ID> to clean up the container after you're done.
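And if you ever want to script that cleanup from Python, a sketch using the Docker SDK (pip install docker), which removes all stopped containers:

# remove all stopped containers, similar to docker container prune
import docker

client = docker.from_env()
report = client.containers.prune()
print(report)  # e.g. {'ContainersDeleted': [...], 'SpaceReclaimed': ...}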

Saving changes in docker project

I just started a Django project with Docker, using the cookiecutter-django template that is discussed in the book Two Scoops of Django.
I am trying to set it all up on OS X, but I am having some trouble with the following part of the documentation:
Saving changes
If you are using OS X or Windows, you need to create a
/data partition inside the virtual machine that runs the docker daemon
in order to make all changes persistent. If you don't do that, your /data
directory will get wiped out on every reboot. To create a persistent
folder, log into the virtual machine by running:
$ docker-machine ssh dev1
$ sudo su
$ echo 'ln -sfn /mnt/sda1/data /data' >> /var/lib/boot2docker/bootlocal.sh
However, if I execute these commands, and try to start my docker project, I get the following error:
ERROR: Cannot start container 182a38022fbdf65f7a64f1ca5475a3414084d11c91f1cb48bffc6f76491baf4a: mkdir /data: file exists
I'm quite stuck at this point, do you guys have an idea what I could do to get this up and running?
So in the end this was fixed by making the directory on the virtual machine. I did that with the following command when adding the line to the bootlocal.sh file:
$ mkdir /mnt/sda1/data

Debug Python in Docker Container

I have a Docker container running a Python server, with my local volume mounted into it (so it gets updated if I restart the container, for instance).
However, this is tremendously hard to debug. I'm using the PyCharm Professional IDE.
I've tried following the guides on how to debug inside Docker containers, but they only show how to do it when you start the container from PyCharm. In my case I have a big Terraform setup that provisions the whole environment, so I need to find a way of attaching to the container's Python interpreter, or something like that.
Would anyone have any ideas or guides on this?
Thanks!
There are many details missing that would be needed to get a full view, but there are generally two ways to debug containers: 1) debug a running container and 2) debug a container image.
Debugging Container Images and Failed Builds
The latter is much easier because you can look at the history of a particular image and run a layer inside it.
First, we take a look at our locally built images:
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
<none>       <none>   77af4d6b9913   19 hours ago   1.089 GB
committ      latest   b6fa739cedf5   19 hours ago   1.089 GB
Next, we can pick a particular image and run docker history on it:
$ docker history 77af4d6b9913
IMAGE          CREATED       CREATED BY                                     SIZE       COMMENT
3e23a5875458   8 days ago    /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8           0 B
8578938dd170   8 days ago    /bin/sh -c dpkg-reconfigure locales && loc     1.245 MB
be51b77efb42   8 days ago    /bin/sh -c apt-get update && apt-get install   338.3 MB
4b137612be55   6 weeks ago   /bin/sh -c #(nop) ADD jessie.tar.xz in /       121 MB
Then we can pick a layer anywhere in the history of the image and run that interactively:
$ docker run -it --rm 3e23a5875458 /bin/sh
This will dump you into a shell where you can run whatever the next command in the image-build process would be. This is super useful if your docker build command failed and you need to understand why, but it can also be useful if you just want to look at how things are set up inside a particular container (such as your Python interpreter, dependencies, PATH, etc.).
Attaching to a Running Container
This can be a little more confusing, but similarly, you can run a command inside a running container using exec. For instance, I often want to make sure my environment variables are set correctly, so I'll run something like this:
$ docker exec my_container env
You can use this to create a shell inside the running container as well:
$ docker exec -it my_container /bin/sh
This is generic stuff, but useful broadly for debugging containers.
Note: I am using /bin/sh above because a lot of small base images (like Alpine) don't have bash installed.
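Specifically for PyCharm Professional, there is also its remote-debug server: install pydevd-pycharm inside the container (matched to your PyCharm version), create a "Python Debug Server" run configuration in the IDE, and have the containerized code call settrace back to it. A sketch, where the host name and port are assumptions that depend on how your container reaches the IDE:

# early in the containerized server's startup
# pip install pydevd-pycharm  (version should match your PyCharm build)
import pydevd_pycharm

pydevd_pycharm.settrace(
    "host.docker.internal",  # assumption: however the container reaches the machine running PyCharm
    port=5678,               # must match the Python Debug Server run configuration
    stdoutToServer=True,
    stderrToServer=True,
)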
