Committing a container on Windows breaks Python code

A strange Python script failure happens when running the script through docker run.
Circumstances:
I have an image (built on Linux) which contains a simple Python file named /opt/test.py with permissions -rwxr-xr-x:
#!/usr/bin/python3
from os import mkdir
print("hello world")
Running the script in the original image
Stepping into the image (docker run -it --entrypoint bash myimage), then running bash-4.4# /opt/test.py
result: hello world
Using docker run: docker run myimage /opt/test.py
result: hello world
Modify the image in the following way:
On Windows, commit the container without any change: docker commit container_id newimage
Running the script in the new image
Stepping into the image (docker run -it --entrypoint bash newimage), then running bash-4.4# /opt/test.py
result: hello world
So far so good...
Using docker run: docker run newimage /opt/test.py
result: /opt/test.py: line 2: from: command not found
This error normally means the OS doesn't know how to run the file, which is why we provide the interpreter via the #!/usr/bin/python3 shebang. We already did that, and the permissions are still -rwxr-xr-x.
And as you can see, the script runs successfully when you step into a container created from the image.

Docker commit has a magical property: it carries over all of the properties of the existing container to the new image.
You did docker run -it --entrypoint bash, so bash is now baked in as the entrypoint of the committed image.
This means that when you do docker run newimage /some/file.py you are executing that file AS A SHELL SCRIPT. And, generally, Python scripts don't work very well in bash. The equivalent is /bin/bash /opt/test.py.
Read your error message: it's literally bash choking on line 2 of /opt/test.py.
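If that is indeed what happened, one way to avoid it is to clear the inherited entrypoint, either at commit time or at run time. A minimal sketch, reusing the names from the question:
# reset the entrypoint while committing, so the committed image runs commands normally again
docker commit --change 'ENTRYPOINT []' container_id newimage
# or clear it for a single run; an empty --entrypoint discards the inherited bash entrypoint
docker run --entrypoint "" newimage /opt/test.py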

I managed to 'solve' this, although I still don't fully understand the behaviour behind it.
The key point is how the container we commit the new image from was created:
1. Committing a container which was created by stepping into it (like docker run -it --entrypoint bash myimage) will break the image.
2. Committing a container which was created in any other way (like docker run myimage /some_command) won't break it.
(See the inspect commands below to verify this.)
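To check which case you are in, you can compare the entrypoint and command recorded in the two images. A small sketch using the image names from the question:
# the image committed from the interactive bash container should show bash as its entrypoint
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' myimage
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' newimage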

Related

Run python script using a docker image

I downloaded a Python script and a Docker image containing commands to install all the dependencies. How can I run the Python script using the Docker image?
Copy the Python file into the Docker image, then execute it:
docker run image-name PATH-OF-SCRIPT-IN-IMAGE/script.py
Or you can add RUN python PATH-OF-SCRIPT-IN-IMAGE/script.py inside the Dockerfile and build it.
How to copy a file from the container to the host:
docker cp <containerId>:/file/path/within/container /host/path/target
How to copy a file from the host to the container:
docker cp /host/local/path/file <containerId>:/file/path/in/container/file
Run in interactive mode:
docker run -it image_name python filename.py
or if you want host and port to be specified:
docker run -it -v filename.py:filename.py -p 8888:8888 image_name python filename.py
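If the script lives only on your host, another common pattern is to bind-mount the directory containing it instead of copying it into the image. A sketch, with image_name and script.py as placeholders:
# mount the current host directory into the container and run the script from there
docker run --rm -it -v "$(pwd)":/app -w /app image_name python script.py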
Answer
First, copy your Python script and any other required files into your Docker container:
docker cp /path_to_file <containerId>:/path_where_you_want_to_save
Second, open the container CLI using Docker Desktop and run your Python script.
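If you prefer the command line over Docker Desktop, the same second step can be done with docker exec. A sketch reusing the placeholders above (your_script.py is hypothetical):
# run the copied script inside the already-running container
docker exec -it <containerId> python /path_where_you_want_to_save/your_script.py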
The best way, I think, is to make your own image that contains the dependencies and the script.
When you say you've been given an image, I'm guessing that you've been given a Dockerfile, since you talk about it containing commands.
Place the Dockerfile and the script in the same directory. Add the following lines to the bottom of the Dockerfile.
# Existing part of Dockerfile goes here
COPY my-script.py .
CMD ["python", "my-script.py"]
Replace my-script.py with the name of the script.
Then build and run it with these commands
docker build -t my-image .
docker run my-image

How to run ray docker on M1?

An attempt to run the ray Docker image on an M1 results in:
$ docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
> WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
I've tried to use DOCKER_DEFAULT_PLATFORM=linux/amd64, but then nothing happens:
$ DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
>
$ docker ps
> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The latest tag has digest 744f499644cc
The image has /bin/bash defined as the command to run when it starts. When you run it, you don't attach a TTY, so the container exits immediately.
I'm not familiar with the image, so I don't know the correct way to run it, and your port mappings confuse me a bit. But a way to run it is:
docker run -it rayproject/ray:latest
That will put you at a prompt inside the container and you can explore the contents.
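If you specifically need the amd64 image on the M1, you can also request the platform explicitly on the command line rather than through the environment variable. A sketch, keeping in mind that emulating linux/amd64 on arm64 can be slow:
# request the amd64 variant explicitly and keep a TTY attached so bash does not exit immediately
docker run -it --platform linux/amd64 -p 10001:10001 -p 8265:8265 rayproject/ray:latest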

How to make a Docker container run continuously?

I have a Docker image that is actually a server for a device. It is started from a Python script, and I made a .sh to run it. However, whenever I run it, it reports that it executed and then it ends (server exited with code 0). The only way I got it to work is via docker-compose: I run it as a detached container, enter the container via bin/bash, execute the run script (the aforementioned .sh) from there manually, then exit the container.
After that everything works as intended, but the issue reappears when the server is rebooted; I have to do it all manually again.
Did anyone else experience anything similar? If yes, how can I fix this?
File that starts server (start.sh):
#!/bin/sh
python source/server/main.pyc &
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
python source/server/main_events_server.pyc &
Dockerfile:
FROM ubuntu:trusty
RUN mkdir -p /home/server
COPY server /home/server/
EXPOSE 8854
CMD [ /home/server/start.sh ]
Docker Compose:
version: "3.9"
services:
server:
tty: yes
image: deviceserver:latest
container_name: server
restart: always
ports:
- "8854:8854"
deploy:
resources:
limits:
memory: 3072M
It's not a problem with docker-compose. Your Docker container should not return (i.e. it should block) even when launched with a simple docker run.
For that, your CMD should run in the foreground.
I think the issue is that your start.sh returns instead of blocking. Have you tried removing the last '&' from your script? (I'm not familiar with Python and what these different processes are.)
In general you should run only one process per container. If you have five separate processes you need to run, you would typically run five separate containers.
The corollaries to this are that the main container command should be a foreground process; but also that you can run multiple containers off of the same image with different commands. In Compose you can override the command: separately for each container. So, for example, you can specify:
version: '3.8'
services:
  main:
    image: deviceserver:latest
    command: ./main.py
  socket:
    image: deviceserver:latest
    command: ./main_socket.py
  et: cetera
If you're trying to copy-and-paste this exact docker-compose.yml file, make sure to set a WORKDIR in the Dockerfile so that the scripts are in the current directory, make sure the scripts are executable (chmod +x in your source repository), and make sure they start with a "shebang" line #!/usr/bin/env python3. You shouldn't need to explicitly say python anywhere.
FROM python:3.9 # not a bare Ubuntu image
WORKDIR /home/server # creates the directory too
COPY server ./ # don't need to duplicate the directory name here
RUN pip install -r requirements.txt
EXPOSE 8854 # optional, does almost nothing
CMD ["./main.py"] # valid JSON-array syntax; can be overridden
There are two major issues in the setup you show. The CMD is not a syntactically valid JSON array (the command itself is not "quoted") and so Docker will run it as a shell command; [ is an alias for test(1) and will exit immediately. If you do successfully run the script, the script launches a bunch of background processes and then exits, but since the script is the main container command, that will cause the container to exit as well. Running a set of single-process containers is generally easier to manage and scale than trying to squeeze multiple processes into a single container.
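If you do want to keep the single start.sh from the question, a minimal sketch (assuming the same script paths) is to leave one server in the foreground so the script, and therefore the container, keeps running:
#!/bin/sh
# background servers
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
python source/server/main_events_server.pyc &
# run the main server in the foreground; exec replaces the shell so the server receives container signals
exec python source/server/main.pyc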
You can add a sleep loop at the end of your start.sh.
#!/bin/sh
python source/server/main.pyc &
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
python source/server/main_events_server.pyc &
while true
do
  sleep 1;
done
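A slightly simpler variant, assuming the same background processes, is to rely on the shell's wait builtin, which blocks until every background child has exited instead of looping forever:
#!/bin/sh
python source/server/main.pyc &
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
python source/server/main_events_server.pyc &
# wait blocks until all background processes have exited, keeping the container alive
wait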

Docker FROM python:2 containerized simple code not running nohup parallel commands

I have sample code to run some model in Python which needs to run 4000 times to complete the process. I have created a Docker build using the Dockerfile below.
FROM python:2
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
RUN pip install fbprophet
CMD ["python", "./startup.py"]
Inside the startup.py file I am creating a shell script file that has 4000 nohup commands which need to run as Python scripts.
Here is an example of the nohup commands that will be started at the end of the startup.py script:
nohup python runprocess.py arg1 arg2
My problem is that if I start the image using the docker run command (let's say the image name is startup-build):
docker run startup-build
this creates the shell script inside the container but starts only 2 or 3 of the nohup commands from the file, not all of them. Ideally, it should start 100 processes at a time, because the script file has a 'wait' command after every 100 lines.
I don't know why this is happening. I am running this Docker image in a GCP Container-Optimized OS VM. The actual problem is that the container started with 'docker run' is not using all of the resources available in the VM and is not completing the process on time.
Is it because a Docker image can't run shell commands inside the container in parallel? Or does the nohup command have some limitation?
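For reference, the generated script described above would presumably follow a pattern like this (a sketch of the structure only, not the actual generated file; runprocess.py and the arguments are the placeholders from the question):
# one batch of background jobs
nohup python runprocess.py arg1 arg2 &
nohup python runprocess.py arg1 arg2 &
# ...98 more nohup lines...
# wait blocks until the batch above finishes before the next batch starts
wait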

Docker interactive mode and executing script

I have a Python script in my Docker container that needs to be executed, but I also need to have interactive access to the container (with /bin/bash) once it has been created.
I would like to be able to create my container, have my script executed and be inside the container to see the changes/results that have occurred (no need to manually execute my python script).
The current issue I am facing is that if I use the CMD or ENTRYPOINT commands in the docker file I am unable to get back into the container once it has been created. I tried using docker start and docker attach but I'm getting the error:
sudo docker start containerID
sudo docker attach containerID
"You cannot attach to a stepped container, start it first"
Ideally, something close to this:
sudo docker run -i -t image /bin/bash python myscript.py
Assume my Python script contains something like this (it's irrelevant what it does; in this case it just creates a new file with text):
open('newfile.txt','w').write('Created new file with text\n')
When I create my container I want my script to execute and I would like to be able to see the content of the file. So something like:
root@66bddaa892ed# sudo docker run -i -t image /bin/bash
bash-4.1# ls
newfile.txt
bash-4.1# cat newfile.txt
Created new file with text
bash-4.1# exit
root@66bddaa892ed#
In the example above my python script would have executed upon creation of the container to generate the new file newfile.txt. This is what I need.
My way of doing it is slightly different with some advantages.
It is actually a multi-session server rather than a script, but it could be even more usable in some scenarios:
# Just create interactive container. No start but named for future reference.
# Use your own image.
docker create -it --name new-container <image>
# Now start it.
docker start new-container
# Now attach bash session.
docker exec -it new-container bash
The main advantage is that you can attach several bash sessions to a single container. For example, I can exec one session with bash for tailing logs and in another session run the actual commands.
BTW, when you detach the last 'exec' session your container is still running, so it can keep performing operations in the background.
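As a hedged illustration of the multi-session idea, reusing the new-container name from above:
# terminal 1: follow the container's main output
docker logs -f new-container
# terminal 2: a second interactive session in the same container
docker exec -it new-container bash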
You can run a docker image, perform a script and have an interactive session with a single command:
sudo docker run -it <image-name> bash -c "<your-script-full-path>; bash"
The second bash will keep the interactive terminal session open, irrespective of the CMD command in the Dockerfile the image has been created with, since the CMD command is overridden by the bash -c command above.
There is also no need to append a command like local("/bin/bash") to your Python script (or bash in the case of a shell script).
Assuming that the script has not yet been transferred from the Docker host to the Docker image by an ADD Dockerfile command, we can map the volumes and run the script from there:
sudo docker run -it -v <host-location-of-your-script>:/scripts <image-name> bash -c "/scripts/<your-script-name>; bash"
Example: assuming that the python script in the original question is already on the docker image, we can omit the -v option and the command is as simple as follows:
sudo docker run -it image bash -c "python myscript.py; bash"
Why not this?
docker run --name="scriptPy" -i -t image /bin/bash python myscript.py
docker cp scriptPy:/path/to/newfile.txt /path/to/host
vim /path/to/host
Or if you want it to stay on the container
docker run --name="scriptPy" -i -t image /bin/bash python myscript.py
docker start scriptPy
docker attach scriptPy
Hope it was helpful.
I think this is what you mean.
Note: this uses Fabric (because I'm too lazy and/or don't have the time to work out how to wire up stdin/stdout/stderr to the terminal properly, but you could spend the time and use straight subprocess.Popen):
Output:
$ docker run -i -t test
Entering bash...
[localhost] local: /bin/bash
root@66bddaa892ed:/usr/src/python# cat hello.txt
Hello World!root@66bddaa892ed:/usr/src/python# exit
Goodbye!
Dockerfile:
# Test Docker Image
FROM python:2
ADD myscript.py /usr/bin/myscript
RUN pip install fabric
CMD ["/usr/bin/myscript"]
myscript.py:
#!/usr/bin/env python
from __future__ import print_function
from fabric.api import local
with open("hello.txt", "w") as f:
f.write("Hello World!")
print("Entering bash...")
local("/bin/bash")
print("Goodbye!")
Sometimes your Python script may use other files in your folder, like other Python scripts, CSV files, JSON files, etc.
I think the best approach is sharing the directory with your container, which makes it easier to create one environment that has access to all the required files.
Create a shell script:
sudo nano /usr/local/bin/dock-folder
Add this as its content:
#!/bin/bash
echo "IMAGE = $1"
## image name is the first param
IMAGE="$1"
## container name is created combining the image and the folder address hash
CONTAINER="${IMAGE}-$(pwd | md5sum | cut -d ' ' -f 1)"
echo "${IMAGE} ${CONTAINER}"
# remove the container for this folder, if it exists
## rm remove container command
## pwd | md5 get the unique code for the current folder
## "${IMAGE}-$(pwd | md5sum)" create a unique name for the container based in the folder and image
## --force force the container be stopped and removed
if [[ "$2" == "--reset" || "$3" == "--reset" ]]; then
echo "## removing previous container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
# create one special container for this folder based in the python image and let this folder mapped
## -it interactive mode
## pwd | md5 get the unique code for the current folder
## --name="${CONTAINER}" create one container with unique name based in the current folder and image
## -v "$(pwd)":/data create ad shared volume mapping the current folder to the /data inside your container
## -w /data define the /data as the working dir of your container
## -p 80:80 some port mapping between the container and host ( not required )
## pyt#hon name of the image used as the starting point
echo "## creating container ${CONTAINER} as ${IMAGE} image"
docker create -it --name="${CONTAINER}" -v "$(pwd)":/data -w /data -p 80:80 "${IMAGE}"
# start the container
docker start "${CONTAINER}"
# enter in the container, interactive mode, with the shared folder and running python
docker exec -it "${CONTAINER}" bash
# remove the container after exit
if [[ "$2" == "--remove" || "$3" == "--remove" ]]; then
echo "## removing container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
Add execution permission
sudo chmod +x /usr/local/bin/dock-folder
Then you can call the script into your project folder calling:
# creates, if it does not exist, a unique container for this folder and image, and opens an interactive shell in it
dock-folder python
# destroy the container if it already exists and recreate it
dock-folder python --reset
# destroy the container after closing the interactive mode
dock-folder python --remove
This call will create a new Python container sharing your folder, which makes all the files in the folder, such as CSVs or binary files, accessible.
Using this strategy, you can quickly test your project in a container and interact with the container to debug it.
One issue with this approach is reproducibility. That is, you may install something inside the container that is required for your application to run, but this change only happened inside your container, so anyone who tries to run your code will have to figure out what you did and do the same.
So, if you can run your project without installing anything special, this approach may suit you well. But if you had to install or change things in your container to be able to run your project, you probably need to create a Dockerfile to save these commands. That will make all the steps, from loading the container to making the required changes and loading the files, easy to replicate.
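As a rough sketch of that reproducible path (my-project and its Dockerfile are hypothetical), the flow boils down to:
# build an image that records your dependencies, assuming you have written a Dockerfile for the project
docker build -t my-project .
# run it with the same folder mapping the dock-folder script uses
docker run -it --rm -v "$(pwd)":/data -w /data my-project bash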
