Attempting to run the Ray Docker image on an M1 Mac results in:
$ docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
> WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
I've tried to use DOCKER_DEFAULT_PLATFORM=linux/amd64, but then nothing happens:
$ DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
>
$ docker ps
> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The latest tag has digest 744f499644cc
The image has /bin/bash defined as the command to run when it starts. Since you run it without attaching a TTY, bash exits immediately and the container stops.
I'm not familiar with the image, so I don't know the correct way to run it, and your port mappings confuse me a bit. But one way to run it is:
docker run -it rayproject/ray:latest
That will put you at a prompt inside the container and you can explore the contents.
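If you specifically need the amd64 image on Apple Silicon, you can also request the platform explicitly on docker run itself (a hedged suggestion: the image then runs under emulation, which can be slow, and you still need -it because of the bash default command):
docker run -it --platform linux/amd64 -p 10001:10001 -p 8265:8265 rayproject/ray:latest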
Related
I am building an encryption service that changes the key every time a user visits the server. The problem is that when I run the Python file on its own, it works:
(screenshot: the working output)
but when I dockerize it with the Dockerfile below:
FROM python:3
RUN mkdir -p "C:\Users\joel\Desktop\mcast-freshers-week-devops-main\mcast-freshers-week-devops-main\encryption-service"
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD [ "python", "app.py" ]
The build and run are successful, the container is created, and the output from the Python code is shown:
(screenshot: the container's console output)
but when I go to the server it shows this:
(screenshot: the error page in the browser)
I tried everything but I don't know what to do.
I tried changing the code many times but I still can't solve it. I narrowed it down to this app, because I tried another Python application and it worked.
When you run your command with docker run, you need to publish the container's internal port on your local host network. Use this command instead to run your container:
docker run -it --rm -p 8080:8080 --name my-running-app my-python-app
You can then access your server by visiting localhost:8080
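If it still doesn't respond, it's worth checking what Docker actually published, for example:
docker port my-running-app
curl http://localhost:8080/
One common gotcha (an assumption here, since the app code isn't shown) is a server that binds to 127.0.0.1 inside the container; it has to listen on 0.0.0.0 to be reachable through the published port.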
I'm using a Python script to send a websocket notification, as suggested here.
The script is _wsdump.py, and I have a wrapper script script.sh:
#!/bin/sh
set -o allexport
. /root/.env set
env
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message" &
If I dockerize this script with this Dockerfile:
FROM python:3.8-slim-buster
RUN set -xe && \
    pip install --upgrade pip wheel && \
    pip3 install websocket-client
ENV TZ="Europe/Rome"
ADD utils/_wsdump.py /utils/_wsdump.py
ADD .env /root/.env
ADD script.sh /
ENTRYPOINT ["./script.sh"]
CMD []
I see some strange behaviour:
if I run docker run -it --entrypoint=/bin/bash mycontainer and then call script.sh manually, everything works fine and I receive the notification.
if I run the container with docker run mycontainer, I see no errors but the notification doesn't arrive.
What could be the cause?
Your script doesn't launch a long-running process; it tries to start something in the background and then completes. Since the script completes, and it's the container's ENTRYPOINT, the container exits as well.
The easy fix is to remove the & from the end of the last line of the script to cause the Python process to run in the foreground, and the container will stay alive until the process completes.
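For example, the last line of script.sh becomes (the same command, minus the trailing &; the exec is optional but lets the Python process replace the shell as the container's main process):
exec python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"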
There's a more general pattern of an entrypoint wrapper script that I'd recommend adopting here. If you look at your script, it does two things: (1) set up the environment, then (2) run the actual main container command. I'd suggest using the Docker CMD for that actual command:
# end of Dockerfile
ENTRYPOINT ["./script.sh"]
CMD python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"
You can end the entrypoint script with the magic line exec "$@" to run the CMD as the actual main container process. (Technically, it replaces the current shell script with a command constructed by replaying the command-line arguments; in a Docker context the CMD is passed as arguments to the ENTRYPOINT.)
#!/bin/sh
# script.sh
# set up the environment
. /root/.env set
# run the main container command
exec "$#"
With this in place you can debug the container setup by replacing just the command part, for example:
docker run --rm your-image env
to print out its environment. The alternate command env will replace the Dockerfile CMD but the ENTRYPOINT will remain in place.
You install script.sh to the root dir /, but your ENTRYPOINT is defined to run the relative path ./script.sh.
Try changing ENTRYPOINT to reference the absolute path /script.sh instead.
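A minimal sketch of that change in the Dockerfile:
ENTRYPOINT ["/script.sh"]
Alternatively, keep the relative path but pin the working directory with a WORKDIR / line just above the ENTRYPOINT.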
I am trying to mount a directory from the host into a container and at the same time run Jupyter from that directory. What am I doing wrong here that makes Docker complain about a file not being found?
docker run -it --rm -p 8888:8888 tensorflow/tensorflow:nightly-jupyter -v $HOME/mytensor:/tensor --name TensorFlow python:3.9 bash
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
I tried removing the Python version but still have the same problem. I searched extensively online and couldn't get an answer.
Basically I want to mount that directory, which is a git clone where I have my TensorFlow files. At the same time, I want to run Jupyter Notebook so I can see and run those files. With so many issues between the Apple M1 processor and TensorFlow, I thought going the Docker route would be better, but apparently not :)
Appreciate the help.
The docker run command syntax is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The image name tensorflow/tensorflow:nightly-jupyter should come after the options (-v, -p, --name, etc.) and before the command. Everything after the image name is treated as the command and its arguments, which is why Docker tried to execute -v as a program.
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter bash
I downloaded a python script and a docker image containing commands to install all the dependencies. How can I run the python script using the docker image?
Copy the Python file into the Docker image, then execute:
docker run image-name PATH-OF-SCRIPT-IN-IMAGE/script.py
Alternatively, you can run the script at image build time by adding RUN python PATH-OF-SCRIPT-IN-IMAGE/script.py inside the Dockerfile.
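A short sketch of that second approach (the base image name is a placeholder); note that RUN executes while the image is being built, so the script's output shows up in the build log rather than in a running container:
FROM my-dependency-image
COPY script.py /app/script.py
RUN python /app/script.py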
How to copy a file from the container to the host:
docker cp <containerId>:/file/path/within/container /host/path/target
How to copy a file from the host to the container:
docker cp /host/local/path/file <containerId>:/file/path/in/container/file
Run in interactive mode:
docker run -it image_name python filename.py
or, if you want to mount the script from the host and publish a port (bind-mount paths must be absolute):
docker run -it -v $(pwd)/filename.py:/filename.py -p 8888:8888 image_name python /filename.py
First, copy your Python script and other required files into your Docker container:
docker cp /path_to_file <containerId>:/path_where_you_want_to_save
Second, open the container CLI using Docker Desktop and run your Python script.
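Equivalently, from a regular terminal (assuming the container is running; the path placeholder matches wherever you copied the script above):
docker exec -it <containerId> python /path_where_you_want_to_save/script.py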
The best way, I think, is to make your own image that contains the dependencies and the script.
When you say you've been given an image, I'm guessing that you've been given a Dockerfile, since you talk about it containing commands.
Place the Dockerfile and the script in the same directory. Add the following lines to the bottom of the Dockerfile.
# Existing part of Dockerfile goes here
COPY my-script.py .
CMD ["python", "my-script.py"]
Replace my-script.py with the name of the script.
Then build and run it with these commands
docker build -t my-image .
docker run my-image
I have a Python script in my Docker container that needs to be executed, but I also need interactive access to the container once it has been created (with /bin/bash).
I would like to be able to create my container, have my script executed and be inside the container to see the changes/results that have occurred (no need to manually execute my python script).
The current issue I am facing is that if I use the CMD or ENTRYPOINT commands in the Dockerfile, I am unable to get back into the container once it has been created. I tried using docker start and docker attach but I'm getting the error:
sudo docker start containerID
sudo docker attach containerID
"You cannot attach to a stepped container, start it first"
Ideally, something close to this:
sudo docker run -i -t image /bin/bash python myscript.py
Assume my Python script contains something like this (it's irrelevant what it does; in this case it just creates a new file with text):
open('newfile.txt','w').write('Created new file with text\n')
When I create my container I want my script to execute and I would like to be able to see the content of the file. So something like:
root@66bddaa892ed# sudo docker run -i -t image /bin/bash
bash-4.1# ls
newfile.txt
bash-4.1# cat newfile.txt
Created new file with text
bash-4.1# exit
root@66bddaa892ed#
In the example above my python script would have executed upon creation of the container to generate the new file newfile.txt. This is what I need.
My way of doing it is slightly different, with some advantages.
It is actually a multi-session server rather than a script, but it could be even more usable in some scenarios:
# Just create interactive container. No start but named for future reference.
# Use your own image.
docker create -it --name new-container <image>
# Now start it.
docker start new-container
# Now attach bash session.
docker exec -it new-container bash
The main advantage is that you can attach several bash sessions to a single container. For example, I can exec one session with bash for tailing logs and do the actual commands in another session.
By the way, when you detach the last exec session your container is still running, so it can keep performing operations in the background.
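For example (a sketch reusing the container name from above):
# terminal 1: follow the container's logs
docker logs -f new-container
# terminal 2: a second interactive session in the same container
docker exec -it new-container bash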
You can run a Docker image, execute a script, and have an interactive session, all with a single command:
sudo docker run -it <image-name> bash -c "<your-script-full-path>; bash"
The second bash will keep the interactive terminal session open, irrespective of the CMD in the Dockerfile the image was created with, since that CMD is overridden by the bash -c command above.
There is also no need to append a command like local("/bin/bash") to your Python script (or bash in the case of a shell script).
Assuming that the script has not yet been transferred from the Docker host to the docker image by an ADD Dockerfile command, we can map the volumes and run the script from there:
sudo docker run -it -v <host-location-of-your-script>:/scripts <image-name> bash -c "/scripts/<your-script-name>; bash"
Example: assuming that the Python script from the original question is already on the Docker image, we can omit the -v option and the command is as simple as:
sudo docker run -it image bash -c "python myscript.py; bash"
Why not this?
docker run --name="scriptPy" -i -t image /bin/bash python myscript.py
docker cp scriptPy:/path/to/newfile.txt /path/to/host
vim /path/to/host
Or if you want it to stay on the container
docker run --name="scriptPy" -i -t image /bin/bash python myscript.py
docker start scriptPy
docker attach scriptPy
Hope it was helpful.
I think this is what you mean.
Note: this uses Fabric (because I'm too lazy and/or don't have the time to work out how to wire up stdin/stdout/stderr to the terminal properly, but you could spend the time and use straight subprocess.Popen):
Output:
$ docker run -i -t test
Entering bash...
[localhost] local: /bin/bash
root@66bddaa892ed:/usr/src/python# cat hello.txt
Hello World!root@66bddaa892ed:/usr/src/python# exit
Goodbye!
Dockerfile:
# Test Docker Image
FROM python:2
ADD myscript.py /usr/bin/myscript
RUN pip install fabric
CMD ["/usr/bin/myscript"]
myscript.py:
#!/usr/bin/env python
from __future__ import print_function
from fabric.api import local
with open("hello.txt", "w") as f:
f.write("Hello World!")
print("Entering bash...")
local("/bin/bash")
print("Goodbye!")
Sometimes your Python script needs other files from your folder, like other Python scripts, CSV files, JSON files, etc.
I think the best approach is sharing the directory with your container, which makes it easier to create one environment that has access to all the required files.
Create a shell script:
sudo nano /usr/local/bin/dock-folder
Add this as its content:
#!/bin/bash
echo "IMAGE = $1"
## image name is the first param
IMAGE="$1"
## container name is created combining the image and the folder address hash
CONTAINER="${IMAGE}-$(pwd | md5sum | cut -d ' ' -f 1)"
echo "${IMAGE} ${CONTAINER}"
# remove the container for this dir, if it exists
## docker rm: removes the container
## pwd | md5sum: gets a unique code for the current folder
## "${IMAGE}-$(pwd | md5sum)": a unique name for the container based on the folder and image
## --force: forces the container to be stopped and removed
if [[ "$2" == "--reset" || "$3" == "--reset" ]]; then
echo "## removing previous container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
# create a dedicated container for this folder, based on the chosen image, with the folder mapped in
## -it: interactive mode
## pwd | md5sum: gets a unique code for the current folder
## --name="${CONTAINER}": creates a container with a unique name based on the current folder and image
## -v "$(pwd)":/data: creates a shared volume mapping the current folder to /data inside the container
## -w /data: defines /data as the working dir of the container
## -p 80:80: some port mapping between the container and host (not required)
## "${IMAGE}": name of the image used as the starting point
echo "## creating container ${CONTAINER} as ${IMAGE} image"
docker create -it --name="${CONTAINER}" -v "$(pwd)":/data -w /data -p 80:80 "${IMAGE}"
# start the container
docker start "${CONTAINER}"
# enter the container in interactive mode, with the shared folder available
docker exec -it "${CONTAINER}" bash
# remove the container after exit
if [[ "$2" == "--remove" || "$3" == "--remove" ]]; then
echo "## removing container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
Add execute permission:
sudo chmod +x /usr/local/bin/dock-folder
Then you can call the script from your project folder:
# create (if it does not exist) a unique container for this folder and image, and open a shell in it
dock-folder python
# destroy the container if it already exists and recreate it
dock-folder python --reset
# destroy the container after closing the interactive mode
dock-folder python --remove
This call will create a new python container that shares your folder, making all the files in the folder, such as CSVs or binary files, accessible.
Using this strategy, you can quickly test your project in a container and interact with the container to debug it.
One issue with this approach is reproducibility: you may install something from your shell that is required for your application to run, but that change only happened inside your container, so anyone who tries to run your code will have to figure out what you did and repeat it.
So if you can run your project without installing anything special, this approach may suit you well. But if you had to install or change things in your container to run your project, you should probably create a Dockerfile to record those commands. That makes every step, from starting the container to making the required changes and loading the files, easy to replicate.
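A minimal sketch of such a Dockerfile (the file names are placeholders; adapt them to your project):
FROM python:3
WORKDIR /data
# record the dependencies you would otherwise install by hand inside the container
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]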