Executing shell command using docker-py - python

I'm trying to run a shell command with docker-py on an already-running container, but get an error:
exec: "export": executable file not found in $PATH
Here's how I wrote the script:
exe = client.exec_create(container=my_container, cmd='export MYENV=1')
res = client.exec_start(exec_id=exe)
So my question is: how can I run a shell command (inside the container) using docker-py?

You did it quite right, but you confused shell commands with Linux executables. exec_create and exec_start are all about running executables, like for example bash. export in your example is a shell command; you can only use it in a shell like bash running inside the container.
Additionally, what you are trying to achieve (setting an environment variable) is not going to work: as soon as your exec finishes (where you set the env var), the exec process exits and its environment is torn down.
You can only set container-wide environment variables when the container is created. If you want to change the env vars, you have to tear down the container and recreate it with your new vars. As you probably know, all data in the container is lost upon removal unless you use volumes to store your data; reconnect the volumes on container creation.
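For completeness, here is how environment variables are supplied at creation time, a minimal sketch using the same low-level docker-py client style as above (docker.Client in older docker-py, spelled docker.APIClient from 2.0 on; the image name is a placeholder):
import docker

# Low-level client, matching the exec_create/exec_start calls above.
client = docker.Client()

# Environment variables have to be supplied when the container is created.
container = client.create_container(
    image='myimage',  # placeholder image name
    command='/bin/sh',
    environment={'MYENV': '1'},  # visible to every process in the container
    tty=True,
)
client.start(container=container.get('Id'))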
That said, your example was nearly correct. This should work and create an empty /somefile:
exe = client.exec_create(container=my_container, cmd=['touch', '/somefile'])
res = client.exec_start(exec_id=exe)
To execute shell commands, use this example: it calls sh and tells the interpreter to run the given command string (-c):
exe = client.exec_create(container=my_container,
cmd=['/bin/sh', '-c', 'touch /somefile && mv /somefile /bla'])
res = client.exec_start(exec_id=exe)
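If you need to check whether the command succeeded, the same low-level API exposes the exit code through exec_inspect; a minimal sketch reusing exe from above:
# After exec_start returns, inspect the finished exec instance.
exit_code = client.exec_inspect(exec_id=exe)['ExitCode']
if exit_code != 0:
    print('command failed with exit code', exit_code)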

Actually, when you docker exec the command export MYENV=1 in a container, it fails and reports this error:
exec: "export": executable file not found in $PATH
because export is a shell builtin; it can only be run inside a shell. You can verify this with
whereis export
type export
whereis finds no export binary in /usr/bin or anywhere else, while type reports it as a shell builtin.
There are a few ways to work around this problem.
Case 1: use the -c parameter.
/bin/bash -c 'export MYENV=1 ; /bin/bash'
Case 2: append the export commands to an rcfile, then start the shell with that file.
echo "export MYENV=1" >> <some_file_path> ; /bin/bash --rcfile <some_file_path>
Case 3: start a shell, enter the commands to export the env parameters, then start a nested shell; the nested shell inherits them.
/bin/bash
export MYENV=1
/bin/bash # start a nested shell; MYENV is inherited
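The same idea works from docker-py: since the variable only needs to live for the duration of one command, set it inline in the sh -c string, as in case 1 (a sketch; some_command is a placeholder):
# The variable lives exactly as long as this one exec session.
exe = client.exec_create(container=my_container,
                         cmd=['/bin/sh', '-c', 'export MYENV=1; some_command'])
res = client.exec_start(exec_id=exe)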

Related

Linux run shell cmd from python, Failed to load config file

I have installed a backup program called rclone on my Raspberry Pi, which runs Debian. I have successfully run the command in the shell to back up a folder to Google Drive, but I really need to be able to do so each time I take a photo with my Python script. I have little experience with Linux compared to others, so I thought I would make a shell script with a basic shebang of
#!/bin/sh
or
#!/bin/bash
followed by the command below:
rclone copy /var/www/html/camera_images pictures::folder1
I then made the .sh file executable, and this works if I just click it in the folder and execute it. But if I try to call that .sh script from Python with
os.system('sh /home/pi/py/upload.sh')
or
os.system(' rclone copy /var/www/html/camera_images pictures::folder1 ')
I get an error in the shell saying
Failed to load config file "/root/.rclone.conf" using default - no such directory.
But the .conf is located in /home/pi, as it should be. And if I try
os.system(' sh rclone copy /var/www/html/camera_images pictures::folder1 ')
I get
sh: 0: Can't open rclone
How can I run the copy command, or a script that does so, from Python?
This is how I installed rclone:
cd
wget http://downloads.rclone.org/rclone-v1.34-linux-arm.zip
unzip rclone-v1.34-linux-arm.zip
cd rclone-v1.34-linux-arm
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
rclone config
Use --config in your rclone command. From the docs:
--config string   Config file. (default "/home/ncw/.rclone.conf")
Your command should look like this (without the leading sh, since rclone is an executable, not a shell script):
os.system('rclone copy --config /home/pi/.rclone.conf /var/www/html/camera_images pictures::folder1')
You should be using the subprocess module instead of os.system.
You can use subprocess.Popen to create a process and give it a working directory:
subprocess.Popen(your_command, cwd=path_to_your_executable_dir, shell=True)
(Use shell=True to pass a simple string command, among other conveniences.) From the docs:
The shell argument (which defaults to False) specifies whether to use
the shell as the program to execute. If shell is True, it is
recommended to pass args as a string rather than as a sequence.
On Unix with shell=True, the shell defaults to /bin/sh. If args is a
string, the string specifies the command to execute through the shell.
This means that the string must be formatted exactly as it would be
when typed at the shell prompt. This includes, for example, quoting or
backslash escaping filenames with spaces in them. If args is a
sequence, the first item specifies the command string, and any
additional items will be treated as additional arguments to the shell
itself. That is to say, Popen does the equivalent of: ....
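Putting that advice together for this case, a minimal sketch that skips the shell entirely by passing the arguments as a list (assuming rclone is on the PATH of the user running Python):
import subprocess

# Each list element is passed to rclone as-is; no shell parsing involved.
subprocess.check_call([
    'rclone', 'copy',
    '--config', '/home/pi/.rclone.conf',
    '/var/www/html/camera_images',
    'pictures::folder1',
])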
Thank you everyone :)
I have it working now with
os.system('rclone copy --config /home/pi/.rclone.conf /var/www/html/camera_images pictures::folder1')
Note that if I put sh at the start I got the error sh: 0: Can't open rclone (with the sh prefix, the shell tries to open rclone as a script file, which fails; the 0 is part of sh's error prefix, not a return value). Either way, it works without the sh.
The subprocess version works too, and I shall use that instead:
subprocess.Popen('rclone copy --config /home/pi/.rclone.conf /var/www/html/camera_images pictures::folder1', shell=True)
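If the photo script should wait for the upload to finish before carrying on, the Popen object from the line above can be waited on (a small sketch):
import subprocess

proc = subprocess.Popen(
    'rclone copy --config /home/pi/.rclone.conf '
    '/var/www/html/camera_images pictures::folder1',
    shell=True)
if proc.wait() != 0:  # wait() blocks and returns rclone's exit code
    print('rclone reported an error')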

Running a Python script automatically when launching a Docker container

Is it possible to run a python script automatically upon starting a Docker container?
The command I use to start a container from my image is:
docker run -i -t --entrypoint /bin/bash myimage -s
Is there a way to add an additional command that runs a script upon launching it?
I would prefer not to use a Dockerfile as some of the python modules I use are from private repos and need to be downloaded manually, so a Dockerfile would not completely build the image I want.
As a matter of fact there is. Just don't use --entrypoint. Instead:
docker run -it myimage /bin/bash -c /run.sh
Obviously, this assumes that the image itself contains a simple Bash script at the location /run.sh.
#!/bin/bash
command1
command2
command3
...
If you don't want that, you can mount the current folder inside the running container and run a local script:
docker run -it -v $(pwd):/mnt myimage /bin/bash -c /mnt/run.sh
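For what it's worth, the same mount-and-run can be driven from Python with the low-level docker-py client seen in the first question (a sketch; myimage is a placeholder, and docker.Client is spelled docker.APIClient from docker-py 2.0 on):
import os
import docker

client = docker.Client()

# Equivalent of: docker run -v $(pwd):/mnt myimage /bin/bash -c /mnt/run.sh
container = client.create_container(
    image='myimage',
    command=['/bin/bash', '-c', '/mnt/run.sh'],
    volumes=['/mnt'],
    host_config=client.create_host_config(
        binds={os.getcwd(): {'bind': '/mnt', 'mode': 'rw'}}),
)
client.start(container=container.get('Id'))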
ENTRYPOINT vs. CMD seems to be a common cause of confusion.
Think about it this way:
ENTRYPOINT is a way to hard-code a certain behavior that cannot be changed after setting it up.
CMD is the default way to supply a command to be run.
Docker containers can be set up to run as self-contained applications. If you're so inclined, you could create throwaway containers that accept command line arguments (a file for example), pull that in, work their magic and return you a processed file. Some people use this to set up build environments with different configurations and just run them on demand, not cluttering up their host machine.
However, your usage scenario feels tedious, since you are apparently doing the setup by hand. It would be easier to set the download credentials as environment variables, like this:
docker run -d -e "DEEP=purple" -e "LED=zeppelin" myimage /bin/bash -c /run.sh
You can then use those within the script as placeholders. This way, you get the best of both worlds. For added security, your run.sh should unset the environment variables once they have been used, like this:
#!/bin/bash
command1
command2
command3
...
unset DEEP
unset LED
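If run.sh ends up handing control to a Python script, those same variables are visible there through os.environ (a sketch):
import os

# Read the credentials passed with -e on the docker run command line.
deep = os.environ.get('DEEP')
led = os.environ.get('LED')

# ... use them, then drop them from this process as well.
del os.environ['DEEP']
del os.environ['LED']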

Docker interactive mode and executing script

I have a Python script in my docker container that needs to be executed, but I also need to have interactive access to the container once it has been created ( with /bin/bash ).
I would like to be able to create my container, have my script executed and be inside the container to see the changes/results that have occurred (no need to manually execute my python script).
The current issue I am facing is that if I use the CMD or ENTRYPOINT commands in the Dockerfile, I am unable to get back into the container once it has been created. I tried using docker start and docker attach but I'm getting the error:
sudo docker start containerID
sudo docker attach containerID
"You cannot attach to a stopped container, start it first"
Ideally, something close to this:
sudo docker run -i -t image /bin/bash python myscript.py
Assume my python script contains something like (It's irrelevant what it does, in this case it just creates a new file with text):
open('newfile.txt','w').write('Created new file with text\n')
When I create my container I want my script to execute and I would like to be able to see the content of the file. So something like:
root@66bddaa892ed# sudo docker run -i -t image /bin/bash
bash4.1# ls
newfile.txt
bash4.1# cat newfile.txt
Created new file with text
bash4.1# exit
root@66bddaa892ed#
In the example above my python script would have executed upon creation of the container to generate the new file newfile.txt. This is what I need.
My way of doing it is slightly different, with some advantages.
It is actually a multi-session server rather than a script, but it could be even more usable in some scenarios:
# Just create interactive container. No start but named for future reference.
# Use your own image.
docker create -it --name new-container <image>
# Now start it.
docker start new-container
# Now attach bash session.
docker exec -it new-container bash
The main advantage is that you can attach several bash sessions to a single container. For example, I can exec one session with bash for tailing logs and run the actual commands in another.
BTW, when you detach from the last exec session, your container is still running, so it can perform operations in the background.
You can run a docker image, perform a script and have an interactive session with a single command:
sudo docker run -it <image-name> bash -c "<your-script-full-path>; bash"
The second bash will keep the interactive terminal session open, irrespective of the CMD command in the Dockerfile the image was created with, since that CMD is overridden by the bash -c command above.
There is also no need to append a command like local("/bin/bash") to your Python script (or bash in the case of a shell script).
Assuming that the script has not yet been transferred from the Docker host to the docker image by an ADD Dockerfile command, we can map the volumes and run the script from there:
sudo docker run -it -v <host-location-of-your-script>:/scripts <image-name> bash -c "/scripts/<your-script-name>; bash"
Example: assuming that the python script in the original question is already on the docker image, we can omit the -v option and the command is as simple as follows:
sudo docker run -it image bash -c "python myscript.py; bash"
Why not this?
docker run --name="scriptPy" -i -t image /bin/bash -c "python myscript.py"
docker cp scriptPy:/path/to/newfile.txt /path/to/host
vim /path/to/host
Or if you want it to stay on the container
docker run --name="scriptPy" -i -t image /bin/bash -c "python myscript.py"
docker start scriptPy
docker attach scriptPy
Hope it was helpful.
I think this is what you mean.
Note: this uses Fabric (because I'm too lazy and/or don't have the time to work out how to wire up stdin/stdout/stderr to the terminal properly, but you could spend the time and use straight subprocess.Popen):
Output:
$ docker run -i -t test
Entering bash...
[localhost] local: /bin/bash
root@66bddaa892ed:/usr/src/python# cat hello.txt
Hello World!root@66bddaa892ed:/usr/src/python# exit
Goodbye!
Dockerfile:
# Test Docker Image
FROM python:2
ADD myscript.py /usr/bin/myscript
RUN pip install fabric
CMD ["/usr/bin/myscript"]
myscript.py:
#!/usr/bin/env python
from __future__ import print_function
from fabric.api import local
with open("hello.txt", "w") as f:
    f.write("Hello World!")
print("Entering bash...")
local("/bin/bash")
print("Goodbye!")
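For reference, a Fabric-free sketch of the same script: subprocess.call run without any redirection inherits this process's stdin/stdout/stderr, which covers the terminal wiring the note above worried about:
#!/usr/bin/env python
from __future__ import print_function
import subprocess

with open("hello.txt", "w") as f:
    f.write("Hello World!")

print("Entering bash...")
subprocess.call("/bin/bash")  # blocks until the interactive shell exits
print("Goodbye!")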
Sometimes your Python script may need to call different files in your folder, like other Python scripts, CSV files, JSON files, and so on.
I think the best approach would be sharing the directory with your container, which makes it easier to create one environment that has access to all the required files.
Create a script file:
sudo nano /usr/local/bin/dock-folder
Add this script as its content:
#!/bin/bash
echo "IMAGE = $1"
## image name is the first param
IMAGE="$1"
## container name is created combining the image and the folder address hash
CONTAINER="${IMAGE}-$(pwd | md5sum | cut -d ' ' -f 1)"
echo "${IMAGE} ${CONTAINER}"
# remove the container for this dir, if it exists
## docker rm removes a container
## pwd | md5sum gets a unique code for the current folder
## "${IMAGE}-$(pwd | md5sum)" creates a unique name for the container based on the folder and image
## --force forces the container to be stopped and removed
if [[ "$2" == "--reset" || "$3" == "--reset" ]]; then
echo "## removing previous container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
# create a dedicated container for this folder, based on the image, with the folder mapped in
## -it interactive mode
## --name="${CONTAINER}" create a container with a unique name based on the current folder and image
## -v "$(pwd)":/data create a shared volume mapping the current folder to /data inside the container
## -w /data set /data as the working dir of the container
## -p 80:80 some port mapping between container and host (not required)
## "${IMAGE}" name of the image used as the starting point
echo "## creating container ${CONTAINER} as ${IMAGE} image"
docker create -it --name="${CONTAINER}" -v "$(pwd)":/data -w /data -p 80:80 "${IMAGE}"
# start the container
docker start "${CONTAINER}"
# enter the container in interactive mode, with the shared folder mounted
docker exec -it "${CONTAINER}" bash
# remove the container after exit
if [[ "$2" == "--remove" || "$3" == "--remove" ]]; then
echo "## removing container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
Give it execute permission:
sudo chmod +x /usr/local/bin/dock-folder
Then you can run the script from your project folder:
# creates, if it does not exist, a dedicated container for this folder and image, and attaches a bash session to it
dock-folder python
# destroy the container if it already exists and recreate it
dock-folder python --reset
# destroy the container after closing the interactive session
dock-folder python --remove
This call will create a new python container sharing your folder, which makes all the files in the folder (CSVs, binaries, and so on) accessible.
Using this strategy, you can quickly test your project in a container and interact with the container to debug it.
One issue with this approach is reproducibility: you may install something from your shell session that your application requires to run, but that change only happened inside your container. Anyone who tries to run your code will have to figure out what you did and repeat it.
So, if you can run your project without installing anything special, this approach may suit you well. But if you had to install or change things in your container to be able to run your project, you should probably write a Dockerfile to record those commands. That makes every step, from loading the container to making the required changes and loading the files, easy to replicate.

From a python script, change user, set environment and run a couple of commands

I need to run a Python script that changes the user, sets an environment variable, executes a command, and returns the output.
1.) The way I am currently doing this is by creating a shell script that does it for me:
tmpshell.sh
su - grid -c "echo +ASM1 | . oraenv; asmcmd volinfo -a"
The command fails because the environment is not being set.
2.) The second way I tried was changing the user in the Python script itself and then creating the shell script.
tmp.py
import os

os.system('su - grid')
TMPFILE = "/tmp/tmpfile.sh"
filehandle = open(TMPFILE, 'w')
filehandle.write('+ASM1 | . oraenv\n')
filehandle.write('asmcmd volinfo -a\n')
filehandle.close()
os.chmod(TMPFILE, 0755)  # make the temp script executable
Here the problem is that os.system('su - grid') opens an interactive shell as grid, and the rest of the script doesn't run until I enter exit.
OUTPUT
[root@odadev1 oakvmclientlib]# python test.py
[grid@odadev1 ~]$ exit
[root@odadev1 oakvmclientlib]#
Any suggestions or better ways to do this?
P.S. (edit): ". oraenv" is for setting up the environment, and +ASM1 is the value it expects.
Try something like this:
$ sudo -u grid sh -c ". oraenv; echo +ASM1|asmcmd volinfo -a"
This will launch a shell as user grid, set up the environment in it and execute the command. I'm not sure what the second part of your command does, though - I suspect you want to pipe +ASM1 into the standard input of asmcmd, but you haven't given enough context to be sure.
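Back in Python, that one-liner can be run and its output captured with subprocess (a sketch; it assumes the Python script itself runs as root, so sudo does not prompt for a password):
import subprocess

# check_output raises CalledProcessError on a non-zero exit code
# and otherwise returns asmcmd's stdout.
output = subprocess.check_output([
    'sudo', '-u', 'grid', 'sh', '-c',
    '. oraenv; echo +ASM1 | asmcmd volinfo -a',
])
print(output)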

Running "IDLE3.2 -s" from the "Finder" in OS X 10.6

I want to run IDLE3.2 with the argument "-s" so it can read ".pythonstartup" and import relevant modules, change the working directory, etc. Here is what I have tried:
Created a shell script:
/usr/local/bin/idle3.2 -s
This works all right; however, running the script from the Finder opens up the Terminal, which is not the desired behavior.
Created an AppleScript:
do shell script "/bin/bash; cd /usr/local/bin/; ./idle3.2 -s"
This gets rid of the Terminal but fails to pass the "-s" argument to idle3.2, so the configuration file is not loaded.
Any suggestions?
EDIT: it turns out environment variables are not properly set even though /bin/bash is called, so the following solves the problem:
do shell script "/bin/bash; source ~/.profile; /usr/local/bin/idle3.2 -s"
I think your do shell script "/bin/bash; cd /usr/local/bin; ./idle3.2 -s" is doing extra work, and can probably be done more simply. Try:
do shell script "/usr/local/bin/idle3.2 -s"
Thanks to @lain, the following AppleScript solves the problem:
do shell script "source ~/.profile; idle3.2 -s"
where ~/.profile gives the shell (in this case /bin/sh) the path for PYTHONSTARTUP and the path for idle3.2.
