I want to run a set of docker commands from Python.
I tried creating a script like the one below and running it from Python, using a paramiko SSHClient to connect to the machine where Docker is running:
#!/bin/bash
# Get container ID
container_id="$(docker ps | grep hello | awk '{print $1}')"
docker exec -it $container_id sh -c "cd /var/opt/bin/ && echo $1 && echo $PWD && ./test.sh -q $1"
But docker exec ... never gets executed.
So I tried running the Python script below directly on the machine where Docker is running:
import subprocess
docker_run = "docker exec 7f34a9c1b78f /bin/bash -c \"cd /var/opt/bin/ && ls -a\"".split()
subprocess.call(docker_run, shell=True)
I get a message: "Usage: docker COMMAND..."
But I get the expected results if I run the command
docker exec 7f34a9c1b78f /bin/bash -c "cd /var/opt/bin/ && ls -a"
directly on the machine.
How can I run multiple docker commands from a Python script? Thanks!
You have a mistake in your call to subprocess.call. subprocess.call expects a command followed by its parameters, one list element each. You've given it a list of parameter pieces: .split() breaks the command at every space, including the spaces inside the quoted /bin/bash -c payload. On top of that, combining a list with shell=True on POSIX means only the first element ('docker') is passed to the shell as the command, which is why you see the bare "Usage: docker COMMAND..." message.
This code:
docker_run = "docker exec 7f34a9c1b78f /bin/bash -c \"cd /var/opt/bin/ && ls -a\"".split()
subprocess.call(docker_run, shell=True)
Runs this:
subprocess.call([
'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
'"cd', '/var/opt/bin/', '&&', 'ls', '-a"'
], shell=True)
Instead, I believe you want:
subprocess.call([
    'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
    '"cd /var/opt/bin/ && ls -a"'  # Notice how this is only one argument.
])
You might need to tweak that second call. I suspect you don't need the quotes ('cd /var/opt/bin/ && ls -a' might work instead of '"cd /var/opt/bin/ && ls -a"'), but I haven't tested it. Note that shell=True is dropped here: when you pass a list with shell=True, Python hands only the first element to the shell as the command.
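To see the string-vs-list rule without involving docker, the two calling conventions can be compared with a harmless command (a sketch; a POSIX /bin/sh is assumed):

```python
import subprocess

# A string with shell=True lets the shell parse the quotes and && itself.
out1 = subprocess.run('echo hello && echo world',
                      shell=True, capture_output=True, text=True).stdout

# A list without shell=True passes each element as one argv entry;
# the whole -c payload must therefore be a single element.
out2 = subprocess.run(['/bin/sh', '-c', 'echo hello && echo world'],
                      capture_output=True, text=True).stdout

assert out1 == out2 == 'hello\nworld\n'
```

The same two shapes apply to the docker command: either a single string with shell=True, or a list in which the entire `cd /var/opt/bin/ && ls -a` payload is one element.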
The following methods worked:
Remove the double quotes:
subprocess.call([
'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
'cd /opt/teradata/tdqgm/bin/ && ./support-archive.sh -q 6b171e7a-7071-4975-a3ac-000000000241'
])
If you are not sure how a command should be split up to pass as arguments to a subprocess method, use shlex.split from the shlex module:
https://docs.python.org/2.7/library/shlex.html#shlex.split
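For example, a quick sketch of what shlex.split does with the command from the question:

```python
import shlex

cmd = shlex.split('docker exec 7f34a9c1b78f /bin/bash -c "cd /var/opt/bin/ && ls -a"')

# Unlike str.split(), shlex respects the quoting, so the -c payload
# survives as a single argument with the quotes themselves removed:
assert cmd == ['docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
               'cd /var/opt/bin/ && ls -a']
```

The resulting list can be passed straight to subprocess.call without shell=True.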
Related
I have to run a docker container and then a command inside its workdir using a Python script.
I'm trying to do it as follows:
command = ['gnome-terminal', '-e', "bash -c 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh; echo b; exec $SHELL'"]
p = subprocess.Popen(command)
where 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh' is a shell script with the docker run, executed with root privileges.
The first command, 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh', works fine, but the second one, 'echo b', which has to run inside the container, doesn't work.
Thank you!
I am trying to execute a bash command from python script that is wrapped in a docker exec call as the command needs to be executed inside a container.
This script is being executing on the host machine:
command_line_string = f"java -cp {omnisci_utility_path}:{driver_path} com.mapd.utility.SQLImporter" \
f" -u {omni_user} -p {omni_pass} -db {database_name} --port {omni_port}" \
f" -t {self.table_name} -su {denodo_user} -sp {denodo_pass}" \
f" -c {self.reader.connection_string}"\
f" -ss \"{read_data_query}\""
# in prod we have docker so we wrap it in docker exec:
if args.env_type == "prod":
    command_line_string = f"docker exec -t {args.container_id} /bin/bash -c \"{command_line_string}\""

command_line_args = shlex.split(command_line_string)
command_line_process = subprocess.Popen(
command_line_args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
process_output, _ = command_line_process.communicate()
However, when I execute the command, supplying the arguments I get a "Java Usage" response suggesting that the java command I am invoking did not have the correct parameters:
2021-09-01:09:19:09 [default_omnisci_ingestion.py:64] INFO - docker exec -t 5d874bffcdf8 /bin/bash -c "java -cp /omnisci/bin/omnisci-utility-5.6.5.jar
:/root/denodo-8-vdp-jdbcdriver.jar com.mapd.utility.SQLImporter -u admin -p mypass -db omnisci --port 6274 -t MyTable -su sourceDBuser -sp sourceDBpass -c jdbc:vdb://sourceDBURL -ss "SELECT
basin as Basin,
reservoir as Reservoir, cast(case when wkt like '%M%' Then wkt Else replace(wkt, 'POLYGON ', 'MULTIPOLYGON (') || ')' End as varchar(999999)) as wkt
FROM
schema.myTable;""
2021-09-01:09:19:10 [command_executor.py:10] INFO - Usage: java [options] <mainclass> [args...]
2021-09-01:09:19:10 [command_executor.py:10] INFO - (to execute a class)
2021-09-01:09:19:10 [command_executor.py:10] INFO - or java [options] -jar <jarfile> [args...]
2021-09-01:09:19:10 [command_executor.py:10] INFO - (to execute a jar file)
...
I know that the problem is due to the use of quotes but I just don't understand how to go about them.
For example, the java command I am nesting inside /bin/bash -c needs to be wrapped with quotes like so:
/bin/bash -c "java -cp ..."
Note: the command works fine if I execute it in our dev env, where we do not have the docker setup and execute the command as-is, but on Stage the system runs in a container, which is why I need to use docker exec to invoke the same command in the container.
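One way around the nested quoting (a sketch, with a placeholder payload standing in for the real java command): keep the inner command as a single list element, which needs no extra quoting at all; if a flat string is required, escape the payload with shlex.quote before splitting it back:

```python
import shlex

# Placeholder payload standing in for the real java invocation.
inner = 'java -cp /omnisci/utility.jar com.mapd.utility.SQLImporter -ss "SELECT 1"'

# As a list element, the payload reaches bash -c verbatim; no added quotes needed.
cmd = ['docker', 'exec', '-t', 'container_id', '/bin/bash', '-c', inner]

# As a flat string, shlex.quote adds exactly the escaping that shlex.split undoes.
flat = f'docker exec -t container_id /bin/bash -c {shlex.quote(inner)}'
assert shlex.split(flat) == cmd
```

Manually wrapping the payload in literal double quotes fails here because the payload itself contains double quotes (around the SELECT), which terminate the outer pair early.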
I am trying to do the following in a Makefile recipe: get the server container IP using a Python script, build the command to run within the docker container, then run the command in the container.
test:
	SIP=$(shell python ./scripts/script.py get-server-ip)
	CMD="iperf3 -c ${SIP} -p 33445"
	docker exec server ${CMD}
I get this
$ make test
SIP=172.17.0.6
CMD="iperf3 -c -p 33445"
docker exec server
"docker exec" requires at least 2 arguments.
See 'docker exec --help'.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
make: *** [test] Error 1
I ended up with something like this.
	SERVER_IP=$(shell python ./scripts/script.py get-server-ip); \
	SERVER_CMD="iperf3 -s -p ${PORT} -4 --logfile s.out"; \
	CLIENT_CMD="iperf3 -c $${SERVER_IP} -p ${PORT} -t 1000 -4 --logfile c.out"; \
	echo "Server Command: " $${SERVER_CMD}; \
	echo "Client Command: " $${CLIENT_CMD}; \
	docker exec -d server $${SERVER_CMD}; \
	docker exec -d client $${CLIENT_CMD};
This seems to work ok. Would love to hear if there are other ways of doing this.
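One other way, sketched in Python rather than Make (script path and container names taken from the recipe above; the iperf3 flags are assumptions):

```python
import shlex
import subprocess

def docker_exec_cmd(container, command):
    """Build a detached `docker exec` argv list from a flat command string."""
    return ['docker', 'exec', '-d', container] + shlex.split(command)

# Same helper script as in the Makefile above.
server_ip = subprocess.check_output(
    ['python', './scripts/script.py', 'get-server-ip'], text=True).strip()

subprocess.check_call(docker_exec_cmd('server', 'iperf3 -s -p 33445 -4 --logfile s.out'))
subprocess.check_call(docker_exec_cmd('client', f'iperf3 -c {server_ip} -p 33445 -4 --logfile c.out'))
```

Doing it all in Python avoids the Makefile pitfall above, where each recipe line runs in its own shell so the SIP variable never reaches the next line.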
You could write something like this. Here I used a target-specific variable, assuming the IP address is required only in this rule. iperf_command is defined as a variable, as the format looks rather fixed except for the IP address, which is injected by the call function. Also, as the rule doesn't seem to be supposed to produce the target as a file, I added a .PHONY target as well.
iperf_command = iperf3 -c $1 -p 33445
.PHONY: test
test: iperf_server_ip = $(shell python ./scripts/script.py get-server-ip)
test:
	docker exec server $(call iperf_command,$(iperf_server_ip))
I am trying to port:
https://coderwall.com/p/ewk0mq/stop-remove-all-docker-containers
to a python script. So far I have:
def remove_all_containers():
subprocess.call(['docker', 'stop','$(docker ps -a -q)'])
subprocess.call(['docker', 'rm','$(docker ps -a -q)'])
return;
But get:
Error response from daemon: No such container: $(docker ps -a -q)
I have also tried:
def remove_all_containers():
subprocess.call(['docker', 'stop',$(docker ps -a -q)])
subprocess.call(['docker', 'rm',$(docker ps -a -q)])
return;
But that gives:
subprocess.call(['docker', 'stop',$(docker ps -a -q)])
SyntaxError: invalid syntax
It seems I need to nest another subprocess call into the parent subprocess call. Or is there a simpler way to do this?
TL;DR: Command substitution $(...) is a shell feature, therefore you must run your commands on a shell:
subprocess.call('docker stop $(docker ps -a -q)', shell=True)
subprocess.call('docker rm $(docker ps -a -q)', shell=True)
Additional improvements:
It's not required, but I would suggest using check_call (or run(..., check=True), see below) instead of call(), so that if an error occurs it doesn't go unnoticed:
subprocess.check_call('docker stop $(docker ps -a -q)', shell=True)
subprocess.check_call('docker rm $(docker ps -a -q)', shell=True)
You can also go another route: parse the output of docker ps -a -q and then pass to stop and rm:
container_ids = subprocess.check_output(['docker', 'ps', '-aq'], encoding='ascii')
container_ids = container_ids.strip().split()
if container_ids:
    subprocess.check_call(['docker', 'stop'] + container_ids)
    subprocess.check_call(['docker', 'rm'] + container_ids)
If you're using Python 3.5+, you can also use the newer run() function:
# With shell
subprocess.run('docker stop $(docker ps -a -q)', shell=True, check=True)
subprocess.run('docker rm $(docker ps -a -q)', shell=True, check=True)
# Without shell
proc = subprocess.run(['docker', 'ps', '-aq'], check=True, stdout=subprocess.PIPE, encoding='ascii')
container_ids = proc.stdout.strip().split()
if container_ids:
    subprocess.run(['docker', 'stop'] + container_ids, check=True)
    subprocess.run(['docker', 'rm'] + container_ids, check=True)
There is a nice official Python library that helps with Docker:
https://docker-py.readthedocs.io/en/stable/index.html
import docker
client = docker.DockerClient(base_url=Config.DOCKER_BASE_URL)  # Config.DOCKER_BASE_URL is app config, e.g. 'unix://var/run/docker.sock'
docker_containers = client.containers.list(all=True)
for dc in docker_containers:
    dc.remove(force=True)
This lists all containers and removes every one of them, regardless of whether a container's status is 'running' or not.
The library is useful if you can import it into your code.
I have a CGI script that calls another Python script, but when run via the browser the second script does not work.
subprocess.Popen([python, 'symantec.py', '-k', "test3.example.com.com.key", '-c',
"test3.example.com.com.csr", '-n', "test3.example.com.com",
'-o', "'IT'", '-p', "war.eagle", '-t', "Server", '-s', "F5",
'-y', "1", '-f', "test3.example.com.com.crt", '-g', "johny", '-l',
"mon", '-e', "johny.mon#example.com.com", '-b', "test3.example.com.com"],
shell=True)
Can someone help me identify the issue?
Path to second script symantec.py
https://github.com/ericjmcalvin/verisign_purchase_ssl_certificate
You should first test the command.
python symantec.py -k test3.example.com.com.key -c test3.example.com.com.csr -n test3.example.com.com -o 'IT' -p war.eagle -t Server -s F5 -y 1 -f test3.example.com.com.crt -g johny -l mon -e johny.mon#example.com.com -b test3.example.com.com
The script is expecting absolute paths as arguments to -k and -c arguments. The help specifies:
-h, --help show this help message and exit
-k KEY_FILE, --key-file=KEY_FILE
Full location to save key file. REQUIRED.
Once you've made sure it is working, call it from your script:
command = "python symantec.py -k test3.example.com.com.key -c test3.example.com.com.csr -n test3.example.com.com -o 'IT' -p war.eagle -t Server -s F5 -y 1 -f test3.example.com.com.crt -g johny -l mon -e johny.mon#example.com.com -b test3.example.com.com"
import shlex
subprocess.Popen(shlex.split(command), shell=False)  # shlex.split strips the quotes around 'IT'; str.split() would pass them literally
Another way to accomplish this is to import the Symantec script and call its main function, providing, naturally, the expected arguments.
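The import-and-call route can be sketched like this, here as a self-contained demo where a throwaway script stands in for symantec.py (whose actual interface would need checking; if it only parses arguments at module level under `if __name__ == '__main__'`, runpy with a patched sys.argv is one way in):

```python
import os
import runpy
import sys
import tempfile

# Throwaway script standing in for symantec.py.
script = os.path.join(tempfile.mkdtemp(), 'demo.py')
with open(script, 'w') as f:
    f.write("import sys\nparsed = sys.argv[1:]\n")

# Set sys.argv exactly as the script expects, then run it in-process.
sys.argv = [script, '-k', '/abs/path/test3.example.com.com.key']
globs = runpy.run_path(script, run_name='__main__')

assert globs['parsed'] == ['-k', '/abs/path/test3.example.com.com.key']
```

Running in-process avoids the subprocess entirely, which also sidesteps the CGI environment issues around spawning children.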