How to check if a docker instance is running? - python

I am using Python to start docker instances.
How can I identify whether they are running? I can pretty easily use docker ps from the terminal, like:
docker ps | grep myimagename
If this returns anything, the container is running; if it returns an empty string, it is not.
However, I cannot understand how to get subprocess.Popen to work with this - it requires a list of arguments, so something like:
p = subprocess.Popen(['docker', 'ps', '|', 'grep', 'myimagename'], stdout=subprocess.PIPE)
print(p.stdout)
does not work; it passes |, grep, and myimagename as literal arguments to docker ps (which docker doesn't support) instead of piping through a shell.
It doesn't seem I can give it the full command as a single string either, as Popen tries to run the entire first argument as the executable, so this fails:
p = subprocess.Popen('docker ps | grep myimagename', stdout=subprocess.PIPE)
print(p.stdout)
Is there a way to actually run docker ps from Python? I don't know if trying to use subprocess is the best route or not. It is what I am using to run the docker containers, however, so it seemed to be the right path.
How can I determine if a docker instance is running from a Python script?

You can use the Python Docker client (docker-py):
import docker

DOCKER_CLIENT = docker.DockerClient(base_url='unix://var/run/docker.sock')
RUNNING = 'running'


def is_running(container_name):
    """
    Verify the status of a container by its name.

    :param container_name: the name of the container
    :return: True if the container's status is 'running'
    """
    container = DOCKER_CLIENT.containers.get(container_name)
    container_state = container.attrs['State']
    container_is_running = container_state['Status'] == RUNNING
    return container_is_running


my_container_name = "asdf"
print(is_running(my_container_name))
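Note that containers.get() raises docker.errors.NotFound if no container with that name exists. If you'd rather treat a missing container as "not running", a minimal sketch using the same SDK is:
import docker
from docker.errors import NotFound

client = docker.from_env()


def is_running_safe(container_name):
    # Treat "no such container" as "not running" instead of raising.
    try:
        return client.containers.get(container_name).status == 'running'
    except NotFound:
        return False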

One option is to use subprocess.check_output setting shell=True (thanks slezica!):
# check_output returns bytes on Python 3, so decode to a str
s = subprocess.check_output('docker ps', shell=True).decode()
print('Results of docker ps: ' + s)
If the docker ps command fails (for example, you haven't started your docker-machine), check_output will raise an exception (CalledProcessError).
A simple find can then verify your container is found / not found:
if s.find('containername') != -1:
    print('found!')
else:
    print('not found.')
I would also recommend matching on the container's hash ID rather than its name here, since the name may appear as a substring of the image name or elsewhere in the docker ps output.
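If you stay with the CLI, you can also let docker do the filtering itself so a substring match can't produce false positives. A minimal sketch (the container ID below is hypothetical):
import subprocess

container_id = 'a1b2c3d4e5f6'  # hypothetical container hash ID
s = subprocess.check_output(
    ['docker', 'ps', '-q', '--filter', 'id={}'.format(container_id)])
print('running' if s.strip() else 'not running')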

Even though it seems like you are on your way, I would recommend you use docker-py, as it talks to the socket created by Docker to issue API requests. I currently use this library and it is a real time saver.
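For a taste of it, a minimal sketch listing the names of running containers (docker.from_env() connects to the local Docker socket by default):
import docker

client = docker.from_env()  # uses the local Docker socket / environment settings
print([container.name for container in client.containers.list()])  # running containers only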

Related

Use container params in docker-py run

A basic question that I couldn't find in the docs: how do I pass the container params to the docker-py run function?
https://docker-py.readthedocs.io/en/stable/
We can run the following line in a terminal and it works:
docker run -e POSTGRES_DB="db" -e POSTGRES_PASSWORD="postgres" -e POSTGRES_HOST_AUTH_METHOD="trust" -e POSTGRES_USER="postgres" postgis/postgis -c max_worker_processes=15
If we try to use docker-py we can do:
import docker

client = docker.from_env()
container = client.containers.run(
    "postgis/postgis:latest",
    environment={
        'POSTGRES_DB': "db",
        'POSTGRES_USER': "postgres",
        'POSTGRES_PASSWORD': "postgres",
        'POSTGRES_HOST_AUTH_METHOD': "trust"
    }
)
There we can pass almost all params to the container creation, but I still can't find how to pass the -c max_worker_processes=15. How can we send that param to the container?
The run function has a command param, but it does not work; I tried concatenating it to the image name, nothing. I can't find examples either D:
Thx!
Anything that appears after the image name in the docker run command is interpreted as the "command" part of the container setup; it overrides the Dockerfile CMD, which may be specially interpreted by the image's ENTRYPOINT. In the various Docker SDKs, you'd pass this as a command argument.
In docker-py specifically, the client.containers.run() method takes a command keyword argument. While the documentation says it accepts either a string or a list, you'll get the most consistent behavior if you split the command into a list of words yourself, and pass that list as arguments. (There are potentially significant security risks from assembling a command line via string interpolation, and using a list avoids many of these as well.)
container = client.containers.run(
    "postgis/postgis:latest",
    command=['-c', 'max_worker_processes=15'],
    environment={...}
)
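Put together with the environment from the question, a full sketch might look like this (detach=True is an assumption here, so that run() returns a Container object instead of blocking on the container's logs):
import docker

client = docker.from_env()
container = client.containers.run(
    "postgis/postgis:latest",
    command=['-c', 'max_worker_processes=15'],  # appended after the image ENTRYPOINT
    environment={
        'POSTGRES_DB': 'db',
        'POSTGRES_USER': 'postgres',
        'POSTGRES_PASSWORD': 'postgres',
        'POSTGRES_HOST_AUTH_METHOD': 'trust',
    },
    detach=True,
)
print(container.id)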

Docker-compose with python command handler

I am trying to use the Python cmd library in combination with Docker.
This is my minimal command class setup.
import cmd
import logging

logger = logging.getLogger(__name__)


class Commands(cmd.Cmd):
    intro = 'Welcome to the shell.\nType help or ? to list commands.\n'
    prompt = 'Shell: '

    @staticmethod
    def do_stop(arg):
        """
        Stops the servers gracefully
        :param arg:
        :return:
        """
        logger.info("Stopping server...")
        # Do stuff
If I start the app without Docker, the shell works just fine; I can interact with it without issues. However, if I use docker-compose up, I get an endless loop of 'unknown syntax' error messages.
My application main.py looks like the following:
if __name__ == "__main__":
    Commands().cmdloop()
Why is Docker complaining about unknown syntax? Is something I'm not aware of writing to stdout or stderr?
Using:
stdin_open: true
in the docker-compose file will fix the issue. Without an open stdin, cmdloop() immediately reads EOF, which the cmd module turns into the line 'EOF'; with no do_EOF handler defined, default() prints '*** Unknown syntax: EOF' and the loop repeats forever.
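For reference, a minimal docker-compose.yml sketch (the service name app is an assumption; tty is optional but often paired with stdin_open for interactive shells):
version: "3"
services:
  app:
    build: .
    stdin_open: true  # keep STDIN open so cmdloop() has something to read
    tty: true         # allocate a pseudo-TTY (optional)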

Passing Windows cmd to the Python subprocess

I'm trying to execute an aws cli command using Python's subprocess.
Windows cmd:
aws --profile some_profile --region some_region ec2 describe-instances --filters Name=tag:some_tag,Values=some_value --query "Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='Name']|[0].Value}" --output=table
and that's how I try to do it:
profile = "some_profile"
region = "some_region"
ec2_filters = "Name=tag:some_tag,Values=some_value"
ec2_query = "Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='Name']|[0].Value}"
ec2_output_type = "table"
proc = subprocess.Popen(
    ["aws", "--profile", profile, "--region", region,
     "ec2", "describe-instances",
     "--filters", ec2_filters,
     "--query", ec2_query,
     "--output", ec2_output_type],
    stdout=subprocess.PIPE, shell=True)
This is the error message:
'[0].Value}' is not recognized as an internal or external command,
operable program or batch file.
I don't have aws installed, so I created a mock batch file to spit back what it received. I did try my initial guesses first and you're right, the quoting often makes this difficult, but I figured it out. Sorry for not testing what I asked you to try.
Anyway, aws.bat contains a single line, echo %*, which prints back whatever the batch file receives as arguments, so we know it's working.
Then, I tried to use your command. I got the same error you got, so I modified it to:
.\aws.bat --profile some_profile --region some_region ec2 describe-instances --filters Name=tag:some_tag,Values=some_value --query '"Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='Name']|[0].Value}"' --output=table
This outputted the command back, meaning it got executed correctly.
Then, I modified your code to make sure there are quotes around the whole query, using simple string concatenation:
import subprocess

profile = "some_profile"
region = "some_region"
ec2_filters = "Name=tag:some_tag,Values=some_value"
ec2_query = (
    '"Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='
    "'Name'"
    ']|[0].Value}"'
)
ec2_output_type = "table"
proc = subprocess.Popen(
    ["aws.bat", "--profile", profile, "--region", region,
     "ec2", "describe-instances",
     "--filters", ec2_filters,
     "--query", ec2_query,
     "--output", ec2_output_type])
This worked. Funnily, if I used triple quotes in an unorthodox manner, it worked as well.
ec2_query = ' '''"Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='Name']|[0].Value}"' '''
Note the start, ' '''". I don't really know what's going on.
Anyway, the easier solution is to break up your string so the quotes don't get confusing. (The underlying problem is the combination of a list with shell=True: on Windows, cmd.exe re-parses the command line and treats the unquoted | inside the query as a pipe, which is why it tried to run '[0].Value}' as a command.)

How do I embed my shell scanning-script into a Python script?

I've been using the following shell command to read the image off a scanner named scanner_name and save it in a file named file_name:
scanimage -d <scanner_name> --resolution=300 --format=tiff --mode=Color 2>&1 > <file_name>
This has worked fine for my purposes.
I'm now trying to embed this in a Python script. What I need is to save the scanned image into a file as before, and also to capture any standard output (e.g. error messages) to a string.
I've tried
scan_result = os.system('scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name))
But when I run this in a loop (with different scanners), there is an unreasonably long lag between scans, and the images aren't saved until the next scan starts: the file is created empty and is not filled until the next scanning command. All this with scan_result = 0, i.e. indicating no error.
The subprocess method run() has been suggested to me, and I have tried
with open(file_name, 'w') as scanfile:
    input_params = '-d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name)
    scan_result = subprocess.run(["scanimage", input_params], stdout=scanfile, shell=True)
but this saved the image in some kind of an unreadable file format
Any ideas as to what may be going wrong? Or what else I can try that will allow me to both save the file and check the success status?
subprocess.run() is definitely preferred over os.system() but neither of them as such provides support for running multiple jobs in parallel. You will need to use something like Python's multiprocessing library to run several tasks in parallel (or painfully reimplement it yourself on top of the basic subprocess.Popen() API).
You also have a basic misunderstanding about how to run subprocess.run(). You can pass in either a string and shell=True or a list of tokens and shell=False (or no shell keyword at all; False is the default).
with_shell = subprocess.run(
    "scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {}".format(
        scanner, file_name),
    shell=True)

with open(file_name, 'wb') as write_handle:
    no_shell = subprocess.run(
        ["scanimage", "-d", scanner, "--resolution=300", "--format=tiff",
         "--mode=Color"],
        stdout=write_handle)
You'll notice that the latter does not support redirection (because that's a shell feature) but this is reasonably easy to implement in Python. (I took out the redirection of standard error -- you really want error messages to remain on stderr!)
If you have a larger working Python program this should not be awfully hard to integrate with a multiprocessing.Pool(). If this is a small isolated program, I would suggest you peel off the Python layer entirely and go with something like xargs or GNU parallel to run a capped number of parallel subprocesses.
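If you do go the multiprocessing route, a minimal sketch might look like this (the scanner names and file names are hypothetical):
import subprocess
from multiprocessing import Pool


def scan_one(job):
    scanner, file_name = job
    with open(file_name, 'wb') as write_handle:
        # One scanimage invocation per job; return its exit status.
        return subprocess.run(
            ["scanimage", "-d", scanner, "--resolution=300",
             "--format=tiff", "--mode=Color"],
            stdout=write_handle).returncode


if __name__ == '__main__':
    jobs = [("scanner_one", "scan1.tiff"), ("scanner_two", "scan2.tiff")]  # hypothetical
    with Pool(len(jobs)) as pool:
        results = pool.map(scan_one, jobs)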
I suspect the issue is that you're opening the output file and then running subprocess.run() inside the with block. This isn't necessary: the end result is that you open the file via Python, have the command open it again via the OS (through the shell redirection), and then close it via Python.
JUST run the subprocess and let the scanimage ... 2>&1 > filename command create the file, just as it would if you ran scanimage at the command line directly.
I think subprocess.check_output() is now the preferred method of capturing the output. Note that a shell redirection such as 2>&1 > {} cannot be a list item (without a shell it would be passed to scanimage as a literal argument), so capture stdout and write the file from Python instead:
from subprocess import check_output

# Command must be a list, with all parameters as separate list items
command = ['scanimage',
           '-d', scanner,
           '--resolution=300',
           '--format=tiff',
           '--mode=Color']
scan_result = check_output(command)  # the raw TIFF bytes from stdout
with open(file_name, 'wb') as image_file:
    image_file.write(scan_result)
However, (with both run and check_output) shell=True is a big security risk ... especially if the input_params come into the Python script externally. People can pass in unwanted commands and have them run in the shell with the permissions of the script.
Sometimes shell=True is necessary for the OS command to run properly, in which case the best recommendation is to use an actual Python module to interface with the scanner, rather than having Python pass an OS command to the OS.
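Short of a dedicated scanner module, a shell-free sketch that meets both of the original requirements (image saved to a file, error messages captured as a string) could look like this:
import subprocess

proc = subprocess.run(
    ["scanimage", "-d", scanner, "--resolution=300",
     "--format=tiff", "--mode=Color"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
error_text = proc.stderr.decode()  # any scanimage error messages, as a string
if proc.returncode == 0:
    with open(file_name, 'wb') as image_file:
        image_file.write(proc.stdout)  # the image data arrives on stdout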

How to pass an Unix command output to Python function

I have a requirement where I need to run a Docker command on my local machine, send the resulting list to a remote server, and check whether those images exist there. I then need to return the list of images that don't exist on the remote server back to the local machine. I need to do it with Python. I have written some code mixing shell and Python, as below.
List=$(docker images -q | grep "docker pull" | awk '{print $3}') #this command is mandatory to get exact docker name.
fab remote_sync_system_spec_docker_to_aws_artifactory:List -u ${USERNAME} -H 1.2.3.4
I am trying to pass the output of the shell command (i.e. List) to a Python function through fab, as above. That function looks like below.
def remote_sync_system_spec_docker_to_aws_artifactory(List):
    for line in List:
        if( os.popen("docker images -q $line") == none ):
            List = ...  # need to prepare list and return back to calling function
Once I get the list on the remote server, I need to return it back to the calling function so I can do some manipulations there. Basically I could use shell, but connecting to the remote server with sshpass is not accepted in my project, so I am looking for a Python script.
As a simple way to transport a list, I would suggest a pipeline rather than a variable.
docker images -q | awk '/docker pull/ { print $3 }' |
fab remote_sync_system_spec_docker_to_aws_artifactory_stdin -u ${USERNAME} -H 1.2.3.4
where the function is something like
import sys, subprocess


def remote_sync_system_spec_docker_to_aws_artifactory_stdin(handle=sys.stdin):
    """
    Read Docker image identifiers from file handle; check which
    ones are available here, and filter those out; return the rest.
    """
    missing = []  # a list, so the append below works
    for line in handle:
        repo = line.rstrip('\n')
        if subprocess.run(['docker', 'images', '-q', repo],
                          stdout=subprocess.PIPE,
                          universal_newlines=True).stdout == "":
            missing.append(repo)
    return missing
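To exercise the function locally you can feed it any iterable of lines; for example (images.txt is a hypothetical file with one image name per line):
with open('images.txt') as handle:  # hypothetical input file
    print(remote_sync_system_spec_docker_to_aws_artifactory_stdin(handle))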
os.popen() returns a file-like object, so you have to read() from it; what you should do is:
def remote_sync_system_spec_docker_to_aws_artifactory(List):
    for line in List:
        # "$line" is not expanded by Python, so build the command string,
        # and compare against the empty string rather than None
        if os.popen("docker images -q {}".format(line)).read() == "":
            List = ...  # need to prepare list and return back to calling function
You should avoid os.popen() and even its replacement subprocess.Popen() if all you need is to obtain the output from a shell command.
For recent Python 3.x, use subprocess.run():
import subprocess

List = []
for result in subprocess.run(["docker", "images", "-q"],
                             stdout=subprocess.PIPE,
                             universal_newlines=True).stdout.split('\n'):
    if 'docker pull' in result:
        List.append(result.split()[2])  # awk's $3 is fields[2] in Python
In Python 2.x the corresponding function was subprocess.check_output().
Maybe you'll want to replace the grep with something a bit more focused; 'docker pull' in result will look for the string anywhere in the line, but you would probably like to confine it to just a particular column, for example.
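For instance, a sketch that matches only the repository column of docker images (dropping the -q so the columns exist; the substring to match is hypothetical):
import subprocess

images = []
output = subprocess.run(["docker", "images"],
                        stdout=subprocess.PIPE,
                        universal_newlines=True).stdout
for line in output.splitlines()[1:]:  # skip the header row
    fields = line.split()  # REPOSITORY, TAG, IMAGE ID, CREATED..., SIZE
    if fields and 'some_repo' in fields[0]:  # hypothetical: match the repository column only
        images.append(fields[2])  # the image ID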
