Use container params in docker-py run - python

A basic question that I can't find in the docs: how do I pass container params to the docker-py run function?
https://docker-py.readthedocs.io/en/stable/
We can run the following line in a terminal and it works:
docker run -e POSTGRES_DB="db" -e POSTGRES_PASSWORD="postgres" -e POSTGRES_HOST_AUTH_METHOD="trust" -e POSTGRES_USER="postgres" postgis/postgis -c max_worker_processes=15
If we try to use docker-py we can do:
import docker
client = docker.from_env()
container = client.containers.run(
    "postgis/postgis:latest",
    environment={
        'POSTGRES_DB': "db",
        'POSTGRES_USER': "postgres",
        'POSTGRES_PASSWORD': "password",
        'POSTGRES_HOST_AUTH_METHOD': "trust"
    }
)
That way we can send almost all params to the container creation, but I still can't find how to pass the -c max_worker_processes=15. How can we send that param to the container?
The run function has a command param, but it does not work. I tried concatenating it to the image name; nothing. I can't find examples either D:
Thx!

Anything that appears after the image name in the docker run command is interpreted as the "command" part of the container setup; it overrides the Dockerfile CMD, which may be specially interpreted by the image's ENTRYPOINT. In the various Docker SDKs, you'd pass this as a command argument.
In docker-py specifically, the client.containers.run() method takes a command keyword argument. While the documentation says it accepts either a string or a list, you'll get the most consistent behavior if you split the command into a list of words yourself, and pass that list as arguments. (There are potentially significant security risks from assembling a command line via string interpolation, and using a list avoids many of these as well.)
container = client.containers.run(
    "postgis/postgis:latest",
    command=['-c', 'max_worker_processes=15'],
    environment={...}
)
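Putting it together, a sketch of the full equivalent of the original docker run line (detach=True is an assumption here, so the call returns a Container object instead of blocking on the server process):
import docker

client = docker.from_env()
container = client.containers.run(
    "postgis/postgis:latest",
    command=['-c', 'max_worker_processes=15'],  # appended after the image's ENTRYPOINT
    environment={
        'POSTGRES_DB': "db",
        'POSTGRES_USER': "postgres",
        'POSTGRES_PASSWORD': "postgres",
        'POSTGRES_HOST_AUTH_METHOD': "trust",
    },
    detach=True,  # assumption: run in the background rather than waiting for exit
)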

Related

Using ssh and sed within a python script with os.system properly

I am trying to run an ssh command within a Python script using os.system, to add a 0 at the end of a fully matched string in a file on a remote server, using ssh and sed.
I have a file called nodelist on a remote server; it's a list that looks like this:
test-node-1
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
I want to use sed to make the following modification: search for test-node-1, and when a full match is found, add a 0 at the end. The file must end up looking like this:
test-node-1 0
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
However, when I run the first command,
hostname = 'test-node-1'
function = 'nodelist'
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/{hostname}/s/$/ 0/' ~/{function}.txt\"")
The result looks like this:
test-node-1 0
test-node-2
...
test-node-11 0
test-node-12 0
test-node-13 0
...
test-node-21
I tried adding a \b to the command like this,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\b{hostname}\b/s/$/ 0/' ~/{function}.txt\"")
The command doesn't work at all.
I have to manually type in the node name instead of using a variable like so,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\btest-node-1\b/s/$/ 0/' ~/{function}.txt\"")
to make my command work.
What's wrong with my command, why can't I do what I want it to do?
This code has serious security problems; fixing them requires reengineering it from scratch. Let's do that here:
#!/usr/bin/env python3
import os.path
import shlex  # note, quote is only here in Python 3.x; in 2.x it was in the pipes module
import subprocess
import sys

# can set these from a loop if you choose, of course
username = "whoever"
serverlocation = "whereever"
hostname = 'test-node-1'
function = 'somename'

desired_cmd = ['sed', '-i',
               f'/\\b{hostname}\\b/s/$/ 0/',
               f'{function}.txt']
desired_cmd_str = ' '.join(shlex.quote(word) for word in desired_cmd)
print(f"Remote command: {desired_cmd_str}", file=sys.stderr)

# could just pass the below direct to subprocess.run, but let's log what we're doing:
ssh_cmd = ['ssh', '-i', os.path.expanduser('~/.ssh/my-ssh-key'),
           f"{username}@{serverlocation}", desired_cmd_str]
ssh_cmd_str = ' '.join(shlex.quote(word) for word in ssh_cmd)
print(f"Local command: {ssh_cmd_str}", file=sys.stderr)  # log equivalent shell command

subprocess.run(ssh_cmd)  # but locally, run without a shell
If you run this (except for the subprocess.run at the end, which would require a real SSH key, hostname, etc), output looks like:
Remote command: sed -i '/\btest-node-1\b/s/$/ 0/' somename.txt
Local command: ssh -i /home/yourname/.ssh/my-ssh-key whoever@whereever 'sed -i '"'"'/\btest-node-1\b/s/$/ 0/'"'"' somename.txt'
That's correct/desired output; the funny '"'"' idiom is how one safely injects a literal single quote inside a single-quoted string in a POSIX-compliant shell.
What's different? Lots:
We're generating the commands we want to run as arrays, and letting Python do the work of converting those arrays to strings where necessary. This avoids shell injection attacks, a very common class of security vulnerability (see the sketch after this list).
Because we're generating lists ourselves, we can change how we quote each one: We can use f-strings when it's appropriate to do so, raw strings when it's appropriate, etc.
We aren't passing ~ to the remote server. It's redundant, because ~ is the default place for an SSH session to start; and the security precautions we're using (preventing values from being parsed as code by a shell) would stop it from being expanded anyway, since replacing ~ with the value of HOME is done by the shell that invokes sed, not by sed itself. Because we aren't invoking any local shell either, we needed os.path.expanduser to have the ~ in ~/.ssh/my-ssh-key honored.
Because we aren't using a raw string, we need to double the backslashes in \b to ensure that they're treated as literal rather than syntactic by Python.
Critically, we're never passing data in a context where it could be parsed as code by any shell, either local or remote.
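To make that injection risk concrete, here's a minimal sketch (the hostile hostname value is hypothetical):
import shlex

# hypothetical hostile value; unquoted, the embedded ';' and '~' would be
# interpreted by the shell rather than treated as part of the sed script
hostname = "x/; rm -rf ~; echo y"
sed_script = f'/\\b{hostname}\\b/s/$/ 0/'
print(shlex.quote(sed_script))
# -> '/\bx/; rm -rf ~; echo y\b/s/$/ 0/'  (a single, inert shell word)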

Fabrics 2.x ssh connection using identity fails to work

Trying to connect to a host described in the SSH config using Fabric 2 and an identity file.
con = Connection('my_host')

@task
def tt(c):
    con.run('uname -a')
~/.ssh/config :
Host my_host
HostName 123.144.76.84
User ubuntu
IdentityFile ~/.keys/somekey
It fails with
paramiko.ssh_exception.AuthenticationException: Authentication failed.
While $ ssh my_host from the terminal works.
I've tried to do fab -i ~/.keys/somekey tt with same result.
Fabric accepts a hosts iterable as parameters in tasks. Per the documentation:
An iterable of host-connection specifiers appropriate for eventually instantiating a Connection. The existence of this argument will trigger automatic parameterization of the task when invoked from the CLI, similar to the behavior of --hosts.
One of the members of which could be:
A string appropriate for being the first positional argument to Connection - see its docs for details, but these are typically shorthand-only convenience strings like hostname.example.com or user@host:port.
As for your example, please try this for fabfile.py:
host_list = ["my_host"]
#task(hosts=host_list)
def tt(c):
c.run('uname -a')
Alternatively, you can omit the host declaration from the fabfile altogether. If you don't specify the host in fabfile.py, you can simply specify it as a host when invoking the fab cli utility. If your fabfile.py is this:
@task
def tt(c):
    c.run('uname -a')
You would now run fab -H my_host tt to run the task tt against the alias my_host from your SSH client config.
Hope this helps.
There seems to be something afoot with paramiko. Without digging deeper I don't know if it's a bug or not. In any case, I had the same issue, and even a plain paramiko call got me the same error.
Following another SO question I was able to make it work by disabling rsa-sha2-256 and rsa-sha2-512 as mentioned.
Luckily, fabric exposes access to the paramiko arguments like so:
con = Connection(
    'my_host',
    connect_kwargs={
        "disabled_algorithms": {"pubkeys": ["rsa-sha2-256", "rsa-sha2-512"]}
    }
)
I find it unfortunate that this is required in the fabfile. If someone else has a better/cleaner solution, feel free to comment.
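If you'd rather keep it out of each Connection call, one option is Fabric's Config object, which accepts the same connect_kwargs key; a sketch (verify against your Fabric version):
from fabric import Config, Connection

config = Config(overrides={
    "connect_kwargs": {
        "disabled_algorithms": {"pubkeys": ["rsa-sha2-256", "rsa-sha2-512"]},
    },
})
con = Connection('my_host', config=config)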
Same problem.
You can try adding -d for more detail when Fabric runs:
fab2 -d tt
I found the exception paramiko.ssh_exception.SSHException: Invalid key, regenerated the key from the server, and the problem was solved.

Airflow SSHExecuteOperator() with env=... not setting remote environment

I am modifying the environment of the calling process, appending to its PATH and setting some new environment variables. However, when I print os.environ in the child process, these changes are not reflected. Any idea what may be happening?
My call to the script on the instance:
ssh_hook = SSHHook(conn_id=ssh_conn_id)
temp_env = os.environ.copy()
temp_env["PATH"] = "/somepath:" + temp_env["PATH"]

run = SSHExecuteOperator(
    bash_command="python main.py",
    env=temp_env,
    ssh_hook=ssh_hook,
    task_id="run",
    dag=dag)
Explanation: Implementation Analysis
If you look at the source to Airflow's SSHHook class, you'll see that it doesn't incorporate the env argument into the command being remotely run at all. The SSHExecuteOperator implementation passes env= through to the Popen() call on the hook, but that only passes it through to the local subprocess.Popen() implementation, not to the remote operation.
Thus, in short: Airflow does not support passing environment variables over SSH. If it were to have such support, it would need to either incorporate them into the command being remotely executed, or to add the SendEnv option to the ssh command being locally executed for each command to be sent (which even then would only work if the remote sshd were configured with AcceptEnv whitelisting the specific environment variable names to be received).
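For illustration only, the SendEnv approach would amount to something like the following (MY_VAR and the host are hypothetical, and this only works if the remote sshd's AcceptEnv lists MY_VAR; Airflow does not do this for you):
import os
import subprocess

# -o SendEnv asks the ssh client to forward the local MY_VAR; the remote
# sshd silently drops it unless its AcceptEnv configuration allows it
subprocess.run(
    ['ssh', '-o', 'SendEnv=MY_VAR', 'user@remotehost', 'python main.py'],
    env={**os.environ, 'MY_VAR': 'value'})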
Workaround: Passing Environment Variables On The Command Line
from pipes import quote  # in Python 3, make this "from shlex import quote"

def with_prefix_from_env(env_dict, command=None):
    result = 'set -a; '
    for (k, v) in env_dict.items():
        result += '%s=%s; ' % (quote(k), quote(v))
    if command:
        result += command
    return result

SSHExecuteOperator(bash_command=with_prefix_from_env(temp_env, "python main.py"),
                   ssh_hook=ssh_hook, task_id="run", dag=dag)
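For example (values hypothetical), the generated command string looks like:
>>> with_prefix_from_env({'PATH': '/somepath:/usr/bin'}, 'python main.py')
'set -a; PATH=/somepath:/usr/bin; python main.py'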
Workaround: Remote Sourcing
If your environment variables are sensitive and you don't want them to be logged with the command, you can transfer them out-of-band and source the remote file containing them.
from pipes import quote

def with_env_from_remote_file(filename, command):
    return "set -a; . %s; %s" % (quote(filename), command)

SSHExecuteOperator(bash_command=with_env_from_remote_file(envfile, "python main.py"),
                   ssh_hook=ssh_hook, task_id="run", dag=dag)
Note that set -a directs the shell to export all defined variables, so the file being executed need only define variables with key=val declarations; they'll be automatically exported. If generating this file from your Python script, be sure to quote both keys and values with pipes.quote() to ensure that it only performs assignments and does not run other commands. The . keyword is a POSIX-compliant equivalent to the bash source command.
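A sketch of generating such a file safely (the filename and variables are hypothetical):
from pipes import quote  # in Python 3, "from shlex import quote"

# quoting both keys and values guarantees each line is a plain assignment
# and cannot execute anything when the file is sourced with ". envfile"
env = {'APP_MODE': 'prod', 'DB_PASSWORD': 'hunter2; rm -rf ~'}
with open('envfile', 'w') as f:
    for k, v in env.items():
        f.write('%s=%s\n' % (quote(k), quote(v)))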

How to pass a Unix command output to a Python function

I have a requirement where I need to run a docker command on my local machine, send the resulting list to a remote server, and check whether those images exist there or not. I then need to return the list of images that don't exist on the remote server back to the local machine. I need to do it with Python. I have written some code mixing shell and Python, as below.
List=$(docker images -q | grep "docker pull" | awk '{print $3}') #this command is mandatory to get exact docker name.
fab remote_sync_system_spec_docker_to_aws_artifactory:List -u ${USERNAME} -H 1.2.3.4
I am trying to pass the output of the shell command, i.e. List, to the Python function through fab as above. That function looks like below.
def remote_sync_system_spec_docker_to_aws_artifactory(List):
    for line in List:
        if( os.popen("docker images -q $line") == none )
            List=...  # need to prepare list and return back to calling function.
Once I get the list on the remote server, I need to return it back to the calling function so I can do some manipulations there. Basically I could use shell, but the problem is that connecting to the remote server with sshpass is not accepted in my project, so I'm looking for a Python script.
As a simple way to transport a list, I would suggest a pipeline rather than a variable.
docker images -q | awk '/docker pull/ { print $3 }' |
fab remote_sync_system_spec_docker_to_aws_artifactory_stdin -u ${USERNAME} -H 1.2.3.4
where the function is something like
import sys, subprocess

def remote_sync_system_spec_docker_to_aws_artifactory_stdin(handle=sys.stdin):
    """
    Read Docker image identifiers from file handle; check which
    ones are available here, and filter those out; return the rest.
    """
    missing = []  # a list, not a tuple: we need .append() below
    for line in handle:
        repo = line.rstrip('\n')
        if subprocess.run(['docker', 'images', '-q', repo],
                          stdout=subprocess.PIPE, universal_newlines=True).stdout == "":
            missing.append(repo)
    return missing
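On the remote side you might then invoke it on standard input and print the result, so the local fab call can capture what's missing (a sketch; check how your Fabric version passes stdin to tasks):
if __name__ == '__main__':
    for repo in remote_sync_system_spec_docker_to_aws_artifactory_stdin():
        print(repo)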
os.popen() returns a file-like object; to get the command's output, you need to call .read() on it:
import os

def remote_sync_system_spec_docker_to_aws_artifactory(List):
    missing = []
    for line in List:
        # "$line" is not expanded by Python, so interpolate explicitly
        if os.popen("docker images -q %s" % line).read() == "":
            missing.append(line)  # collect images missing here, to return to the caller
    return missing
You should avoid os.popen() and even its replacement subprocess.Popen() if all you need is to obtain the output from a shell command.
For recent Python 3.x, use subprocess.run():
import subprocess

List = []
for result in subprocess.run(["docker", "images", "-q"],
                             stdout=subprocess.PIPE, universal_newlines=True).stdout.split('\n'):
    if 'docker pull' in result:
        List.append(result.split()[2])  # awk's $3 is index 2 in Python
In Python 2.x the corresponding function was subprocess.check_output().
Maybe you'll want to replace the grep with something a bit more focused; 'docker pull' in result will look for the string anywhere in the line, but you would probably like to confine it to just a particular column, for example.
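For instance, a sketch that matches whole columns instead of a substring (the expected column layout here is an assumption about your data):
import subprocess

output = subprocess.run(["docker", "images"], stdout=subprocess.PIPE,
                        universal_newlines=True).stdout
List = []
for result in output.split('\n'):
    fields = result.split()
    # assumption: keep only rows whose first two columns are literally
    # "docker" and "pull", taking the identifier from the third column
    if len(fields) >= 3 and fields[:2] == ['docker', 'pull']:
        List.append(fields[2])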

How to check if a docker instance is running?

I am using Python to start docker instances.
How can I identify if they are running? I can pretty easily use docker ps from terminal like:
docker ps | grep myimagename
and if this returns anything, the image is running. If it returns an empty string, the image is not running.
However, I cannot understand how to get subprocess.Popen to work with this - it requires a list of arguments so something like:
p = subprocess.Popen(['docker', 'ps', '|', 'grep', 'myimagename'], stdout=subprocess.PIPE)
print p.stdout
does not work because it tries to take the "docker ps" and make it "docker" and "ps" commands (which docker doesn't support).
It doesn't seem I can give it the full command, either, as Popen tries to run the entire first argument as the executable, so this fails:
p = subprocess.Popen('docker ps | grep myimagename', stdout=subprocess.PIPE)
print p.stdout
Is there a way to actually run docker ps from Python? I don't know if trying to use subprocess is the best route or not. It is what I am using to run the docker containers, however, so it seemed to be the right path.
How can I determine if a docker instance is running from a Python script?
You can use the python docker client:
import docker

DOCKER_CLIENT = docker.DockerClient(base_url='unix://var/run/docker.sock')
RUNNING = 'running'

def is_running(container_name):
    """
    verify the status of a container by its name
    :param container_name: the name of the container
    :return: Boolean if the status is ok
    """
    container = DOCKER_CLIENT.containers.get(container_name)
    container_state = container.attrs['State']
    container_is_running = container_state['Status'] == RUNNING
    return container_is_running

my_container_name = "asdf"
print(is_running(my_container_name))
One option is to use subprocess.check_output setting shell=True (thanks slezica!):
s = subprocess.check_output('docker ps', shell=True)
print 'Results of docker ps' + s
If the docker ps command fails (for example, if you didn't start your docker-machine), then check_output will throw an exception.
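A sketch of catching that explicitly:
import subprocess

try:
    s = subprocess.check_output('docker ps', shell=True)
except subprocess.CalledProcessError as e:
    # the daemon is unreachable, docker-machine isn't started, etc.
    print('docker ps failed with exit status %d' % e.returncode)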
A simple find can then verify your container is found / not-found:
if s.find('containername') != -1:
    print 'found!'
else:
    print 'not found.'
I would also recommend using the container hash id rather than the container name here, as the name may also appear in an image name or elsewhere in the docker ps output.
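For example, a sketch filtering docker ps by a (hypothetical) container id:
import subprocess

container_id = 'abc123def456'  # hypothetical id
out = subprocess.check_output(
    ['docker', 'ps', '-q', '--filter', 'id=%s' % container_id])
print('running' if out.strip() else 'not running')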
Even though it seems like you are on your way, I would recommend you use docker-py, as it accesses the socket created by docker to issue API requests. I currently use this library and it is a real time saver.
