I have a requirement where I need to run a docker command on my local machine, send the resulting list of images to a remote server, and check whether those images exist there. I then need to return the list of images that do not exist on the remote server back to the local machine. I need to do it with Python. I have written some code mixing shell and Python, as below.
List=$(docker images -q | grep "docker pull" | awk '{print $3}') #this command is mandatory to get exact docker name.
fab remote_sync_system_spec_docker_to_aws_artifactory:List -u ${USERNAME} -H 1.2.3.4
I am trying to pass the output of the shell command, i.e. List, to the Python function through fab as above. That function looks like below.
def remote_sync_system_spec_docker_to_aws_artifactory(List):
    for line in List:
        if( os.popen("docker images -q $line") == none )
            List=... #need to prepare list and return back to calling function.
Once I get the list on the remote server, I need to return it to the calling function so I can do some manipulations there. Basically I could use shell, but connecting to the remote server with sshpass is not accepted in my project, so I am looking for a Python script.
As a simple way to transport a list, I would suggest a pipeline rather than a variable.
docker images -q | awk '/docker pull/ { print $3 }' |
fab remote_sync_system_spec_docker_to_aws_artifactory_stdin -u ${USERNAME} -H 1.2.3.4
where the function is something like
import sys, subprocess

def remote_sync_system_spec_docker_to_aws_artifactory_stdin(handle=sys.stdin):
    """
    Read Docker image identifiers from file handle; check which
    ones are available here, and filter those out; return the rest.
    """
    missing = []
    for line in handle:
        repo = line.rstrip('\n')
        # an empty result from `docker images -q` means the image is absent
        if subprocess.run(['docker', 'images', '-q', repo],
                          stdout=subprocess.PIPE,
                          universal_newlines=True).stdout == "":
            missing.append(repo)
    return missing
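To get the resulting list back to the calling side, one simple option (a sketch, not tested against your fab setup) is a hypothetical wrapper task that prints each missing image, so the caller can capture the task's standard output:

def report_missing():
    # emit one missing image per line on stdout for the local caller to read
    for repo in remote_sync_system_spec_docker_to_aws_artifactory_stdin():
        print(repo)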
os.popen() will return an object in memory; to get the command's output you need to read() it, like this:
def remote_sync_system_spec_docker_to_aws_artifactory(List):
    missing = []
    for line in List:
        # use an f-string: "$line" inside a Python string is never expanded
        if os.popen(f"docker images -q {line}").read() == "":
            missing.append(line)  # prepare list and return back to calling function
    return missing
You should avoid os.popen() and even its replacement subprocess.Popen() if all you need is to obtain the output from a shell command.
For recent Python 3.x, use subprocess.run():
import subprocess

List = []
for result in subprocess.run(["docker", "images", "-q"],
                             stdout=subprocess.PIPE,
                             universal_newlines=True).stdout.split('\n'):
    if 'docker pull' in result:
        List.append(result.split()[2])  # awk's $3 is index 2 in Python
In Python 2.x the corresponding function was subprocess.check_output().
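A rough 2.x sketch of the same call:

# Python 2.x: check_output returns the command's stdout directly
output = subprocess.check_output(["docker", "images", "-q"])
lines = output.split('\n')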
Maybe you'll want to replace the grep with something a bit more focused; 'docker pull' in result will look for the string anywhere in the line, but you would probably like to confine it to just a particular column, for example.
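A minimal sketch of such a column-based check, reusing result and List from the loop above (the exact column words are an assumption about your output format):

columns = result.split()
# hypothetical: match only when the first two columns are these exact words
if len(columns) >= 3 and columns[0] == 'docker' and columns[1] == 'pull':
    List.append(columns[2])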
Related
I have a shell script called data.sh, which I am running using Python. The data.sh code is something like this:
configure()
{
    read -p '--> Enter key ID: ' key_id
    read -p '--> Enter secret key: ' secret_key
    echo "$key_id"
    echo "$secret_key"
}
I am running it like this:
import subprocess
rc = subprocess.call("data.sh")
I want to pass both key_id and secret_key from the Python code. How can I achieve this?
Like MattDMo said in the comment, configure does not use any command line args. If you change it to use command line args ($1 is the first command line arg, $2 the second, etc.), you can change the subprocess.call() invocation to pass a list of arguments: the first element of the list is the executable, and the remaining elements are passed to it as command line args. See the docs for subprocess.call here; there's a good example calling the ls executable with the command line arg -l. You wouldn't need any dashes (like -l in the ls example); you could just pass the key ID as the first argument and the secret key as the second. This would look like the following:
Python:
rc = subprocess.call(["data.sh", key_id, secret_key])
Shell:
configure()
{
    key_id=$1
    secret_key=$2
    echo "$key_id"
    echo "$secret_key"
}
If you don't want to change the script to take command-line arguments, you can do
subprocess.Popen(["./data.sh"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True).communicate('key123\nabcd')
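A slightly fuller sketch of that approach (key123 and abcd are placeholder values), capturing the script's output as well:

import subprocess

proc = subprocess.Popen(["./data.sh"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, universal_newlines=True)
# each input line answers one read prompt inside configure()
out, _ = proc.communicate('key123\nabcd\n')
print(out)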
I have this shell command:
$ docker run -it --env-file=.env -e "CONFIG=$(cat /path/to/your/config.json | jq -r tostring)" algolia/docsearch-scraper
And I want to run it as a python subprocess.
I thought I would only need an equivalent of jq -r tostring, but if I use config.json as a plain string the quotation marks don't get escaped. I also tried escaping them with json.load(config.json).
With the original jq command the quotation marks don't get escaped either; it just returns the JSON as a string.
When I use the returned JSON as a string in the Python subprocess, I always get a FileNotFoundError on the subprocess line.
@main.command()
def algolia_scrape():
    with open(f"{WORKING_DIR}/conf_dev.json") as conf:
        CONFIG = json.load(conf)
    subprocess.Popen(f'/usr/local/bin/docker -it --env-file={WORKING_DIR}/algolia.env -e "CONFIG={json.dumps(CONFIG)}" algolia/docsearch-scraper')
You get "file not found" because (without shell=True) you are trying to run a command whose name is /usr/local/bin/docker -it ... when you want to run /usr/local/bin/docker with some arguments. And of course it would be pretty much a nightmare to try to pass the JSON through the shell because you need to escape any shell metacharacters from the string; but just break up the command string into a list of strings, like the shell would.
def algolia_scrape():
    with open(f"{WORKING_DIR}/conf_dev.json") as conf:
        CONFIG = json.load(conf)
    # note: 'run' restored here from the original shell command
    p = subprocess.Popen(['/usr/local/bin/docker', 'run', '-it',
                          f'--env-file={WORKING_DIR}/algolia.env',
                          '-e', f'CONFIG={json.dumps(CONFIG)}',
                          'algolia/docsearch-scraper'])
You generally want to save the result of subprocess.Popen(), because you will need to wait for the process to terminate.
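A minimal follow-up sketch using the saved handle:

exit_code = p.wait()  # block until the docker container exits
if exit_code != 0:
    print(f"docker exited with status {exit_code}")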
I am trying to run an ssh command within a Python script, using os.system, to add a 0 at the end of a fully matched string on a remote server, using ssh and sed.
I have a file called nodelist on the remote server that looks like this:
test-node-1
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
I want to use sed to make the following modification: search for test-node-1, and when a full match is found, add a 0 at the end. The file must end up looking like this:
test-node-1 0
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
However, when I run the first command,
hostname = 'test-node-1'
function = 'nodelist'
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/{hostname}/s/$/ 0/' ~/{function}.txt\"")
the result looks like this:
test-node-1 0
test-node-2
...
test-node-11 0
test-node-12 0
test-node-13 0
...
test-node-21
I tried adding a \b to the command like this,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\b{hostname}\b/s/$/ 0/' ~/{function}.txt\"")
The command doesn't work at all.
I have to manually type in the node name instead of using a variable like so,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\btest-node-1\b/s/$/ 0/' ~/{function}.txt\"")
to make my command work.
What's wrong with my command, why can't I do what I want it to do?
This code has serious security problems; fixing them requires reengineering it from scratch. Let's do that here:
#!/usr/bin/env python3
import os.path
import shlex  # note, quote is only here in Python 3.x; in 2.x it was in the pipes module
import subprocess
import sys

# can set these from a loop if you choose, of course
username = "whoever"
serverlocation = "whereever"
hostname = 'test-node-1'
function = 'somename'

desired_cmd = ['sed', '-i',
               f'/\\b{hostname}\\b/s/$/ 0/',
               f'{function}.txt']
desired_cmd_str = ' '.join(shlex.quote(word) for word in desired_cmd)
print(f"Remote command: {desired_cmd_str}", file=sys.stderr)

# could just pass the below direct to subprocess.run, but let's log what we're doing:
ssh_cmd = ['ssh', '-i', os.path.expanduser('~/.ssh/my-ssh-key'),
           f"{username}@{serverlocation}", desired_cmd_str]
ssh_cmd_str = ' '.join(shlex.quote(word) for word in ssh_cmd)
print(f"Local command: {ssh_cmd_str}", file=sys.stderr)  # log equivalent shell command

subprocess.run(ssh_cmd)  # but locally, run without a shell
If you run this (except for the subprocess.run at the end, which would require a real SSH key, hostname, etc), output looks like:
Remote command: sed -i '/\btest-node-1\b/s/$/ 0/' somename.txt
Local command: ssh -i /home/yourname/.ssh/my-ssh-key whoever@whereever 'sed -i '"'"'/\btest-node-1\b/s/$/ 0/'"'"' somename.txt'
That's correct/desired output; the funny '"'"' idiom is how one safely injects a literal single quote inside a single-quoted string in a POSIX-compliant shell.
What's different? Lots:
We're generating the commands we want to run as arrays, and letting Python do the work of converting those arrays to strings where necessary. This avoids shell injection attacks, a very common class of security vulnerability.
Because we're generating lists ourselves, we can change how we quote each one: We can use f-strings when it's appropriate to do so, raw strings when it's appropriate, etc.
We aren't passing ~ to the remote server. It's redundant, because ~ is the default starting directory for an SSH session, and our security precautions would prevent it from having any effect anyway: replacing ~ with the value of HOME is done by the shell that invokes sed, not by sed itself, and we never let a shell parse our data as code. (For the same reason, since we invoke no local shell either, we use os.path.expanduser so the ~ in ~/.ssh/my-ssh-key is honored.)
Because we aren't using a raw string, we need to double the backslashes in \b to ensure that they're treated as literal rather than syntactic by Python.
Critically, we're never passing data in a context where it could be parsed as code by any shell, either local or remote.
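As a standalone illustration of the quoting at work (a sketch you can run by itself):

import shlex
# anything shell-sensitive gets wrapped in single quotes
print(shlex.quote('/\\btest-node-1\\b/s/$/ 0/'))
# prints: '/\btest-node-1\b/s/$/ 0/'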
I am trying to convert a shell script to Python to access MongoDB.
I have the code:
primport = []
primport = call('/opt/mongodb/bin/mongo localhost27017 --eval "printjson(rs.isMaster())" | grep "primary"', shell = True)
and when I try to print primport, the whole value gets printed:
`"primary" : "1404Base:27017"`,
which is not what I want. I want only the host ID, 27017. I tried using Python's split function, but it says int object has no attribute split. I need only the ID, as I have to pass it as an argument in the upcoming code.
Assuming call is from the subprocess module, call returns the shell return code of the command, not the STDOUT. You want to get STDOUT of the command, and for that you should use subprocess.check_output.
Try using:
primport = subprocess.check_output('/opt/mongodb/bin/mongo localhost27017 --eval "printjson(rs.isMaster())" | grep "primary" | cut -d ":" -f 2,3', shell = True)
EDIT:
Also, added a cut to your system call so you don't have to do any cleaning in Python.
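Alternatively, a sketch (assuming the output line shown in the question) that drops the grep and cut and does the cleaning in Python instead:

import subprocess

out = subprocess.check_output(
    '/opt/mongodb/bin/mongo localhost27017 --eval "printjson(rs.isMaster())"',
    shell=True, universal_newlines=True)
for line in out.splitlines():
    if '"primary"' in line:
        # line looks like: "primary" : "1404Base:27017",
        print(line.rsplit(':', 1)[-1].strip().strip('",'))  # 27017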
I am using Python to start docker instances.
How can I identify if they are running? I can pretty easily use docker ps from terminal like:
docker ps | grep myimagename
and if this returns anything, the image is running. If it returns an empty string, the image is not running.
However, I cannot understand how to get subprocess.Popen to work with this - it requires a list of arguments so something like:
p = subprocess.Popen(['docker', 'ps', '|', 'grep', 'myimagename'], stdout=subprocess.PIPE)
print p.stdout
does not work because it tries to take the "docker ps" and make it "docker" and "ps" commands (which docker doesn't support).
It doesn't seem I can give it the full command, either, as Popen tries to run the entire first argument as the executable, so this fails:
p = subprocess.Popen('docker ps | grep myimagename', stdout=subprocess.PIPE)
print p.stdout
Is there a way to actually run docker ps from Python? I don't know if trying to use subprocess is the best route or not. It is what I am using to run the docker containers, however, so it seemed to be the right path.
How can I determine if a docker instance is running from a Python script?
You can use the python docker client:
import docker

DOCKER_CLIENT = docker.DockerClient(base_url='unix://var/run/docker.sock')
RUNNING = 'running'

def is_running(container_name):
    """
    verify the status of a container by its name
    :param container_name: the name of the container
    :return: boolean, True if the container's status is 'running'
    """
    container = DOCKER_CLIENT.containers.get(container_name)
    container_state = container.attrs['State']
    container_is_running = container_state['Status'] == RUNNING
    return container_is_running

my_container_name = "asdf"
print(is_running(my_container_name))
One option is to use subprocess.check_output setting shell=True (thanks slezica!):
import subprocess

s = subprocess.check_output('docker ps', shell=True)
print 'Results of docker ps: ' + s
If the docker ps command fails (for example, if your docker-machine isn't started), check_output will raise a subprocess.CalledProcessError.
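For example, a minimal sketch of handling that failure:

import subprocess

try:
    s = subprocess.check_output('docker ps', shell=True)
except subprocess.CalledProcessError as e:
    s = ''  # docker ps exited non-zero; e.returncode holds the exit status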
A simple find can then verify your container is found / not-found:
if s.find('containername') != -1:
    print 'found!'
else:
    print 'not found.'
In this case I would also recommend matching on the container hash ID rather than the container name, as the name may also appear in the image name or other columns of the docker ps output.
Even though it seems like you are on your way, I would recommend you use docker-py, as it accesses the socket created by Docker to issue API requests. I currently use this library and it is a real time saver.
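A minimal sketch with docker-py (myimagename is a placeholder for your container's name):

import docker

client = docker.from_env()
# containers.list() returns only running containers by default
running_names = [c.name for c in client.containers.list()]
print('myimagename' in running_names)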