I'm trying to execute an aws cli command using Python's subprocess module.
This is the command as run from Windows cmd:
aws --profile some_profile --region some_region ec2 describe-instances --filters Name=tag:some_tag,Values=some_value --query "Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='Name']|[0].Value}" --output=table
and that's how I try to do it:
profile = "some_profile"
region = "some_region"
ec2_filters = "Name=tag:some_tag,Values=some_value"
ec2_query = "Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='Name']|[0].Value}"
ec2_output_type = "table"
proc = subprocess.Popen(["aws", "--profile", profile, "--region", region, "ec2", "describe-instances", "--filters", ec2_filters, "--query", ec2_query, "--output", ec2_output_type], stdout=subprocess.PIPE, shell=True)
This is the error message:
'[0].Value}' is not recognized as an internal or external command,
operable program or batch file.
I don't have aws installed, so I created a mock batch file to spit back what it receives. I did try my initial guesses and, you're right, Windows quoting often makes this difficult, but I figured it out. Sorry for not testing what I asked you to try.
Anyway, aws.bat contains a single line, echo %*, which prints back whatever arguments the batch file receives, so we know it's working.
Then, I tried to use your command. I got the same error you got, so I modified it to:
.\aws.bat --profile some_profile --region some_region ec2 describe-instances --filters Name=tag:some_tag,Values=some_value --query '"Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='Name']|[0].Value}"' --output=table
This echoed the command back, meaning it was executed correctly.
Then, I modified your code to make sure there are quotes around the entire query. I used simple string concatenation to do that.
import subprocess
profile = "some_profile"
region = "some_region"
ec2_filters = "Name=tag:some_tag,Values=some_value"
ec2_query = (
    '"Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='
    "'Name'"
    ']|[0].Value}"'
)
ec2_output_type = "table"
proc = subprocess.Popen(["aws.bat", "--profile", profile, "--region", region, "ec2", "describe-instances", "--filters", ec2_filters, "--query", ec2_query, "--output", ec2_output_type])
This worked. Funnily, if I used triple quotes in an unorthodox manner, it worked as well.
ec2_query = ' '''"Reservations[*].Instances[*].{AvailabilityZone:Placement.AvailabilityZone,Status:State.Name,Name:Tags[?Key=='Name']|[0].Value}"' '''
Note the start, ' '''". I don't really know what's going on there; presumably Python's implicit concatenation of adjacent string literals is gluing the pieces together.
Anyway, the easier solution is to break up your string so the quotes don't get confusing.
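For what it's worth, the root cause is combining a list of arguments with shell=True: on Windows, subprocess joins the list into one string and hands it to cmd.exe, and since the query string contains no spaces it doesn't get quoted, so cmd.exe treats the | inside it as a pipe (hence '[0].Value}' is not recognized...). If aws resolves to a real executable rather than a .bat/.cmd wrapper, a sketch of the simpler fix is to drop shell=True and pass the original, unquoted query string:
proc = subprocess.Popen(
    ["aws", "--profile", profile, "--region", region,
     "ec2", "describe-instances",
     "--filters", ec2_filters,
     "--query", ec2_query,  # the plain query string from the question, no extra quotes
     "--output", ec2_output_type],
    stdout=subprocess.PIPE)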
I've been using the following shell command to read the image off a scanner named scanner_name and save it in a file named file_name:
scanimage -d <scanner_name> --resolution=300 --format=tiff --mode=Color 2>&1 > <file_name>
This has worked fine for my purposes.
I'm now trying to embed this in a Python script. What I need is to save the scanned image, as before, into a file, and also capture any output (say, error messages) to a string.
I've tried
scan_result = os.system('scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name))
But when I run this in a loop (with different scanners), there is an unreasonably long lag between scans, and the images aren't saved until the next scan starts: the file is created as an empty file and is not filled until the next scanning command runs. All this with scan_result = 0, i.e. indicating no error.
The subprocess method run() has been suggested to me, and I have tried
with open(file_name, 'w') as scanfile:
    input_params = '-d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name)
    scan_result = subprocess.run(["scanimage", input_params], stdout=scanfile, shell=True)
but this saved the image in some kind of an unreadable file format
Any ideas as to what may be going wrong? Or what else I can try that will allow me to both save the file and check the success status?
subprocess.run() is definitely preferred over os.system() but neither of them as such provides support for running multiple jobs in parallel. You will need to use something like Python's multiprocessing library to run several tasks in parallel (or painfully reimplement it yourself on top of the basic subprocess.Popen() API).
You also have a basic misunderstanding about how to run subprocess.run(). You can pass in either a string and shell=True or a list of tokens and shell=False (or no shell keyword at all; False is the default).
with_shell = subprocess.run(
    "scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} ".format(
        scanner, file_name), shell=True)

with open(file_name, 'wb') as write_handle:  # must be opened for (binary) writing
    no_shell = subprocess.run([
        "scanimage", "-d", scanner, "--resolution=300", "--format=tiff",
        "--mode=Color"], stdout=write_handle)
You'll notice that the latter does not support redirection (because that's a shell feature) but this is reasonably easy to implement in Python. (I took out the redirection of standard error -- you really want error messages to remain on stderr!)
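If you do want the error messages in a string, as the question asks, a minimal sketch along the same lines (using only standard subprocess features) is:
with open(file_name, 'wb') as write_handle:
    proc = subprocess.run(
        ["scanimage", "-d", scanner, "--resolution=300",
         "--format=tiff", "--mode=Color"],
        stdout=write_handle, stderr=subprocess.PIPE)
error_text = proc.stderr.decode()  # any error messages, as a string
if proc.returncode != 0:
    print("scan failed:", error_text)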
If you have a larger working Python program this should not be awfully hard to integrate with a multiprocessing.Pool(). If this is a small isolated program, I would suggest you peel off the Python layer entirely and go with something like xargs or GNU parallel to run a capped number of parallel subprocesses.
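As a rough sketch of the multiprocessing route (the scanner and file names here are hypothetical):
import multiprocessing
import subprocess

def scan_one(job):
    # run one scan job and report its status; job is a (scanner, file_name) pair
    scanner, file_name = job
    with open(file_name, 'wb') as write_handle:
        proc = subprocess.run(
            ["scanimage", "-d", scanner, "--resolution=300",
             "--format=tiff", "--mode=Color"],
            stdout=write_handle, stderr=subprocess.PIPE)
    return scanner, proc.returncode, proc.stderr.decode()

if __name__ == '__main__':
    jobs = [("scanner1", "scan1.tiff"), ("scanner2", "scan2.tiff")]
    with multiprocessing.Pool(processes=2) as pool:
        for scanner, returncode, errors in pool.map(scan_one, jobs):
            print(scanner, "ok" if returncode == 0 else errors)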
I suspect the issue is that you're opening the output file in Python and then running subprocess.run() inside the with block. This isn't necessary. The end result is that you open the file via Python, then have the command open the same file again via the OS, and then close it via Python.
JUST run the subprocess, and let the scanimage ... 2>&1 > filename command create the file (just as it would if you ran scanimage at the command line directly).
I think subprocess.check_output() is now the preferred method of capturing the output.
I.e.
from subprocess import check_output
# The redirections are shell syntax, so the command must be a single string run
# with shell=True; stderr then goes to the pipe (and is captured) while stdout
# goes to the file
command = 'scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {}'.format(scanner, file_name)
scan_result = check_output(command, shell=True)
print(scan_result)
However (with both run and check_output), that shell=True is a big security risk, especially if the input comes into the Python script from outside. People can pass in unwanted commands and have them run in the shell with the permissions of the script.
Sometimes shell=True is necessary for the OS command to run properly, in which case the best recommendation is to use an actual Python module to interface with the scanner, rather than having Python pass an OS command to the OS.
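For example, here is a rough sketch using the python-sane package (the package and the exact option names are assumptions; check the python-sane documentation for your version):
import sane  # assumed: the python-sane bindings for the SANE scanner API

sane.init()
dev = sane.open(scanner)   # scanner is the SANE device name, as before
dev.resolution = 300       # options are set as attributes in python-sane
dev.mode = 'Color'
image = dev.scan()         # returns a PIL image
image.save(file_name)      # the TIFF format is inferred from the file extension
dev.close()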
I'm trying to execute the following bash command from either Python or Perl:
googlesamples-assistant-pushtotalk --credentials /home/jsti/.config/google-oauthlib-tool/credentials.json
--device-model-id 'pbx-assista' --device-id 'pbx' -i /tmp/google_audio1314_in.wav -o /tmp/google_audio1314_out.wav -v
Basically, the idea is to send an audio file to Google Assistant, which should then answer with another audio file. I should receive that audio file as a response from Google Assistant, but I don't. There are no errors; the file just never arrives.
The command works properly if I execute it in the terminal.
Does anyone know what it is happening with this command?
This is the code:
#!/usr/bin/env python
import sys
from asterisk.agi import *
import subprocess
command = "googlesamples-assistant-pushtotalk"
oauth_dir = "/home/jsti/.config/google-oauthlib-tool/credentials.json"
audio_in = "/tmp/google_audio1314_in.wav"
audio_out = "google_audio1314_out.wav"
agi = AGI()
agi.verbose("python agi started")
callerId = agi.env['agi_callerid']
agi.verbose("call from %s" % callerId)
while True:
    args = [command, '--credentials', oauth_dir, '--device-model-id', '"pbx-assista"', '--device-id', '"pbx"', '-i', audio_in, '-o', audio_out, '-v']
    subprocess.Popen(args)
Get rid of the double quotes around "pbx-assista" and "pbx".
args = [command, '--credentials', oauth_dir, '--device-model-id', 'pbx-assista', '--device-id', 'pbx', '-i', audio_in, '-o', audio_out, '-v']
The code in use here doesn't actually wait for the subprocess to exit (and doesn't look at whether it succeeded or not, so it can't detect and report errors).
Change:
subprocess.Popen(args)
...to...
subprocess.check_call(args)
...or...
p = subprocess.Popen(args)
p.wait()
Also, you'll want to change '"pbx"' to just 'pbx'; the double quotes in the original bash version are syntactic, just like the single quotes in the Python version are -- you don't need literal quotes in addition to the syntactic ones. (Bash optionally allows syntactic quotes to be left out when they aren't needed to prevent unwanted expansion, make otherwise-syntactically-significant characters literal, or the like; with Python, they're always mandatory when defining a string)
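You can watch the shell's quote handling directly by tokenizing the original command line with Python's shlex module, which follows the same rules:
>>> import shlex
>>> shlex.split("googlesamples-assistant-pushtotalk --device-model-id 'pbx-assista' --device-id 'pbx'")
['googlesamples-assistant-pushtotalk', '--device-model-id', 'pbx-assista', '--device-id', 'pbx']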
I have a requirement where I need to run one docker command on my local machine, send the resulting list to a remote server, and check whether those images exist there; the list of images that are missing on the remote server needs to be returned to the local one. I need to do it with Python. I have written some code mixing shell and Python, as below.
List=$(docker images -q | grep "docker pull" | awk '{print $3}') #this command is mandatory to get exact docker name.
fab remote_sync_system_spec_docker_to_aws_artifactory:List -u ${USERNAME} -H 1.2.3.4
I am trying to pass the output of the shell command, i.e. List, to the Python function through fab as above. That function looks like below.
def remote_sync_system_spec_docker_to_aws_artifactory(List):
    for line in List:
        if( os.popen("docker images -q $line") == none )
            List=... #need to prepare list and return back to calling function.
Once I get the list on the remote server, I need to return it to the calling function, where I can do some manipulations. Basically I could use shell, but the problem is that connecting to the remote server with sshpass is not accepted in my project, so I'm looking for a Python script.
As a simple way to transport a list, I would suggest a pipeline rather than a variable.
docker images -q | awk '/docker pull/ { print $3 }' |
fab remote_sync_system_spec_docker_to_aws_artifactory_stdin -u ${USERNAME} -H 1.2.3.4
where the function is something like
import sys, subprocess

def remote_sync_system_spec_docker_to_aws_artifactory_stdin(handle=sys.stdin):
    """
    Read Docker image identifiers from file handle; check which
    ones are available here, and filter those out; return the rest.
    """
    missing = []  # must be a list; a tuple () has no .append()
    for line in handle:
        repo = line.rstrip('\n')
        if subprocess.run(['docker', 'images', '-q', repo],
                stdout=subprocess.PIPE, universal_newlines=True).stdout == "":
            missing.append(repo)
    return missing
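A hypothetical local test of that function, feeding image names on standard input and printing the missing ones:
if __name__ == '__main__':
    for repo in remote_sync_system_spec_docker_to_aws_artifactory_stdin():
        print(repo)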
os.popen() returns a file-like object, so you need to call .read() on it to get the output; note also that read() returns an empty string (not None) when there is no output, and that Python does not expand $line inside a string. What you should do is

def remote_sync_system_spec_docker_to_aws_artifactory(List):
    for line in List:
        if os.popen("docker images -q {}".format(line)).read() == "":
            List=... #need to prepare list and return back to calling function.
You should avoid os.popen() and even its replacement subprocess.Popen() if all you need is to obtain the output from a shell command.
For recent Python 3.x, use subprocess.run():
import subprocess
List = []  # must be a list; a tuple () has no .append()
for result in subprocess.run(["docker", "images", "-q"],
        stdout=subprocess.PIPE, universal_newlines=True).stdout.split('\n'):
    if 'docker pull' in result:
        List.append(result.split()[2])  # awk's $3 is index 2 in Python
Before Python 3.5 added subprocess.run(), the corresponding function was subprocess.check_output().
Maybe you'll want to replace the grep with something a bit more focused; 'docker pull' in result will look for the string anywhere in the line, but you would probably like to confine it to just a particular column, for example.
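As a sketch of what that could look like, docker's --format flag lets you pick the columns explicitly (the repository name matched here is just the literal string from the question):
List = []
output = subprocess.run(["docker", "images", "--format", "{{.Repository}} {{.ID}}"],
        stdout=subprocess.PIPE, universal_newlines=True).stdout
for line in output.splitlines():
    repository, image_id = line.split()
    if repository == "docker pull":  # hypothetical repository name
        List.append(image_id)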
So I have a Python script that connects to the client servers and gets some data that I need.
The bash script on the client side needs input like the one below, and invoked this way it works:
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to get the user input from my Python script and pass it to the remotely called bash script, and that's where my problem is. Below is the method I tried, which I couldn't get working.
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko - which you should tag or include that info in your question.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I think your original example is not quite accurate in how it works.
Just out of interest, have you looked at "fabric" (http://www.fabfile.org)? It has lots of very handy functions like "run", which will run a command on a remote server (or lots of remote servers!) and return the response.
It also gives you lots of protection by wrapping around popen and paramiko for the ssh login etc., so it can be much more secure than trying to make web services or other things.
You should always be wary of injection attacks. I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that could cause very bad problems for you. It would be better to have 'options' on the command, which are programmed in, limiting the user's input drastically, or at least to put a lot of protection around the input variables. Of course, if this is only for you (or trained people), then it's a little easier.
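One cheap layer of protection, if you do pass user arguments through, is to quote each one with shlex.quote (Python 3.3+) before building the remote command line; a sketch:
import shlex
import sys

# quote every user-supplied argument so shell metacharacters stay literal
safe_args = ' '.join(shlex.quote(arg) for arg in sys.argv[1:])
client.exec_command('/apps./tempo.sh {}'.format(safe_args))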
I recommend using paramiko for the ssh connection.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user was typing:
chan=ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')
import time

def exec_command(cmd):
    """Get ssh command(s), execute them, and return the output"""
    prompt = 'python-ssh:'  # the command line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024).decode()  # recv() lives on the channel and returns bytes
    return buff[:-len(prompt)]  # strip the trailing prompt from the output
Example usage: exec_command('pwd')
And the result is even returned to you over the ssh channel.
Assuming that you are using paramiko, you need to send the command as a string. It seems that you want to pass the command line arguments given to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space-separated string. This argument string is then concatenated with the bash script command and executed remotely.
I am using Python to start docker instances.
How can I identify if they are running? I can pretty easily use docker ps from terminal like:
docker ps | grep myimagename
and if this returns anything, the image is running. If it returns an empty string, the image is not running.
However, I cannot understand how to get subprocess.Popen to work with this - it requires a list of arguments so something like:
p = subprocess.Popen(['docker', 'ps', '|', 'grep', 'myimagename'], stdout=subprocess.PIPE)
print p.stdout
does not work, because the pipe and grep end up passed as extra arguments to docker itself rather than being interpreted by a shell (and docker doesn't support that).
It doesn't seem I can give it the full command, either, as Popen tries to run the entire first argument as the executable, so this fails:
p = subprocess.Popen('docker ps | grep myimagename', stdout=subprocess.PIPE)
print p.stdout
Is there a way to actually run docker ps from Python? I don't know if trying to use subprocess is the best route or not. It is what I am using to run the docker containers, however, so it seemed to be the right path.
How can I determine if a docker instance is running from a Python script?
You can use the python docker client:
import docker

DOCKER_CLIENT = docker.DockerClient(base_url='unix://var/run/docker.sock')
RUNNING = 'running'

def is_running(container_name):
    """
    verify the status of a container by its name
    :param container_name: the name of the container
    :return: Boolean, True if the container is running
    """
    container = DOCKER_CLIENT.containers.get(container_name)
    container_state = container.attrs['State']
    container_is_running = container_state['Status'] == RUNNING
    return container_is_running
my_container_name = "asdf"
print(is_running(my_container_name))
One option is to use subprocess.check_output setting shell=True (thanks slezica!):
s = subprocess.check_output('docker ps', shell=True)
print 'Results of docker ps: ' + s
If the docker ps command fails (for example, your docker-machine isn't started), then check_output will throw an exception.
A simple find can then verify your container is found / not-found:
if s.find('containername') != -1:
    print 'found!'
else:
    print 'not found.'
I would recommend matching on the container hash id rather than the container name here, too, as the name may also appear in the image name or elsewhere in the docker ps output.
Even though it seems like you are on your way, I would recommend you use docker-py, as it accesses the socket created by docker to issue API requests. I currently use this library and it is a real time saver.
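As a rough docker-py sketch of the original check (the container name is from the question; note that the name filter does substring matching, so verify the semantics against the docker-py docs):
import docker

client = docker.from_env()
# list running containers whose name matches "myimagename"
matches = client.containers.list(filters={'name': 'myimagename'})
print('running' if matches else 'not running')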