How do I pass AppleScript arguments from a Python script? - python

I have an AppleScript that takes in two parameters on execution.
on run {targetBuddyPhone, targetMessage}
    tell application "Messages"
        set targetService to 1st service whose service type = iMessage
        set targetBuddy to buddy targetBuddyPhone of targetService
        send targetMessage to targetBuddy
    end tell
end run
I then want to execute this script from within a Python script. I know how to execute an AppleScript from Python, but how do I also give it arguments? Here is the Python script that I currently have written out.
#!/usr/bin/env python3
import subprocess

def run_applescript(script, *args):
    p = subprocess.Popen(['arch', '-i386', 'osascript', '-e', script] +
                         [unicode(arg).encode('utf8') for arg in args],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    err = p.wait()
    if err:
        raise RuntimeError(err, p.stderr.read()[:-1].decode('utf8'))
    return p.stdout.read()[:-1].decode('utf8')
The error I receive after trying to execute this code in the terminal is:
Traceback (most recent call last):
  File "messageExecuter.py", line 14, in <module>
    run_applescript("sendMessage.scpt",1111111111,"hello")
  File "messageExecuter.py", line 11, in run_applescript
    raise RuntimeError(err, p.stderr.read()[:-1].decode('utf8'))
RuntimeError: (1, u'arch: posix_spawnp: osascript: Bad CPU type in executable')

The clue is in the error message: delete 'arch', '-i386' from the argument list, as osascript is 64-bit only.
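A minimal sketch of the corrected helper, assuming the script is run as a file (so the path is passed to osascript positionally rather than as inline source via -e) and using Python 3's subprocess.run:

```python
import subprocess

def build_osascript_argv(script, *args):
    # No 'arch', '-i386' prefix: osascript is 64-bit only.
    # The script path and its run-handler arguments are passed positionally.
    return ['osascript', script] + [str(arg) for arg in args]

def run_applescript(script, *args):
    result = subprocess.run(build_osascript_argv(script, *args),
                            capture_output=True, text=True)
    if result.returncode:
        raise RuntimeError(result.returncode, result.stderr.rstrip('\n'))
    return result.stdout.rstrip('\n')
```

With this, run_applescript("sendMessage.scpt", 1111111111, "hello") invokes osascript sendMessage.scpt 1111111111 hello, and the two arguments land in the on run {targetBuddyPhone, targetMessage} handler.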

Related

How to stop airodump-ng subprocess in Python?

What I am trying to do is record the output of airodump-ng every 10 seconds.
First attempt:
Going through the airodump-ng documentation, I found a --write-interval option.
When I tried using it: sudo airodump-ng mon0 -w testOutput --write-interval 10 -o csv, I got an error saying that --write-interval is an unrecognized option.
Second attempt:
I tried doing this myself in Python. I then came across the issue of trying to stop the process. The closest I got was this solution.
airodump = subprocess.Popen(['sudo', 'airodump-ng', 'mon0', '-w', 'pythonTest'],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
o_airodump, unused_stderr = airodump.communicate(timeout=15)
airodump.kill()
This does stop the process, and I do have the relevant output files, BUT what happens is that all my running programs close and I get logged out of Ubuntu.
Additional Info:
Just before everything closed and I got logged out, I saw an error message in the terminal. I quickly took a screenshot to see what it said:
Traceback (most recent call last):
  File "airodump-call.py", line 3, in <module>
    o_airodump, unused_stderr = airodump.communicate(timeout=15)
  File "/usr/lib/python3.5/subprocess.py", line 1072, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/lib/python3.5/subprocess.py", line 1713, in _communicate
    raise TimeoutExpired(self.args, orig_timeout)
subprocess.TimeoutExpired: Command '['airodump-ng', 'mon0', '-w', 'pythonTest']' timed out after 15 seconds
I've run into the same problem. Despite this being an old post, I'm posting my solution since it could help someone searching for this.
Let's say I run airodump-ng like the OP:
proc = subprocess.Popen(['airodump-ng', 'wlan0mon'])
This can be terminated by sending a SIGINT signal to the pid of the process:
os.kill(proc.pid, signal.SIGINT)
Note: you need to import os and signal.
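Put together, a runnable sketch of this approach, with sleep as a harmless stand-in for airodump-ng so it works without root or a monitor interface:

```python
import os
import signal
import subprocess
import time

# 'sleep 60' stands in for the long-running airodump-ng capture.
proc = subprocess.Popen(['sleep', '60'])
time.sleep(1)  # the capture window (the OP wanted ~10 seconds)

# Send the same Ctrl-C that stops airodump-ng cleanly at the keyboard,
# instead of communicate(timeout=...), which raises TimeoutExpired.
os.kill(proc.pid, signal.SIGINT)
proc.wait()
```

One caveat: if the process is launched via sudo as in the question, proc.pid is sudo's pid, so the signal may need to be sent with root privileges as well.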

how to prevent failure of subprocess stopping the main process in python

I wrote a Python script to run a command called "gtdownload" on a bunch of files with multiprocessing. The function "download" is where I am having trouble.
#!/usr/bin/env python
import os, sys, subprocess
from multiprocessing import Pool

def cghub_dnld_file(file1, file2, file3, np):
    <open files>
    <read in lines>
    p = Pool(int(np))
    map_args = [(line.rstrip(), name_lines[i].rstrip(), bar_lines[i].rstrip())
                for i, line in enumerate(id_lines)]
    p.map(download_wrapper, map_args)

def download(id, name, bar):
    <check if file has been downloaded, if not download>
    <.....>
    link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
    dnld_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
    subprocess.call(dnld_cmd, shell=True)

def download_wrapper(args):
    return download(*args)

def main():
    <read in arguments>
    <...>
    cghub_dnld_file(file1, file2, file3, threads)

if __name__ == "__main__":
    main()
If this file does not exist in the database, gtdownload would quit which also kills my python job with the following error:
Traceback (most recent call last):
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 102, in <module>
    main()
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 98, in main
    cghub_dnld_file(file1,file2,file3,threads)
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 22, in cghub_dnld_file
    p.map(download_wrapper,map_args)
  File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 250, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 554, in get
    raise self._value
OSError: [Errno 2] No such file or directory
The actual error message from gtdownload :
Welcome to gtdownload-3.8.5a.
Ready to download
Communicating with GT Executive ...
Headers received from the client: 'HTTP/1.1 100 Continue
HTTP/1.1 404 Not Found
Date: Tue, 29 Jul 2014 18:49:57 GMT
Server: Apache/2.2.15 (Red Hat and CGHub)
Strict-Transport-Security: max-age=31536000
X-Powered-By: PHP/5.3.3
Content-Length: 669
Connection: close
Content-Type: text/xml
'
Error: You have requested to download a uuid which either does not exist within the system, or has not yet reached the 'live' state. The requested action will not be performed. Please double check the supplied uuid or contact the helpdesk for further assistance.
I would like the script to skip the one that does not exist and start gtdownload on the next one. I tried piping the stderr of subprocess.call and checking it for the "error" keyword, but the script seems to stop at the exact subprocess.call command. The same thing happens with os.system.
I made a minimal test case without the multiprocessing, and subprocess did not kill the main process at all. It looks like multiprocessing messes things up, although I ran it with 1 thread just for testing.
#!/usr/bin/env python
import subprocess
# This is the id that gtdownload had a problem with
id = "df1e073f-4485-4d5f-8659-cd8eaac17329"
link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
print dlnd_cmd
subprocess.call(dlnd_cmd,shell=True)
print "done"
Clearly multiprocessing conflicts with subprocess.call, but it is not clear to me why.
What is the best way to avoid the failure of a subprocess killing the main process?
Handle the exception in some appropriate way and move on, of course.
try:
    subprocess.call(dlnd_cmd)
except OSError as e:
    print 'failed to download: {!r}'.format(e)
However, this may not be appropriate here. The kinds of exceptions that subprocess.call raises are usually not transient things that you can just log and work around; if it's not working now, it will continue to not work forever until you fix the underlying problem (a bug in your script, or gtdownload not being installed right, or whatever).
For example, if the code you showed us is your actual code:
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
subprocess.call(dlnd_cmd)
… then this is guaranteed to raise an OSError for the reason explained in dano's answer: call (without shell=True) will try to take that entire string—spaces, shell-redirection, etc.—as the name of an executable program to find on your $PATH. And there is no such program. So it will raise an OSError(errno.ENOENT). (Which is exactly what you're seeing.) Just logging that doesn't do you any good; it's a good thing that your entire process is exiting, so you can debug that problem.
subprocess.call should not kill the main process. Something else must be wrong with your script, or your conclusions about the script's behaviour are wrong. Did you try printing some trace output after the subprocess call?
You have to use shell=True to use subprocess.call with a string for an argument (and with shell redirection):
subprocess.call(dlnd_cmd, shell=True)
Without shell=True, subprocess tries to treat your entire command string like a single executable name, which of course doesn't exist, and leads to the No such file or directory exception.
See this answer for more info on when to use a string vs. when to use a sequence with subprocess.
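The string-vs-list distinction can be sketched with a harmless command (echo standing in for gtdownload): the string form with shell redirection needs shell=True, while the list form is handed directly to the OS with no shell involved.

```python
import subprocess

# String command with shell redirection: must use shell=True.
rc_shell = subprocess.call("echo downloading > /dev/null 2>&1", shell=True)

# List form: no shell, arguments passed directly to the executable.
rc_list = subprocess.call(["echo", "downloading"], stdout=subprocess.DEVNULL)

# Without shell=True, the whole string above would be treated as a single
# executable name and raise OSError: [Errno 2] No such file or directory.
```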

Python subprocess Exec format error

Sorry if this question is dumb. I am using a Python subprocess call to run a .bat file in Ubuntu (Natty 11.04); however, I got these error messages:
Traceback (most recent call last):
  File "pfam_picloud.py", line 40, in <module>
    a=subprocess.Popen(src2, shell=0)
  File "/usr/lib/python2.7/subprocess.py", line 672, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1213, in _execute_child
    raise child_exception
I run this Python file:
$ python pfam_picloud.py
Python code (pfam_picloud.py)
#!/usr/bin/python
#
met="wTest.dvf"
run="run_pfam.bat"
inp="pfam_input.PFA"
import os
import stat
import shutil
import subprocess
import string
import random

# Generate a random ID for file save
def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
    return ''.join(random.choice(chars) for x in range(size))

name_temp=id_generator()
cwd=os.getcwd()
src=cwd
src1=cwd+'/'+name_temp
if not os.path.exists(src1):
    os.makedirs(src1)
else:
    shutil.rmtree(src1)
    os.makedirs(src1)
##
shutil.copy(src+"/"+run,src1)
shutil.copy(src+"/"+met,src1)
shutil.copy(cwd+"/pfam_pi.exe",src1)
shutil.copy(src+"/"+inp,src1)
#
src2=src1+"/run_pfam.bat"
os.chdir(src1)
a=subprocess.Popen(src2, shell=0)
a.wait()
bash file (run_pfam.bat)
#!/bin/sh
./pfam_pi.exe pfam_input.PFA
I can successfully run this bash file in Ubuntu, so I guess I messed up something in my Python script. Could anyone give me some suggestions? Thanks for any input.
EDIT
the file pfam_pi.exe is a Linux executable. I compiled it in Ubuntu. Sorry for the confusion.
update
Well, I got different types of errors now.
1. With #!/bin/sh, it said No such file or directory.
2. With /bin/sh, it said exec format error.
3. If I passed everything as arguments, a=subprocess.Popen(['./pfam_pi.exe', 'inp', 'src1'], shell=0), it said end of line symbol error.
Since feature requests to mark a comment as an answer remain declined, I copy the above solution here.
@Ellioh: Thanks for your comments. I found that once I changed to shell=1, the problem was solved. – tao.hong
Try running wine (you should have it installed) and passing pfam_pi.exe to it as a parameter. Maybe pfam_pi.exe is not a Linux executable. :-) Certainly, executable file extensions are not meaningful on Linux, but it probably really is a Windows program; otherwise I can hardly imagine it being named pfam_pi.exe.
However, if it is a Linux executable, note subprocess.Popen accepts a list of args (the first element is the program itself), not a command line:
>>> import shlex, subprocess
>>> command_line = raw_input()
/bin/vikings -input eggs.txt -output "spam spam.txt" -cmd "echo '$MONEY'"
>>> args = shlex.split(command_line)
>>> print args
['/bin/vikings', '-input', 'eggs.txt', '-output', 'spam spam.txt', '-cmd', "echo '$MONEY'"]
>>> p = subprocess.Popen(args) # Success!
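Applied to the OP's case, the command inside run_pfam.bat could be split the same way (a sketch; the file names are taken from the question):

```python
import shlex
import subprocess

# Split the command line from run_pfam.bat into an argv list.
args = shlex.split("./pfam_pi.exe pfam_input.PFA")
# subprocess.Popen(args) would then exec the program directly,
# with no shell and no .bat wrapper involved.
```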

Exit code when python script has unhandled exception

I need a method to run a python script file, and if the script fails with an unhandled exception python should exit with a non-zero exit code. My first try was something like this:
import sys

if __name__ == '__main__':
    try:
        import <unknown script>
    except:
        sys.exit(-1)
But it breaks a lot of scripts, due to the __main__ guard that is often used. Any suggestions for how to do this properly?
Python already does what you're asking:
$ python -c "raise RuntimeError()"
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError
$ echo $?
1
After some edits from the OP, perhaps you want:
import subprocess

proc = subprocess.Popen(['/usr/bin/python', 'script-name'])
proc.communicate()
if proc.returncode != 0:
    pass  # Run failure code
else:
    pass  # Run happy code
Correct me if I am confused here.
If you want to run a script within a script, then import isn't the way; you could use exec if you only care about catching exceptions:
namespace = {}
f = open("script.py", "r")
code = f.read()
try:
    exec code in namespace
except Exception:
    print "bad code"
You can also compile the code first with compile(code, '<string>', 'exec') if you are planning to execute the script more than once, and exec the result in the namespace; or use subprocess as described above, if you need to grab the output generated by your script.
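The subprocess approach can be sketched portably; here sys.executable and an inline -c script stand in for the hard-coded interpreter path and script name:

```python
import subprocess
import sys

# A child script that dies with an unhandled exception exits with code 1.
proc = subprocess.Popen([sys.executable, '-c', 'raise RuntimeError("boom")'],
                        stderr=subprocess.PIPE)
proc.communicate()
failed = proc.returncode != 0  # non-zero exit signals the failure
```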

how to interact with an external script(program)

There is a script that expects keyboard input. I can call that script with os.system('./script') in Python; how is it possible to send input back to the script from another calling script?
update:
the script is:
$ cat script
#!/usr/bin/python
for i in range(4):
    name=raw_input('enter your name')
    print 'Welcome %s :) ' % name
When I try it without the for loop, it works, but it shows the output only when the script quits.
>>> p = subprocess.Popen('./script',stdin=subprocess.PIPE)
>>> p.communicate('navras')
enter your nameWelcome navras :)
When I try it with the for loop, it throws an error. How can I display the statements interactively, as and when stdout is updated with new print statements?
>>> p.communicate('megna')
enter your nameWelcome megna :)
enter your nameTraceback (most recent call last):
  File "./script", line 3, in <module>
    name=raw_input('enter your name')
EOFError: EOF when reading a line
(None, None)
You can use subprocess instead of os.system:
p = subprocess.Popen('./script', stdin=subprocess.PIPE)
p.communicate('command')
It's not tested, though.
In fact, os.system and os.popen are now deprecated and subprocess is the recommended way to handle all sub process interaction.
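communicate() sends all its input at once and waits for the process to exit, which is why the output only appears at the end, and why the script's remaining raw_input calls hit EOF after the single reply. For a line-by-line exchange you can write to stdin and read from stdout incrementally. A sketch, with a small inline child standing in for ./script:

```python
import subprocess
import sys

# Stand-in for ./script: reads names line by line and greets each one.
child_src = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    print('Welcome %s :)' % line.strip())\n"
    "    sys.stdout.flush()\n"
)

proc = subprocess.Popen([sys.executable, '-c', child_src],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)
proc.stdin.write('navras\n')        # send one answer...
proc.stdin.flush()
first = proc.stdout.readline()      # ...and read the reply immediately
proc.stdin.write('megna\n')
proc.stdin.close()                  # EOF lets the child's loop finish
rest = proc.stdout.read()
proc.wait()
```

For anything more involved (prompts without trailing newlines, programs that buffer when not attached to a terminal), a tool like pexpect is usually the sturdier choice.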
