Fabric used as library not working - python

I cannot get fabric working when used as a library within my own python scripts. I made a very short example fabfile.py to demonstrate my problem:
#!/usr/bin/env python
from fabric.api import *

print("Hello")

def test():
    with settings(host_string='myIp', user="myUser", password="myPassword"):
        run("hostname")

if __name__ == '__main__':
    test()
Running fab works like a charm:
$ fab test
Hello
[myIp] run: hostname
[myIp] out: ThisHost
[myIp] out:
Done.
Disconnecting from myUser@myIp... done.
Ok, now, running the python script without fab seems to break somewhere:
$ python fabfile.py
Hello
[myIp] run: hostname
It immediately returns, so it does not even seem to wait for a response. Maybe there are errors, but I don't see how to output those.
I am running this script inside my vagrant virtual machine. As fab executes without any errors, I guess this should not be a problem!
UPDATE
The script seems to crash, as it does not execute anything after the first run(). local(), on the other hand, works!
We executed the script on a co-worker's laptop and it runs without any issues. I am using Python 2.6.5 on Ubuntu 10.04 with fabric 1.5.1, so I guess there is a problem with some of this! Is there any way to debug this properly?

I've experienced a similar issue: the fab command exited without an error, printing just a blank line at the first run()/sudo() command.
So I put the run() command into a try: except: block and printed the traceback:
from fabric.colors import green

def do_something():
    print(green("Executing on %(host)s as %(user)s" % env))
    try:
        run("uname -a")
    except:
        import traceback
        tb = traceback.format_exc()
        print(tb)
I saw that the script exited in fabric/network.py at line 419, where it caught an EOFError or TypeError. I modified that handler to:
...
except (EOFError, TypeError) as err:
    print err
    # Print a newline (in case user was sitting at prompt)
    print('')
    sys.exit(0)
...
which then printed out:
connect() got an unexpected keyword argument 'sock'
So I removed the sock keyword argument from the connect() call a few lines above, and it worked like a charm. I guess the installed paramiko version simply does not accept the sock keyword argument.
Versions:
Python 2.7.3
Fabric >= 1.5.3
paramiko 1.10.0
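If you suspect a version mismatch like this, a quick sanity check is to print the installed versions (a minimal sketch; recent Fabric 1.x and paramiko releases both expose __version__):

import fabric
import paramiko

# Compare these against a known-good combination before digging deeper
print(fabric.__version__)
print(paramiko.__version__)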

If you look at the fab command script itself, it looks like this:
sys.exit(
    load_entry_point('Fabric==1.4.3', 'console_scripts', 'fab')()
)
This means it looks up the console_scripts section in a file called entry_points.txt inside the Fabric package and calls the function registered there, in this case fabric.main:main.
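For reference, that entry_points.txt section looks something like this:

[console_scripts]
fab = fabric.main:main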
When we look at that function, we see argument parsing, the interesting fabfile importing, and then:
if fabfile:
    docstring, callables, default = load_fabfile(fabfile)
    state.commands.update(callables)

....

for name, args, kwargs, arg_hosts, arg_roles, arg_exclude_hosts in commands_to_run:
    execute(
        name,
        hosts=arg_hosts,
        roles=arg_roles,
        exclude_hosts=arg_exclude_hosts,
        *args, **kwargs
    )
With some experimentation we can come up with something like:
from fabric import state
from fabric.api import *
from fabric.tasks import execute
from fabric.network import disconnect_all

def test():
    with settings(host_string='host', user="user", password="password"):
        print run("hostname")

if __name__ == '__main__':
    state.commands.update({'test': test})
    execute("test")
    if state.output.status:
        print("\nDone.")
    disconnect_all()
This is obviously very incomplete, but perhaps all you need is to add the disconnect_all() call at the end of your script.
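Alternatively, a minimal sketch of library-style usage that avoids touching fabric.state (assuming Fabric 1.x, where execute() accepts a hosts keyword):

from fabric.api import env, run
from fabric.tasks import execute
from fabric.network import disconnect_all

def test():
    run("hostname")

if __name__ == '__main__':
    env.user = "myUser"
    env.password = "myPassword"
    try:
        # execute() handles the per-host connection setup for the task
        execute(test, hosts=["myIp"])
    finally:
        # Without this, the script can hang on exit waiting on open SSH connections
        disconnect_all()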

Related

subprocess popen not executing command, return code 1, network namespace

I'm trying to use the Popen command from the subprocess module to create a network namespace. There is a difference between the output I see from the interpreter and the output I see when my program is run through the GUI (via a lighttpd app).
Here's the simplified function:
import subprocess
from pyroute2 import netns

def addNamespace(namespace):
    setNs = "ip netns add %s" % (namespace)
    print(setNs)
    proc = subprocess.Popen(setNs.split(' '))
    ret = proc.communicate()
    print("Return Code:%d STDOUT/STDERR:%s" % (proc.returncode, str(ret)))
    print(netns.listnetns())
When I run addNamespace("b0ns") from the Python interpreter, I get:
ip netns add b0ns
Return Code:0 STDOUT/STDERR:(None, None)
['b0ns']
However, when I run the same function from the program, I get:
ip netns add b0ns
Return Code:1 STDOUT/STDERR:(None, None)
['']
The return code here is 1 and the namespace does not get added. What could be the cause of it not executing successfully? Root privileges? I tried adding sudo before ip netns add.., but that didn't work.
I tried passing the shell=True argument in the program and got a return code of 255.
I've also tried using the netns module to create a namespace directly with netns.create(), but I receive an OSError: mount rundir failed
System details: Python 2.7.5, CentOS 7.2
EDIT:
I added the function to a simple test.py file and ran it - it worked OK. There is only a problem when the function is invoked through my GUI-based application.

Since you have not added many details on how you are executing your program from the command line interface, I am making a guess.
Try using:
sudo python <programname.py>
ip netns add NetworkName requires root access,
so you have to give your program root privileges when executing it.
Why is it working from the interactive shell? Well, I guess you must have opened your interactive shell by typing something like the following:
sudo python
Cheers!!
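If you want the program to fail fast when it lacks privileges, one option is an explicit check at startup (a minimal sketch, assuming a POSIX system):

import os
import sys

# ip netns needs root; bail out early with a clear message if we don't have it
if os.geteuid() != 0:
    sys.exit("This program must be run as root (try: sudo python programname.py)")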

import subprocess
from time import sleep
from pyroute2 import netns

def addNamespace(namespace):
    setNs = "ip netns add %s" % (namespace)
    print(setNs)
    proc = subprocess.Popen(setNs.split(' '))
    ret = proc.communicate()
    print("Return Code:%d STDOUT/STDERR:%s" % (proc.returncode, str(ret)))
    print(netns.listnetns())

addNamespace('b2ns')

if __name__ == '__main__':
    addNamespace('b3ns')
    sleep(3)
Giving me:
pradeep#localVM:~$ sudo python add_network.py
ip netns add b2ns
Return Code:0 STDOUT/STDERR:(None, None)
['b2ns', 'b1ns', 'b0ns']
ip netns add b3ns
Return Code:0 STDOUT/STDERR:(None, None)
['b3ns', 'b2ns', 'b1ns', 'b0ns']
Note: 'b1ns' and 'b0ns' were added by previous runs.
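As an aside, the (None, None) in the output is expected: communicate() only returns the stream contents if you ask Popen to capture them. A minimal sketch to see why the command actually fails:

import subprocess

proc = subprocess.Popen(["ip", "netns", "add", "b0ns"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
# err now holds the actual error message, e.g. a permissions complaint
print("Return Code:%d STDOUT:%s STDERR:%s" % (proc.returncode, out, err))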

How can os.popen take an argument externally to run another script

I am trying to write a Python CLI program using the cmd module. My objective is to execute Python scripts that live in another folder from within my CLI program.
Below is the CLI program, which uses os.popen to execute the other script:
import cmd
import os
import sys

class demo(cmd.Cmd):
    def do_shell(self, line, args):
        """here is the function to execute the other script"""
        output = os.popen('xterm -hold -e python %s' % args).read()
        output(sys.argv[1])

    def do_quit(self, line):
        return True

if __name__ == '__main__':
    demo().cmdloop()
and here is the error:
(Cmd) shell demo-test.py
Traceback (most recent call last):
  File "bemo.py", line 18, in <module>
    demo().cmdloop()
  File "/usr/lib/python2.7/cmd.py", line 142, in cmdloop
    stop = self.onecmd(line)
  File "/usr/lib/python2.7/cmd.py", line 221, in onecmd
    return func(arg)
TypeError: do_shell() takes exactly 3 arguments (2 given)
Here are some links to other cmd CLI programs:
1. cmd – Create line-oriented command processors
2. Console built with Cmd object (Python recipe)
and some screenshots for more information. Please run the above code on your system.

As specified in the doc:
https://pymotw.com/2/cmd/index.html
do_shell is defined as such:
do_shell(self, args):
But you are defining it as
do_shell(self, line, args):
I think the intended use is to define it as specified in the documentation.
I ran your code, followed your example, and replicated your error. Then, as the documentation for do_shell specifies, I changed the method to the expected form:
do_shell(self, args):
From there, the sys module was missing, so you need to import that as well (unless you did not copy it from your source). After that, I got an index-out-of-range error, probably because extra parameters were expected to be passed.
Furthermore, because you are talking about Python scripts, I don't see the need for the extra commands you are adding, I simply changed the line to this:
output = os.popen('python %s' % args).read()
However, if there is a particular reason you need the xterm command, then you can probably put that back and it will work for your particular case.
I also did not see the use case for this:
output(sys.argv[1])
I commented that out. I ran your code, and everything worked. I created a test file that just did a simple print and it ran successfully.
So, the handler actually looks like this:
def do_shell(self, args):
    """here is the function to execute the other script"""
    output = os.popen('python %s' % args).read()
    print output
The full code should look like this:
import cmd
import os
import sys

class demo(cmd.Cmd):
    def do_shell(self, args):
        """here is the function to execute the other script"""
        output = os.popen('python %s' % args).read()
        print output

    def do_quit(self, line):
        return True

if __name__ == '__main__':
    demo().cmdloop()
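As a side note, subprocess is generally preferred over os.popen these days; a sketch of the same handler using it (demo2 is a hypothetical variant of the class above, assuming args is a single script path):

import cmd
import subprocess

class demo2(cmd.Cmd):  # hypothetical variant of the demo class above
    def do_shell(self, args):
        """Execute another Python script and print its output."""
        try:
            # check_output captures stdout and raises CalledProcessError on non-zero exit
            output = subprocess.check_output(['python', args])
            print output
        except subprocess.CalledProcessError as e:
            print "script failed with exit code %d" % e.returncode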

How can python wait for a batch SGE script to finish execution?

I have a problem I'd like you to help me to solve.
I am working in Python and I want to do the following:
call an SGE batch script on a server
see if it works correctly
do something
What I do now is approximately the following:
import subprocess

try:
    tmp = subprocess.call(qsub ....)
    if tmp != 0:
        error_handler_1()
    else:
        correct_routine()
except:
    error_handler_2()
My problem is that once the script is submitted to SGE, my Python script interprets the submission as a success and keeps working as if the job had already finished.
Do you have any suggestion about how could I make the python code wait for the actual processing result of the SGE script ?
Ah, by the way, I tried using qrsh, but I don't have permission to use it on the SGE cluster.
Thanks!

From your code it looks like you want the program to wait for the job to finish and return its exit code, right? If so, qsub's -sync option is likely what you want:
http://gridscheduler.sourceforge.net/htmlman/htmlman1/qsub.html
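A minimal sketch of that approach (assuming -sync y is enabled on your SGE installation; it makes qsub block until the job completes and exit with the job's status):

import subprocess

# With -sync y, qsub only returns once the job has finished,
# and its exit code reflects the job's outcome
tmp = subprocess.call(['qsub', '-sync', 'y', 'myjob.sh'])
if tmp != 0:
    print "job failed with exit code %d" % tmp  # error_handler_1() in the question
else:
    print "job succeeded"                       # correct_routine() in the question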
Additional answer, for easier processing:
Use the Python drmaa module (link), which allows more complete interaction with SGE.
Functioning code from its documentation is below (provided you put a sleeper.sh script in the same directory).
Please note that the -b n option is needed to execute a .sh script; otherwise qsub expects a binary by default, as explained here.
import drmaa
import os

def main():
    """Submit a job.

    Note, need file called sleeper.sh in current directory.
    """
    s = drmaa.Session()
    s.initialize()
    print 'Creating job template'
    jt = s.createJobTemplate()
    jt.remoteCommand = os.getcwd() + '/sleeper.sh'
    jt.args = ['42', 'Simon says:']
    jt.joinFiles = False
    jt.nativeSpecification = "-m abe -M mymail -q so-el6 -b n"
    jobid = s.runJob(jt)
    print 'Your job has been submitted with id ' + jobid
    retval = s.wait(jobid, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print('Job: {0} finished with status {1}'.format(retval.jobId, retval.hasExited))
    print 'Cleaning up'
    s.deleteJobTemplate(jt)
    s.exit()

if __name__ == '__main__':
    main()

Trouble with subprocess.check_output()

I'm having some strange issues using subprocess.check_output(). At first I was just using subprocess.call() and everything was working fine. However when I simply switch out call() for check_output(), I receive a strange error.
Before code (works fine):
def execute(hosts):
    ''' Using psexec, execute the batch script on the list of hosts '''
    successes = []
    wd = r'c:\\'
    file = r'c:\\script.exe'
    for host in hosts:
        res = subprocess.call(shlex.split(r'psexec \\\\%s -e -s -d -w %s %s' % (host, wd, file)))
        if res.... # Want to check the output here
            successes.append(host)
    return successes
After code (doesn't work):
def execute(hosts):
    ''' Using psexec, execute the batch script on the list of hosts '''
    successes = []
    wd = r'c:\\'
    file = r'c:\\script.exe'
    for host in hosts:
        res = subprocess.check_output(shlex.split(r'psexec \\\\%s -e -s -d -w %s %s' % (host, wd, file)))
        if res.... # Want to check the output here
            successes.append(host)
    return successes
This gives an error that I couldn't capture or redirect, because the program hangs at that point and I can't Ctrl-C out. Any ideas why this is happening? What's the difference between subprocess.call() and check_output() that could be causing this?
Here is the additional code including the multiprocessing portion:
PROCESSES = 2
host_sublists_execute = [.... list of hosts ...]

poolE = multiprocessing.Pool(processes=PROCESSES)
success_executions = poolE.map(execute, host_sublists_execute)
success_executions = [entry for sub in success_executions for entry in sub]
poolE.close()
poolE.join()
Thanks!

You are encountering Python Issue 9400.
There is a key distinction you have to understand about subprocess.call() vs subprocess.check_output(). subprocess.call() will execute the command you give it, then provide you with the return code. On the other hand, subprocess.check_output() returns the program's output to you in a string, but it tries to do you a favor and check the program's return code and will raise an exception (subprocess.CalledProcessError) if the program did not execute successfully (returned a non-zero return code).
When you call pool.map() with a multiprocessing pool, it will try to propagate exceptions in the subprocesses back to main and raise an exception there. Apparently there is an issue with how the subprocess.CalledProcessError exception class is defined, so the pickling fails when the multiprocessing library tries to propagate the exception back to main.
The program you're calling is returning a non-zero return code, which makes subprocess.check_output() raise an exception, and pool.map() can't handle it properly, so you get the TypeError that results from the failed attempt to retrieve the exception.
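You can reproduce the pickling half of the problem directly (a minimal sketch; on affected Python 2.x versions the round trip raises TypeError because CalledProcessError never stores its constructor arguments in args):

import pickle
from subprocess import CalledProcessError

err = CalledProcessError(1, 'somecommand')
data = pickle.dumps(err)  # pickling itself succeeds
# Unpickling calls CalledProcessError() with no arguments and raises TypeError
pickle.loads(data)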
As a side note, the definition of subprocess.CalledProcessError must be really screwed up, because if I open my 2.7.6 terminal, import subprocess, and manually raise the error, I still get the TypeError, so I don't think it's merely a pickling problem.
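A practical workaround is to catch the exception inside the worker so it never has to cross the pool boundary; a sketch along the lines of the question's execute() (the psexec command line is copied from the question):

import shlex
import subprocess

def execute(hosts):
    ''' Using psexec, execute the batch script on the list of hosts '''
    successes = []
    wd = r'c:\\'
    file = r'c:\\script.exe'
    for host in hosts:
        try:
            out = subprocess.check_output(
                shlex.split(r'psexec \\\\%s -e -s -d -w %s %s' % (host, wd, file)))
            successes.append(host)
        except subprocess.CalledProcessError:
            # Handle the failure here; the exception never reaches pool.map()
            pass
    return successes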

Exit code when python script has unhandled exception

I need a method to run a python script file, and if the script fails with an unhandled exception python should exit with a non-zero exit code. My first try was something like this:
import sys

if __name__ == '__main__':
    try:
        import <unknown script>
    except:
        sys.exit(-1)
But it breaks a lot of scripts, due to the __main__ guard they often use: under import, the guarded code never runs. Any suggestions for how to do this properly?

Python already does what you're asking:
$ python -c "raise RuntimeError()"
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError
$ echo $?
1
After some edits from the OP, perhaps you want:
import subprocess

proc = subprocess.Popen(['/usr/bin/python', 'script-name'])
proc.communicate()
if proc.returncode != 0:
    pass  # Run failure code
else:
    pass  # Run happy code
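For what it's worth, subprocess.call() wraps exactly this Popen/communicate/returncode pattern, so the same check can be written more compactly:

import subprocess
import sys

ret = subprocess.call(['/usr/bin/python', 'script-name'])
sys.exit(ret)  # propagate the child's exit code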
Correct me if I am confused here.
If you want to run a script within a script, then import isn't the way; you could use exec if you only care about catching exceptions:
namespace = {}
f = open("script.py", "r")
code = f.read()
try:
    exec code in namespace
except Exception:
    print "bad code"
You can also compile the code first with
compile(code, '<string>', 'exec')
if you are planning to execute the script more than once, and exec the resulting code object in the namespace. Or use subprocess as described above, if you need to grab the output generated by your script.
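A minimal sketch of that compile-once, run-many pattern (assuming a script.py in the current directory):

namespace = {}
with open("script.py", "r") as f:
    code = f.read()

# Compile once so repeated executions skip the parsing step
code_obj = compile(code, "script.py", "exec")

for i in range(3):
    try:
        exec code_obj in namespace
    except Exception:
        print "bad code"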
