Incremental stdout out of fabric - python

I'm new to fabric and want to run a long-running script on a remote computer. So far, I have been using something like this:
import fabric
c = fabric.Connection("192.168.8.16") # blocking
result = c.run("long-running-script-outputing-state-information-into-stdout.py")
Is there a way to read stdout asynchronously as it arrives, instead of using the result object, which can only be used after the command has finished?

If you want to use Fabric to do some work remotely, you first of all have to follow this structure to make a connection:
@task(hosts=["servername"])
def do_things(c):
    with Connection(host=host, user=user) as c:
        c.run("long-running-script-outputing-state-information-into-stdout.py")
This will print the whole output regardless of what you are doing!
You have to use with Connection(host=host, user=user) as c: to ensure that everything you run will run within that connection context!
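To address the incremental-reading part of the question: in Fabric 2.x, run() passes invoke's out_stream keyword through, so you can hand it any file-like object and Fabric will write each chunk of remote stdout to it as soon as it is read. A minimal sketch, assuming Fabric 2.x and a line-oriented script (the LineHandler class is illustrative, not part of Fabric):
import fabric

class LineHandler(object):
    """File-like object: Fabric calls write() with each chunk as it arrives."""
    def __init__(self):
        self._buf = ""

    def write(self, data):
        self._buf += data
        # react to every complete line as soon as it is available
        while "\n" in self._buf:
            line, self._buf = self._buf.split("\n", 1)
            print("state update: %s" % line)

    def flush(self):
        pass

c = fabric.Connection("192.168.8.16")
c.run("long-running-script-outputing-state-information-into-stdout.py",
      out_stream=LineHandler())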

Related

Instantiate object and run methods from command line py

Is there a way I can instantiate an object from the command line and then run its methods on that instance separately? An example of what I want to achieve is below:
import sys

class Calculate(object):
    def __init__(self):
        # sys.argv values are strings, so convert them to numbers
        self.xDim = int(sys.argv[1])
        self.yDim = int(sys.argv[2])

    def add(self):
        print(self.xDim + self.yDim)

    def multiply(self):
        print(self.xDim * self.yDim)

if __name__ == '__main__':
    calculate = Calculate()
    calculate.add()
    calculate.multiply()
The way I have written this code, if I do python calculate.py 2 2 it will run the two methods add and multiply, but I want to be able to create an instance and then do something like:
python calculate.py 2 2 calculateObject
and then call calculateObject.add or calculateObject.multiply.
This is needed for a project in PHP that needs to call the methods separately. I understand I can put add and multiply into different files, instantiate Calculate from within each, and then run them individually, but that would be doing too much. I can consider it if there is no better way.
It is impossible to do so directly. When you run python some_script.py in your terminal, command prompt, or through a screen, it searches for and launches the Python interpreter as a new process. After the interpreter finishes running the script, it terminates, returning an exit code, and all the variables are cleared from memory. When you run python some_script.py again, a new process is started with no knowledge of prior runs whatsoever. So if you want the same instance as before, in order to store variable information, you'll need to modify your script so that the Python process never terminates and keeps waiting for new instructions. For example, the Python shell waits for a command because it is listening to user input. Then, in your PHP code, instead of running python some_script.py to create a new Python process, you should invoke the previously started Python script, which is waiting for some kind of signal. I suggest you use network sockets because they are widely supported and not that complex to learn and use.
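A minimal sketch of that idea, assuming plain TCP on localhost (the port, the one-word text protocol, and the serve() helper are all arbitrary choices for illustration): the Calculate instance stays alive inside the server loop, and PHP, or anything else that can open a socket, connects and sends "add" or "multiply" as plain text:
import socket
import sys

class Calculate(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def add(self):
        return self.x + self.y
    def multiply(self):
        return self.x * self.y

def serve(x, y, port=9999):
    calc = Calculate(x, y)  # this single instance survives across requests
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        cmd = conn.recv(1024).decode().strip()  # e.g. "add" or "multiply"
        if cmd == "add":
            conn.sendall(str(calc.add()).encode())
        elif cmd == "multiply":
            conn.sendall(str(calc.multiply()).encode())
        else:
            conn.sendall(b"unknown command")
        conn.close()

if __name__ == '__main__':
    serve(int(sys.argv[1]), int(sys.argv[2]))
From PHP you would then use fsockopen (or any socket client) to talk to the already-running process instead of spawning a new interpreter each time.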

python pexpect connected to server how to get the exact output of commands

In Python I am connecting to a different server using
child = pexpect.spawn('ssh username@systemname')
Once it is connected, I would like to execute some other commands. How can I get the exact output of child commands like:
child.sendline("hostname")
Or let me know if there is another way to do this.
You may try the following; it will write everything pexpect reads to the named file:
child.logfile_read = open("<filename>", 'a')
child.sendline('hostname')
child.expect('<whatever you expect after the command executes>')
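If you want the output back in your script rather than in a file, the usual pexpect pattern is to expect the shell prompt and then read child.before, which holds everything printed since the previous match. A rough sketch, assuming a password login and a prompt ending in "$ " (adjust both to your remote shell):
import pexpect

child = pexpect.spawn('ssh username@systemname')
child.expect('[Pp]assword:')
child.sendline('your-password')   # placeholder credentials
child.expect(r'\$ ')              # wait for the shell prompt
child.sendline('hostname')
child.expect(r'\$ ')              # wait for the prompt to come back
output = child.before.decode()    # everything printed before the prompt
# the first line is the echoed command itself; the rest is the real output
print(output.splitlines()[1:])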

get a return value from a Daemon Process in Python

I have written a Python daemon process that can be started and stopped using the following commands
/usr/local/bin/daemon.py start
/usr/local/bin/daemon.py stop
I can achieve the same results by calling these commands from a Python script
os.system('/usr/local/bin/daemon.py start')
os.system('/usr/local/bin/daemon.py stop')
This works perfectly fine, but now I wish to add functionality to the daemon process such that when I run the command
os.system('/usr/local/bin/daemon.py foo')
the daemon returns a Python object. So, something like:
foobar = os.system('/usr/local/bin/daemon.py foo')
Just to be clear, I have all the logic ready in the daemon to return a Python object; I just can't figure out how to pass this object to the calling Python script. Is there some way?
Don't you mean you want to implement simple serialization and deserialization?
In that case I'd propose looking at pickle (https://docs.python.org/2/library/pickle.html) to transform your data into a generic text format on the daemon side and transform it back into Python objects on the client side.
I think marshalling is what you need: https://docs.python.org/2.7/library/marshal.html & https://docs.python.org/2/library/pickle.html
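A minimal sketch of the pickle approach. Note that os.system() only returns the exit status, never the output, so the caller should use subprocess instead; the handle_foo() function on the daemon side is hypothetical and stands in for wherever your daemon builds the object:
# daemon side (inside /usr/local/bin/daemon.py):
import pickle
import sys

def handle_foo():
    result = {"status": "ok", "value": 42}  # whatever object you built
    sys.stdout.write(pickle.dumps(result))  # Python 2; use sys.stdout.buffer on Python 3

# caller side:
import pickle
import subprocess

raw = subprocess.check_output(['/usr/local/bin/daemon.py', 'foo'])
foobar = pickle.loads(raw)  # back to a real Python object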

Python script run through supervisord output buffering

I have a script which makes a request to an API, gets data, and creates a pandas DataFrame; if a certain condition is fulfilled, it sends another request to the API and prints the result.
A simple version would look like this:
request = api_request(data)
table = json_normalize(request)
table = table[table['field'] > 1]
if table.empty:
    pass
else:
    var1, var2, var3 = table[['var1', 'var2', 'var3']]
    another_request = api_request2(var1, var2, var3)
    print var1, var2, var3
threading.Timer(1, main).start()
It all works fine, but when I run it as a process under supervisord, it stops logging and sending requests to the API after about 12 hours. It is clearly an output-buffering problem, because if I restart the process it starts working again.
I have already tried all the usual solutions for output buffering in Python:
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
sys.stdout.flush()
Running script with -u option
Running script through stdbuf
My feeling is that it has something to do with supervisord's buffer rather than with the script itself, but I can't figure out how to turn off supervisord's buffering.
Can you advise me please?
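One thing to check, as a sketch rather than a definitive fix: you can make sure the Python process itself never buffers by setting PYTHONUNBUFFERED in the supervisord program section (the program name and paths below are placeholders; environment=, redirect_stderr= and stdout_logfile= are standard supervisord options):
[program:myscript]
command=python -u /path/to/script.py
environment=PYTHONUNBUFFERED=1
redirect_stderr=true
stdout_logfile=/var/log/myscript.out.log
If the process still stalls after 12 hours, it may be worth logging timestamps around the API call itself, since a blocked request with no timeout looks identical to a buffering problem from the outside.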

Python script running on one system performing its work on another system

I am trying to write a Python script which, when executed, will open a Maya file on another computer and create its playblast there. Is this possible? I would also like to add that the systems I use are all Windows. Thanks
Yes, it is possible; I do this all the time on several computers. First you need to access the computer (this has been answered). Then call Maya from within your shell as follows:
maya -command myblast -file filetoblast.ma
You will need myblast.mel somewhere in your script path.
myblast.mel:
global proc myblast(){
    playblast -widthHeight 1920 1080 -percent 100
              -fmt "movie" -v 0 -f (`file -q -sn` + ".avi");
    evalDeferred("quit -f");
}
Configure what you need in this file, such as shading options. Please note that launching the Maya GUI uses up one license, and playblast needs that GUI (you could shave off some seconds by not loading the default GUI).
In order to execute something on a remote computer, you've got to have some sort of service running there.
If it is a Linux machine, you can simply connect via ssh and run the commands. In Python you can do that using paramiko:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys
ssh.connect('127.0.0.1', username='foo', password='bar')
stdin, stdout, stderr = ssh.exec_command("echo hello")
print(stdout.read())  # the command's output
Otherwise, you can use a Python service, but you'll have to run it beforehand.
You can use Celery as previously mentioned, or ZeroMQ, or more simply RPyC:
simply run the rpyc_classic.py script on the target machine, and then you can run Python code on it:
import rpyc

conn = rpyc.classic.connect("my_remote_server")
conn.modules.os.system('echo foo')
Alternatively, you can create a custom RPyC service (see documentation).
A final option is using an HTTP server, as previously suggested. This may be the easiest if you don't want to start installing everything. You can use Bottle, which is a simple HTTP framework in Python:
Server-side:
from bottle import route, run

@route('/run_maya')
def index():
    # Do whatever
    return 'kay'

run(host='localhost', port=8080)
Client-side:
import requests
requests.get('http://remote_server:8080/run_maya')
One last option for cheap RPC is to run maya.standalone from a Maya Python interpreter ("mayapy", usually installed next to the Maya binary). The standalone runs inside a regular Python script, so it can use any of the remote-procedure tricks in KimiNewt's answer.
You can also create your own mini-server using basic Python. The server could use the Maya command port, or be a WSGI server using the built-in wsgiref module. Here is an example which uses wsgiref running inside a standalone to control a Maya remotely via HTTP.
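A rough sketch of that wsgiref idea, assuming Maya's Python 2 standalone; the handler body is a placeholder for whatever scene work you need (and note, per the answer above, that playblast itself still wants a GUI session):
from wsgiref.simple_server import make_server

import maya.standalone
maya.standalone.initialize(name='python')  # boot Maya without a GUI
import maya.cmds as cmds

def app(environ, start_response):
    # placeholder: open the scene named by the query string and do your work
    scene = environ.get('QUERY_STRING', '')
    if scene:
        cmds.file(scene, open=True, force=True)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['done\n']

make_server('', 8000, app).serve_forever()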
We've been dealing with the same issue at work. We're using Celery as the task manager and have code like this inside the Celery task for playblasting on the worker machines. This is done on Windows and uses Python.
import os
import subprocess
import tempfile
import textwrap

MAYA_EXE = r"C:\Program Files\Autodesk\Maya2016\bin\maya.exe"
temp_dir = tempfile.gettempdir()  # where the throwaway script is written

def function_name():
    # the python code you want to execute in Maya
    pycmd = textwrap.dedent('''
        import time
        import pymel.core as pm
        # Your code here to load your scene and playblast
        # new scene to remove quicktimeShim which sometimes fails to quit
        # with Maya and prevents the subprocess from exiting
        pm.newFile(force=True)
        # wait a second to make sure quicktimeShim is gone
        time.sleep(1)
        pm.evalDeferred("pm.mel.quit('-f')")
    ''')
    # write the code into a temporary file
    tempscript = tempfile.NamedTemporaryFile(delete=False, dir=temp_dir)
    tempscript.write(pycmd)
    tempscript.close()
    # build a subprocess command
    melcmd = 'python "execfile(\'%s\')";' % tempscript.name.replace('\\', '/')
    cmd = [MAYA_EXE, '-command', melcmd]
    # launch the subprocess
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait()
    # when the process is done, remove the temporary script
    try:
        os.remove(tempscript.name)
    except WindowsError:
        pass
