Instantiate an object and run its methods from the command line - python

Is there a way I can instantiate an object from the command line and then run its methods separately? An example of what I want to achieve is below:
import sys

class Calculate(object):
    def __init__(self):
        # command-line arguments arrive as strings, so convert them to numbers
        self.xDim = int(sys.argv[1])
        self.yDim = int(sys.argv[2])

    def add(self):
        print(self.xDim + self.yDim)

    def multiply(self):
        print(self.xDim * self.yDim)

if __name__ == '__main__':
    calculate = Calculate()
    calculate.add()
    calculate.multiply()
The way I have written this code, if I do: python calculate.py 2 2
It will run the two methods add and multiply, but I want to be able to create an instance and then do something like:
python calculate.py 2 2 calculateObject
then do calculateObject.add or calculateObject.multiply
This is needed for a project in php that needs to call the methods separately. I understand I could put add and multiply into different files, instantiate Calculate from within each, and then run them individually, but that would be a lot of extra work. I can consider it if there is no better way.

It is not possible to do this directly. When you run python some_script.py in your terminal, command prompt, or through a screen, it searches for and launches the Python interpreter as a new process. After the interpreter finishes running the script, it terminates and returns an exit code, and all of its variables are cleared from memory. When you run python some_script.py again, a new process is started with no knowledge of prior runs whatsoever.

So if you want the same instance as before, in order to keep variable state, you will need to modify your script so that the Python process never terminates and keeps waiting for new instructions. For example, the Python shell keeps waiting for a command because it is listening for user input. Then, in your PHP code, instead of running python some_script.py to create a new Python process, you would send requests to the already-running Python script, which is waiting for some kind of signal. I suggest using network sockets for this, because they are well supported and not that complex to learn and use.
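A minimal sketch of that idea (assuming a plain TCP socket on localhost; the port, the tiny wire protocol, and this variant of the Calculate class are illustrative, not taken from the question):

# server.py - a long-lived Python process that keeps one Calculate instance alive
import socket

class Calculate(object):
    def __init__(self, x, y):
        self.xDim = int(x)
        self.yDim = int(y)
    def add(self):
        return self.xDim + self.yDim
    def multiply(self):
        return self.xDim * self.yDim

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 9999))
server.listen(1)

calc = None
while True:
    conn, _ = server.accept()
    command = conn.recv(1024).decode().strip()   # e.g. "init 2 2", "add", "multiply"
    if command.startswith('init'):
        _, x, y = command.split()
        calc = Calculate(x, y)
        reply = 'ok'
    elif calc is not None and command in ('add', 'multiply'):
        reply = str(getattr(calc, command)())
    else:
        reply = 'error: no instance yet'
    conn.sendall(reply.encode())
    conn.close()

The PHP side would then open a socket to 127.0.0.1:9999 and send "init 2 2" once, followed by "add" or "multiply" whenever it needs a result, instead of launching a new interpreter each time.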

Related

Incremental stdout out of fabric

I'm new to Fabric and want to run a long-running script on a remote computer. So far, I have been using something like this:
import fabric
c = fabric.Connection("192.168.8.16") # blocking
result = c.run("long-running-script-outputing-state-information-into-stdout.py")
Is there a way to read stdout as it comes asynchronously instead of using the 'result' object that can be used only after the command has finished?
If you want to use Fabric to do things remotely, you first of all have to follow this structure to make a connection:
from fabric import Connection, task

@task(hosts=["servername"])
def do_things(c):
    with Connection(host="servername", user="username") as c:
        c.run("long-running-script-outputing-state-information-into-stdout.py")
This will print all of the output, regardless of what you are doing!
You have to use with Connection(host=..., user=...) as c: to ensure that everything you run is executed within that connection context!
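If you need to process each line as it arrives rather than waiting for the result object, one option is to hand run() a custom output stream. A sketch (assuming Fabric 2.x, where Connection.run() forwards Invoke's out_stream keyword; this is not part of the answer above, so check the documentation of your version):

import fabric

class LineHandler(object):
    """File-like sink: Fabric writes chunks of stdout here as they arrive."""
    def write(self, data):
        # chunks are not guaranteed to end on line boundaries; buffer if you
        # need complete lines
        for piece in data.splitlines():
            if piece:
                print("got:", piece)   # replace with your own processing
    def flush(self):
        pass

c = fabric.Connection("192.168.8.16")
result = c.run("long-running-script-outputing-state-information-into-stdout.py",
               out_stream=LineHandler())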

Print output from python to C# application

I have made a C# app which calls a python script.
The C# app uses a Process object to call the python script.
I have also redirected the sub-process standard output so I can process the output from the python script.
But the problem is:
The output (via the print function) from python always arrives all at once, when the script terminates.
I want the output to arrive in real time while the script is running.
I can say I have tried almost every method I could find on Google, like flushing sys.stdout, redirecting stdout in python, event-driven message receiving in C#, or just using a while loop to wait for messages, etc.
I am really wondering how IDEs like PyCharm manage this: they run the python script themselves, yet they can print the output line by line without hacking the original python script. How do they do that?
The python version is 2.7.
Hope to get some advice.
Thank you!
I just used a very crude but working method to resolve it: using a thread to periodically flush sys.stdout. The code looks like this:
import sys
import threading
import time

run_thread = False

def flush_print():
    # push whatever is sitting in the stdout buffer through the pipe every second
    while run_thread:
        sys.stdout.flush()
        time.sleep(1)
In the main function:
if __name__ == '__main__':
    thread = threading.Thread(target=flush_print)
    run_thread = True
    thread.start()

    # my big function with some prints; it blocks until completed
    run_thread = False
    thread.join()
Admittedly this is ugly, but I had no better way to get the work done.
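A less intrusive alternative (a sketch, not from the answer above) is to make stdout unbuffered, either by having the C# side launch the script with python -u, or by reopening stdout without buffering inside the Python 2.7 script:

# Python 2.7: reopen stdout with buffering disabled, so every print reaches
# the pipe immediately and the C# side receives lines as they are produced.
import os
import sys

sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)   # 0 = unbuffered

print 'this line arrives at the C# process right away'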

Binding / piping output of run() into a function in python3 (linux)

I am trying to use the output of an external program run using the run() function.
This program regularly emits a row of data which I need to use in my script.
I have found the subprocess library and used its run()/check_output().
Example:
def usual_process():
    # some code here
    for i in subprocess.check_output(['foo', '$$']):
        some_function(i)
Now assume that foo is already on the PATH and that it outputs a string at semi-random intervals.
I want my program to go about its own business, and run some_function(i) every time foo sends a new row to its output.
This boils down to two problems: piping the output into a for loop, and running this as a background subprocess.
Thank you
Update: I have managed to get the foo output into some_function using this:
import os

with os.popen('foo') as foos_output:
    for line in foos_output:
        some_function(line)
According to this, os.popen is to be deprecated, but I have yet to figure out how to do the same piping with the subprocess module.
Now I just need to figure out how to run this function in the background.
So, I have solved it.
The first step was to start the external script:
from subprocess import Popen, PIPE

proc = Popen('./cisla.sh', stdout=PIPE, bufsize=1)
Next, I wrote a function that reads the pipe, and started it in the background:
import threading

def foo(proc, **args):
    for i in proc.stdout:
        '''Do all I want to do with each line'''

threading.Thread(target=foo, args=(proc,)).start()
Limitations are:
If you wish to catch the script's errors, you would have to pipe stderr in as well.
Second, it leaves a zombie if you kill the parent, so don't forget to kill the child in your signal handling.
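Putting those pieces together, here is a minimal self-contained sketch of the same approach (./cisla.sh comes from the answer above; the function names and the signal handling are illustrative):

import signal
import threading
from subprocess import Popen, PIPE

def some_function(line):
    print(line)                      # replace with real processing

def consume(proc):
    # read the child's stdout line by line as it is produced
    for raw in iter(proc.stdout.readline, b''):
        some_function(raw.decode('utf-8', 'replace').rstrip())

proc = Popen('./cisla.sh', stdout=PIPE)
reader = threading.Thread(target=consume, args=(proc,))
reader.daemon = True                 # do not keep the program alive just for the reader
reader.start()

# kill the child when we are told to terminate, to avoid leaving a zombie
signal.signal(signal.SIGTERM, lambda signum, frame: proc.terminate())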

get a return value from a Daemon Process in Python

I have written a python daemon process that can be started and stopped using the following commands
/usr/local/bin/daemon.py start
/usr/local/bin/daemon.py stop
I can achieve the same results by calling these commands from a python script
os.system('/usr/local/bin/daemon.py start')
os.system('/usr/local/bin/daemon.py stop')
This works perfectly fine, but now I wish to add functionality to the daemon process such that when I run the command
os.system('/usr/local/bin/daemon.py foo')
the daemon returns a Python object. So, something like:
foobar = os.system('/usr/local/bin/daemon.py foo')
Just to be clear: I have all the logic ready in the daemon to return a Python object, I just can't figure out how to pass this object to the calling python script. Is there some way?
Don't you mean you want to implement simple serialization and deserialization?
In that case I'd propose looking at pickle (https://docs.python.org/2/library/pickle.html) to transform your data into a serialized format on the daemon side and transform it back into a Python object on the client side.
I think marshalling is what you need: https://docs.python.org/2.7/library/marshal.html & https://docs.python.org/2/library/pickle.html
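A sketch of the pickle route (assuming the daemon.py foo command can write the pickled object to its stdout; the details are illustrative, not part of the answers above):

# caller side: run the control command, capture its stdout (os.system() only
# gives you the exit status), and unpickle the object the daemon printed.
import pickle
import subprocess

raw = subprocess.check_output(['/usr/local/bin/daemon.py', 'foo'])
foobar = pickle.loads(raw)

# daemon side ("foo" command): the matching write is simply
#   sys.stdout.write(pickle.dumps(result_object))          # Python 2
#   sys.stdout.buffer.write(pickle.dumps(result_object))   # Python 3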

How to structure code that distributes jobs to threads/nodes in Python?

I have python code that takes a bunch of tasks and distributes them to either different threads or different nodes on a cluster. I always end up writing a main script, driver.py, that takes two command line arguments: --run-all and --run-task. The first is just a wrapper that iterates through all the tasks and then calls driver.py --run-task with each task passed as an argument. Example:
== driver.py ==
import os
from optparse import OptionParser

# Determine the current script
DRIVER = os.path.abspath(__file__)

parser = OptionParser()
parser.add_option("--run-all")
parser.add_option("--run-task")
(opts, args) = parser.parse_args()

if opts.run_all is not None:
    # Run all tasks
    for task in opts.run_all.split(","):
        # Call driver.py again with a specific task
        cmd = "python %s --run-task %s" % (DRIVER, task)
        # Execute on the system
        distribute_cmd(cmd)
elif opts.run_task is not None:
    # Run an individual task
    pass  # code here for processing a task...
The user would then call:
$ driver.py --run-all task1,task2,task3,task4
And each task would get distributed.
The function distribute_cmd takes a shell executable command and sends it in a system-specific way to either a node or a thread. The reason driver.py has to find its own name and call itself is because distribute_cmd needs an executable shell command; it cannot take, for example, a function name.
This consideration led me to this design, of a driver script having two modes and having to call itself. This has two complications: (1) the script has to find out its own path via __file__, and (2) when making this into a Python package, it's unclear where driver.py should go. It's meant to be an executable script, but if I put it in setup.py's scripts=, then I will have to find out where the scripts live (see correct way to find scripts directory from setup.py in Python distutils?). This does not seem to be a good solution.
What's an alternative design to this? Keep in mind that the distribution of tasks has to result in an executable command that can be passed as a string to distribute_cmd. Thanks.
What you are looking for is a library that already does exactly what you need, e.g. Fabric or Celery.
If you were not using nodes, I would suggest using multiprocessing.
This is a slightly similar question to this one.
To be able to execute remotely, you either need:
ssh access to the box; in that case you can use Fabric to send your commands.
a server, SocketServer, tcp server, or anything that will accept connections.
an agent, or client, that will wait for data. If you are using an agent, you may as well use a broker for your messages. Celery does some of the plumbing for you: one end puts messages on the queue while the other end gets messages from the queue. If the message is a command to execute, then the agent can make an os.system() call, or call subprocess.Popen().
celery example:
import os
from celery import Celery

celery = Celery('tasks', broker='amqp://guest@localhost//')

@celery.task
def run_command(command):
    return os.system(command)
You will then need a worker that binds to the queue and waits for tasks to execute. More info in the documentation.
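For reference, assuming the snippet above lives in tasks.py, starting the worker and queuing a command looks something like this (the --loglevel flag is optional):

$ celery -A tasks worker --loglevel=info

from tasks import run_command
run_command.delay('ls -lh')   # queued; the worker picks it up and executes it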
fabric example:
the code:
from fabric.api import run

def exec_remotely(command):
    run(command)
the invocation:
$ fab exec_remotely:command='ls -lh'
More info in the documentation.
batch system case:
To go back to the question...
distribute_cmd is something that would call bsub somescript.sh
You need to find the file only because you are going to re-execute the same script with other parameters.
Because of the above, you might have a problem providing a correct distutils script.
Let's question this design.
Why do you need to use the same script?
Can your driver write scripts then call bsub?
Can you use temporary files?
Do all the nodes actually share a filesystem?
How do you know the file is going to exist on the node?
example:
TASK_CODE = {
    'TASK1': '''#!/usr/bin/env python
#... actual code for task1 goes here ...
''',
    'TASK2': '''#!/usr/bin/env python
#... actual code for task2 goes here ...
'''}

# driver portion
(opts, args) = parser.parse_args()

if opts.run_all is not None:
    for task in opts.run_all.split(","):
        task_path = '/tmp/taskfile_%s' % task
        with open(task_path, 'w') as task_file:
            task_file.write(TASK_CODE[task])
        # note: should probably do better error handling.
        distribute_cmd(task_path)
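One practical note (an assumption, not part of the answer above): because the generated file relies on its shebang line, it may need to be marked executable before distribute_cmd hands it to something like bsub:

import os
import stat

# make the freshly written task file executable so bsub/the shell can run it
os.chmod(task_path, os.stat(task_path).st_mode | stat.S_IEXEC)
distribute_cmd(task_path)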
