First of all, I don't know the best way to achieve the functionality I want in the end.
My code will do the following:
@celery.task
def updateServerByID(serverID):
    # run the update task
    os.system("sample command to update server by id...")
    # check if the console output contains "Success!"; if yes, end the job with a "return" statement
    # return
These are the two ways I can think of to get this working:
Redirecting the output of the console command to a file (and using Python to "monitor" this file for changes, reading its content each time it changes)
Checking the output of the console command directly for "Success!"
All in all I think way 2 would be the most efficient, but how do I read the whole console output in Python? And is there any way to prevent the Celery task itself from printing this content?
This has nothing to do with Celery; the real question is how to capture the output of os.system.
Just capture the output inside celery.task.updateServerByID, following [Python: How to get stdout after running os.system?](python-how-to-get-stdout-after-running-os-system).
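As a rough illustration, here is a minimal sketch of the task with the output captured via subprocess instead of os.system, assuming the celery decorator from the question; the command string and the "Success!" check are placeholders taken from the question, not a real update command:

import subprocess

@celery.task
def updateServerByID(serverID):
    # capture the command's output instead of letting it print to the console
    command = "sample command to update server by id..."  # placeholder from the question
    # note: check_output raises CalledProcessError if the command exits non-zero
    output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT)
    if "Success!" in output:
        return
    # otherwise handle the failure here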
Related
I've got a Python script that uses os.system to run shell commands. The output from those commands is echoed to the screen; I like this and need to keep it. I would also like my script to be able to take action based on the contents of the output from the system call. How can I do this?
In my specific case, I'm calling os.system("svn update"). I need the output to go to the screen, and (in case of conflicts, for example) the user needs to be able to interact with svn. I would also like the script to take action based on the output, for example to trigger a build if it sees that a build script was updated.
I'd prefer not to handle the I/O myself (that would seem unnecessarily complex) and I'd rather not send the output to a temporary file that I have to clean up later (though I will if I must).
Edit:
Here's my test script:
#!/usr/bin/python -t
import subprocess
output = subprocess.check_output(["echo","one"])
print "python:", output
output = subprocess.check_output(["echo", "two"], shell=True)
print "python:", output
output = subprocess.check_output("echo three", shell=True)
print "python:", output
and here's its output:
$ ./pytest
python: one
python:
python: three
(There's an extra blank line at the end that the code block doesn't show.) I expect something more like:
$ ./pytest
one
python: one
two
python:
three
python: three
To run a process, I would look into subprocess.check_output. In this case, something like:
import subprocess

output = subprocess.check_output(['svn', 'update'])
print output
This only works on Python 2.7 or newer, though. If you want a version that works with older versions of Python:
p = subprocess.Popen(['svn', 'update'], stdout=subprocess.PIPE)
output, stderr = p.communicate()  # stderr is None here, since only stdout was piped
print output
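Note that check_output only captures the output; it does not echo it to the screen while the command runs, and neither variant lets the user answer svn's interactive prompts. If you mainly want the output both on the screen and available to the script, one possibility is a sketch like the following, which reads the pipe line by line, printing and collecting as it goes (the build-script name at the end is purely hypothetical):

import subprocess

p = subprocess.Popen(['svn', 'update'], stdout=subprocess.PIPE)
captured = []
for line in iter(p.stdout.readline, ''):
    print line,            # echo to the screen as it arrives
    captured.append(line)  # keep a copy for the script to inspect
p.wait()

output = ''.join(captured)
if 'build.xml' in output:  # hypothetical name of the build script
    pass  # trigger the build here

As an aside on the test script above: with shell=True, only the first element of the list is treated as the command; the remaining elements become arguments to the shell itself, which is why the "two" case prints an empty line.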
I am trying to use the output of an external program run using the run function.
This program regularly emits a row of data which I need to use in my script.
I have found the subprocess library and used its run()/check_output().
Example:
def usual_process():
    # some code here
    for i in subprocess.check_output(['foo', '$$']):
        some_function(i)
Now assume that foo is already on the PATH and that it outputs a string at semi-random intervals.
I want the program to do its own things, and run some_function(i) every time foo sends a new row to its output.
Which boils down to two problems: piping the output into a for loop, and running this as a background subprocess.
Thank you
Update: I have managed to get foo's output into some_function using this:
with os.popen('foo') as foos_output:
    for line in foos_output:
        some_function(line)
According to this, os.popen is to be deprecated, but I have yet to figure out how to pipe processes internally in Python.
Now I just need to figure out how to run this function in the background.
So, I have solved it.
The first step was to start the external script:
from subprocess import Popen, PIPE
proc = Popen('./cisla.sh', stdout=PIPE, bufsize=1)
Next I started a function that reads from it, passing it the pipe:
import threading

def foo(proc, **args):
    for i in proc.stdout:
        '''Do all I want to do with each line'''

# start it in the background, e.g. in a thread, rather than calling it directly
threading.Thread(target=foo, args=(proc,)).start()
Limitations are:
If you wish to catch the script's errors, you will have to pipe stderr in as well.
Second, it leaves a zombie if you kill the parent, so don't forget to kill the child in your signal handling.
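Putting those pieces together, here is a rough sketch of the whole pattern under the same assumptions (./cisla.sh and some_function come from the posts above; the atexit cleanup is just one way to avoid leaving the child behind):

import atexit
import threading
from subprocess import Popen, PIPE, STDOUT

def some_function(line):
    print line,   # placeholder for whatever should happen with each row

proc = Popen('./cisla.sh', stdout=PIPE, stderr=STDOUT, bufsize=1)  # stderr piped in as well
atexit.register(proc.terminate)   # kill the child when the parent exits normally

def reader(proc):
    for line in proc.stdout:
        some_function(line)

threading.Thread(target=reader, args=(proc,)).start()
# the main program can keep doing its own things here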
I have a script which makes a request to an API to get data and creates a pandas DataFrame; if a certain condition is fulfilled, it sends another request to the API and prints the result.
A simplified version looks like this:
request = api_request(data)
table = json_normalize(request)
table = table[table['field'] > 1]
if table.empty:
    pass
else:
    var1, var2, var3 = table[['var1', 'var2', 'var3']]
    another_request = api_request2(var1, var2, var3)
    print var1, var2, var3
threading.Timer(1, main).start()
It all works fine, but when I run it as a process under supervisord it stops logging and sending requests to the API after about 12 hours. It is clearly a problem of output buffering, because if I restart the process it starts working again.
I have already tried all the usual solutions for output buffering in Python (a sketch of how they fit into the script follows the list):
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
sys.stdout.flush()
Running the script with the -u option
Running the script through stdbuf
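For reference, a minimal sketch of how the first two of those attempts sit in the script, with main standing in for the simplified loop above; this only restates what was already tried, it is not a fix:

import os
import sys
import threading

# reopen stdout unbuffered, as in the first attempt above
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

def main():
    # ... the request / DataFrame / print logic from the simplified version ...
    sys.stdout.flush()                 # force the output out after every cycle
    threading.Timer(1, main).start()   # reschedule, as in the question

main()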
My feeling is that it has something to do with supervisord's buffering rather than with the script itself, but I can't figure out how to turn that buffering off.
Can you advise me please?
Hello, I'm really new to the Python programming language and I have encountered a problem writing a script. I want to save the stdout output I get when I run a tcpdump command into a variable in my Python script, but I want the tcpdump command to run continuously, because I want to gather the lengths of all transferred packets that match the filter I wrote.
I tried:
fin, fout = os.popen4(comand)
result = fout.read()
return result
But it just hangs.
I'm guessing that it hangs because fout.read() doesn't return until the child process exits and closes its output. You should be using subprocess.Popen instead.
import subprocess
import shlex  # just so you don't need to break "comand" into a list yourself ;)

p = subprocess.Popen(shlex.split(comand), stdout=subprocess.PIPE)
first_line_of_output = p.stdout.readline()
second_line_of_output = p.stdout.readline()
...
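Since tcpdump keeps running, you would typically read the pipe in a loop rather than one readline at a time. A rough sketch along those lines; the command is a placeholder and the "length" parsing is illustrative only, not a general tcpdump parser:

import re
import shlex
import subprocess

comand = "tcpdump -l -n"   # placeholder; -l asks tcpdump for line-buffered output, add your filter here
p = subprocess.Popen(shlex.split(comand), stdout=subprocess.PIPE)

total_length = 0
for line in iter(p.stdout.readline, ''):
    # most tcpdump lines contain a "length N" field
    m = re.search(r'length (\d+)', line)
    if m:
        total_length += int(m.group(1))
        print "running total of packet lengths:", total_length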