Run python function from command line or subprocess popen

I wanted to run a test function called test.counttest() that counts up to 10:
import time

def counttest():
    x = 0
    for x in range(0, 10):
        x = x + 1
        print("Number: " + str(x))
        time.sleep(1)
I want to call just the function from the command line OR from subprocess.Popen. Not write the function, just call it. Everything I have googled keeps bringing me back to how I can write a function from the command line, which is NOT what I need.
I need to specifically run a function from subprocess.Popen so I can get the stdout in a for loop that can then be sent to a Flask socket. (This is required.)
Main point: how can I call (not write) a function from the command line or from subprocess?
Not this:
python -c 'import whatever then add code'
But something like this:
python "test.counttest()"
or like this:
subprocess.Popen(['python', ".\test.counttest()"],stdout=subprocess.PIPE, bufsize=1,universal_newlines=True)
EDIT:
This is for @Andrew Holmgren. Consider the following script:
def echo(ws):
    data = ws.receive()
    with subprocess.Popen(['powershell', ".\pingtest.ps1"],
                          stdout=subprocess.PIPE, bufsize=1,
                          universal_newlines=True) as process:
        for line in process.stdout:
            line = line.rstrip()
            print(line)
            try:
                ws.send(line + "\n")
            except:
                pass
This works perfectly for what I need: it takes the script's stdout and sends it to the ws.send() function, which is a websocket.
However, I need this same concept for a function instead. The only way I know to get the stdout easily is with subprocess.Popen, but if there is another way, let me know. This is why I am trying to make a hackjob way of running a function through the subprocess module.
The question of running a Python function from the command line or subprocess.Popen is relevant because if I can get a function to run from subprocess, then I know how to get the stdout for a websocket.

Actually, you have quite a lot of questions inside this one.
How can I send the output of a function line by line to another one (and/or a websocket)? Just avoid writing to stdout and communicate directly. yield (or other generator creation methods) is intended exactly for that.
import time

def counttest():
    for i in range(10):
        yield f'Item {i}'
        time.sleep(1)

def echo(ws):
    # data = ws.receive()
    for row in counttest():
        ws.send(row)
How do you call a function func_name defined in a file (suppose it's test.py) from the command line? From the directory containing test.py, run
$ python -c 'from test import func_name; func_name()'
How to read from sys.stdout? The easiest way is to replace it with an io.StringIO and restore things afterwards.
from contextlib import redirect_stdout
import io

def echo(ws):
    f = io.StringIO()
    with redirect_stdout(f):
        counttest()
    output = f.getvalue()
    ws.send(output)
It will return only after counttest() finishes, so you cannot monitor printed items in real time.
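If real-time forwarding is needed, a possible workaround is to redirect stdout to a small file-like object that pushes each completed line to the websocket as it is written. A minimal sketch, assuming the original, printing version of counttest and the same ws object as above:
from contextlib import redirect_stdout

class WebsocketWriter:
    """File-like object that forwards each completed line to a websocket."""
    def __init__(self, ws):
        self.ws = ws
        self.buffer = ""

    def write(self, text):
        self.buffer += text
        # Push every complete line as soon as it appears
        while "\n" in self.buffer:
            line, self.buffer = self.buffer.split("\n", 1)
            self.ws.send(line + "\n")

    def flush(self):
        pass  # needed so the object quacks like a file

def echo(ws):
    with redirect_stdout(WebsocketWriter(ws)):
        counttest()  # print() calls are forwarded as they happen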
Regarding
I need the stdout
I can say that I'm fairly sure your question is an X-Y problem, which is why I suggest alternatives. The solution you want will also work, but it's awkward. The following will run the function counttest defined in test.py exactly as requested, capture its output, and send it line by line to the websocket. It processes output immediately as each new line arrives. Note the -u flag on the python call (unbuffered); it's important.
import subprocess
import shlex

def echo(ws):
    data = ws.receive()
    with subprocess.Popen(shlex.split("python -u -c 'from test import counttest; counttest()'"),
                          stdout=subprocess.PIPE,
                          bufsize=1,
                          universal_newlines=True) as process:
        for line in iter(process.stdout.readline, ''):
            line = line.rstrip()
            if not line:
                break
            print(line)
            try:
                ws.send(line + "\n")
            except:
                pass

Related

Unable to read content of a file which was written by a different method

I want to read the content of a file which was written by a different function.
from subprocess import *
import os

def compile():
    f = open("reddy.txt", "w+")
    # I have even tried "with open", but it is not working.
    # It works with r+, but then it appends to the file.
    p = Popen("gcc -c rahul.c", stdout=f, shell=True, stderr=STDOUT)
    f.close()

def run():
    p1 = Popen("gcc -o r.exe rahul.c", stdout=PIPE, shell=True, stderr=PIPE)
    p2 = Popen("r.exe", stdout=PIPE, shell=True, stderr=PIPE)
    print(p2.stdout.read())
    p2.kill()

compile()
f1 = open("reddy.txt", "w+")
first_char = f1.readline()  # unable to read here!
print(first_char)
# run()
first_char should hold the first line of reddy.txt, but it is empty.
You are assuming that Popen finishes the process, but it doesn't; Popen will merely start a process - and unless the compilation is extremely fast, it's quite likely that reddy.txt will be empty when you try to read it.
With Python 3.5+ you want subprocess.run().
# Don't import *
from subprocess import run as s_run, PIPE, STDOUT

# Remove unused import
# import os

def compile():
    # Use a context manager
    with open("reddy.txt", "w+") as f:
        # For style points, avoid shell=True
        s_run(["gcc", "-c", "rahul.c"], stdout=f, stderr=STDOUT,
              # Check that the compilation actually succeeds
              check=True)

def run():
    compile()  # use the function we just defined instead of repeating yourself
    p2 = s_run(["r.exe"], stdout=PIPE, stderr=PIPE,
               # Check that the process succeeds
               check=True,
               # Decode output from bytes() to str()
               universal_newlines=True)
    print(p2.stdout)

compile()

# Open the file for reading, not writing!
with open("reddy.txt", "r") as f1:
    first_char = f1.readline()
print(first_char)
(I adapted the run() function along the same lines, though it's not being used in any of the code you posted.)
first_char is misleadingly named; readline() will read an entire line. If you want just the first byte, try
first_char = f1.read(1)
If you need to be compatible with older Python versions, try check_output or check_call instead of run. If you are on 3.7+ you can use text=True instead of the older and slightly misleadingly named universal_newlines=True.
For more details about the changes I made, maybe see also this.
If you have a look at the documentation on open, you can see that opening a file with w first truncates that file's contents, meaning there is nothing left to read, exactly as you describe.
Since you only want to read the file, you should use r in the open statement:
f1 = open("reddy.txt", "r")

Read from stdin AND forward it to a subprocess in Python

I'm writing a wrapper script for a program that optionally accepts input from STDIN. My script needs to process each line of the file, but it also needs to forward STDIN to the program it is wrapping. In minimalist form, this looks something like this:
import subprocess
import sys

for line in sys.stdin:
    # Do something with each line
    pass

subprocess.call(['cat'])
Note that I'm not actually trying to wrap cat, it just serves as an example to demonstrate whether or not STDIN is being forwarded properly.
With the example above, if I comment out the for-loop, it works properly. But if I run it with the for-loop, nothing gets forwarded because I've already read to the end of STDIN. I can't seek(0) to the start of the file because you can't seek on streams.
One possible solution is to read the entire file into memory:
import subprocess
import sys

lines = sys.stdin.readlines()
for line in lines:
    # Do something with each line
    pass

p = subprocess.Popen(['cat'], stdin=subprocess.PIPE)
p.communicate(''.join(lines))
which works, but isn't very memory efficient. Can anyone think of a better solution? Perhaps a way to split or copy the stream?
Additional Constraints:
The subprocess can only be called once. So I can't read a line at a time, process it, and forward it to the subprocess.
The solution must work in Python 2.6
Does this work for you?
#!/usr/bin/env python2
import subprocess
import sys

p = subprocess.Popen(['cat'], stdin=subprocess.PIPE)

line = sys.stdin.readline()

####################
# Insert work here #
####################
line = line.upper()
####################

p.communicate(line)
Example:
$ echo "hello world" | ./wrapper.py
HELLO WORLD
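This handles a single line. If every line needs to be processed and forwarded, one option that still launches the subprocess only once is to write to its stdin incrementally and close it at the end. A sketch under the stated constraints (Python 2.6, assuming the wrapped program can consume its input as it arrives):
#!/usr/bin/env python2
import subprocess
import sys

p = subprocess.Popen(['cat'], stdin=subprocess.PIPE)

line = sys.stdin.readline()
while line:
    # Do something with each line
    processed = line.upper()
    p.stdin.write(processed)  # forward it as we go
    line = sys.stdin.readline()

p.stdin.close()  # signal EOF to the subprocess
p.wait()
Note that if the subprocess's stdout were also piped and its buffer filled up, this could deadlock; here its output goes straight to the terminal, so it is safe.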

Iterating over standard in blocks until EOF is read

I have two scripts which are connected by a Unix pipe. The first script writes strings to standard out, and these are consumed by the second script.
Consider the following
# producer.py
import sys
import time

for x in range(10):
    sys.stdout.write("thing number %d\n" % x)
    sys.stdout.flush()
    time.sleep(1)
and
# consumer.py
import sys

for line in sys.stdin:
    print line
Now, when I run: python producer.py | python consumer.py, I expect to see a new line of output each second. Instead, I wait 10 seconds, and I suddenly see all of the output at once.
Why can't I iterate over stdin one-item-at-a-time? Why do I have to wait until the producer gives me an EOF before the loop-body starts executing?
Note that I can get the correct behavior if I change consumer.py to:
# consumer.py
import sys

def stream_stdin():
    line = sys.stdin.readline()
    while line:
        yield line
        line = sys.stdin.readline()

for line in stream_stdin():
    print line
I'm wondering why I have to explicitly build a generator to stream the items of stdin. Why doesn't this implicitly happen?
According to the python -h help message:
-u     Force stdin, stdout and stderr to be totally unbuffered. On systems
       where it matters, also put stdin, stdout and stderr in binary mode.
       Note that there is internal buffering in xreadlines(), readlines()
       and file-object iterators ("for line in sys.stdin") which is not
       influenced by this option. To work around this, you will want to
       use "sys.stdin.readline()" inside a "while 1:" loop.

IPython: redirecting output of a Python script to a file (like bash >)

I have a Python script that I want to run in IPython. I want to redirect (write) the output to a file, similar to:
python my_script.py > my_output.txt
How do I do this when I run the script in IPython, i.e. with something like execfile('my_script.py')?
There is an older page describing a function that could be written to do this, but I believe that there is now a built-in way to do this that I just can't find.
IPython has its own context manager for capturing stdout/err, but it doesn't redirect to files; it redirects to an object:
from IPython.utils import io

with io.capture_output() as captured:
    %run my_script.py

print captured.stdout  # prints stdout from your script
And this functionality is exposed in a %%capture cell-magic, as illustrated in the Cell Magics example notebook.
It's a simple context manager, so you can write your own version that would redirect to files:
import sys

class redirect_output(object):
    """context manager for redirecting stdout/err to files"""
    def __init__(self, stdout='', stderr=''):
        self.stdout = stdout
        self.stderr = stderr

    def __enter__(self):
        self.sys_stdout = sys.stdout
        self.sys_stderr = sys.stderr
        if self.stdout:
            sys.stdout = open(self.stdout, 'w')
        if self.stderr:
            if self.stderr == self.stdout:
                sys.stderr = sys.stdout
            else:
                sys.stderr = open(self.stderr, 'w')

    def __exit__(self, exc_type, exc_value, traceback):
        sys.stdout = self.sys_stdout
        sys.stderr = self.sys_stderr
which you would invoke with:
with redirect_output("my_output.txt"):
    %run my_script.py
To quickly store text contained in a variable while working in IPython use %store with > or >>:
%store VARIABLE >>file.txt (appends)
%store VARIABLE >file.txt (overwrites)
(Make sure there is no space immediately following the > or >>)
For just one script to run I would do the redirection in bash
ipython -c "execfile('my_script.py')" > my_output.txt
In Python 3, execfile no longer exists, so use this instead:
ipython -c "exec(open('my_script.py').read())" > my_output.txt
Be careful with the double vs single quotes.
While this is an old question, I found it and its answers as I was facing a similar problem.
The solution I found after sifting through the IPython cell magics documentation is actually fairly simple. At its most basic, the solution is to assign the output of the command to a variable.
This simple two-cell example shows how to do that. In the first Notebook cell we define the Python script with some output to stdout making use of the %%writefile cell magic.
%%writefile output.py
print("This is the output that is supposed to go to a file")
Then we run that script as if it were run from a shell, using the ! operator.
output = !python output.py
print(output)
>>> ['This is the output that is supposed to go to a file']
Then you can easily make use of the %store magic to persist the output.
%store output >output.log
Notice, however, that the output of the command is persisted as a list of lines. You might want to call "\n".join(output) prior to storing the output, for example:
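A short sketch (log_text is an illustrative name):
log_text = "\n".join(output)
%store log_text >output.log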
Use this code to save the output to a file:
import time
from threading import Thread
import sys

# write the captured stdout to a file once a second
def log():
    # global flag used to stop the thread
    global run
    while run:
        try:
            global out
            text = str(sys.stdout.getvalue())
            with open("out.txt", 'w') as f:
                f.write(text)
        finally:
            time.sleep(1)

%%capture out
run = True
print("start")
process = Thread(target=log)
process.start()

# do some work
for i in range(10, 1000):
    print(i)
    time.sleep(1)

run = False
process.join()
It is useful to use a text editor that tracks changes to the file and suggests reloading it, such as Notepad++.
I wonder why the verified solution doesn't entirely work in a loop; the following:
for i in range(N):
    with redirect_output("path_to_output_file"):
        %run <python_script> arg1 arg2 arg3
creates N files in the directory with only the output from the first print statement of the <python_script>. Just confirming: when run separately for each iteration of the for loop, the script produces the right result.
There's the hacky way of overwriting sys.stdout and sys.stderr with a file object, but that's really not a good way to go about it. Really, if you want to control where the output goes from inside python, you need to implement some sort of logging and/or output handling system that you can configure via the command line or function arguments instead of using print statements.
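A minimal sketch of that idea with the standard logging module (the file name and logger usage are illustrative):
import logging

# Configure once, e.g. near the top of my_script.py; the destination
# could just as well come from a command-line argument.
logging.basicConfig(filename="my_output.txt",
                    level=logging.INFO,
                    format="%(message)s")

log = logging.getLogger(__name__)
log.info("this line goes to my_output.txt instead of the console")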
It seems like a lot of code...
What I want is to redirect the output of an IPython script into a CSV or text file, the way the Oracle sqlplus spool command does. I wonder if there is an easy way like that?

How to redirect the output of .exe to a file in python?

In a script, I want to run a .exe with some command line parameters such as "-a", and then redirect the standard output of the program to a file.
How can I implement that?
You can redirect directly to a file using subprocess.
import subprocess

with open('output.txt', 'w') as output_f:
    p = subprocess.Popen('Text/to/execute with-arg',
                         stdout=output_f,
                         stderr=output_f)
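Note that Popen returns as soon as the program starts. If the script should block until the program exits (so that output.txt is complete before anything reads it), a sketch of the waiting variant, keeping the placeholder command from above:
import subprocess

with open('output.txt', 'w') as output_f:
    p = subprocess.Popen('Text/to/execute with-arg',
                         stdout=output_f,
                         stderr=output_f)
    p.wait()  # block until the program has finished
On Python 3.5+, subprocess.run() with the same stdout/stderr arguments does the waiting for you.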
Easiest is os.system("the.exe -a >thefile.txt"), but there are many other ways, for example with the subprocess module in the standard library.
You can do something like this, e.g. to read the output of ls -l (or any other command):
import subprocess

p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
print p.stdout.read()  # or put it in a file
You can do a similar thing for stderr/stdin.
But as Alex mentioned, if you just want it in a file, just redirect the command's output to a file.
If you just want to run the executable and wait for the results, Anurag's solution is probably the best. I needed to respond to each line of output as it arrived, and found the following worked:
1) Create an object with a write(text) method. Redirect stdout to it (sys.stdout = obj). In your write method, deal with the output as it arrives.
2) Run a method in a separate thread with something like the following code:
p = subprocess.Popen('Text/to/execute with-arg', stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, shell=False)
while p.poll() is None:
    print p.stdout.readline().strip()
Because you've redirected stdout, PIPE will send the output to your write method line by line. If you're not certain you're going to get line breaks, read(amount) works too, I believe.
3) Remember to redirect stdout back to the default: sys.stdout = sys.__stdout__
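A sketch of the writer object described in step 1 (LineHandler and what it does with each chunk are illustrative):
import sys

class LineHandler(object):
    """Receives everything printed while installed as sys.stdout."""
    def write(self, text):
        # Deal with the output as it arrives, e.g. parse or log it
        sys.__stdout__.write("handled: %r\n" % text)

    def flush(self):
        pass

sys.stdout = LineHandler()
print "captured"             # routed through LineHandler.write
sys.stdout = sys.__stdout__  # step 3: restore the default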
Although the title (.exe) makes it sound like a Windows problem, I have to share that the accepted answer (subprocess.Popen() with stdout/stderr arguments) didn't work for me on Mac OS X (10.8) with Python 2.7.
I had to use subprocess.check_output() (python 2.7 and above) to make it work. Example:
import subprocess

cmd = 'ls -l'
out = subprocess.check_output(cmd, shell=True)
with open('my.log', 'w') as f:
    f.write(out)
Note that this solution writes out all the accumulated output when the program finishes.
If you want to monitor the log file during the run. You may want to try something else.
In my own case, I only cared about the end result.
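For incremental logging, a sketch that reads the process output line by line and flushes each one (reusing the ls -l command and the my.log name from above), so the file can be followed with tail -f:
import subprocess

p = subprocess.Popen('ls -l', shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
with open('my.log', 'wb') as f:
    for line in iter(p.stdout.readline, b''):
        f.write(line)
        f.flush()  # make each line visible to readers right away
p.wait()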
