Hello, I'm really new to the Python programming language and I have encountered a problem writing one script. I want to save the stdout output that I obtain when I run a tcpdump command into a variable in a Python script, but I want the tcpdump command to run continuously, because I want to gather the length of all transferred packets that match the filter I wrote for tcpdump.
I tried :
fin, fout = os.popen4(command)
result = fout.read()
return result
But it just hangs.
I'm guessing that it hangs because fout.read() doesn't return until the child process exits and closes its end of the pipe. You should be using subprocess.Popen instead and read the output line by line:
import subprocess
import shlex  # just so you don't need to break "command" into a list yourself ;)

p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
first_line_of_output = p.stdout.readline()
second_line_of_output = p.stdout.readline()
...
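For the tcpdump case specifically, a minimal sketch of the continuous-read loop might look like this (Python 3; the command and the length-parsing are assumptions, since tcpdump's line format varies with version and options — note the -l flag so tcpdump line-buffers when writing to a pipe):

import shlex
import subprocess

# Hypothetical command; substitute your own interface and filter.
command = "tcpdump -l -n -i eth0 'tcp port 80'"

p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
total_length = 0
for raw_line in p.stdout:  # one packet summary per line, as bytes
    line = raw_line.decode("ascii", errors="replace")
    parts = line.split()
    # Many tcpdump summary lines end with "length N"; adjust the parsing
    # to whatever your tcpdump version and options actually print.
    if "length" in parts:
        idx = parts.index("length")
        try:
            total_length += int(parts[idx + 1].rstrip(":"))
        except ValueError:
            pass  # line didn't match the expected format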
Related
I am trying to use the output of an external program run using the run function.
The program regularly emits a row of data which I need to use in my script.
I found the subprocess library and used its run()/check_output().
Example:
def usual_process():
    # some code here
    for i in subprocess.check_output(['foo', '$$']):
        some_function(i)
Now assume that foo is already in the PATH variable and that it outputs a string at semi-random intervals.
I want the program to do its own things and run some_function(i) every time foo sends a new row to its output.
This boils down to two problems: piping the output into a for loop, and running this as a background subprocess.
Thank you
Update: I have managed to get the foo output into some_function using this:
import os

with os.popen('foo') as foos_output:
    for line in foos_output:
        some_function(line)
According to this, os.popen is deprecated, but I have yet to figure out how to pipe internal processes in Python.
Now I just need to figure out how to run this function in the background.
So, I have solved it.
First step was to start the external script:
from subprocess import Popen, PIPE

proc = Popen('./cisla.sh', stdout=PIPE, bufsize=1)
Next I started a function that would read it, passed it the pipe, and ran it in a background thread:
from threading import Thread

def foo(proc, **args):
    for i in proc.stdout:
        '''Do all I want to do with each line'''

Thread(target=foo, args=(proc,)).start()
Limitations are:
If you wish to catch the script's errors, you have to pipe stderr in as well.
Second, it leaves a zombie if you kill the parent, so don't forget to kill the child in your signal handling.
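A minimal sketch of that signal handling, assuming the proc from above (the handler name is mine):

import signal
import sys

def reap_child(signum, frame):
    # Kill the child so it is not left behind as a zombie/orphan.
    proc.terminate()
    proc.wait()  # reap the exit status
    sys.exit(0)

signal.signal(signal.SIGTERM, reap_child)
signal.signal(signal.SIGINT, reap_child)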
I have a C executable which gets data from IoT hardware and prints information to the console using printf. I want to run this executable from Python, which I am able to do using subprocess.call in the following way:
subprocess.call(["demoProgram", "poll"])
and print the output to the console. But I need to capture this output (the printf text) with my Python code to process the information further in real time. How can I capture this output using subprocess in real time?
The following opens a subprocess and creates an iterator from the output.
import subprocess

proc = subprocess.Popen(['ping', 'google.com'], stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, b''):  # b'' because the pipe yields bytes
    print(line)  # process line-by-line
This solution was modified from the answer to a similar question.
If you're using Python 2.x, have a look at the commands module:
import commands

output = commands.getoutput("gcc demoProgram.c")  # command to be executed
print output
For Python 3.x, use the subprocess module:
import subprocess

output = subprocess.getoutput("gcc demoProgram.c")  # command to be executed
print(output)
process = subprocess.check_output(BACKEND + "mainbgw setup " + str(NUM_USERS),
                                  shell=True, stderr=subprocess.STDOUT)
I am using the above statement to run a C program in a django-python based server for some computations. There are some printf() statements whose output I would like to see on stdout while the server is running and executing the subprocess. How can that be done?
If you actually don't need the output to be available to your python code as a string, you can just use os.system, or subprocess.call without redirecting stdout elsewhere. Then stdout of your C program will just go directly to stdout of your python program.
If you need both streaming stdout and access to the output as a string, you should use subprocess.Popen (or the old popen2.popen4) to obtain a file object for the output stream, then repeatedly read lines from the stream until you have exhausted it, keeping a concatenated copy of all the data you grabbed along the way. Here is a sketch of such a loop.
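A minimal Python 3 sketch, reusing the command from the question (BACKEND and NUM_USERS as defined in your server code):

import subprocess
import sys

proc = subprocess.Popen(BACKEND + "mainbgw setup " + str(NUM_USERS),
                        shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
chunks = []
for line in iter(proc.stdout.readline, b''):
    sys.stdout.write(line.decode())  # stream it to our own stdout...
    sys.stdout.flush()
    chunks.append(line)              # ...while keeping a copy
proc.wait()
output = b''.join(chunks)            # everything we grabbed, concatenated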
I have a simple Python script which should read from stdin,
so I redirect the stdout of a program to the stdin of my Python script.
But the stuff that is logged by my program only reaches the Python script when the logging program gets killed.
Actually, I want to handle each logged line as soon as it is available, not when my program, which should run 24/7, quits.
So how can I make this happen? How can I make stdin not wait for CTRL+D or EOF before handling data?
Example
# accept_stdin.py
import sys
import datetime
for line in sys.stdin:
    print datetime.datetime.now().second, line
# print_data.py
import time
print "1 foo"
time.sleep(3)
print "2 bar"
# bash
python print_data.py | python accept_stdin.py
Like all file objects, the sys.stdin iterator reads input in chunks; even if a line of input is ready, the iterator will try to read up to the chunk size or EOF before outputting anything. You can work around this by using the readline method, which doesn't have this behavior:
while True:
    line = sys.stdin.readline()
    if not line:
        # End of input
        break
    do_whatever_with(line)
You can combine this with the 2-argument form of iter to use a for loop:
for line in iter(sys.stdin.readline, ''):
    do_whatever_with(line)
I recommend leaving a comment in your code explaining why you're not using the regular iterator.
It is also an issue with your producer program, i.e. the one whose stdout you pipe into your Python script.
Indeed, as this program only prints and never flushes, the printed data is kept in the program's internal stdout buffer and never flushed to the system.
Add a sys.stdout.flush() call right after each print statement in print_data.py.
You see the data when you quit the program because it automatically flushes on exit.
See this question for an explanation.
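With that change, print_data.py becomes:

# print_data.py
import sys
import time

print "1 foo"
sys.stdout.flush()
time.sleep(3)
print "2 bar"
sys.stdout.flush()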
As said by #user2357112 you need to use:
for line in iter(sys.stdin.readline, ''):
After that, you need to start Python with the -u flag so that stdout is unbuffered and each print is flushed immediately.
python -u print_data.py | python -u accept_stdin.py
You can also specify the flag in the shebang.
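For example (a shebang portably passes only a single argument to the interpreter, so name the interpreter directly instead of going through /usr/bin/env):

#!/usr/bin/python -u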
I want to spawn (fork?) multiple Python scripts from my program (written in Python as well).
My problem is that I want to dedicate one terminal to each script, because I'll gather their output using pexpect.
I've tried using pexpect, os.execlp, and os.forkpty but none of them does what I expect.
I want to spawn the child processes and forget about them (they will process some data, write the output to the terminal which I could read with pexpect and then exit).
Is there any library/best practice/etc. to accomplish this job?
P.S. Before you ask why I would write to STDOUT and read from it: I don't write to STDOUT, I read the output of tshark.
See the subprocess module
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
From Python 3.5 onwards you can do:
import subprocess
result = subprocess.run(['python', 'my_script.py', '--arg1', val1])
if result.returncode != 0:
    print('script returned error')
By default this leaves the child's stdout and stderr connected to your own, so its output goes straight to your console.
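If you do want the output back in Python as well, a minimal sketch (stdout=subprocess.PIPE works from 3.5 on; capture_output=True only exists from 3.7):

import subprocess

# val1 here stands for whatever value you pass as --arg1
result = subprocess.run(['python', 'my_script.py', '--arg1', val1],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout.decode())  # the child's stdout as a string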
I don't understand why you need pexpect for this. tshark should send its output to stdout, and only for some strange reason would it send it to stderr.
Therefore, what you want should be:
import subprocess
fp = subprocess.Popen(("/usr/bin/tshark", "option1", "option2"),
                      stdout=subprocess.PIPE).stdout
# now, whenever you are ready, read stuff from fp
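For example, to handle each line as tshark produces it (handle_packet_line is a hypothetical stand-in for your own processing):

for line in fp:                # blocks until tshark emits the next line
    handle_packet_line(line)   # hypothetical: your own parsing goes here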
Do you want to dedicate one terminal or one Python shell?
You already have some useful answers for Popen and subprocess; you could also use pexpect, if you're already planning on using it anyway.
# for multiple python shells
import pexpect

# make your commands however you want them; this is just one method
mycommand1 = "print 'hello first python shell'"
mycommand2 = "print 'this is my second shell'"

# add a "for" loop if you want
child1 = pexpect.spawn('python')
child1.sendline(mycommand1)
child2 = pexpect.spawn('python')
child2.sendline(mycommand2)
Make as many children/shells as you want and then use the child.before or child.after attributes to get your responses.
Of course you would want to add definitions or classes to be sent instead of "mycommand1", but this is just a simple example.
If you wanted to open a bunch of terminals in Linux, you would just need to replace the 'python' in the pexpect.spawn line.
Note: I haven't tested the above code. I'm just replying from past experience with pexpect.
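For instance, a rough (equally untested) sketch of collecting a response from one of the children above:

# Wait for the interpreter prompt to come back, then read what preceded it.
child1.expect('>>>')
response = child1.before   # before/after are attributes, not methods
print(response)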