I need to continuously read and process data on one computer that is generated on another computer.
So far, I was able to use mypipe to send data from the second computer to the first, using the following:
cat mypipe | ssh second_com@IP_address 'cat destfile'
This works and the data is now constantly dumped to destfile, but file size increases really fast and this is not the solution I need.
What I would like to do is pipe the data directly into my python script without writing it to a file. Any suggestions on how to do this?
What you've written doesn't dump data to destfile. What it does is:
cat mypipe: Dumps the contents of a file named mypipe to its stdout.
|: Takes the stdout from cat mypipe and sends it as the stdin to ssh.
ssh second_com@IP_address: Creates an ssh connection to another system and runs the specified command there, forwarding its stdin along.
'cat destfile': Runs the command cat destfile on that other system—which ignores the forwarded-in stdin and dumps the contents of a file named destfile to stdout, which goes nowhere useful.
What you probably have is something more like this:
cat mypipe | ssh second_com@IP_address 'cat >destfile'
The difference is:
'cat >destfile': Runs the command cat >destfile on that other system, so cat just copies the forwarded-in stdin to its stdout, and then >destfile causes that stdout to be stored in the file destfile on that system.
So the result is exactly what you described as happening.
The most obvious way to change this is to just put your Python program in place of cat. Of course you need to put your program on the remote machine, somewhere accessible, like the home directory of that second_com user. Then you can execute it like:
cat mypipe | ssh second_com@IP_address 'python myscript.py'
Now, inside myscript.py, it can read from sys.stdin, which will be the stream of data coming from cat mypipe (via | and ssh), and you can do whatever it is you wanted to do with that data, without needing to save it to destfile first.
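A minimal sketch of what myscript.py might look like (the processing step is just a placeholder):
import sys

for line in sys.stdin:
    # Each line that arrives through the pipe shows up here as it is written.
    # Replace the print with whatever processing you actually need.
    print('received:', line.rstrip('\n'))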
Long story short: I am a desktop support tech turned programmer after the loss of our dev team, and I am learning as I go (which has been fun, but rough). I have a weather rack sensor that writes a JSON response to a file as a single line; from there I can parse it and post it to our REST server with a Python script. What I want to do is write another Python script that runs the Linux command and then ends the process once the JSON has been written to the file. How do I achieve this?
This is what I have right now:
import os
cmd = 'rtl_433 -F json -R 146 | tee -a testjson.json'
os.system(cmd)
#need to close after written to testjson.json
close()
import sys
sys.exit()
You can also specify the program's return code like this:
sys.exit(1)
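If the goal is for the script to stop as soon as the JSON has been produced, one option is to skip tee and the file entirely and read the command's output directly. This is just a sketch, assuming rtl_433 is on your PATH and emits one JSON object per line:
import json
import subprocess

# Start rtl_433 and read its stdout directly instead of tee-ing to a file.
proc = subprocess.Popen(['rtl_433', '-F', 'json', '-R', '146'],
                        stdout=subprocess.PIPE, text=True)
line = proc.stdout.readline()  # blocks until the first JSON line arrives
proc.terminate()               # stop rtl_433 once we have what we need
data = json.loads(line)        # parse it, then post to your REST server here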
I have a Python program that is required to read its input from stdin, processing it line by line:
import sys

for line in sys.stdin:
    # do stuff to line
    filename = ...  # need file name
    # example print
    print(filename)
However, in this for loop, I also need to get the name of the file that has been piped in to this python program like this:
cat document.txt | pythonFile.py #should print document.txt with the example print
Is there a way to do this?
No, this is not possible. As the receiving end of a pipe you have no knowledge of where your data stream is coming from. The use of cat further obfuscates it, but even if you wrote ./pythonFile.py < document.txt you would have no clue.
Many Unix tools accept filenames as arguments, with - as a special code for 'stdin'. You could design your script the same way, so it can be called like:
cat document.txt | pythonFile.py - (your script doesn't know the input origin)
./pythonFile.py document.txt (your script does know the file)
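The standard library's fileinput module implements exactly this convention; a sketch (it treats - or a missing argument as stdin):
import fileinput

# fileinput reads from the files named on the command line,
# falling back to stdin when the argument is '-' or absent.
for line in fileinput.input():
    filename = fileinput.filename()  # '<stdin>' when the data came from a pipe
    print(filename, line, end='')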
I'm running a binary that manages a USB device. The binary, when executed, outputs results to a file I specify.
Is there any way in Python to redirect the output of the binary to my script instead of to a file? Otherwise I'm just going to have to open the file and read it as soon as this line of code runs.
import os

def rn_to_file(comport=3, filename='test.bin', amount=128):
    os.system('capture.exe {0} {1} {2}'.format(comport, filename, amount))
It doesn't work with subprocess either:
>>> from subprocess import check_output as qx
>>> cmd = r'C:\repos\capture.exe 3 text.txt 128'
>>> output = qx(cmd)
Opening serial port \\.\COM3...OK
Closing serial port...OK
>>> output
b'TrueRNG Serial Port Capture Tool v1.2\r\n\r\nCapturing 128 bytes of data...Done'
The actual content of the file is a series of 0s and 1s. This isn't redirecting the file's output to me; instead it just captures what would have been printed as output anyway.
It looks like you're using Windows, which has a special reserved filename CON which means to use the console (the analog on *nix would be /dev/stdout).
So try this:
subprocess.check_output(r'C:\repos\capture.exe 3 CON 128')
You might need to use shell=True in there, but I suspect you don't.
The idea is to make the program write to the virtual file CON which is actually stdout, then have Python capture that.
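Putting that together with a function like the one in the question (a sketch; the path and arguments are taken from your example):
import subprocess

def rn_to_output(comport=3, amount=128):
    # Pass CON as the filename so capture.exe writes to the console,
    # and let check_output capture that stream for us.
    cmd = r'C:\repos\capture.exe {0} CON {1}'.format(comport, amount)
    return subprocess.check_output(cmd)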
An alternative would be CreateNamedPipe(), which will let you create your own filename and read from it, without having an actual file on disk. For more on that, see: createNamedPipe in python
I'm a Win7 user.
I happened to read about redirections (like command1 < infile > outfile) in *nix systems, and then I discovered that something similar can be done in Windows (link). And Python can also do something like this with pipes(?) or stdin/stdout(?).
I do not understand how this happens in Windows, so I have a question.
I use a proprietary Windows program (.exe). This program is able to append data to a file.
For simplicity, let's assume that it is the equivalent of something like
from time import ctime, sleep

while True:
    f = open('textfile.txt', 'a')
    f.write(repr(ctime()) + '\n')
    f.close()
    sleep(100)
The question:
Can I use this file (textfile.txt) as stdin?
I mean that the script (while it runs) should always (not just once) handle all new data, i.e.
In the "never-ending cycle":
The program (.exe) writes something.
Python script captures the data and processes.
Could you please show how to do this in Python, or maybe in a Windows cmd/.bat script, or some other way?
This is an insanely cool thing. I want to learn how to do it! :D
If I am reading your question correctly then you want to pipe output from one command to another.
This is normally done as such:
cmd1 | cmd2
However, you say that your program only writes to files. I would double-check the documentation to see if there isn't a way to get the command to write to stdout instead of a file.
If this is not possible then you can create what is known as a named pipe. It appears as a file on your filesystem, but it is really just a buffer of data that can be written to and read from (the data is a stream and can only be read once), meaning the program reading it will not finish until the program writing to the pipe stops writing and closes the "file". I don't have experience with named pipes on Windows, so you'll need to ask a new question for that.
One downside of pipes is that they have a limited buffer size: if no program is reading data from the pipe, then once the buffer is full the writing program won't be able to continue and will just wait indefinitely until a program starts reading from the pipe.
An alternative: on Unix there is a program called tail which can be set up to continuously monitor a file for changes and output any data as it is appended to the file (with a short delay).
tail --follow=name --retry textfile.txt | mycmd
# wait for data to be appended to the file and output new data to mycmd
cmd1 >> textfile.txt # append output to file
One thing to note about this is that tail won't stop just because the first command has stopped writing to the file. tail will continue to listen for changes on that file forever, or until mycmd stops listening to tail, or until tail is killed (or SIGINT-ed).
This question has various answers on how to get a version of tail onto a Windows machine.
import sys

sys.stdin = open('textfile.txt', 'r')
for line in sys.stdin:
    process(line)  # process() stands in for whatever handling you need
If the program writes to textfile.txt, you can't change that to redirect to stdin of your Python script unless you recompile the program to do so.
If you were to edit the program, you'd need to make it write to stdout, rather than a file on the filesystem. That way you can use the redirection operators to feed it into your Python script (in your case the | operator).
Assuming you can't do that, you could write a program that polls for changes on the text file, and consumes only the newly written data, by keeping track of how much it read the last time it was updated.
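A sketch of that polling approach (the file name and poll interval are just examples):
import time

def follow(path, interval=1.0):
    # Keep the file open so the read position tracks how much we've consumed,
    # and only yield data appended since the last read.
    with open(path, 'r') as f:
        f.seek(0, 2)  # start at the current end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(interval)  # nothing new yet; wait and retry

for line in follow('textfile.txt'):
    print('new data:', line, end='')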
When you use < to direct the contents of a file to a Python script, that script receives the data on its stdin stream.
Simply read from sys.stdin to get that data:
import sys

for line in sys.stdin:
    # do something with line, e.g.:
    print(line, end='')
I am trying to daemonize a python script that currently runs in the foreground. However, I still need to be able to see its output which it currently dumps to stdout.
So I am using the following piece of code, which generates a unique file name in /tmp and then assigns this new file to sys.stdout. All subsequent calls to print are then redirected to this log file.
import sys
import uuid

outfile = open('/tmp/outfile-' + str(uuid.uuid4()), 'w')
outfile.write("Log file for daemon script...\n")
sys.stdout = outfile
# Rest of script uses print statements to dump information into the /tmp file
.
.
.
The problem I am facing is that when I tail -f the file created in /tmp, I don't see any output. Once I kill my daemon process, however, the output becomes visible in the /tmp logfile, because Python flushes the file's buffer when the process exits.
I want to monitor the /tmp log file in realtime, hence it would be great if somehow, the output can be made visible in realtime.
One solution that I tried was using unbuffered I/O, but that didn't help either.
Try harder to use unbuffered I/O. The problem is almost certainly that your output is buffered.
Opening the file like this should work:
outfile = open(name, 'w', 0)
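Note that the third positional argument only works like that on Python 2; Python 3 refuses unbuffered text-mode files. There, line buffering gives tail -f what it needs (a sketch):
import sys
import uuid

# buffering=1 means line-buffered: each completed line is flushed immediately,
# so `tail -f` on the log file sees output in real time.
outfile = open('/tmp/outfile-' + str(uuid.uuid4()), 'w', buffering=1)
sys.stdout = outfile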