I'm trying to overwrite a file in Python so that it only keeps the most up-to-date information read from a serial port. I've tried several different methods and read quite a few posts, but the file keeps appending the information over and over without overwriting the previous entry.
import serial

ser = serial.Serial('/dev/ttyUSB0', 57600)
target = open('wxdata', 'w+')
with ser as port, target as outf:
    while 1:
        target.truncate()
        outf.write(ser.read())
        outf.flush()
I have a weather station sending data wirelessly to a Raspberry Pi, and I just want the file to keep one line of the current data received. Right now it just keeps looping and appending over and over. Any help would be greatly appreciated.
I would change your code to look like:
from serial import Serial

with Serial('/dev/ttyUSB0', 57600) as port:
    while True:
        with open('wxdata', 'w') as file:
            file.write(port.read())
This will make sure it gets truncated, flushed, etc. Why do work you don't have to? :)
By default, truncate() only truncates the file to the current position. Which, with your loop, is only at 0 the first time through. Change your loop to:
while 1:
    outf.seek(0)
    outf.truncate()
    outf.write(ser.read())
    outf.flush()
Note that truncate() does accept an optional size argument, which you could pass 0 for, but you'd still need to seek back to the beginning before writing the next part anyway.
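As a quick illustration of that seek-then-truncate pattern, here is a minimal sketch against a throwaway file (not the serial code itself):

```python
import os
import tempfile

# A scratch file standing in for 'wxdata'.
path = os.path.join(tempfile.mkdtemp(), "wxdata")

with open(path, "w+") as f:
    f.write("a long first reading from the station")
    # Overwrite with a shorter record: rewind first, then truncate
    # after writing so no tail of the old record survives.
    f.seek(0)
    f.write("short")
    f.truncate()

with open(path) as f:
    print(f.read())  # -> short
```

Without the truncate(), the leftover tail of the longer first record would still be in the file after the shorter write.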
Before you start writing the file, add the following line:
outf.seek(0)
outf.truncate()
This will make it so that whatever you write next will overwrite the file.
I have a file in my Python folder called data.txt, and another file read.py that tries to read text from data.txt. But when I change something in data.txt, my read doesn't show anything new.
Something else I tried wasn't working; I found something that did read the file, but when I changed the file to something actually meaningful it didn't print the new text.
Can someone explain why it doesn't refresh, or what I need to do to fix it? Here is what I put:
with open("data.txt") as f:
    file_content = f.read().rstrip("\n")
    print(file_content)
First and foremost, strings are immutable in Python - once you call file.read(), the returned string will not change.
That being said, you must re-read the file at any point where its contents may have changed.
For example
read.py
def get_contents(filepath):
    with open(filepath) as f:
        return f.read().rstrip("\n")
main.py
from read import get_contents
import time
print(get_contents("data.txt"))
time.sleep(30)
# .. change file somehow
print(get_contents("data.txt"))
Now, you could set up an infinite loop that watches the file's last-modification timestamp from the OS and always pick up the latest changes, but that seems like a waste of resources unless you have a specific need for it (e.g. tailing a log file) - and even then there are arguably better tools for the job.
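If you do have such a need, a minimal polling sketch might look like the following (the function name is illustrative, not from the question):

```python
import os
import time

def read_if_changed(filepath, last_mtime):
    """Return (contents, mtime) if the file changed since last_mtime,
    otherwise (None, last_mtime)."""
    mtime = os.path.getmtime(filepath)
    if mtime != last_mtime:
        with open(filepath) as f:
            return f.read().rstrip("\n"), mtime
    return None, last_mtime

# Typical use: poll in a loop.
# last = None
# while True:
#     content, last = read_if_changed("data.txt", last)
#     if content is not None:
#         print(content)
#     time.sleep(1)
```

Note that mtime resolution varies by filesystem, so very rapid successive writes may not all be distinguished.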
It was unclear from your question if you do the read once or multiple times. So here are steps to do:
Make sure you call the read function repeatedly with a certain interval
Check if you actually save file after modification
Make sure there are no file usage conflicts
So here is a description of each step:
When you read a file the way you shared, it gets closed, meaning it is read only once. You need to read it multiple times if you want to see changes, so do it at some kind of interval - in another thread, with async, or whatever suits your application best.
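A sketch of that first step (the interval and iteration count here are placeholders; pick whatever fits your application):

```python
import time

def poll_file(filepath, interval=1.0, times=3):
    """Re-open and re-read the file `times` times, sleeping in between."""
    results = []
    for _ in range(times):
        with open(filepath) as f:   # re-opening picks up new contents
            results.append(f.read().rstrip("\n"))
        time.sleep(interval)
    return results
```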
This step is obvious: remember to actually save the file (ctrl+s) after you modify it.
It may happen that a single file is accessed by multiple processes at once - for example, your editor and the script. To prevent errors from that, try the following code:
def read_file(file_name: str):
    while True:
        try:
            with open(file_name) as f:
                return f.read().rstrip("\n")
        except IOError:
            pass
I need to use some AT commands and read the response. I am using the commands below; the response I get spans multiple lines:
serialPort.write(b"AT+CMD\r\n")
time.sleep(1)
response = serialPort.readlines()
If I use only readline() I don't get the full expected response, but if I do readlines() I get the full data, although some lines are sometimes skipped. I need to know the difference between these two methods, and also how the timeout flag affects how they behave.
readline(): reads one line; if you call it multiple times, each call returns the next line until it reaches the end of the file or the file is closed.
readlines(): returns a list of lines from a file
If you want the output of a whole file, you can use read() instead.
About the timeout flag: there is no timeout flag for ordinary file I/O in Python, but if you are talking about the one in pySerial, it specifies the maximum time to wait for serial data before a read returns.
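The difference between the two calls is easy to see with an in-memory file object (io.StringIO behaves like an open text file here; the AT-style lines are made up):

```python
import io

f = io.StringIO("OK\r\nREADY\r\n")
print(repr(f.readline()))  # -> 'OK\r\n'    (one line per call)
print(repr(f.readline()))  # -> 'READY\r\n'

f.seek(0)
print(f.readlines())       # -> ['OK\r\n', 'READY\r\n']  (all lines at once)
```

With pySerial both calls depend on the timeout: readline() can return early when the timeout expires mid-line, and readlines() only returns once the timeout is hit, which may be where the occasionally skipped lines come from.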
Noob question here. I'm scheduling a cron job for a Python script to run every 2 hours, but I want the script to stop running after 48 hours, which isn't a feature of cron. To work around this, I record the number of executions at the end of the script as tally marks (x) in a text file, and open the text file at the beginning of the script so it only runs if the count is less than n.
However, my script seems to always run regardless of the conditions. Here's an example of what I've tried:
with open("curl-output.txt", "a+") as myfile:
    data = myfile.read()
    finalrun = "xxxxx"
    if data != finalrun:
        [CURL CODE]
        with open("curl-output.txt", "a") as text_file:
            text_file.write("x")
            text_file.close()
I think I'm missing something simple here. Please advise if there is a better way of achieving this. Thanks in advance.
The problem with your original code is that you're opening the file in a+ mode, which seems to set the seek position to the end of the file (try print(data) right after you read the file). If you use r instead, it works. (I'm not sure that's how it's supposed to be. This answer states it should write at the end, but read from the beginning. The documentation isn't terribly clear).
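You can see that behaviour directly with a throwaway file (CPython on a typical platform):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "curl-output.txt")
with open(path, "w") as f:
    f.write("xxx")

with open(path, "a+") as f:
    print(repr(f.read()))  # -> '' : the position starts at the end
    f.seek(0)
    print(repr(f.read()))  # -> 'xxx' after rewinding

with open(path, "r") as f:
    print(repr(f.read()))  # -> 'xxx' : 'r' reads from the beginning
```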
Some suggestions: Instead of comparing against the "xxxxx" string, you could just check the length of the data (if len(data) < 5). Or alternatively, as was suggested, use pickle to store a number, which might look like this:
import pickle

try:
    with open("curl-output.txt", "rb") as myfile:
        num = pickle.load(myfile)
except FileNotFoundError:
    num = 0

if num < 5:
    do_curl_stuff()
    num += 1
    with open("curl-output.txt", "wb") as myfile:
        pickle.dump(num, myfile)
Two more things concerning your original code: You're making the first with block bigger than it needs to be. Once you've read the string into data, you don't need the file object anymore, so you can remove one level of indentation from everything except data = myfile.read().
Also, you don't need to close text_file manually. with will do that for you (that's the point).
This sounds more like a job for scheduling with the at command.
See http://www.ibm.com/developerworks/library/l-job-scheduling/ for different job scheduling mechanisms.
The first bug that is immediately obvious to me is that you are appending to the file even if data == finalrun. So when data == finalrun, you don't run curl but you do append another 'x' to the file. On the next run, data will be not equal to finalrun again so it will continue to execute the curl code.
The solution is of course to nest the code that appends to the file under the if statement.
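Wrapped in a function for clarity, the corrected structure might look like this (run_with_tally and the path parameter are illustrative; the curl call itself is elided):

```python
def run_with_tally(tally_path, finalrun="xxxxx"):
    # a+ creates the file on the first run; rewind before reading,
    # since a+ starts positioned at the end of the file.
    with open(tally_path, "a+") as myfile:
        myfile.seek(0)
        data = myfile.read()

    if data != finalrun:
        # [CURL CODE] would go here
        with open(tally_path, "a") as text_file:
            text_file.write("x")  # tally a mark only when curl actually ran
```

Because the append is now inside the if, the tally stops growing once the limit is reached, no matter how many more times cron fires the script.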
There is probably a trailing end-of-line character \n, which means your file contains something like xx\n and not simply xx. That is probably why your condition does not work :)
EDIT
If, at the Python command line, you type:
open('filename.txt', 'r').read()  # where filename.txt is the name of your file
you will be able to see whether there is a \n or not.
Try using this condition with your if clause instead:
if data.count('x') == 24:
The data string may contain extraneous newline characters. Check repr(data) to see if it is actually 24 x's.
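For example, with made-up contents:

```python
data = "xxxx\n"          # what the file might really contain
print(repr(data))        # -> 'xxxx\n' : the trailing newline is visible
print(data.count("x"))   # -> 4 : count() is unaffected by the newline
```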
Could anyone help me find a function that deletes just a portion of an opened file, starting from its beginning? In other words, the program opens a file and reads, for example, the first 100 bytes. Is there a built-in function in Python, or some other way, to delete just those first 100 bytes before closing the file (so the remaining contents shift to the start of the file)? (FYI: truncate() does not help, since it deletes the contents of a file starting from the current cursor position; I would like exactly the inverse - delete the content from the beginning up to the current cursor position and leave the rest.) Thank you.
Is this something you want to do efficiently for large files, or just something you want to do in general?
It's pretty easy to do by reading in the file, and then writing it out:
import os

dat = open(filename, 'rb').read()
open(filename + '_temp', 'wb').write(dat[100:])
os.rename(filename + '_temp', filename)
Note that this operates "safely" by first creating the new file, then moving it into place. If there is a failure anywhere, the old file will not be clobbered.
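For large files where reading the whole thing into memory is undesirable, the shift can also be done in place by copying chunks toward the front and truncating at the end. A sketch (note that, unlike the copy-and-rename version above, this is not crash-safe, since it modifies the file directly):

```python
def delete_prefix(filename, nbytes, chunk_size=64 * 1024):
    """Remove the first `nbytes` bytes of `filename`, shifting the
    rest of the file to the start, without a temporary copy."""
    with open(filename, "r+b") as f:
        read_pos, write_pos = nbytes, 0
        while True:
            f.seek(read_pos)
            chunk = f.read(chunk_size)
            if not chunk:
                break
            f.seek(write_pos)
            f.write(chunk)           # safe: read_pos stays ahead of write_pos
            read_pos += len(chunk)
            write_pos += len(chunk)
        f.truncate(write_pos)        # drop the now-duplicated tail
```

Because the read position is always ahead of the write position by nbytes, no unread data is ever overwritten.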
I'm trying to use a subprocess to write the output to a data file, and then parse through it in order to check for some data in it. However, when I need to do the reading through the file's lines, I always get a blank file unless I close the file and then reopen it. While it works, I just don't like having to do this and I want to know why it happens. Is it an issue with subprocess, or another intricacy of the file mode?
dumpFile=open(filename,"w+")
dump = subprocess.Popen(dumpPars,stdout=dumpFile)
dump.wait()
At this point, if I try to read the file, I get nothing. However, it works fine by doing these commands after:
dumpFile.close()
dumpFile=open(filename,"r")
The with statement automatically closes the file after the block ends:
with open(filename, "w+") as dumpFile:
    dump = subprocess.Popen(dumpPars, stdout=dumpFile)
    dump.wait()

with open(filename, "r") as dumpFile:
    ...  # dumpFile reading code goes here
You probably need to seek back to the beginning of the file, otherwise the file pointer will be at the end of the file when you try to read it:
dumpFile.seek(0)
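A minimal sketch of that rewind pattern, with a plain write standing in for the subprocess output:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "dump.txt")
dumpFile = open(path, "w+")
dumpFile.write("line 1\nline 2\n")  # stands in for the subprocess output
dumpFile.flush()

dumpFile.seek(0)                    # rewind before reading back
contents = dumpFile.read()
dumpFile.close()
print(contents)
```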
However, if you don't need to actually store dumpFile, it's probably better to do something like:
dump = subprocess.Popen(dumpPars, stdout=subprocess.PIPE)
stdoutdata, _ = dump.communicate()  # now parse stdoutdata
unless your command produces large volumes of data.
If you want to read what you've already written, either close and reopen the file, or "rewind" it - seek to offset 0.
If you want to read the file while it is being written, you can do so (you don't even need to write it to disk); see this other question: Capture output from a program.
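A sketch of the no-file approach, streaming the child's output line by line (the child command here is a placeholder that just prints two lines):

```python
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "print('one'); print('two')"],
    stdout=subprocess.PIPE,
    text=True,
)
lines = [line.rstrip("\n") for line in proc.stdout]  # read as it arrives
proc.wait()
print(lines)  # -> ['one', 'two']
```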