I am trying to create a bot that tells me whether a text file has been modified or not; if it has been modified, it prints out the new text inside of it. This should run in an infinite loop (the bot sleeps until the text file is modified).
I have tried this code, but it doesn't work:
while True:
    tfile1 = open("most_recent_follower.txt", "r")
    SMRF1 = tfile1.readline()
    if tfile1.readline() == SMRF1:
        print(tfile1.readline())
But this is totally not working... I am new to Python, can anyone help me?
def read_file():
    with open("most_recent_follower.txt", "r") as f:
        SMRF1 = f.readlines()
    return SMRF1

initial = read_file()

while True:
    current = read_file()
    if initial != current:
        for line in current:
            if line not in initial:
                print(line)
        initial = current
Read the file in once to get its initial state. Then repeatedly re-read the file; when it changes, print out the lines that were added.
I don't know what bot you are referring to, but this code, and yours, will continuously read the file. It never seems to exit.
I might suggest copying the file to a safe duplicate location and using a diff tool to determine whether the current file differs from that copy, printing only the added lines. If you just want appended lines, you might use a utility like tail.
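Something along these lines with the standard-library difflib module is one way to sketch the diff idea; the backup filename and the one-second polling interval are just placeholder assumptions, not part of the question:

import difflib
import shutil
import time

SRC = "most_recent_follower.txt"   # the watched file, named in the question
COPY = "most_recent_follower.bak"  # hypothetical name for the safe duplicate

shutil.copy(SRC, COPY)
while True:
    with open(SRC) as new, open(COPY) as old:
        old_lines, new_lines = old.readlines(), new.readlines()
    # print only the lines that were added relative to the saved copy
    for line in difflib.unified_diff(old_lines, new_lines, lineterm=""):
        if line.startswith("+") and not line.startswith("+++"):
            print(line[1:], end="")
    shutil.copy(SRC, COPY)
    time.sleep(1)  # crude polling; see the pyinotify suggestion below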
You can also use a library like pyinotify to trigger only when the filesystem detects that the file has been modified.
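A rough pyinotify sketch (Linux only, and the library has to be installed separately) might look like this; the handler simply re-reads and prints the whole file whenever a modification event fires, so treat it as an outline rather than a drop-in solution:

import pyinotify

WATCHED = "most_recent_follower.txt"

class OnModify(pyinotify.ProcessEvent):
    def process_IN_MODIFY(self, event):
        # called by the notifier each time the watched file is modified
        with open(WATCHED) as f:
            print(f.read())

wm = pyinotify.WatchManager()
wm.add_watch(WATCHED, pyinotify.IN_MODIFY)
notifier = pyinotify.Notifier(wm, OnModify())
notifier.loop()  # blocks, dispatching events until interrupted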
This is the first result on Google for "check if a file is modified in python" so I'm gonna add an extra solution here.
If you want to know whether a file has been modified, in the sense that its contents have changed or it was merely touched, you can use os.stat:
import os

get_time = lambda f: os.stat(f).st_ctime

fn = 'file.name'
prev_time = get_time(fn)
while True:
    t = get_time(fn)
    if t != prev_time:
        do_stuff()  # placeholder for whatever should happen on change
    prev_time = t
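One caveat: on Unix, st_ctime is the time of the last metadata change rather than a pure content-modification time (it is the creation time only on Windows), so if you only care about the contents changing, comparing st_mtime may be the better choice.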
I am practicing file reading stuff in python. I got a text file and I want to run a program which continuously reads this text file and appends the newly written lines to a list. To exit the program and print out the resulting list the user should press "enter".
The code I have written so far looks like this:
import sys, select, os

data = []
i = 0

while True:
    os.system('cls' if os.name == 'nt' else 'clear')
    with open('test.txt', 'r') as f:
        for line in f:
            data.append(int(line))
    print(data)
    if sys.stdin in select.select([sys.stdin], [], [], 0)[0]:
        line_ = input()
        break
So to break out of the while loop 'enter' should be pressed. To be fair I just copy pasted the solution to do this from here: Exiting while loop by pressing enter without blocking. How can I improve this method?
But this code just appends all lines to my list again and again.
So, say my text files contains the lines:
1
2
3
So my list is going to look like data = [1,2,3,1,2,3,1,2,3...] and have a certain length until I press enter. When I add a line (e.g. 4) it will go data = [1,2,3,1,2,3,1,2,3,1,2,3,4,1,2,3,4...].
So I am looking for some kind of if statement before my append command so that only the newly written line get appended. But I can't think of something easy.
I already got some tips, i.e.
Continuously checking the file size and only reading the part between old and new size.
Keeping track of line number and skipping to line that is not appended in next iteration.
At the moment, I can't think of a way to do this. I tried fiddling around with enumerate(f) and itertools.islice but can't make it work. I would appreciate some help, as I have not yet adopted the way programmers think.
Store the file position between iterations. This makes it possible to fast-forward efficiently past the content that was already read when the file is opened again:
data = []
file_position = 0

while True:
    with open('test.txt', 'r') as f:
        f.seek(file_position)  # fast forward beyond content read previously
        for line in f:
            data.append(int(line))
        file_position = f.tell()  # store position at which to resume
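The enter-to-exit check from the question (the select.select call on sys.stdin) is independent of this change, so it can go back in unchanged at the end of the loop body.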
I could get it to work on Windows. First of all, exiting the while loop and continuously reading the file are two different questions. I assume that exiting the while loop is not the main problem, and because your select.select() statement doesn't work this way on Windows, I created an exit from the loop with a try-except clause that triggers on Ctrl-c. (It's just one way of doing it.)
The second part of your question is how to continuously read the file. Not by reopening it again and again within the while loop; open it once before the while loop.
Then, when the file is changed, either a valid or an invalid line may be read. I suppose this happens because the iteration over f can sometimes run before the file has been completely written (I'm not quite sure about that). Anyway, it is easy to check the line that was read; I again used a try-except clause, which catches the error if int(line) fails.
Code:
import os

data = []

with open('text.txt', 'r') as f:
    try:
        while True:
            os.system('cls' if os.name == 'nt' else 'clear')
            for line in f:
                try:
                    data.append(int(line))
                except ValueError:  # skip partially written or non-numeric lines
                    pass
            print(data)
    except KeyboardInterrupt:
        print('Quit loop')
        print(data)
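One small refinement, not part of the original answer: a short time.sleep at the end of the while body would keep this from busy-looping and constantly clearing the screen while nothing new arrives.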
I am trying to print some of the results of my algorithm (score) to a .txt file so I have that data for further analysis. Here, the algorithm should create the file and then open it to write the number down; afterwards I close it again.
My problem is that I can't even find the file. If I create one on my own and only try to write the number, that doesn't work either.
This is for the analysis of Reinforcement Learning for a robot. The scores represent Q-values and are important for further analysis. Score is here a random number.
if __name__ == '__main__':
    open('try.txt', 'w+').close()
    for e in range(agent.load_episode + 1, EPISODES):
        ...
        for t in range(agent.episode_step):
            ...
            if done:
                ...
                saveFile = open('try.txt', 'w')
                saveFile.write(str(score))
                saveFile.close()
In the first part I create a new file called try.txt (I only create the file once). After that, I open the file, write something, and close it again. When the next Q-value is calculated, the file is opened again.
Should the file contain only the last calculated value, all the values (possibly each on a new line) from a single run, or even values across separate runs? Nevertheless, this slightly modified snippet might be what you are looking for:
if __name__ == '__main__':
    with open('try.txt', 'w') as saveFile:  # change to 'a' if you want the results to be kept between runs
        for e in range(agent.load_episode + 1, EPISODES):
            ...
            for t in range(agent.episode_step):
                ...
                if done:
                    ...
                    # saveFile.seek(0); saveFile.truncate()  # uncommenting this keeps only the latest value
                    saveFile.write(str(score) + '\n')  # write each result on a new line
                    saveFile.flush()  # makes the results readable from the file as soon as they are written
In Python, with is the preferred way of opening files, as it takes care of closing them at the right moment. When a file is opened in 'w' mode, the file position is placed at the beginning, and any data the file already held is erased.
The 'a' mode appends to the file. You may want to take a look at this.
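A tiny illustration of the difference, using a throwaway file name:

with open('demo.txt', 'w') as f:
    f.write('first\n')    # file contains: first
with open('demo.txt', 'a') as f:
    f.write('second\n')   # file contains: first, second
with open('demo.txt', 'w') as f:
    f.write('third\n')    # 'w' truncated the file; it now contains only: third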
Now, I believe you wanted to open and close the file over and over so as to have the data accessible as soon as an iteration is finished. That is what saveFile.flush() is for. Please let me know if this helps you!
To better control where the file gets created, make use of the os module:
import os
directory = os.path.dirname(os.path.abspath(__file__))
file_path = os.path.join(directory, 'try.txt')
# print(file_path)
with open(file_path, 'w') as saveFile:
Try changing saveFile = open('try.txt', 'w') to with open('try.txt', 'a+') as saveFile:
I want to read a csv file in a manner similar to tail -f, i.e. like reading an error log file.
I can perform this operation on a text file with this code:
while 1:
    where = self.file.tell()
    line = self.file.readline()
    if not line:
        print("No line waiting, waiting for one second")
        time.sleep(1)
        self.file.seek(where)
    if (re.search('[a-zA-Z]', line) == False):
        continue
    else:
        response = self.naturalLanguageProcessing(line)
        if (response is not None):
            response["id"] = self.id
            self.id += 1
            response["tweet"] = line
            self.saveResults(response)
        else:
            continue
How do I perform the same task for a csv file? I have gone through a link which can give me the last 8 rows, but that is not what I require. The csv file is being updated simultaneously, and I need to get the newly appended rows.
Connecting A File Tailer To A csv.reader
In order to plug your code that looks for content newly appended to a file into a csv.reader, you need to put it into the form of an iterator.
I'm not intending to showcase correct code, but specifically to show how to adapt your existing code into this form, without making assertions about its correctness. In particular, the sleep() would be better replaced with a mechanism such as inotify to let the operating system assertively inform you when the file has changed, and the seek() and tell() would be better replaced with storing partial lines in memory rather than backing up and rereading them from the beginning over and over.
import csv
import time

class FileTailer(object):
    def __init__(self, file, delay=0.1):
        self.file = file
        self.delay = delay

    def __iter__(self):
        while True:
            where = self.file.tell()
            line = self.file.readline()
            if line and line.endswith('\n'):  # only emit full lines
                yield line
            else:                             # for a partial line, pause and back up
                time.sleep(self.delay)        # ...not actually a recommended approach.
                self.file.seek(where)

csv_reader = csv.reader(FileTailer(open('myfile.csv')))
for row in csv_reader:
    print("Read row: %r" % (row,))
If you create an empty myfile.csv, start python csvtailer.py, and then echo "first,line" >>myfile.csv from a different window, you'll see the output of Read row: ['first', 'line'] immediately appear.
Finding A Correct File Tailer In Python
For a correctly-implemented iterator that waits for new lines to be available, consider referring to one of the existing StackOverflow questions on the topic:
How to implement a pythonic equivalent of tail -F?
Reading infinite stream - tail
Reading updated files on the fly in Python
I am opening a text file which, once created, is constantly being written to, and I am printing any new lines to a console, as I don't want to reprint the whole text file each time. I check whether the file has grown in size; if it has, I print just the new lines. This mostly works, but occasionally it gets confused about where the next new line is, and new lines appear a few lines up, mixed in with the old lines.
Is there a better way to do this? Below is my current code.
infile = "Null"

while not os.path.exists(self.logPath):
    time.sleep(.1)

if os.path.isfile(self.logPath):
    infile = codecs.open(self.logPath, encoding='utf8')
else:
    raise ValueError("%s isn't a file!" % file_path)

lastSize = 0
lastLineIndex = 0
while True:
    wx.Yield()
    fileSize = os.path.getsize(self.logPath)
    if fileSize > lastSize:
        lines = infile.readlines()
        newLines = 0
        for line in lines[lastLineIndex:]:
            newLines += 1
            self.running_log.WriteText(line)
            lastLineIndex += newLines
            if "DBG-X: Returning 1" in line:
                self.subject = "FAILED! - "
                self.sendEmail(self)
                break
            if "DBG-X: Returning 0" in line:
                self.subject = "PASSED! - "
                self.sendEmail(self)
                break
        fileSize1 = fileSize
        infile.flush()
        infile.seek(0)

infile.close()
Also my application freezes whilst waiting for the text file to be created, as it takes a couple of seconds to appear, which isn't great.
Cheers.
This solution could help. You'd also have to do a bit of waiting until the file appears, using os.path.isfile and time.sleep.
Maybe you could:
open the file each time you need to read from it,
use lastSize as the argument to seek, to jump directly to where you stopped at the last read.
Additional comment: I don't know if you need some protection, but I think you should not bother to test whether the given filename is a file or not; just open it in a try...except block and catch any problems.
As for the freezing of your application, you may want to use some kind of threading: one thread, your main one, handles the GUI, and a second one waits for the file to be created. Once the file is created, the second thread sends the data to be displayed to the GUI thread.
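A minimal sketch of that threading idea, assuming a wxPython GUI as in the question (self.logPath and self.running_log are taken from the question; everything else is illustrative): the worker thread waits for the file, tails it, and hands each new line to the GUI thread with wx.CallAfter.

import os
import threading
import time
import wx

def tail_worker(self):
    # runs in a background thread so the GUI never blocks
    while not os.path.exists(self.logPath):
        time.sleep(0.1)                 # wait for the log file to appear
    with open(self.logPath, encoding='utf8') as f:
        while True:
            line = f.readline()
            if not line:                # nothing new yet
                time.sleep(0.2)
                continue
            # marshal the update back onto the GUI thread
            wx.CallAfter(self.running_log.WriteText, line)

# started from the GUI thread, e.g. in the frame's __init__:
# threading.Thread(target=tail_worker, args=(self,), daemon=True).start()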
I have a Django app that opens a file, continuously reads it, and at the same time writes data to a Postgres database. My issue is that whenever I open a file,
file = open(filename, 'r')
I am unable to also create new things in the database,
Message.objects.create_message(sys, msg)
That should create a database entry with two strings. However, nothing seems to happen and I am presented with no errors :( If I decide to close the file, file.close(), before I write to the database everything is fine. My problem is that I need that file open to create my objects. Does anyone have a solution for this? Thanks.
EDIT
Here's some more of my code. Basically I have the following snippet, which follows the end of a file and writes to the database as information comes in.
file.seek(0, 2)
while True:
    line = file.readline()
    if not line:
        time.sleep(1)
        continue
    Message.objects.create_message(sys, line)
EDIT 2
Got this to work finally but I'm not sure why. I'd love to understand why this worked:
str1ng = line[0:len(line)-1]
Message.objects.create_message(sys, str1ng)
Somehow there is a difference between that string and the string gathered from file.readline().
Any ideas?
try this:
file = open(filename, 'r')
fileContents = file.read()
file.close()
Have you tried linecache? Something like this might work (not tested).
import linecache

i = 1          # linecache line numbers start at 1, not 0
go = True
file = ...     # path of the file to read

while go:
    out = linecache.getline(file, i)
    # ...process out...
    i = i + 1
    if i % 100 == 0:
        # check for cache update every 100 lines
        linecache.checkcache(file)
    if some_eof_condition:  # stop condition left unspecified in the original sketch
        go = False

linecache.clearcache()
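One design note: linecache reads and caches the entire file in memory on the first getline call, so for a large, continuously growing file this can get expensive; the readline-plus-seek/tell approaches in the earlier answers avoid that.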