Python file handling freeze

I am using a Python script to transfer the contents of three files to a different three files. The original files are data from three thermometers connected to a Raspberry Pi running Raspbian. All the script is supposed to do is take the contents of the files and move them so that another program (ComScript) can read and parse them.
My problem is that if one or more of the thermometers is disconnected before the script starts, it freezes. It doesn't freeze if I disconnect a thermometer while the script is running.
Here is the code:
import time

a = 1
while a == 1:
    try:
        tfile = open("/sys/bus/w1/devices/28-000004d2ca5e/w1_slave")
        text = tfile.read()
        tfile.close()
        temperature = text
        tfile2 = open("/sys/bus/w1/devices/28-000004d2fb20/w1_slave")
        text2 = tfile2.read()
        tfile2.close()
        temperature2 = text2
        tfile3 = open("/sys/bus/w1/devices/28-000004d30568/w1_slave")
        text3 = tfile3.read()
        tfile3.close()
        temperature3 = text3
        textfile = open("/home/pi/ComScriptPi/profiles/Temperature_parse/w1_slave1", "w")
        textfile2 = open("/home/pi/ComScriptPi/profiles/Temperature_parse/w1_slave2", "w")
        textfile3 = open("/home/pi/ComScriptPi/profiles/Temperature_parse/w1_slave3", "w")
        temperature = str(temperature)
        temperature2 = str(temperature2)
        temperature3 = str(temperature3)
        textfile.write(temperature)
        textfile2.write(temperature2)
        textfile3.write(temperature3)
        textfile.close()
        textfile2.close()
        textfile3.close()
        print temperature
        print temperature2
        print temperature3
        time.sleep(3)
    except:
        pass
I added the except/pass because I need it to keep running even if it gets bad values. When one of the thermometers is disconnected, the file Python is trying to read is blank, but still there.

Remove the blanket except.
Your script is not freezing; rather, any error it hits is being silently ignored in an endless loop. Because you use a blanket except: you catch all exceptions, including KeyboardInterrupt.
At the very least log the exception, and catch only Exception:
except Exception:
    import logging
    logging.exception('Oops: error occurred')
KeyboardInterrupt is a subclass of BaseException, not Exception, so it won't be caught by this except handler.
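A quick way to convince yourself (the ValueError below is a hypothetical stand-in for a bad sensor read): the handler swallows ordinary errors but lets Ctrl-C through.
import logging
import time

while True:
    try:
        time.sleep(1)
        raise ValueError('simulated bad sensor read')  # hypothetical error
    except Exception:
        # ValueError is caught and logged, and the loop keeps running...
        logging.exception('Oops: error occurred')
    # ...but KeyboardInterrupt derives from BaseException, so Ctrl-C
    # still propagates and stops the script.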
Take a look at the shutil module for copying files; you're doing way too much work:
import time
import shutil
import os.path

paths = ('28-000004d2ca5e', '28-000004d2fb20', '28-000004d30568')

while True:
    for i, name in enumerate(paths, 1):
        src = os.path.join('/sys/bus/w1/devices', name, 'w1_slave')
        dst = '/home/pi/ComScriptPi/profiles/Temperature_parse/w1_slave{}'.format(i)
        try:
            shutil.copyfile(src, dst)
        except EnvironmentError:
            import logging
            logging.exception('Oops: error occurred')
    time.sleep(3)
Handling files should only ever raise EnvironmentError or its subclasses; there is no need to catch everything here.

The open of the unplugged device is most likely blocking, because the device driver won't complete the open if the device is not present.
You'll need to use os.open, which is the equivalent of the Unix open system call, specify the O_NONBLOCK flag, and check the return code. You can then use os.fdopen to turn the return value of os.open into a normal Python file object.
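A rough sketch of that idea (untested against real 1-Wire hardware; the device path is the first one from the question):
import os

path = "/sys/bus/w1/devices/28-000004d2ca5e/w1_slave"
try:
    # O_NONBLOCK makes the open fail fast instead of hanging on a missing device
    fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
except OSError as err:
    print('Could not open %s: %s' % (path, err))
else:
    tfile = os.fdopen(fd)  # wrap the raw descriptor in a normal file object
    text = tfile.read()
    tfile.close()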

Related

Python Log File is not created unless basicConfig is called on top before any functions

I have a script that processes CSVs and loads them into a database. My intern mentor wanted us to use a log file to capture what's going on, and he wanted it to be flexible so one can use a config.ini file to edit where the log file is created. As a result I did just that, using a config file with key-value pairs in a dict that I can extract the path to the log file from. These are excerpts from my code where the log file is created and used:
dirconfig_file = r"C:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\venv\odfs_tester_history_dirs.ini"
start_time = datetime.now()

def process_dirconfig_file(config_file_from_sysarg):
    try:
        if Path.is_file(dirconfig_file_Pobj):
            parseddict = {}
            configsects_set = set()
            for sect in config.sections():
                configsects_set.add(sect)
                for k, v in config.items(sect):
                    # print('{} = {}'.format(k, v))
                    parseddict[k] = v
            print(parseddict)
            try:
                if ("log_dir" not in parseddict or parseddict["log_dir"] == "" or "log_dir" not in configsects_set):
                    raise Exception(f"Error: Your config file is missing 'logfile path' or a properly formatted [log_file] section for this script to run. Please edit the config file to include a logfile path to capture errors")
    except Exception as e:
        #raise Exception(e)
        logging.exception(e)
        print(e)

parse_dict = process_dirconfig_file(dirconfig_file)
logfilepath = parse_dict["log_dir"]
log_file_name = start_time.strftime(logfilepath)
print(log_file_name)

logging.basicConfig(
    filename=log_file_name,
    level=logging.DEBUG,
    format='[Probe Data Quality] %(asctime)s - %(name)s %(levelname)-7.7s %(message)s'
    # can you explain this Tenzin?
)

if __name__ == '__main__':
    try:
        startTime = datetime.now()
        db_instance = dbhandler(parse_dict["db_string"])
        odfs_tabletest_dict = db_instance['odfs_tester_history_files']
        odf_history_from_csv_to_dbtable(db_instance)
        #print("test exception")
        print(datetime.now() - startTime)
    except Exception as e:
        logging.exception(e)
        print(e)
if __name__ == '__main__':
try:
startTime = datetime.now()
db_instance = dbhandler(parse_dict["db_string"])
odfs_tabletest_dict = db_instance['odfs_tester_history_files']
odf_history_from_csv_to_dbtable(db_instance)
#print("test exception")
print(datetime.now() - startTime)
except Exception as e:
logging.exception(e)
print(e)
Doing this, no file is created. The script runs with no errors, but no log file is created. I've tried several things, including using a hardcoded log file name instead of reading it from the config file, but it didn't work.
The only thing that works is when the log file is created up top, before any method. Why is this?
When you are calling your process_dirconfig_file function, the logging configuration has not been set yet, so no file could have been created. The script executes top to bottom. It would be similar to doing something like this:
import sys
# default logging points to stdout/stderr kind of like this
my_logger = sys.stdout
my_logger.write("Something")
# Then you've pointed logging to a file
my_logger = open("some_file.log", 'w')
my_logger.write("Something else")
Only "Something else" would be written to our some_file.log, because my_logger pointed somewhere else beforehand.
Much the same is happening here. By default, the logging.<debug/info> functions do nothing, because logging won't do anything with them without additional configuration. logging.error, logging.warning, and logging.exception will always at least write to stderr out of the box.
Also, I don't think the inner try is valid Python; you need a matching except. And I wouldn't just print an exception raised by that function; I'd probably re-raise and have the program crash:
def process_dirconfig_file(config_file_from_sysarg):
    try:
        # Don't use logging.<anything> yet
        ~snip~
    except Exception as e:
        # Just raise or don't use try/except at all until
        # you have a better idea of what you want to do in this circumstance
        raise
Especially since you are trying to use the logger while validating that its configuration is correct.
The fix? Don't use the logger until after you've determined it's ready.
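A minimal sketch of that ordering (the file name probe.log is arbitrary): configure the root logger first, and only then let anything log.
import logging

# Configure the root logger BEFORE any logging call...
logging.basicConfig(filename='probe.log', level=logging.DEBUG)

# ...so that every later call is routed to the file.
logging.debug('this line ends up in probe.log')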

Python else issues making an FTP program

I am having an issue with the else statement of this program... I have checked my spacing and it seems to be correct, but I keep getting a syntax error on the else statement. The program creates a file, then attempts to upload it to an FTP server; if the upload fails, it should not say anything to the user and just continue, trying again when the program loops. Any help you could provide would be greatly appreciated.
#IMPORTS
import ConfigParser
import os
import random
import ftplib
from ftplib import FTP
#LOOP PART 1
from time import sleep

while True:
    #READ THE CONFIG FILE SETUP.INI
    config = ConfigParser.ConfigParser()
    config.readfp(open(r'setup.ini'))
    path = config.get('config', 'path')
    name = config.get('config', 'name')
    #CREATE THE KEYFILE
    filepath = os.path.join((path), (name))
    if not os.path.exists((path)):
        os.makedirs((path))
    file = open(filepath, 'w')
    file.write('text here')
    file.close()
    #Create Full Path
    fullpath = path + name
    #Random Sleep to Accommodate FTP Server
    sleeptimer = random.randrange(1, 30+1)
    sleep((sleeptimer))
    #Upload File to FTP Server
    try:
        host = '0.0.0.0'
        port = 3700
        ftp = FTP()
        ftp.connect(host, port)
        ftp.login('user', 'pass')
        file = open(fullpath, "rb")
        ftp.cwd('/')
        ftp.storbinary('STOR ' + name, file)
        ftp.quit()
        file.close()
    else:
        print 'Something is Wrong'
    #LOOP PART 2
    sleep(180.00)
else is valid as part of an exception block, but it is only run if an exception is not raised, and there must be an except defined before it.
(edit) Most people skip the else clause and just write code after exiting (dedenting) from the try/except clauses.
The quick tutorial is:
try:
    # some statements that are executed until an exception is raised
    ...
except SomeExceptionType, e:
    # if some type of exception is raised
    ...
except SomeOtherExceptionType, e:
    # if another type of exception is raised
    ...
except Exception, e:
    # if *any* exception is raised - but this is usually evil because it hides
    # programming errors as well as the errors you want to handle. You can get
    # a feel for what went wrong with:
    traceback.print_exc()
    ...
else:
    # if no exception is raised
    ...
finally:
    # run regardless of whether an exception was raised
    ...
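Applied to the upload block from the question, a sketch of the corrected ordering (host, port, credentials, and fullpath are the question's own placeholders); the except must come before the else:
try:
    ftp = FTP()
    ftp.connect(host, port)
    ftp.login('user', 'pass')
    file = open(fullpath, "rb")
    ftp.cwd('/')
    ftp.storbinary('STOR ' + name, file)
    ftp.quit()
    file.close()
except ftplib.all_errors:
    # runs if any ftplib-related exception was raised
    print 'Something is Wrong'
else:
    # runs only if no exception was raised
    print 'Upload Successful'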

Monitor if a file is being requested/read from an external application [duplicate]

I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it.
What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the win32file.FindNextChangeNotification function but have no idea how to ask it to watch a specific file.
If anyone's done anything like this I'd be really grateful to hear how...
[Edit] I should have mentioned that I was after a solution that doesn't require polling.
[Edit] Curses! It seems this doesn't work over a mapped network drive. I'm guessing windows doesn't 'hear' any updates to the file the way it does on a local disk.
Did you try using Watchdog?
Python API library and shell utilities to monitor file system events.
Directory monitoring made easy with:
- A cross-platform API.
- A shell tool to run commands in response to directory changes.
Get started quickly with a simple example in Quickstart...
If polling is good enough for you, I'd just watch if the "modified time" file stat changes. To read it:
os.stat(filename).st_mtime
(Also note that the Windows native change event solution does not work in all circumstances, e.g. on network drives.)
import os

class Monkey(object):
    def __init__(self):
        self._cached_stamp = 0
        self.filename = '/path/to/file'

    def ook(self):
        stamp = os.stat(self.filename).st_mtime
        if stamp != self._cached_stamp:
            self._cached_stamp = stamp
            # File has changed, so do something...
If you want a multiplatform solution, then check QFileSystemWatcher.
Here is some example code (not sanitized):
from PyQt4 import QtCore

@QtCore.pyqtSlot(str)
def directory_changed(path):
    print('Directory Changed!!!')

@QtCore.pyqtSlot(str)
def file_changed(path):
    print('File Changed!!!')

fs_watcher = QtCore.QFileSystemWatcher(['/path/to/files_1', '/path/to/files_2', '/path/to/files_3'])
fs_watcher.connect(fs_watcher, QtCore.SIGNAL('directoryChanged(QString)'), directory_changed)
fs_watcher.connect(fs_watcher, QtCore.SIGNAL('fileChanged(QString)'), file_changed)
This will not work on Windows (maybe with Cygwin?), but for Unix users, you can use the fcntl system call. Here is an example in Python. It's mostly the same code if you need to write it in C (same function names).
import time
import fcntl
import os
import signal

FNAME = "/HOME/TOTO/FILETOWATCH"

def handler(signum, frame):
    print "File %s modified" % (FNAME,)

signal.signal(signal.SIGIO, handler)
fd = os.open(FNAME, os.O_RDONLY)
fcntl.fcntl(fd, fcntl.F_SETSIG, 0)
fcntl.fcntl(fd, fcntl.F_NOTIFY,
            fcntl.DN_MODIFY | fcntl.DN_CREATE | fcntl.DN_MULTISHOT)

while True:
    time.sleep(10000)
Check out pyinotify.
inotify replaces dnotify (from an earlier answer) on newer Linux kernels and allows file-level rather than directory-level monitoring.
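A small sketch of file-level watching with pyinotify (the watched path is arbitrary):
import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_MODIFY(self, event):
        print('Modified: %s' % event.pathname)

wm = pyinotify.WatchManager()
notifier = pyinotify.Notifier(wm, Handler())
wm.add_watch('/path/to/file', pyinotify.IN_MODIFY)
notifier.loop()  # blocks, dispatching events to the handler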
For watching a single file with polling, and minimal dependencies, here is a fully fleshed-out example, based on the answer from Deestan (above):
import os
import sys
import time

class Watcher(object):
    running = True
    refresh_delay_secs = 1

    # Constructor
    def __init__(self, watch_file, call_func_on_change=None, *args, **kwargs):
        self._cached_stamp = 0
        self.filename = watch_file
        self.call_func_on_change = call_func_on_change
        self.args = args
        self.kwargs = kwargs

    # Look for changes
    def look(self):
        stamp = os.stat(self.filename).st_mtime
        if stamp != self._cached_stamp:
            self._cached_stamp = stamp
            # File has changed, so do something...
            print('File changed')
            if self.call_func_on_change is not None:
                self.call_func_on_change(*self.args, **self.kwargs)

    # Keep watching in a loop
    def watch(self):
        while self.running:
            try:
                # Look for changes
                time.sleep(self.refresh_delay_secs)
                self.look()
            except KeyboardInterrupt:
                print('\nDone')
                break
            except FileNotFoundError:
                # Action on file not found
                pass
            except:
                print('Unhandled error: %s' % sys.exc_info()[0])

# Call this function each time a change happens
def custom_action(text):
    print(text)

watch_file = 'my_file.txt'

# watcher = Watcher(watch_file)  # simple
watcher = Watcher(watch_file, custom_action, text='yes, changed')  # also call custom action function
watcher.watch()  # start the watch going
Well after a bit of hacking of Tim Golden's script, I have the following which seems to work quite well:
import os
import win32file
import win32con

path_to_watch = "."  # look at the current directory
file_to_watch = "test.txt"  # look for changes to a file called test.txt

def ProcessNewData(newData):
    print "Text added: %s" % newData

# Set up the bits we'll need for output
ACTIONS = {
    1: "Created",
    2: "Deleted",
    3: "Updated",
    4: "Renamed from something",
    5: "Renamed to something"
}

FILE_LIST_DIRECTORY = 0x0001
hDir = win32file.CreateFile(
    path_to_watch,
    FILE_LIST_DIRECTORY,
    win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE,
    None,
    win32con.OPEN_EXISTING,
    win32con.FILE_FLAG_BACKUP_SEMANTICS,
    None
)

# Open the file we're interested in
a = open(file_to_watch, "r")
# Throw away any existing log data
a.read()

# Wait for new data and call ProcessNewData for each new chunk that's written
while 1:
    # Wait for a change to occur
    results = win32file.ReadDirectoryChangesW(
        hDir,
        1024,
        False,
        win32con.FILE_NOTIFY_CHANGE_LAST_WRITE,
        None,
        None
    )
    # For each change, check to see if it's updating the file we're interested in
    for action, file in results:
        full_filename = os.path.join(path_to_watch, file)
        # print file, ACTIONS.get(action, "Unknown")
        if file == file_to_watch:
            newText = a.read()
            if newText != "":
                ProcessNewData(newText)
It could probably do with a load more error checking, but for simply watching a log file and doing some processing on it before spitting it out to the screen, this works well.
Thanks everyone for your input - great stuff!
Check my answer to a similar question. You could try the same loop in Python. This page suggests:
import time

while 1:
    where = file.tell()
    line = file.readline()
    if not line:
        time.sleep(1)
        file.seek(where)
    else:
        print line,  # already has newline
Also see the question tail() a file with Python.
This is another modification of Tim Golden's script that runs on Unix types and adds a simple watcher for file modification by using a dict (file => mtime).
usage: whateverName.py path_to_dir_to_watch
#!/usr/bin/env python

import os, sys, time

def files_to_timestamp(path):
    files = [os.path.join(path, f) for f in os.listdir(path)]
    return dict([(f, os.path.getmtime(f)) for f in files])

if __name__ == "__main__":
    path_to_watch = sys.argv[1]
    print('Watching {}..'.format(path_to_watch))

    before = files_to_timestamp(path_to_watch)

    while 1:
        time.sleep(2)
        after = files_to_timestamp(path_to_watch)

        added = [f for f in after.keys() if not f in before.keys()]
        removed = [f for f in before.keys() if not f in after.keys()]
        modified = []

        for f in before.keys():
            if not f in removed:
                if os.path.getmtime(f) != before.get(f):
                    modified.append(f)

        if added: print('Added: {}'.format(', '.join(added)))
        if removed: print('Removed: {}'.format(', '.join(removed)))
        if modified: print('Modified: {}'.format(', '.join(modified)))

        before = after
Here is a simplified version of Kender's code that appears to do the same trick and does not import the entire file:
# Check file for new data.
import time

f = open(r'c:\temp\test.txt', 'r')

while True:
    line = f.readline()
    if not line:
        time.sleep(1)
        print 'Nothing New'
    else:
        print 'Call Function: ', line
Well, since you are using Python, you can just open a file and keep reading lines from it.
f = open('file.log')
If the line read is not empty, you process it.
line = f.readline()
if line:
    pass  # Do what you want with the line
You may be missing that it is OK to keep calling readline at EOF; it will just keep returning an empty string. And when something is appended to the log file, reading will continue from where it stopped, as you need.
If you are looking for a solution that uses events, or a particular library, please specify this in your question. Otherwise, I think this solution is just fine.
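Put together, a self-contained version of that idea (file.log is an arbitrary name):
import time

f = open('file.log')
while True:
    line = f.readline()
    if line:
        # process the line; readline() keeps the trailing newline
        print(line.rstrip('\n'))
    else:
        # at EOF readline() returns '', so wait for more data and retry
        time.sleep(1)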
The simplest solution for me is using watchdog's watchmedo tool.
From https://pypi.python.org/pypi/watchdog I now have a process that looks up the SQL files in a directory and executes them if necessary.
watchmedo shell-command \
    --patterns="*.sql" \
    --recursive \
    --command='~/Desktop/load_files_into_mysql_database.sh' \
    .
As you can see in Tim Golden's article, pointed by Horst Gutmann, WIN32 is relatively complex and watches directories, not a single file.
I'd like to suggest you look into IronPython, which is a .NET Python implementation. With IronPython you can use all the .NET functionality, including System.IO.FileSystemWatcher, which handles single files with a simple event interface.
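A hedged sketch of what that looks like under IronPython (the directory and file name are hypothetical):
import clr
clr.AddReference('System')
from System.IO import FileSystemWatcher

def on_changed(sender, event):
    print('Changed: %s' % event.FullPath)

watcher = FileSystemWatcher(r'C:\logs')  # directory to watch (hypothetical)
watcher.Filter = 'test.txt'              # restrict events to a single file
watcher.Changed += on_changed            # the simple event interface
watcher.EnableRaisingEvents = True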
This is an example of checking a file for changes. It may not be the best way of doing it, but it sure is a short way.
It's a handy tool for restarting an application when changes have been made to the source. I made this when playing with pygame so I could see effects take place immediately after a file save.
When used in pygame, make sure the stuff in the while loop is placed in your game loop (aka update or whatever). Otherwise your application will get stuck in an infinite loop and you will not see your game updating.
import os

file_size_stored = os.stat('neuron.py').st_size

while True:
    try:
        file_size_current = os.stat('neuron.py').st_size
        if file_size_stored != file_size_current:
            restart_program()
    except:
        pass
In case you wanted the restart code which I found on the web. Here it is. (Not relevant to the question, though it could come in handy)
import os
import sys

def restart_program():  # restart application
    python = sys.executable
    os.execl(python, python, *sys.argv)
Have fun making electrons do what you want them to do.
Seems that no one has posted fswatch. It is a cross-platform file system watcher. Just install it, run it and follow the prompts.
I've used it with python and golang programs and it just works.
import os
import threading
import win32con
import win32file

ACTIONS = {
    1: "Created",
    2: "Deleted",
    3: "Updated",
    4: "Renamed from something",
    5: "Renamed to something"
}
FILE_LIST_DIRECTORY = 0x0001

class myThread(threading.Thread):
    def __init__(self, threadID, fileName, directory, origin):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.fileName = fileName
        self.daemon = True
        self.dir = directory
        self.originalFile = origin

    def run(self):
        startMonitor(self.fileName, self.dir, self.originalFile)

def startMonitor(fileMonitoring, dirPath, originalFile):
    hDir = win32file.CreateFile(
        dirPath,
        FILE_LIST_DIRECTORY,
        win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE,
        None,
        win32con.OPEN_EXISTING,
        win32con.FILE_FLAG_BACKUP_SEMANTICS,
        None
    )
    # Wait for new data and call ProcessNewData for each new chunk that's
    # written
    while 1:
        # Wait for a change to occur
        results = win32file.ReadDirectoryChangesW(
            hDir,
            1024,
            False,
            win32con.FILE_NOTIFY_CHANGE_LAST_WRITE,
            None,
            None
        )
        # For each change, check to see if it's updating the file we're
        # interested in
        for action, file_M in results:
            full_filename = os.path.join(dirPath, file_M)
            # print file_M, ACTIONS.get(action, "Unknown")
            if len(full_filename) == len(fileMonitoring) and action == 3:
                # copy to main file
                ...
Since I have it installed globally, my favorite approach is to use nodemon. If your source code is in src, and your entry point is src/app.py, then it's as easy as:
nodemon -w 'src/**' -e py,html --exec python src/app.py
... where -e py,html lets you control what file types to watch for changes.
Here's an example geared toward watching input files that write no more than one line per second but usually a lot less. The goal is to append the last line (most recent write) to the specified output file. I've copied this from one of my projects and just deleted all the irrelevant lines. You'll have to fill in or change the missing symbols.
from PyQt5.QtCore import QFileSystemWatcher, QSettings, QThread
from ui_main_window import Ui_MainWindow  # Qt Creator gen'd

class MainWindow(QMainWindow, Ui_MainWindow):
    def __init__(self, parent=None):
        QMainWindow.__init__(self, parent)
        Ui_MainWindow.__init__(self)
        self._fileWatcher = QFileSystemWatcher()
        self._fileWatcher.fileChanged.connect(self.fileChanged)

    def fileChanged(self, filepath):
        QThread.msleep(300)  # Reqd on some machines, give chance for write to complete
        # ^^ About to test this, may need more sophisticated solution
        with open(filepath) as file:
            lastLine = list(file)[-1]
        destPath = self._filemap[filepath]['dest file']
        with open(destPath, 'a') as out_file:  # a = append
            out_file.writelines([lastLine])
Of course, the encompassing QMainWindow class is not strictly required, i.e. you can use QFileSystemWatcher alone.
Just to put this out there since no one mentioned it: there's a Python module in the Standard Library named filecmp which has this cmp() function that compares two files.
Just make sure you don't do from filecmp import cmp to not overshadow the built-in cmp() function in Python 2.x. That's okay in Python 3.x, though, since there's no such built-in cmp() function anymore.
Anyway, this is what its use looks like:
import filecmp
filecmp.cmp(path_to_file_1, path_to_file_2, shallow=True)
The argument shallow defaults to True. If the argument's value is True, then only the metadata of the files are compared; however, if the argument's value is False, then the contents of the files are compared.
Maybe this information will be useful to someone.
watchfiles (https://github.com/samuelcolvin/watchfiles) is a Python API and CLI that uses the Notify (https://github.com/notify-rs/notify) library written in Rust.
The Rust implementation currently (2022-10-09) supports:
Linux / Android: inotify
macOS: FSEvents or kqueue, see features
Windows: ReadDirectoryChangesW
FreeBSD / NetBSD / OpenBSD / DragonflyBSD: kqueue
All platforms: polling
Binaries available on PyPI (https://pypi.org/project/watchfiles/) and conda-forge (https://github.com/conda-forge/watchfiles-feedstock).
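A minimal usage sketch (the watched path is arbitrary):
from watchfiles import watch

# watch() blocks and yields a set of (change_type, path) tuples per batch of changes
for changes in watch('/path/to/dir'):
    print(changes)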
You can also use a simple library called repyt, here is an example:
repyt ./app.py
Related to 4Oh4's solution above, a small change to watch a list of files:
import os
import sys
import time

class Watcher(object):
    running = True
    refresh_delay_secs = 1

    # Constructor
    def __init__(self, watch_files, call_func_on_change=None, *args, **kwargs):
        self._cached_stamp = 0
        self._cached_stamp_files = {}
        self.filenames = watch_files
        self.call_func_on_change = call_func_on_change
        self.args = args
        self.kwargs = kwargs

    # Look for changes
    def look(self):
        for file in self.filenames:
            stamp = os.stat(file).st_mtime
            if not file in self._cached_stamp_files:
                self._cached_stamp_files[file] = 0
            if stamp != self._cached_stamp_files[file]:
                self._cached_stamp_files[file] = stamp
                # File has changed, so do something...
                file_to_read = open(file, 'r')
                value = file_to_read.read()
                print("value from file", value)
                file_to_read.seek(0)
                if self.call_func_on_change is not None:
                    self.call_func_on_change(*self.args, **self.kwargs)

    # Keep watching in a loop
    def watch(self):
        while self.running:
            try:
                # Look for changes
                time.sleep(self.refresh_delay_secs)
                self.look()
            except KeyboardInterrupt:
                print('\nDone')
                break
            except FileNotFoundError:
                # Action on file not found
                pass
            except Exception as e:
                print(e)
                print('Unhandled error: %s' % sys.exc_info()[0])

# Call this function each time a change happens
def custom_action(text):
    print(text)
    # pass

watch_files = ['/Users/mexekanez/my_file.txt', '/Users/mexekanez/my_file1.txt']

# watcher = Watcher(watch_file)  # simple

if __name__ == "__main__":
    watcher = Watcher(watch_files, custom_action, text='yes, changed')  # also call custom action function
    watcher.watch()  # start the watch going
The best and simplest solution is to use pygtail:
https://pypi.python.org/pypi/pygtail
from pygtail import Pygtail
import sys

while True:
    for line in Pygtail("some.log"):
        sys.stdout.write(line)
import inotify.adapters
from datetime import datetime

LOG_FILE = '/var/log/mysql/server_audit.log'

def main():
    start_time = datetime.now()
    while True:
        i = inotify.adapters.Inotify()
        i.add_watch(LOG_FILE)
        for event in i.event_gen(yield_nones=False):
            break
        del i
        with open(LOG_FILE, 'r') as f:
            for line in f:
                entry = line.split(',')
                entry_time = datetime.strptime(entry[0], '%Y%m%d %H:%M:%S')
                if entry_time > start_time:
                    start_time = entry_time
                    print(entry)

if __name__ == '__main__':
    main()
The easiest solution is to read the same file twice after an interval and compare the two reads. You could try something like this:
import time

# app_name is assumed to be defined elsewhere in the script
while True:
    # Capturing two instances of the file after a certain interval of time
    print("Looking for changes in " + app_name.capitalize() + " models.py\nPress 'CTRL + C' to stop the program")
    with open(app_name.capitalize() + '/filename', 'r+') as app_models_file:
        filename_content = app_models_file.read()
    time.sleep(5)
    with open(app_name.capitalize() + '/filename', 'r+') as app_models_file_1:
        filename_content_1 = app_models_file_1.read()
    # Comparing the two reads after the interval
    if filename_content == filename_content_1:
        pass
    else:
        print("You made a change in " + app_name.capitalize() + " filename.\n")
        cmd = str(input("Do something with the file?(y/n):"))
        if cmd == 'y':
            pass  # Do Something
        elif cmd == 'n':
            pass  # pass or do something
        else:
            print("Invalid input")
If you're using Windows, create this POLL.CMD file:
@echo off
:top
xcopy /m /y %1 %2 | find /v "File(s) copied"
timeout /T 1 > nul
goto :top
then you can type "poll dir1 dir2" and it will copy all the files from dir1 to dir2 and check for updates once per second.
The "find" is optional, just to make the console less noisy.
This is not recursive. Maybe you could make it recursive using /e on the xcopy.
I don't know of any Windows-specific function. You could try getting the MD5 hash of the file every second/minute/hour (depending on how fast you need it) and comparing it to the last hash. When it differs, you know the file has been changed and you can read out the newest lines.
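A minimal sketch of that approach (the file name and the one-second interval are arbitrary):
import hashlib
import time

def file_md5(path):
    # hash the whole file; fine for smallish log files
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

last_hash = file_md5('watched.log')
while True:
    time.sleep(1)
    current_hash = file_md5('watched.log')
    if current_hash != last_hash:
        last_hash = current_hash
        print('File changed; read out the newest lines here')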
I'd try something like this.
import time

try:
    f = open(filePath)
except IOError:
    print "No such file: %s" % filePath
    raw_input("Press Enter to close window")

try:
    lines = f.readlines()
    while True:
        line = f.readline()
        try:
            if not line:
                time.sleep(1)
            else:
                functionThatAnalisesTheLine(line)
        except Exception, e:
            # handle the exception somehow (for example, log the trace)
            # and raise the same exception again
            raw_input("Press Enter to close window")
            raise e
finally:
    f.close()
The loop checks whether there are new lines since the last time the file was read; if there are, each one is read and passed to the functionThatAnalisesTheLine function. If not, the script waits 1 second and retries.

cleaning up when using exceptions and files in python

I've been learning Python for a couple of days now and am struggling with its 'spirit'.
I'm coming from the C/C++/Java/Perl school, and I understand that Python is not C (at all); that's why I'm trying to understand the spirit to get the most out of it (and so far it's hard)...
My question is especially focused on exception handling and cleanup:
The code at the end of this post is meant to simulate a fairly common case of file opening/parsing where you need to close the file in case of an error...
Most samples I have seen use the 'else' clause of a try statement to close the file... which made sense to me until I realized that the error might be due to:
- the opening itself (in which case there is no need to close the never-opened file)
- the parsing (in which case the file needs to be closed)
The trap here is that if you use the 'else' clause of a try block, then the file never gets closed if the error happens during parsing!
On the other hand, using the 'finally' clause results in an extra necessary check, because the file_desc variable may not exist if the error happened during the open (see comments in the code below)...
This extra check is inefficient and ugly, because any reasonable program may contain hundreds of symbols and parsing the results of dir() is a pain in the ass... Not to mention the lack of readability of such a statement...
Most other languages allow for variable declarations, which could save the day here... but in Python, everything seems to be implicit...
Normally, one would just declare a file_desc variable, then use a try/catch block for every task... one for opening, one for parsing, and the last one for the close()... no need to nest them... but here I don't know a way to declare the variable... so I'm stuck right at the beginning of the problem!
So what is the spirit of Python here?
Split the opening/parsing into two different methods? How?
Use some kind of nested try/except clauses? How?
Maybe there is a way to declare the file_desc variable, and then there would be no need for the extra checking... is that at all possible? Desirable?
What about the close() statement? What if it raises an error?
Thanks for your hints... here is the sample code:
class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")

try:
    file_desc = open(file_name, 'r')
    # read the file...
    while True:
        current_line = file_desc.readline()
        if not current_line: break
        print current_line.rstrip("\n")
    # lets simulate some parsing error...
    raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
else:
    file_desc.close()
# finally:
#     if 'file_desc' in dir() and not file_desc.closed:
#         file_desc.close()

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
The easiest way to deal with this is to use the fact that file objects in Python 2.5+ are context managers. You can use the with statement to enter a context; the context manager's __exit__ method is automatically called when exiting this with scope. The file object's context management automatically closes the file then.
try:
    with file("hello.txt") as input_file:
        for line in input_file:
            if "hello" not in line:
                raise ValueError("Every line must contain 'hello'!")
except IOError:
    print "Damnit, couldn't open the file."
except:
    raise
else:
    print "Everything went fine!"
The open hello.txt handle will automatically be closed, and exceptions from within the with scope are propagated outside.
Just a note: you can always declare a variable, and then it would become something like this:
file_desc = None
try:
    file_desc = open(file_name, 'r')
except IOError, err:
    pass
finally:
    if file_desc:
        file_desc.close()
Of course, if you are using a newer version of Python, the construct using a context manager is way better; however, I wanted to point out how you can generically deal with exceptions and variable scope in Python.
As of Python 2.5, there's a with statement that simplifies some of what you're fighting with. Read more about it here. Here's a transformed version of your code:
class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")

with open(file_name, 'r') as file_desc:
    try:
        # read the file...
        while True:
            current_line = file_desc.readline()
            if not current_line: break
            print current_line.rstrip("\n")
        # lets simulate some parsing error...
        raise FormatError("oops... the file format is wrong...")
    except FormatError as format_error:
        print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
    except IOError as io_error:
        print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
OK, I'm an ass.
edit: and BTW, many thanks to those who already answered while I was posting this.
The code below does the trick.
You must create a nested block with the with statement to make sure the file is cleaned up:
class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")

try:
    #
    # THIS IS PYTHON'S SPIRIT... no else/finally
    #
    with open(file_name, 'r') as file_desc:
        # read the file...
        while True:
            current_line = file_desc.readline()
            if not current_line: break
            print current_line.rstrip("\n")
        raise FormatError("oops... the file format is wrong...")
        print "will never get here"
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
close() can, to my knowledge, never raise an error.
In fact, the file handle will be closed when garbage collected, so you don't have to do it explicitly in Python, although it's still good programming practice to do so.
