kivy logging; 'Too many logfile, remove them' - python

When using kivy logging like this:
from kivy.logger import Logger
from kivy.config import Config
Config.set('kivy', 'log_enable', 1)
Config.set('kivy', 'log_dir', '/home/dude/folder')
Config.set('kivy', 'log_level', 'debug')
Config.set('kivy', 'log_name', 'my_file.log')
Config.write()
Logger.debug('main:switching stuff on')
Logger.info('socket:send command to raspberry')
I always get the error:
[ERROR ] Error while activating FileHandler logger
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/kivy/logger.py", line 220, in emit
self._configure()
File "/usr/lib/python2.7/dist-packages/kivy/logger.py", line 171, in _configure
raise Exception('Too many logfile, remove them')
Exception: Too many logfile, remove them
... even after removing any file with this name.
What am I missing here?
I also get the error when running bigger programs which actually contain kivy widgets and apps.

Setting the correct settings in the /home/.kivy/config.ini file solved the problem. All existing log files in the configured directory have to be removed first: the Kivy logger does not append to an existing file, it just raises the error above.

This is because the Kivy Logger does not stick to a single file when dumping stdout into a text file (possibly because the handler is configured more than once, I'm not sure). Instead it will occasionally create another file with a number appended (replacing %_ in the configured name) and continue logging there. However, if for whatever reason it cannot insert this number, or the counter goes over 10,000, it raises the exception above and exits the app. Great if you want to avoid flooding, as the notes in the Kivy code indicate; not so great when you were not aware of Kivy's naming convention.
Kivy Logger -> https://kivy.org/doc/stable/_modules/kivy/logger.html
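For reference, a rough sketch (paraphrased, not the actual Kivy code) of the file-picking logic the linked source uses: the %_ placeholder in log_name is replaced by an incrementing counter until an unused name is found, and the handler gives up after roughly 10,000 attempts. A log_name without %_, as in the question, never produces a new candidate name, so the limit is hit as soon as the file already exists.

import os

def pick_logfile(log_dir, log_name='kivy_%_.txt', limit=10000):
    # Substitute an incrementing counter for the %_ placeholder until an
    # unused file name is found.
    for n in range(limit):
        candidate = os.path.join(log_dir, log_name.replace('%_', str(n)))
        if not os.path.exists(candidate):
            return candidate
    # A log_name without %_ never changes, so this is reached as soon as
    # the file exists -- the exception from the traceback above.
    raise Exception('Too many logfile, remove them')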

Related

Python logging fails with log file on network drive (Windows 10)

I want to log using python's logging module to a file on a network drive. My problem is that the logging fails at some random point giving me this error:
--- Logging error ---
Traceback (most recent call last):
File "c:\programme\anaconda3\lib\logging\__init__.py", line 1085, in emit
self.flush()
File "c:\programme\anaconda3\lib\logging\__init__.py", line 1065, in flush
self.stream.flush()
OSError: [Errno 22] Invalid argument
Call stack:
File "log_test.py", line 67, in <module>
logger_root.error('FLUSH!!!'+str(i))
Message: 'Minute:120'
Arguments: ()
--- Logging error ---
Traceback (most recent call last):
File "c:\programme\anaconda3\lib\logging\__init__.py", line 1085, in emit
self.flush()
File "c:\programme\anaconda3\lib\logging\__init__.py", line 1065, in flush
self.stream.flush()
OSError: [Errno 22] Invalid argument
Call stack:
File "log_test.py", line 67, in <module>
logger_root.error('FLUSH!!!'+str(i))
Message: 'FLUSH!!!120'
Arguments: ()
I am on a virtual machine with Windows 10 (version 1909) and I am using Python 3.8.3 and logging 0.5.1.2. The script runs in a virtual environment on a network drive, where the log files are also stored.
I am writing a script that automates some data quality control tasks, and I am not 100% sure where (network drive, local drive, etc.) the script will end up, so it should be able to log in every possible situation. The error does not appear at the same position/line in the script but randomly. Sometimes the program (~120 minutes in total) finishes without the error appearing at all.
What I tried so far:
I believe that the log file is closed at some point so that no new logging messages can be written to it. I wrote a simple script that basically only does logging, to check whether the problem is related to my original script or to the logging process itself. Since the "only-logs-script" also fails randomly when running on the network drive, but not when running on my local drive, I assume it is related to the connection to the network drive. I thought about keeping the whole log in memory and writing it to the file at the end, but the MemoryHandler will also open the file at the beginning of the script and therefore fail at some point.
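(Aside: a minimal sketch of that buffering idea using logging.FileHandler's delay=True flag, which defers opening the file until the first record is actually flushed to it, so a MemoryHandler in front of it keeps the network file closed for most of the run. Whether this avoids the flush errors on a network drive is an assumption, not something verified here.)

import logging
import logging.handlers

# delay=True defers opening the file until the first record is flushed to it.
file_handler = logging.FileHandler('logtest_buffered.log', mode='a', delay=True)
file_handler.setLevel(logging.INFO)

memoryhandler = logging.handlers.MemoryHandler(
    capacity=1024*100,          # flush after this many records...
    flushLevel=logging.ERROR,   # ...or as soon as an ERROR is logged
    target=file_handler,
    flushOnClose=True
)

logger_root = logging.getLogger()
logger_root.addHandler(memoryhandler)
logger_root.setLevel(logging.INFO)
logger_root.info('buffered in memory until the next flush')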
Here is my code for the "only-logs-script" (log_test.py):
import logging
import logging.handlers
import os
import datetime
import time
##################################################################
# setting up a logger to create a log file with information about this program
logfile_dir = 'logfiles_test'
CHECK_FOLDER = os.path.isdir(logfile_dir)
# if folder doesn't exist, create it
if not CHECK_FOLDER:
    os.makedirs(logfile_dir)
    print("created folder : ", logfile_dir)
log_path = '.\\'+logfile_dir+'\\'
Current_Date = datetime.datetime.today().strftime ('%Y-%m-%d_')
log_filename = log_path+Current_Date+'logtest.log'
print(log_filename)
# Create a root logger
logger_root = logging.getLogger()
# Create handlers
f1_handler = logging.FileHandler(log_filename, mode='w+')
f2_handler = logging.StreamHandler()
f1_handler.setLevel(logging.INFO)
f2_handler.setLevel(logging.INFO)
# Create formatters and add it to handlers
f1_format = logging.Formatter('%(asctime)s | %(name)s | %(levelname)s | %(message)s \n')
f2_format = logging.Formatter('%(asctime)s | %(name)s | %(levelname)s | %(message)s \n')
f1_handler.setFormatter(f1_format)
f2_handler.setFormatter(f2_format)
# create a memory handler
memoryhandler = logging.handlers.MemoryHandler(
    capacity=1024*100,
    flushLevel=logging.ERROR,
    target=f1_handler,
    flushOnClose=True
)
# Add handlers to the logger
logger_root.addHandler(memoryhandler)
logger_root.addHandler(f2_handler)
logger_root.setLevel(logging.INFO)
logger_root.info('Log-File initiated.')
fname = log_path+'test.log'
open(fname, mode='w+')
for i in range(60*4):
    print(i)
    logger_root.warning('Minute:'+str(i))
    print('Write access:', os.access(fname, os.W_OK))
    if i % 10 == 0:
        logger_root.error('FLUSH!!!'+str(i))
    time.sleep(60)
Is there something horribly wrong with my logging process, or is it because of the network drive? Does anyone have an idea how to tackle this issue? Would storing the whole information in memory and writing it to a file at the end solve the problem? How would I best achieve this?
Another idea would be to log on the local drive and then automatically copy the file to the network drive when the script is done. Any help is greatly appreciated, as I have been trying to identify and solve this problem for several days now.
Thank you!
Since this is not really going anywhere at the moment, I will post what I did to "solve" my problem. It is not a satisfactory solution, as it fails when the code fails, but it is better than not logging at all.
The solution is inspired by the answer to this question: log messages to an array/list with logging
So here is what I did:
import io
#####################################
# first create an in-memory file-like object to save the logs to
log_messages = io.StringIO()
# create a stream handler that saves the log messages to that object
s1_handler = logging.StreamHandler(log_messages)
s1_handler.setLevel(logging.INFO)
# create a file handler just in case
f1_handler = logging.FileHandler(log_filename, mode='w+')
f1_handler.setLevel(logging.INFO)
# set the format for the log messages
log_format = '%(asctime)s | %(name)s | %(levelname)s | %(message)s \n'
f1_format = logging.Formatter(log_format)
s1_handler.setFormatter(f1_format)
f1_handler.setFormatter(f1_format)
# add the handler to the logger
logger_root.addHandler(s1_handler)
logger_root.addHandler(f1_handler)
#####################################
# here would be the main code ...
#####################################
# at the end of my code I added this to write the in-memory-message to the file
contents = log_messages.getvalue()
# opening a file in 'w'
file = open(log_filename, 'w')
# write log message to file
file.write("{}\n".format(contents))
# closing the file and the in-memory object
file.close()
log_messages.close()
Obviously this fails when the code fails, but the code tries to catch most errors, so I hope it will work. I got rid of the MemoryHandler but kept a file handler, so that in case of a real failure at least some of the logs are recorded until the file handler fails. It is far from ideal, but it works for me at the moment. If you have other suggestions/improvements I would be happy to hear them!
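For completeness, a minimal sketch of the other idea mentioned in the question (log to the local drive, then copy the finished file to the network share when the script is done). The paths are placeholders, and the copy of course only happens if the script reaches the end:

import logging
import os
import shutil
import tempfile

# Log to a file on the local drive first (placeholder paths).
local_log = os.path.join(tempfile.gettempdir(), 'logtest_local.log')
handler = logging.FileHandler(local_log, mode='w')
handler.setFormatter(logging.Formatter('%(asctime)s | %(name)s | %(levelname)s | %(message)s'))

logger_root = logging.getLogger()
logger_root.addHandler(handler)
logger_root.setLevel(logging.INFO)

logger_root.info('running, logging locally')
# ... main code ...

# At the very end, close the handler and copy the finished log to the share.
handler.close()
shutil.copy2(local_log, r'\\server\share\logfiles_test\logtest.log')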

Logging inside Threads When Logger is Already Configured

EDIT: Repo with all code (branch "daemon"); the question is regarding the code in the file linked to.
My main program configures logging like this (options have been simplified):
logging.basicConfig(level='DEBUG', filename="/some/directory/cstash.log")
Part of my application starts a daemon, for which I use the daemon package:
with daemon.DaemonContext(
    pidfile=daemon.pidfile.PIDLockFile(self.pid_file),
    stderr=self.log_file,
    stdout=self.log_file
):
    self.watch_files()
where self.log_file is a file I've opened for writing.
When I start the application, I get:
--- Logging error ---
Traceback (most recent call last):
File "/Users/afraz/.pyenv/versions/3.7.2/lib/python3.7/logging/__init__.py", line 1038, in emit
self.flush()
File "/Users/afraz/.pyenv/versions/3.7.2/lib/python3.7/logging/__init__.py", line 1018, in flush
self.stream.flush()
OSError: [Errno 9] Bad file descriptor
If I switch off logging to a file in the daemon, the logging in my main application works, and if I turn off logging to a file in my main application, the logging in the daemon works. If I set them both up to log to a file (even different files), I get the error above.
After trying many things, here's what worked:
def process_wrapper():
    with self.click_context:
        self.process_queue()

def watch_wrapper():
    with self.click_context:
        self.watch_files()

with daemon.DaemonContext(
    pidfile=daemon.pidfile.PIDLockFile(self.pid_file),
    files_preserve=[logger.handlers[0].stream.fileno()],
    stderr=self.log_file,
    stdout=self.log_file
):
    logging.info("Started cstash daemon")
    while True:
        threading.Thread(target=process_wrapper).start()
        time.sleep(5)
        threading.Thread(target=watch_wrapper).start()
There were two main things wrong:
1. daemon.DaemonContext needs files_preserve set to the file descriptor of the logging file handler, so that it does not close the file once the context is switched. This is the actual solution to the original problem.
2. Additionally, both methods needed to run in separate threads, not just one. The while True loop in the main thread was stopping the other method from running, so putting them both into separate threads means they can both run.
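Reduced to just the logging part, a minimal sketch of the files_preserve fix (the log path is a placeholder, and the handler lookup assumes the file handler is attached to the root logger as in the basicConfig call above):

import logging
import daemon

logging.basicConfig(level='DEBUG', filename='/some/directory/cstash.log')
logger = logging.getLogger()

# Hand the open log file's descriptor to the daemon context; otherwise
# DaemonContext closes it on fork and every later flush fails with
# "Bad file descriptor".
with daemon.DaemonContext(
    files_preserve=[h.stream.fileno() for h in logger.handlers
                    if isinstance(h, logging.FileHandler)]
):
    logging.info('still logging to the same file inside the daemon')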

How to integrate APScheduler and Imp?

I have built a plugin-based application where "plugins" (Python modules) can be loaded by imp and then scheduled for later execution by APScheduler. I was able to integrate them successfully, but I want to implement persistence in case of crashes or application restarts, so I changed the default memory job store to the SQLAlchemyJobStore. It works quite well the first time you execute the program: tasks are loaded, scheduled, saved to the database and executed at the right time.
Problem is when I try to load the application again I get this traceback:
ERROR:apscheduler.jobstores.default:Unable to restore job "d3e0f0068df54d15986e9b7b6757f665" -- removing it
Traceback (most recent call last):
File "/home/jesus/.local/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 126, in _get_jobs
jobs.append(self._reconstitute_job(row.job_state))
File "/home/jesus/.local/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 114, in _reconstitute_job
job.__setstate__(job_state)
File "/home/jesus/.local/lib/python2.7/site-packages/apscheduler/job.py", line 228, in __setstate__
self.func = ref_to_obj(self.func_ref)
File "/home/jesus/.local/lib/python2.7/site-packages/apscheduler/util.py", line 257, in ref_to_obj
raise LookupError('Error resolving reference %s: could not import module' % ref)
LookupError: Error resolving reference __init__:run: could not import module
So it is obvious that there is a problem when attempting to import the function again.
Here is my scheduler initialization:
executors = {'default': ThreadPoolExecutor(5)}
jobstores = {'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')}
self.scheduler = BackgroundScheduler(executors = executors,jobstores=jobstores)
I have a "tests" dictionary containing the "plugins" that should be loaded and some parameters; "load_plugin" uses imp to load a plugin by its name.
for test, parameters in tests.items():
    if test in pluggins:
        module = load_plugin(pluggins[test])
        self.jobs[test] = self.scheduler.add_job(module.run, "interval", seconds=parameters["interval"], name=test)
Any idea about how can I handle reconstituting jobs?
Something in the automatic detection of the module name is going wrong. Hard to say what, but the alternative is to manually give it the proper lookup path as a string (e.g. "package.module:function"). If you can do this, you can avoid this problem.
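A sketch of that suggestion, with made-up plugin names for illustration: APScheduler's add_job also accepts the callable as a textual "module:function" reference. That string is what gets stored in the job store and re-imported after a restart, so it must be importable under the same name when the application starts again.

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.executors.pool import ThreadPoolExecutor
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

executors = {'default': ThreadPoolExecutor(5)}
jobstores = {'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')}
scheduler = BackgroundScheduler(executors=executors, jobstores=jobstores)

# Instead of passing module.run (whose module imp loaded under a name like
# "__init__"), pass an importable textual reference; "myapp.plugins.cpu_check"
# is a placeholder for wherever the plugin really lives.
scheduler.add_job('myapp.plugins.cpu_check:run', 'interval',
                  seconds=60, name='cpu_check')
scheduler.start()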

Docopt - Errors, exit undefined - CLI interface for Python programme

I'm sure the answer to this is out there, but I've read the site info, I've watched the video they made, and I've tried to find a really basic tutorial, but I can't. I've been messing about with this for most of the day and it's not really making sense to me.
Here's my error:
vco#geoHP:~$ python3 a_blah.py "don't scare the cats" magic
Traceback (most recent call last):
File "a_blah.py", line 20, in <module>
arguments = docopt.docopt(__doc__)
File "/usr/lib/python3/dist-packages/docopt.py", line 579, in docopt
raise DocoptExit()
docopt.DocoptExit: Usage:
a_blah.py <start>... <end>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "a_blah.py", line 33, in <module>
except DocoptExit:
NameError: name 'DocoptExit' is not defined
Line 20 - I don't see why that line is creating an error; it worked before and I've seen that exact line in other people's programs.
I don't know why line 579 of docopt is creating an error - I've seen others use DocoptExit(); isn't this something that's just part of Docopt? Do I have to write my own exit function for this? (I've not seen anyone else do that.)
Here's the code:
import docopt

if __name__ == '__main__':
    try:
        arguments = docopt.docopt(__doc__)
        print(arguments['<start>'])
        print("that was that")
        print(arguments['<end>'])
    except docopt.DocoptExit:
        print("this hasn't worked")
What I'm trying to make this for is a script that I've written that moves files from one place to another based on their extension.
So the arguments at the command line will be file type, start directory, destination directory, and an option to delete them from the start directory after they've been moved.
I'm trying (and failing) to get docopt working on its own prior to including it in the other script, though.
The exception you want is in docopt's namespace. You never import it into your global namespace, so you can't refer to it simply by its name. You need to import it separately or refer to it through the module. You also shouldn't use parentheses after the exception in the except clause.
import docopt
try:
    # stuff
except docopt.DocoptExit:
    # other stuff
or
import docopt
from docopt import DocoptExit
try:
    # stuff
except DocoptExit:
    # other stuff
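Putting it together, a minimal self-contained sketch of what a_blah.py could look like; the usage string mirrors the one shown in the traceback, the rest is illustrative:

"""Usage:
    a_blah.py <start>... <end>
"""
import docopt

if __name__ == '__main__':
    try:
        arguments = docopt.docopt(__doc__)
        print(arguments['<start>'])  # one or more values collected as a list
        print(arguments['<end>'])
    except docopt.DocoptExit:
        # Raised when the command line does not match the usage pattern.
        print("this hasn't worked")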

Using python's logging package, how to write to different log files using same Logger

I have two files named "parent" and "child". There are no classes, just two files with functions and if __name__ == "__main__" blocks. In its main section, each file calls a function that creates a logger object with a TimedRotatingFileHandler using logger = logging.getLogger('MyLogger') and then returns the logger. That function is shown below:
import logging
from logging.handlers import TimedRotatingFileHandler

def setupLogFile():
    LOG_FILE_NAME = "/path/to/pyrpc.log"
    logHandler = TimedRotatingFileHandler(LOG_FILE_NAME, when="midnight")
    logFormatter = logging.Formatter('%(asctime)s |%(levelname)s|: %(message)s')
    logHandler.setFormatter(logFormatter)
    logger = logging.getLogger('MyLogger')
    logger.addHandler(logHandler)
    logger.setLevel(logging.INFO)
    return logger
Many functions in both files contain calls such as
logger.info("log something important")
Let's call one of these functions "funcA()". My question is, if parent imports child, and then somewhere in parent there is a line that runs
child.funcA()
How do I make it so that the "logger.info("log something important")" line within funcA() knows to use the logger object created in the main function of parent?
Right now I'm getting the following error
Traceback (most recent call last):
File "testblockadder.py", line 57, in <module>
testInitialDBData(topBlockHeight)
File "testblockadder.py", line 33, in testInitialDBData
testpyrpc.testBlock(block, blkHeight)
File "/path/to/pyrpc.py", line 46, in testBlock
logger.info("log something important")
NameError: global name 'logger' is not defined
I have read about declaring the logger at the module level using
logger = logging.getLogger(__name__)
But it seems like this always uses logging.basicConfig(), which I'm fairly certain means I have to log to the same file every time. The functions that run exclusively in parent log to a different file than funcA() does. It would be okay if all of the code I run in parent logged to the same log file, but I need the logs in that file to say which file each log call came from (i.e. whether it came from parent or child).
EDIT: I have managed to use the tips from this SO post, difference in logging mechanism: API and application (python), to get the logging that I needed. However, I have a lingering question regarding this type of setup. I've used basicConfig to set the formatter and logging location. Is there a way to change the default StreamHandler that basicConfig sets up for me to instead be a TimedRotatingFileHandler? That way I don't log everything to one file for all time.
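On that lingering question: basicConfig is not limited to a StreamHandler; since Python 3.3 it accepts a handlers argument. A sketch along these lines (the path is a placeholder) routes everything through a TimedRotatingFileHandler, and module-level logging.getLogger(__name__) loggers in parent and child then record which module each message came from via %(name)s:

import logging
from logging.handlers import TimedRotatingFileHandler

# Configure the root logger once, in the entry-point script (parent).
handler = TimedRotatingFileHandler('/path/to/pyrpc.log', when='midnight')
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s |%(name)s|%(levelname)s|: %(message)s',
    handlers=[handler],
)

# In every module (parent and child alike), create the logger at module
# level; %(name)s in the format above then shows where each record came from.
logger = logging.getLogger(__name__)
logger.info('log something important')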
