I am new to Python. I'm using Vim with Python-mode to edit and test my code, and I noticed that if I run the same code more than once, the log file is only updated the first time the code is run. For example, below is a piece of code called "testlogging.py":
#!/usr/bin/python2.7
import os.path
import logging
filename_logging = os.path.join(os.path.dirname(__file__), "testlogging.log")
logging.basicConfig(filename=filename_logging, filemode='w',
                    level=logging.DEBUG)
logging.info('Aye')
If I open a new gvim session and run this code with python-mode, then I would get a file called "testlogging.log" with the content
INFO:root:Aye
Looks promising! But if I delete the log file and run the code in python-mode again, then the log file won't be re-created. If at this point I run the code in a terminal like this
./testlogging.py
Then the log file would be generated again!
I checked with Python documentation and noticed this line in the logging tutorial (https://docs.python.org/2.7/howto/logging.html#logging-to-a-file):
A very common situation is that of recording logging events in a file, so let’s look at that next. Be sure to try the following in a newly-started Python interpreter, and don’t just continue from the session described above:...
So I guess this problem of the log file only being updated once has something to do with python-mode staying in the same interpreter when I run the code a second time. So my question is: is there any way to solve this problem, either by fiddling with the logging module, by putting something in the code to reset the interpreter, or by telling python-mode to reset it?
I am also curious why the logging module requires a newly-started interpreter to work...
Thanks for your help in advance.
The log file is not recreated because the logging module still has the old log file open and will continue to write to it (even though you've deleted it). The solution is to force the logging module to release all acquired resources before running your code again in the same interpreter:
# configure logging
# log a message
# delete log file
logging.shutdown()
# configure logging
# log a message (log file will be recreated)
In other words, call logging.shutdown() at the end of your code and then you can re-run it within the same interpreter and it will work as expected (re-creating the log file on every run).
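Applied to the script from the question, that amounts to adding one line at the end (a minimal sketch; everything else is unchanged):

#!/usr/bin/python2.7
import os.path
import logging

filename_logging = os.path.join(os.path.dirname(__file__), "testlogging.log")
logging.basicConfig(filename=filename_logging, filemode='w',
                    level=logging.DEBUG)
logging.info('Aye')

# Release the open log file handle so a re-run in the same interpreter
# can recreate the file.
logging.shutdown()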
You opened the log file in "w" mode. "w" means write from the beginning of the file, so you only see the log of the last execution.
That is why you see the same contents in the log file.
You should change lines five to seven of your code as follows.
filename_logging = os.path.join(os.path.dirname(__file__), "testlogging.log")
logging.basicConfig(filename=filename_logging, filemode='a',
                    level=logging.DEBUG)
The code above uses "a" mode, i.e. append mode, so new log data is added at the end of the log file.
I'm doing an automation project in which I want to record the results in a log file. I created a function that creates the log file and writes the messages to it.
I don't get why this log file function doesn't work.
P.S. The test in the example runs perfectly.
def test10(self):
    log = User(self.driver)
    log.LogIn('By1zx', 'Cb12')
    log.LogOut()
    logFile("INFO", 10, True)

# Logger creator
def logFile(level, test, passedornot):
    lfile = r'C:\Users\97252\PycharmProjects\Automation\AutomationLogging\log.txt'
    logging.basicConfig(level=logging.INFO, filename="lfile", filemode="a")
    passtext = "Failed"
    if (passedornot): passtext = "Passed"
    if level == "INFO": logging.info(f'The test {test} {passtext}')
    if level == "ERROR": logging.error(f'The test {test} {passtext}')
I noticed an inefficiency in the code, and a minor bug as well.
The bug is filename="lfile". The double quotes should not be there: you want to refer to the variable lfile, not the string "lfile".
logging.basicConfig(level=logging.INFO, filename=lfile, filemode="a")
Also, every time you call the logFile function it tries to open a file pointer to append logs to the log file. That is overhead for the OS if you call logFile frequently. A better approach is RotatingFileHandler from logging.handlers; it is worth looking into. Please comment if you have any questions about it.
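A minimal sketch of that idea, configuring the handler once at module level instead of calling basicConfig inside every call (the path, size limit, format, and logger name are just placeholders):

import logging
from logging.handlers import RotatingFileHandler

# Configure once, at import time, rather than inside logFile().
lfile = r'C:\Users\97252\PycharmProjects\Automation\AutomationLogging\log.txt'
handler = RotatingFileHandler(lfile, mode='a', maxBytes=1000000, backupCount=3)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('automation')
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def logFile(level, test, passedornot):
    passtext = "Passed" if passedornot else "Failed"
    if level == "INFO":
        logger.info(f'The test {test} {passtext}')
    elif level == "ERROR":
        logger.error(f'The test {test} {passtext}')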
I seem to be running into a problem when I am logging data after invoking another module in an application I am working on. I'd like assistance in understanding what may be happening here.
To replicate the issue, I have developed the following script...
#!/usr/bin/python
import sys
import logging
from oletools.olevba import VBA_Parser, VBA_Scanner
from cloghandler import ConcurrentRotatingFileHandler
# set up logger for application
dbg_h = logging.getLogger('dbg_log')
dbglog = '%s' % 'dbg.log'
dbg_rotateHandler = ConcurrentRotatingFileHandler(dbglog, "a")
dbg_h.addHandler(dbg_rotateHandler)
dbg_h.setLevel(logging.ERROR)
# read some document as a buffer
buff = sys.stdin.read()
# generate issue
dbg_h.error('Before call to module....')
vba = VBA_Parser('None', data=buff)
dbg_h.error('After call to module....')
When I run this, I get the following...
cat somedocument.doc | ./replicate.py
ERROR:dbg_log:After call to module....
For some reason, my last dbg_h logger write is being output to the console as well as written to my dbg.log file. This only appears to happen AFTER the call to VBA_Parser.
cat dbg.log
Before call to module....
After call to module....
Anyone have any idea as to why this might be happening? I reviewed the source code of olevba and did not see anything that stuck out to me specifically.
Could this be a problem I should raise with the module author? Or am I doing something wrong with how I am using the cloghandler?
The oletools codebase is littered with calls to the root logger via logging.debug(...), logging.error(...), and so on. Since the author didn't configure the root logger, the default behavior is to dump to sys.stderr. Since sys.stderr defaults to the console when running from the command line, you get what you're seeing.
You should contact the author of oletools, since they're not using the logging system effectively. Ideally they would use a named logger and push their messages to it. As a workaround to suppress the messages, you could configure the root logger to use your handler.
# Set a handler on the root logger
logging.root.addHandler(dbg_rotateHandler)
Be aware that this may lead to duplicated log messages.
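The duplication can happen because records logged to dbg_h propagate up to the root logger, so the same record may reach dbg_rotateHandler twice if the handler is attached to both loggers. One way around that (a sketch, not the only option) is to attach the handler to the root logger only and let propagation deliver your own messages to it:

import logging
from cloghandler import ConcurrentRotatingFileHandler

dbg_rotateHandler = ConcurrentRotatingFileHandler('dbg.log', "a")

# Attach the handler to the root logger only; records from the 'dbg_log'
# logger (and from oletools' calls to the root logger) end up here.
logging.root.addHandler(dbg_rotateHandler)
logging.root.setLevel(logging.ERROR)

dbg_h = logging.getLogger('dbg_log')
dbg_h.error('Before call to module....')  # written once, via the root handler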
I am currently playing with embedding Pig in Python, and whenever I run the file it works, but it clogs up the command line with output like the following:
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/hue-plugins-2.3.0-cdh4.3.0.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/paranamer-2.3.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/avro-1.7.4.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/commons-configuration-1.6.jar'
Command line input:
pig embedded_pig_testing.py -p /home/cloudera/Documents/python_testing_config_files/test002_config.cfg
The parameter passed is a file that contains a bunch of variables I am using in the test.
Is there any way to get the script to not log these actions to the command line?
Logging in Java programs/libraries is usually configured by means of a configuration or .properties file. I'm sure there's one for Pig. Something that might be what you're looking for is http://svn.apache.org/repos/asf/pig/trunk/conf/pig.properties.
EDIT: looks like this is specific to Jython.
I have not been able to determine whether it is possible to disable this at all, but unless I find something cleaner, I'd consider simply redirecting sys.stderr (or sys.stdout) during the .jar loading phase:
import os
import sys
old_stdout, old_stderr = sys.stdout, sys.stderr
sys.stdout = sys.stderr = open(os.devnull, 'w')
do_init() # load your .jar's here
sys.stdout, sys.stderr = old_stdout, old_stderr
This logging comes from jython scanning your Java packages to build a cache for later use: https://wiki.python.org/jython/PackageScanning
So long as your script only uses full class imports (no import x.y.* wildcards), you can disable package scanning via the python.cachedir.skip property:
pig ... -Dpython.cachedir.skip=true ...
Frustratingly, I believe jython writes these messages to stdout instead of stderr, so piping stderr elsewhere won't help you out.
Another option is to use streaming Python instead of Jython once Pig 0.12 ships. See PIG-2417 for more details on that.
I need to use pyfits (http://www.stsci.edu/institute/software_hardware/pyfits) to open/write some spectra for the work I am doing. The problem is, every time I use the "writeto" function to write a .fits file and it overwrites an existing one, I get an "Overwrite existing file: XXX.fits" message on the screen. Is it possible to tell the program not to show this particular message?
I already checked and could not find a keyword for the "writeto" function that would deactivate this message, so I was wondering if there is any way to tell Python to redirect all output (except errors) of a particular function to something like /dev/null or similar.
Worst case, I thought I could maybe use "logging" and redirect all output to a file, and that's it.
Any ideas?
The library uses the Python warnings module to emit a warning when you 'clobber' an existing file.
You can use that same module to suppress the warning:
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    pyfits.writeto(...)
Using the catch_warnings() context manager suppresses all warnings that pyfits.writeto() might raise. You can also configure filters for specific messages to be ignored:
import warnings
warnings.filterwarnings('ignore', message='Overwriting existing file .*', module='pyfits\.hdu.*')
would ignore messages that start with "Overwriting existing file" raised by modules whose names start with pyfits.hdu, for example.
You can use warnings.filterwarnings to suppress the messages. Simply add
warnings.filterwarnings('ignore', message='Overwriting existing file')
I'm writing a script that uses curses to produce a main window and a log window at the bottom of the screen.
It seems that when I import pjsua it insists on printing to the screen even though I have set log level to 0. Here's what it outputs:
15:49:09.716 os_core_unix.c !pjlib 2.0.1 for POSIX initialized
15:49:09.844 sip_endpoint.c .Creating endpoint instance...
15:49:09.844 pjlib .select() I/O Queue created (0x7f84690decd8)
15:49:09.844 sip_endpoint.c .Module "mod-msg-print" registered
15:49:09.844 sip_transport. .Transport manager created.
15:49:09.845 pjsua_core.c .PJSUA state changed: NULL --> CREATED
15:49:09.896 pjsua_media.c ..NAT type detection failed: Invalid STUN server or server not configured (PJNATH_ESTUNINSERVER)
Note it doesn't send this through the logging callback, meaning I have no way to put it in the log window with the rest of my logging information. Can anyone give me some advice on dealing with this output please?
Thanks
If you can detect which stream it writes to, e.g. sys.stderr, you could redirect it somewhere by simple assignment of sys.stderr to another open file (or even /dev/null ?).
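A minimal sketch of that suggestion, assuming the messages really do go through Python's sys.stderr (swap in sys.stdout if they do not):

import os
import sys

old_stderr = sys.stderr
devnull = open(os.devnull, 'w')
sys.stderr = devnull
try:
    import pjsua  # the chatty startup output gets swallowed here
    # ... create and initialise the pjsua library here ...
finally:
    sys.stderr = old_stderr
    devnull.close()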