Suppressing pjsua output in Python script

I'm writing a script that uses curses to produce a main window and a log window at the bottom of the screen.
It seems that when I import pjsua it insists on printing to the screen even though I have set log level to 0. Here's what it outputs:
15:49:09.716 os_core_unix.c !pjlib 2.0.1 for POSIX initialized
15:49:09.844 sip_endpoint.c .Creating endpoint instance...
15:49:09.844 pjlib .select() I/O Queue created (0x7f84690decd8)
15:49:09.844 sip_endpoint.c .Module "mod-msg-print" registered
15:49:09.844 sip_transport. .Transport manager created.
15:49:09.845 pjsua_core.c .PJSUA state changed: NULL --> CREATED
15:49:09.896 pjsua_media.c ..NAT type detection failed: Invalid STUN server or server not configured (PJNATH_ESTUNINSERVER)
Note it doesn't send this through the logging callback, meaning I have no way to put it in the log window with the rest of my logging information. Can anyone give me some advice on dealing with this output please?
Thanks

If you can detect which stream it writes to, e.g. sys.stderr, you could redirect it somewhere by simply assigning another open file (or even /dev/null) to sys.stderr.
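A minimal sketch of that approach (the log file name is illustrative). If the library writes from C code straight to the process's file descriptors, reassigning sys.stderr alone may not catch it, in which case duplicating the descriptor with os.dup2 is one option:
import os
import sys

# Option 1: catch anything Python-level code writes to sys.stderr
log_sink = open('pjsua_output.log', 'w')   # or open(os.devnull, 'w')
old_stderr = sys.stderr
sys.stderr = log_sink

# Option 2: redirect the underlying file descriptor (also catches C-level writes)
devnull_fd = os.open(os.devnull, os.O_WRONLY)
os.dup2(devnull_fd, 2)   # fd 2 is stderr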

Related

Strange Issue Using Logging Module in Python

I seem to be running into a problem when I am logging data after invoking another module in an application I am working on. I'd like assistance in understanding what may be happening here.
To replicate the issue, I have developed the following script...
#!/usr/bin/python
import sys
import logging
from oletools.olevba import VBA_Parser, VBA_Scanner
from cloghandler import ConcurrentRotatingFileHandler
# set up logger for application
dbg_h = logging.getLogger('dbg_log')
dbglog = '%s' % 'dbg.log'
dbg_rotateHandler = ConcurrentRotatingFileHandler(dbglog, "a")
dbg_h.addHandler(dbg_rotateHandler)
dbg_h.setLevel(logging.ERROR)
# read some document as a buffer
buff = sys.stdin.read()
# generate issue
dbg_h.error('Before call to module....')
vba = VBA_Parser('None', data=buff)
dbg_h.error('After call to module....')
When I run this, I get the following...
cat somedocument.doc | ./replicate.py
ERROR:dbg_log:After call to module....
For some reason, my last dbg_h logger write is being output to the console as well as written to my dbg.log file. This only appears to happen AFTER the call to VBA_Parser.
cat dbg.log
Before call to module....
After call to module....
Anyone have any idea as to why this might be happening? I reviewed the source code of olevba and did not see anything that stuck out to me specifically.
Could this be a problem I should raise with the module author? Or am I doing something wrong with how I am using the cloghandler?
The oletools codebase is littered with calls to the root logger through calls to logging.debug(...), logging.error(...), and so on. Since the author didn't bother to configure the root logger, the default behavior is to dump to sys.stderr. Since sys.stderr defaults to the console when running from the command line, you get what you're seeing.
You should contact the author of oletools since they're not using the logging system effectively. Ideally they would use a named logger and push the messages to that logger. As a work-around to suppress the messages you could configure the root logger to use your handler.
# Set a handler on the root logger
logging.root.addHandler(dbg_rotateHandler)
Be aware that this may lead to duplicated log messages.
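If the duplication does show up (records logged via dbg_h reaching both its own handler and the root logger's handler), one way to avoid it, reusing the names from the question, is to stop the named logger from propagating:
# dbg_h is the named 'dbg_log' logger from the question. With a handler now
# attached to the root logger as well, each record logged via dbg_h would be
# written twice; disabling propagation prevents that.
dbg_h.propagate = False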

reset python interpreter for logging

I am new to Python. I'm using Vim with Python-mode to edit and test my code and noticed that if I run the same code more than once, the logging file will only be updated the first time the code is run. For example below is a piece of code called "testlogging.py"
#!/usr/bin/python2.7
import os.path
import logging
filename_logging = os.path.join(os.path.dirname(__file__), "testlogging.log")
logging.basicConfig(filename=filename_logging, filemode='w',
                    level=logging.DEBUG)
logging.info('Aye')
If I open a new gvim session and run this code with python-mode, then I would get a file called "testlogging.log" with the content
INFO:root:Aye
Looks promising! But if I delete the log file and run the code in python-mode again, then the log file won't be re-created. If at this time I run the code in a terminal like this
./testlogging.py
Then the log file would be generated again!
I checked with Python documentation and noticed this line in the logging tutorial (https://docs.python.org/2.7/howto/logging.html#logging-to-a-file):
A very common situation is that of recording logging events in a file, so let’s look at that next. Be sure to try the following in a newly-started Python interpreter, and don’t just continue from the session described above:...
So I guess this problem of the log file only being updated once has something to do with python-mode staying in the same interpreter when I run the code a second time. So my question is: is there any way to solve this problem, either by fiddling with the logging module, by putting something in the code to reset the interpreter, or by telling python-mode to reset it?
I am also curious why the logging module requires a newly-started interpreter to work...
Thanks for your help in advance.
The log file is not recreated because the logging module still has the old log file open and will continue to write to it (even though you've deleted it). The solution is to force the logging module to release all acquired resources before running your code again in the same interpreter:
# configure logging (filename_logging as in testlogging.py)
logging.basicConfig(filename=filename_logging, filemode='w', level=logging.DEBUG)
# log a message
logging.info('Aye')
# delete log file (e.g. manually, or with os.remove)
# release the handler's open file handle
logging.shutdown()
# configure logging again
logging.basicConfig(filename=filename_logging, filemode='w', level=logging.DEBUG)
# log a message (log file will be recreated)
logging.info('Aye')
In other words, call logging.shutdown() at the end of your code and then you can re-run it within the same interpreter and it will work as expected (re-creating the log file on every run).
You opened the log file in "w" mode. "w" means write from the beginning, so you only see the log of the last execution.
That is why you see the same contents in the log file.
You should change lines five to seven of your code as follows.
filename_logging = os.path.join(os.path.dirname(__file__), "testlogging.log")
logging.basicConfig(filename=filename_logging, filemode='a',
                    level=logging.DEBUG)
The code above uses "a" (append) mode, so new log data will be added at the end of the log file.

How to disable logging to console (pig embedded in python)

I am currently playing with embedding pig into python, and whenever I run the file it works; however, it clogs up the command line with output like the following:
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/hue-plugins-2.3.0-cdh4.3.0.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/paranamer-2.3.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/avro-1.7.4.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/commons-configuration-1.6.jar'
Command line input:
pig embedded_pig_testing.py -p /home/cloudera/Documents/python_testing_config_files/test002_config.cfg
the parameter passed is a file that contains a bunch of variables I am using in the test.
Is there any way to get the script to not log these actions to the command line?
Logging in Java programs/libraries is usually configured by means of a configuration or .properties file. I'm sure there's one for Pig. Something that might be what you're looking for is http://svn.apache.org/repos/asf/pig/trunk/conf/pig.properties.
EDIT: looks like this is specific to Jython.
I have not been able to determine if it's possible at all to disable this, but unless I could find something cleaner, I'd consider simply redirecting sys.stderr (or sys.stdout) during the .jar loading phase:
import os
import sys
old_stdout, old_stderr = sys.stdout, sys.stderr
sys.stdout = sys.stderr = open(os.devnull, 'w')
do_init() # load your .jar's here
sys.stdout, sys.stderr = old_stdout, old_stderr
This logging comes from jython scanning your Java packages to build a cache for later use: https://wiki.python.org/jython/PackageScanning
So long as your script only uses full class imports (no import x.y.* wildcards), you can disable package scanning via the python.cachedir.skip property:
pig ... -Dpython.cachedir.skip=true ...
Frustratingly, I believe jython writes these messages to stdout instead of stderr, so piping stderr elsewhere won't help you out.
Another option is to use streaming Python instead of Jython once Pig 0.12 ships. See PIG-2417 for more details on that.

Have Win32 MessageBox appear over other programs

I've recently started learning Python and wrote a little script that informs me when a certain website changes content. I then added it as a scheduled task to Windows so it can run every 10 minutes. I'd like to be informed of the website changing right away so I added a win32ui MessageBox that pops up if the script detects that the website has changed. Here's the little code snippet I'm using for the MessageBox (imaginative text, I know):
win32ui.MessageBox("The website has changed.", "Website Change", 0)
My issue is this: I spend most of my time using remote desktop, so when the MessageBox does pop up it sits behind the remote desktop session. Is there any way to force the MessageBox to appear on top of it?
On a similar note, when the script runs, the command line opens up very briefly over the remote desktop session, which I don't want. Is there any way of stopping this behaviour?
I'm happy with Windows specific solutions as I'm aware it might mean dealing with the windowing manager or possibly an alternative way to inform me rather than using a MessageBox.
When you start anything from Task Scheduler, Windows blocks any "easy" ways to bring your windows or dialogs to the top.
The first way is to use the MB_SYSTEMMODAL flag (value 4096). In my experience, it makes the message dialog "always on top":
win32ui.MessageBox("The website has changed.", "Website Change", 4096)  # 4096 == MB_SYSTEMMODAL
The second way is to try to bring your console/window/dialog to the front with the following calls. Of course, if you use MessageBox you must do that (for your own window) before calling MessageBox.
SetForegroundWindow(Wnd);
BringWindowToTop(Wnd);
SetForegroundWindow(Wnd);
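A rough Python equivalent of those calls using pywin32 (assuming hwnd already holds the handle of your own window, obtained elsewhere, e.g. via win32gui.FindWindow):
import win32gui

# hwnd is assumed to be the handle of the window you created
win32gui.SetForegroundWindow(hwnd)
win32gui.BringWindowToTop(hwnd)
win32gui.SetForegroundWindow(hwnd)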
As for the flickering of the console window, you can try to start Python in a hidden state. For example, use ConEmu, 'HidCon' or cmdow. Refer to their parameters; something like:
ConEmu -basic -MinTSA -cmd C:\Python27\python.exe C:\pythonScript.py
or
CMDOW /RUN /MIN C:\Python27\python.exe C:\pythonScript.py
Avoiding the command window flash is done by naming the script with a pyw extension instead of simply py. You might also use pythonw.exe instead of python.exe; it really depends on your requirements.
See http://onlamp.com/pub/a/python/excerpts/chpt20/index.html?page=2
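For instance (paths illustrative, mirroring the scheduler examples above), pointing the scheduled task directly at pythonw.exe with a .pyw script avoids the console window entirely:
C:\Python27\pythonw.exe C:\pythonScript.pyw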
Use ctypes; it displays a Windows error message box and is very easy to use:
import ctypes

if condition:  # 'condition' is whatever check should trigger the message
    ctypes.windll.user32.MessageBoxW(0, u"Error", u"Error", 0)
This works for me:
from ctypes import *
def MessageBox(title, text, style):
    sty = int(style) + 4096
    return windll.user32.MessageBoxW(0, text, title, sty)  # MB_SYSTEMMODAL == 4096
## Button Styles:
### 0:OK -- 1:OK|Cancel -- 2:Abort|Retry|Ignore -- 3:Yes|No|Cancel -- 4:Yes|No -- 5:Retry|No -- 6:Cancel|Try Again|Continue
## To also change icon, add these values to previous number
### 16 Stop-sign ### 32 Question-mark ### 48 Exclamation-point ### 64 Information-sign ('i' in a circle)
Usage:
MessageBox('Here is my Title', 'Message to be displayed', 64)
Making the message box system modal will cause it to pop up over every application, but none can be interacted with until it is dismissed. Consider either creating a custom dialog box window that you can bring to the front or using a notification bubble instead.
Windows tries to make it hard to pop a window over the active application. Users find it annoying, especially since the interrupting window generally steals keyboard focus.
The Windows way to give a notification like this is with a balloon in the notification area rather than a message box. Notification balloons don't steal focus and are (supposedly) less distracting.
I'm not sure if the python Windows UI library offers wrappers for notification balloons.
A very easy modal, async message box with the help of Python and the MSG command (works on Win10):
# In CMD (you may use a username logged on the target machine instead of * to send the message to a particular user):
msg /server:IP_or_ComputerName * /v /time:appearance_in_secs Message_up_to_255_chars
# E.g. send "Hello everybody!" to all users on 192.168.0.110, disappearing after 5 mins
msg /server:192.168.0.110 * /v /time:300 "Hello everybody!"
In Python I've been using subprocess to run CMD commands, which lets me read and process the output, detect errors, etc.
import subprocess as sp

name = 'Lucas'
message = f'Express yourself {name} in 255 characters ;)'
command = f'msg /server:192.168.0.110 * /v /time:360 "{message}"'
output = str(sp.run(command, stdout=sp.PIPE, stderr=sp.STDOUT))
if 'returncode=0' in output:
    print('Message sent')
else:
    print('Error occurred. Details:\n')
    print(output[output.index('stdout=b'):])

Hide the "Failed to load" message when loading an invalid image, wxpython

bmp = wx.Image("C:\User\Desktop\cool.bmp", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
If i run this, it will automatically show an error message saying that it failed to load the image. How can I stop my program from doing this?
If all you're after is to stop the exception from raising, you can enclose it in a try/except block:
try:
    bmp = wx.Image("C:\User\Desktop\cool.py", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
except:
    pass
Bear in mind, it's good practice to only ignore specific exceptions, and to do something when one occurs (i.e. tell the user to pick another image):
try:
    bmp = wx.Image("C:\User\Desktop\cool.py", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
except <Specific Exception>, e:
    doSomething()  # Handle exception
Since it's an actual pop-up message, you can use wx.Log_EnableLogging(False) to disable error logging in your application.
To stop stderr redirecting you can set wx.App(redirect=False).
Or to make errors log to a file instead of onscreen you can use:
wx.App(redirect=True, filename='error_log')
For wxpython version 4+, I was able to disable the popup message by calling
wx.Log.EnableLogging(False)
or by calling
wx.Log.SetLogLevel(wx.LOG_Error)
Relevant docs here
An alternative to wx.Log_EnableLogging(False) is wx.LogNull. From the docs:
# There will normally be a log message if a non-existent file is
# loaded into a wx.Bitmap. It can be suppressed with wx.LogNull
noLog = wx.LogNull()
bmp = wx.Bitmap('bogus.png')
# when noLog is destroyed the old log sink is restored
del noLog
I can't even get my wxPython code to run if I pass it an invalid image. This is probably related to the fact that wxPython is a light wrapper around a C++ library, though. See http://wiki.wxpython.org/C%2B%2B%20%26%20Python%20Sandwich for an interesting explanation.
The best way around issues like this is to actually use Python's os module, like this:
if os.path.exists(path):
    # then create the widget
I do this sort of thing for config files and other things. If the file doesn't exist, I either create it myself or don't create the widget or I show a message so I know to fix it.
