I thought the -d option was for activating a debug mode that prints all logging.debug() messages. But apparently that doesn't happen. The documentation simply says:
Turn on parser debugging output (for wizards only, depending on compilation options).
See also PYTHONDEBUG.
That doesn't explain anything in my eyes. Can somebody give a more verbose explanation and verify that there really is no built-in CPython argument that activates debug logging?
From Parser/parser.c:
#ifdef Py_DEBUG
extern int Py_DebugFlag;
#define D(x) if (!Py_DebugFlag); else x
#else
#define D(x)
#endif
This D macro is used with printf() to print debugging messages when the debug flag is supplied and the interpreter is compiled in debug mode. The debugging messages are intended for the developers of Python, people who work on Python itself (not to be confused with Python programmers, who are people who use Python). I've gone through the Python manual page and none of the options activate the logging module's debug level. However, one can use the -i flag in conjunction with -c to achieve a similar effect:
python -i -c "import logging;logging.basicConfig(level=logging.DEBUG)"
The -d option enables the python parser debugging flags. Unless you're hacking the Python interpreter and changing how it parses Python code, you're unlikely to ever need that option.
The logging infrastructure is a standard library module, not a built-in feature of the interpreter. It doesn't make much sense to have an interpreter flag that changes such a localized feature of a module.
Also, consider that the effective logging level depends on the logger and handler you're using. You can set different levels for different loggers and handlers, for different parts of your application. For instance, you might want every DEBUG line to appear on the console, INFO and above from a library to go to a common file, and WARNING and ERROR to go to dedicated files for easier monitoring. You can set a console handler at DEBUG level and additional handlers that log the other levels to separate files.
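A rough sketch of that kind of setup (the logger and file names are made up for illustration):

import logging

# Console shows everything from DEBUG up.
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)

# A hypothetical library's INFO-and-above messages also go to a shared file.
lib_file = logging.FileHandler("somelib.log")
lib_file.setLevel(logging.INFO)

# WARNING and ERROR from anywhere go to a dedicated file for monitoring.
problems = logging.FileHandler("problems.log")
problems.setLevel(logging.WARNING)

root = logging.getLogger()
root.setLevel(logging.DEBUG)  # let the handlers decide what to keep
root.addHandler(console)
root.addHandler(problems)
logging.getLogger("somelib").addHandler(lib_file)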
I'm currently running the openstack executable and it generates Python deprecation warnings.
After some searching I did find this howto.
The relevant part is here:
Use the PYTHONWARNINGS Environment Variable to Suppress Warnings in Python
In Python 2.7 and up we can export the PYTHONWARNINGS environment variable and set it to ignore to suppress the warnings raised in a Python program.
However, doing this:
PYTHONWARNINGS="ignore" openstack image show image name -f value -c id
does nothing, deprecation warnings are still displayed.
I've tried setting PYTHONWARNINGS to various things:
ignore
"ignore"
"all"
"deprecated"
"ignore::DeprecationWarning"
"error::Warning,default::Warning:has_deprecated_syntax"
"error::Warning"
but none of them seem to do anything.
I was able to work around the issue by appending 2>/dev/null to the end but I would like to know why PYTHONWARNINGS doesn't seem to do anything.
PYTHONWARNINGS certainly does suppress Python's warnings. Try running:
PYTHONWARNINGS="ignore" python -c "import warnings; warnings.warn('hi')"
But in this case you are not calling python directly, but openstack, which is apparently not inheriting the same environment. Without looking at the source I can't say why. It may even be explicitly setting the warning level, which will override anything you do beforehand.
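As a hedged illustration of that last point (not openstack's actual code): a program that installs its own warnings filter takes precedence over whatever PYTHONWARNINGS configured, because filters added later are consulted first.

import warnings

# Even if this process was started with PYTHONWARNINGS=ignore, the filter
# installed here sits at the front of the filter list and wins.
warnings.simplefilter("default")
warnings.warn("this still shows up", DeprecationWarning)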
If you don't want to see errors, sending STDERR to /dev/null is the proper approach.
You possibly need to "export PYTHONWARNINGS" so it is available to anything openstack invokes.
When using pdb to debug a curses application, the interactive debugger is useless, since curses messes up the terminal screen. Debugging post mortem works though, but that is a bit limited.
So what we probably need is having the debugger work in a terminal separately from the debuggee (the application that is being debugged).
Some alternatives which apply remote debugging (such as xpdb) appear either not to work with python 3.3 or give weird errors for other reasons.
So how can I use pdb in a different terminal, or in another proper way?
Use a debugger that can attach to a running process. For instance you can try:
gdb python <pid>
with <pid> being the pid of the process you want to debug; see the Python Wiki page DebuggingWithGdb for details. There is also WinPdb, which allows you to connect to a remote or local process. WinPdb is well documented and I think it is your best option.
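Another option that keeps plain pdb but moves its input and output away from the curses screen is to point the debugger at a second terminal's device. This is only a sketch; the /dev/pts/3 path is an assumption, so replace it with whatever the tty command prints in the terminal you want to use for debugging.

import pdb

def set_trace_on_tty(tty_path="/dev/pts/3"):
    # Serve the pdb session on the other terminal's device so the curses
    # UI in the original terminal stays intact.
    tty = open(tty_path, "r+")
    debugger = pdb.Pdb(stdin=tty, stdout=tty)
    debugger.set_trace()

Run tty in the spare terminal to find its device, then call set_trace_on_tty() at the point in the curses application where you want to break.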
I've found that this bit of advice from the Python documentation helps:
A common problem when debugging a curses application is to get your terminal messed up when the application dies without restoring the terminal to its previous state. In Python this commonly happens when your code is buggy and raises an uncaught exception. Keys are no longer echoed to the screen when you type them, for example, which makes using the shell difficult. In Python you can avoid these complications and make debugging much easier by importing the module curses.wrapper. It supplies a wrapper() function that takes a callable. It does the initializations described above, and also initializes colors if color support is present. It then runs your provided callable and finally deinitializes appropriately. The callable is called inside a try-catch clause which catches exceptions, performs curses deinitialization, and then passes the exception upwards. Thus, your terminal won’t be left in a funny state on exception.
Please see the curses HOWTO in the Python documentation for more info.
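For completeness, a minimal sketch of the wrapper() usage described above (the body of main() is just a placeholder):

import curses

def main(stdscr):
    stdscr.addstr(0, 0, "Hello from curses")
    stdscr.getkey()

# wrapper() initializes curses, runs main(), and restores the terminal even
# if main() raises, so the shell stays usable for post-mortem debugging.
curses.wrapper(main)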
We have recently switched to py.test for Python testing (which is fantastic btw). However, I'm trying to figure out how to control the log output (i.e. the built-in Python logging module). We have pytest-capturelog installed and this works as expected; when we want to see logs we can pass the --nologcapture option.
However, how do you control the logging level (e.g. INFO, DEBUG, etc.) and also filter the logging if you're only interested in a specific module? Are there existing plugins for py.test to achieve this, or do we need to roll our own?
Thanks,
Jonny
Installing and using the pytest-capturelog plugin could satisfy most of your pytest/logging needs. If something is missing you should be able to implement it relatively easily.
As Holger said you can use pytest-capturelog:
import logging

def test_foo(caplog):
    caplog.setLevel(logging.INFO)
    pass
If you don't want to use pytest-capturelog you can use a stdout StreamHandler in your logging config so pytest will capture the log output. Here is an example basicConfig:
import logging
import sys

logging.basicConfig(level=logging.DEBUG, stream=sys.stdout)
A bit of a late contribution, but I can recommend pytest-logging for a simple drop-in logging capture solution. After pip install pytest-logging you can control the verbosity of your logs (displayed on screen) with:
$ py.test -s -v tests/your_test.py
$ py.test -s -vv tests/your_test.py
$ py.test -s -vvvv tests/your_test.py
etc. NB: the -s flag is important; without it py.test will filter out all the sys.stderr information.
Pytest now has native support for logging control via the caplog fixture; no need for plugins.
You can specify the logging level for a particular logger or by default for the root logger:
import logging
import pytest

def test_bar(caplog):
    caplog.set_level(logging.CRITICAL, logger='root.baz')
Pytest also captures log output in caplog.records, so you can assert on logged levels and messages. For further information see the official pytest documentation on logging and on the caplog fixture.
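For instance, a small sketch of asserting on the captured records (the logger name and message are invented for the example):

import logging

def test_logs_warning(caplog):
    caplog.set_level(logging.INFO)
    logging.getLogger("myapp").warning("disk nearly full")
    assert any(rec.levelno == logging.WARNING for rec in caplog.records)
    assert "disk nearly full" in caplog.text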
A bit of an even later contribution: you can try pytest-logger. The novelty of this plugin is logging to the filesystem: pytest provides a nodeid for each test item, which can be used to organize the test session's log directory (with the help of the pytest tmpdir facility and its testcase begin/end hooks).
You can configure multiple handlers (with levels) for the terminal and the filesystem separately, and provide your own command-line options for filtering loggers/levels to make it work for your specific test environment - e.g. by default you can log everything to the filesystem and a small fraction to the terminal, and change that on a per-session basis with the --log option if needed. The plugin does nothing by default if the user defines no hooks.
(Note: I’ve Linux in mind, but the problem may apply on other platforms.)
Problem: Linux doesn’t do suid on #! scripts nor does it activate “Linux capabilities” on them.
Why do we have this problem? Because between the kernel setting up the interpreter and the interpreter re-opening the script by path, an attacker may have replaced that file. How? The formerly trusted suid/capability-enabled script file may be in a directory he has control over (e.g. he can delete the not-owned trusted file, or the file is actually a symbolic link he owns).
Proper solution: make the kernel allow suid/cap scripts if: a) it is clear that the caller has no power over the script file -or- like a couple of other operating systems do b) pass the script as /dev/fd/x, referring to the originally kernel-opened trusted file.
Answer I’m looking for: for kernels which can’t do this (all Linux), I need a safe “now” solution.
What do I have in mind? A binary wrapper, which does what the kernel does not, in a safe way.
I would like to
hear from established wrappers for (Python) scripts that pass Linux capabilities and possibly suid from the script file to the interpreter to make them effective.
get comments on my wrapper proposed below
Problems with sudo: sudo is not a good wrapper, because it does not keep the kernel from falling into the "script got replaced" trap just explained ("man sudo" says so under Caveats).
Proposed wrapper
actually, I want a little program, which generates the wrapper
command line, e.g.: sudo suid_capability_wrapper ./script.py
script.py already has the suid bit and capabilities set (no function, just information)
the generator suid_capability_wrapper does
generate C(?) source and compile
compile the output into a file whose name defaults to basename script.py .py (i.e. script), or is given with the -o argument
set the wrapper owner, group, suid like script.py
set the permitted capabilities like script.py, ignore inheritable and effective caps
warn if the interpreter (e.g. /usr/bin/python) does not have the corresponding caps in its inheritable set (this is a system limitation: there is no way to pass on capabilities without suid-root otherwise)
the generated code does:
check if file descriptors 0, 1 and 2 are open, abort otherwise (possibly add more checks for too crazy environment conditions)
if the compiled-in target script path is relative, determine the wrapper's own location via /proc/self/exe
combine own path with relative path to the script to find it
check if the target script's owner, group, permissions, caps, and suid are still like the original (compiled-in) [this is the only non-necessary safety-check I want to include: otherwise I trust that script]
set the set of inherited capabilities equal to the set of permitted capabilities
execve() the interpreter similar to how the kernel does, but use the script-path we know, and the environment we got (the script should take care of the environment)
A bunch of notes and warnings may be printed by suid_capability_wrapper to educate the user about:
make sure nobody can manipulate the script (e.g. world writable)
be aware that suid/capabilities come from the wrapper, nothing cares about suid/xattr mounts for the script file
the interpreter (python) is execve()ed, it will get a dirty environment from here
it will also get the rest of the standard process environment passed through it, which is ... ... ... (read man-pages for exec to begin with)
use #!/usr/bin/python -E to immunize the python interpreter from environment variables
clean the environment yourself in the script or be aware that there is a lot of code you run as side-effect which does care about some of these variables
You don't want to use a shebang at all, on any file - you want to use a binary which invokes the Python interpreter, then tells it to start the script file for which you asked.
It needs to do three things:
Start a Python interpreter (from a trusted path, breaking chroot jails and so on). I suggest statically linking libpython and using the CPython API for this, but it's up to you.
Open the script file FD and atomically check that it is both suid and owned by root. Don't allow the file to be altered between the check and the execution - be careful.
Tell CPython to execute the script from the FD you opened earlier.
This will give you a binary which will execute all owned-by-root-and-suid scripts under Python only. You only need one such program, not one per script. It's your "suidpythonrunner".
As you surmised, you must clear the environment before running Python. LD_LIBRARY_PATH is taken care of by the kernel, but PYTHONPATH could be deadly.
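To illustrate the spirit of step 2 above, here is a hedged sketch in Python of the "open first, then check what you opened" idea; the real runner would be a compiled binary doing the equivalent with fstat(), and only the ownership/suid checks mentioned above are shown.

import os
import stat

def open_trusted_script(path):
    # Open first, then inspect the descriptor we actually got, so the file
    # cannot be swapped between the check and the execution.
    fd = os.open(path, os.O_RDONLY)
    st = os.fstat(fd)
    if st.st_uid != 0 or not (st.st_mode & stat.S_ISUID):
        os.close(fd)
        raise PermissionError("script is not root-owned and suid")
    # Execute from this very descriptor, e.g. via /proc/self/fd/<fd>.
    return fd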
I have got a list of files in txt files and I need to check them out in edit mode, make some changes (they are Word documents), and check them back in via WinCVS.
I know I can write tcl scripts or macro, or python scripts in wincvs shell but I have some problems with them.
I have installed Tcl 8.5 and selected the Tcl DLL in Admin > Preferences; Tcl is now available, but whenever I type and execute a Tcl script, it says:
can not find channel named "stdout"
Do you have any idea regarding this error?
Also, I cannot see the admin macros; it says the shell is not available. I have installed the latest version of Python and selected the related DLL in the preferences.
Could anyone give me a hint for checking out a list of files via WinCVS?
many thanks in advance,
regards
The problem is that Tcl's trying to build the standard file descriptors into available-by-default channels (i.e., stdin, stdout and stderr) but this goes wrong when they're not opened by default. That's the case on Windows when running disconnected (which is what happens inside GUI applications on that platform). When you're running with a full Tcl shell such as wish, this is worked around, but you're embedded so that's not going to work; the code to fix things isn't run because it's part of the shell startup and not the library initialization (after all, replacing a process-global resource like file descriptors is a little unfriendly for any library to do without the app or user asking it to!)
The simplest workaround is to not write to stdout – note that it's the default destination of the puts command, so you have to be careful – and to take care not to write to stderr either, as that's probably under the same restrictions (which means that you've got to be careful how you trap errors, especially while testing your script).