How can I see a warning again without restarting Python? Right now I see each warning only once.
Consider this code for example:
import pandas as pd
pd.Series([1]) / 0
I get
RuntimeWarning: divide by zero encountered in true_divide
But when I run it again it executes silently.
How can I see the warning again without restarting python?
I have tried to do
del __warningregistry__
but that doesn't help.
Seems like only some types of warnings are stored there.
For example if I do:
def f():
    X = pd.DataFrame(dict(a=[1,2,3], b=[4,5,6]))
    Y = X.iloc[:2]
    Y['c'] = 8
then this will raise warning only first time when f() is called.
However, now if I do del __warningregistry__ I can see the warning again.
What is the difference between the first and the second warning? Why is only the second one stored in this __warningregistry__? Where is the first one stored?
How can I see the warning again without restarting python?
As long as you do the following at the beginning of your script, you will not need to restart.
import pandas as pd
import numpy as np
import warnings
np.seterr(all='warn')
warnings.simplefilter("always")
At this point every time you attempt to divide by zero, it will display
RuntimeWarning: divide by zero encountered in true_divide
Explanation:
We are setting up a couple of warning controls. The first (np.seterr) tells NumPy how to handle floating-point errors; I have set it to warn on all of them, but if you are only interested in the divide-by-zero warnings, change the parameter from all to divide.
Next we tell the warnings module to always display warnings, which we do by installing a warning filter.
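For instance, here is a minimal sketch that narrows both controls to division warnings only (the values used are just one possible configuration):

import numpy as np
import warnings

# Only turn division problems into warnings; leave the other
# floating-point error classes at their defaults
np.seterr(divide='warn')
# Only relax the filter for RuntimeWarning instead of for everything
warnings.simplefilter("always", category=RuntimeWarning)

for _ in range(3):
    np.true_divide(1, 0)  # warns on every call now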
What is the difference between the first and the second warning? Why is only the second one stored in this __warningregistry__? Where is the first one stored?
This is described in the bug report for this issue:
If you didn't raise the warning before using the simple filter, this
would have worked. The undesired behavior is because of
__warningsregistry__. It is set the first time the warning is emitted.
When the second warning comes through, the filter isn't even looked at.
I think the best way to fix this is to invalidate __warningsregistry__
when a filter is used. It would probably be best to store warnings data
in a global then instead of on the module, so it is easy to invalidate.
Incidentally, the bug has been closed as fixed for versions 3.4 and 3.5.
warnings is a pretty awesome standard library module. You're going to enjoy getting to know it :)
A little background
The default behavior of warnings is to only show a particular warning, coming from a particular line, on its first occurrence. For instance, the following code will result in two warnings shown to the user:
import numpy as np
# 10 warnings, but only the first copy will be shown
for i in range(10):
    np.true_divide(1, 0)
# This is on a separate line from the other "copies", so its warning will show
np.true_divide(1, 0)
You have a few options to change this behavior.
Option 1: Reset the warnings registry
When you want Python to "forget" which warnings you've seen before, you can use resetwarnings:
import warnings
import numpy as np

# warns every time, because the warnings registry has been reset
for i in range(10):
    warnings.resetwarnings()
    np.true_divide(1, 0)
Note that this also resets any warning configuration changes you've made. Which brings me to...
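A quick sketch of that caveat (assuming you start from the default filters):

import warnings
import numpy as np

warnings.simplefilter("ignore")  # a configuration change you made...
warnings.resetwarnings()         # ...is wiped out along with the registry
np.true_divide(1, 0)             # warns, back to the default once-per-location behavior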
Option 2: Change the warnings configuration
The warnings module documentation covers this in greater detail, but one straightforward option is just to use a simplefilter to change that default behavior.
import warnings
import numpy as np
# Show all warnings
warnings.simplefilter('always')
for i in range(10):
    # Now this will warn every loop
    np.true_divide(1, 0)
Since this is a global configuration change, it has global effects which you'll likely want to avoid (all warnings anywhere in your application will show every time). A less drastic option is to use the context manager:
with warnings.catch_warnings():
    warnings.simplefilter('always')
    for i in range(10):
        # This will warn every loop
        np.true_divide(1, 0)

# Back to normal behavior: only warn once
for i in range(10):
    np.true_divide(1, 0)
There are also more granular options for changing the configuration on specific types of warnings. For that, check out the docs.
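For example, a minimal sketch of a more granular filter (the message argument is a regex matched against the start of the warning text):

import warnings
import numpy as np

# Always show warnings whose text starts with "divide by zero",
# leaving all other warnings at the default once-per-location behavior
warnings.filterwarnings("always", message="divide by zero",
                        category=RuntimeWarning)

for i in range(3):
    np.true_divide(1, 0)  # shown all three times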
Related
It seems as though some scipy modules are messing with my warning filters. Consider the following code. My understanding is that it should only throw one warning, because of the "once" filter I set for my custom warning class. However, the warning after the scipy import gets shown as well.
This is with python 3.7 and scipy 1.6.3.
import warnings
class W(DeprecationWarning): pass
warnings.simplefilter("once", W)
warnings.warn('warning!', W)
warnings.warn('warning!', W)
from scipy import interpolate
warnings.warn('warning!', W)
This only seems to happen when I import certain scipy modules. A generic "import scipy" doesn't do this.
I've narrowed it down to the filters set in scipy.special.sf_error.py and scipy.sparse.__init__.py. I don't see how that code would cause the problem, but it does. When I comment those filters out, my code works as expected.
Am I misunderstanding something? Is there a workaround that doesn't involve overwriting warnings.filterwarnings/warnings.simplefilter?
This is an open Python bug: https://bugs.python.org/issue29672.
Note, in particular, the last part of the comment by Tom Aldcroft:
Even a documentation update would be useful. This could explain not only catch_warnings(), but in general the unexpected feature that if any package anywhere in the stack sets a warning filter, then that globally resets whether a warning has been seen before (via the call to _filters_mutated()).
The code in scipy/special/sf_error.py sets a warning filter, and that causes a global reset of which warnings have been seen before. (If you add another call of warnings.warn('warning!', W) to the end of your sample code, you should see that it does not raise a warning.)
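Here is a minimal sketch reproducing the reset without scipy; any filter change, even one for an unrelated category, has the same effect (observed with CPython's default warnings implementation):

import warnings

class W(DeprecationWarning): pass

warnings.simplefilter("once", W)
warnings.warn('warning!', W)  # shown
warnings.warn('warning!', W)  # suppressed by the "once" filter

# An unrelated filter change bumps the internal filter version...
warnings.filterwarnings("ignore", category=FutureWarning)
warnings.warn('warning!', W)  # ...so this one is shown again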
I couldn't remember whether np.zeros(x) will automatically convert a float x to int or not, so I tried it in IDLE. What I got the first time was a warning message that refers to the script I had run earlier in the same session, and then warns me that "using a non-integer number instead of an integer will result in an error in the future".
I tried it again, and the warning did not repeat, and the array was instantiated as expected with dtype=float.
Why does the warning say there will be an error (as opposed to could be), and what will it be? And why did the first non-blank line of the script I'd run much earlier today get embedded into the warning?
This may be a window into how IDLE is working - so I'm hoping to learn something from this. I've read here that I can suppress the warning, but I would like to understand its behavior first.
>>>
>>> equator = np.zeros(3.14)
Warning (from warnings module):
File "/Users/xxxxxx/Documents/xxxxxx/CYGNSS/CYGNSS TLE interpolator v00.py", line 2
CYGNSS_BLOB = """1 41884U 16078A 16350.61686218 -.00000033 00000-0 00000+0 0 9996
VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
>>>
>>> equator = np.zeros(3.14)
>>> equator
array([ 0., 0., 0.])
>>>
"In the future" means "in a future version of NumPy". So far you get a warning, not an error. The assignment was made (you didn't need to run the command the second time, equator was already assigned as you wanted) and execution proceeded normally.
But some future version of NumPy will throw an error, halting the execution.
The warning is not repeated again within the same session; there's some logic there intended to avoid nagging the user too much.
I can't explain the line reference; for me it refers to __main__:1:.
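For what it's worth, in current NumPy releases that deprecation has run its course and a float size is rejected outright; a small sketch (exact behavior depends on your NumPy version):

import numpy as np

try:
    np.zeros(3.14)  # recent NumPy raises instead of warning
except TypeError as e:
    print(e)  # 'float' object cannot be interpreted as an integer
equator = np.zeros(int(3.14))  # an explicit conversion works everywhere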
I'm running the following code
positive_values = values.where(values > 0)
In this example values may contain nan elements. I believe that for this reason, I'm getting the following runtime warning:
RuntimeWarning: invalid value encountered in greater_equal if not reflexive
Does xarray have methods of suppressing these warnings?
The warnings module provides the functionality you are looking for.
To suppress all warnings do (see John Coleman's answer for why this is not good practice):
import warnings
warnings.simplefilter("ignore")
# warnings.simplefilter("ignore", category=RuntimeWarning) # for RuntimeWarning only
To make the suppression temporary do it inside the warnings.catch_warnings() context manager:
import warnings
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    positive_values = values.where(values > 0)
The context manager saves the original warning settings prior to entering the context and then sets them back when exiting the context.
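A short sketch verifying that save-and-restore behavior:

import warnings

before = list(warnings.filters)
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    assert warnings.filters[0][0] == "ignore"  # our filter is active here
assert warnings.filters == before  # original configuration is back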
As a general rule of thumb, warnings should be heeded rather than suppressed. Either you know what causes the warning or you don't. If you know what causes the warning, there is usually a simple workaround. If you don't know what causes the warning, there is likely a bug. In this case, you can mask out the null values with an element-wise & (note the parentheses: & binds more tightly than >):
positive_values = values.where(values.notnull() & (values > 0))
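Put together as a runnable sketch (the sample data is made up purely for illustration):

import numpy as np
import xarray as xr

values = xr.DataArray([1.0, -2.0, np.nan, 3.0])
# The parentheses matter: & binds more tightly than >
positive_values = values.where(values.notnull() & (values > 0))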
I have the following simple, erroneous code:
from numpy import random, sqrt
points = random.randn(20, 3)
points = points / sqrt(sum(points**2, 1))
In ipython (with %autoreload 2) if I copy and paste it into the terminal I get a ValueError as one would expect. If I save this as a file and use %run then it runs without error (it shouldn't).
What's going on here?
I just figured it out, but since I had already written up the question, it might be useful to someone else.
It is a difference between the numpy sum and the native sum. Changing the first line to
from numpy import random, sqrt, sum
fixes it, as %run uses the native version by default (at least with my settings). The native sum does not take an axis parameter, but it does not throw an error either, because its second argument is a start parameter, which is in effect just an offset added to the sum. So,
>>> sum([1,2,3],10000)
10006
for the native version. And "axis out of bounds" for the numpy one.
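To see the two behaviors side by side, a small sketch:

import numpy as np

data = [1, 2, 3]
print(sum(data, 10000))  # builtin: second argument is a start value -> 10006
print(np.sum(data, 0))   # numpy: second argument is an axis -> 6
# np.sum(data, 1)        # numpy: raises "axis 1 is out of bounds ..."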
I received the warning the first time only.
Is this normal?
>>> cv=LassoCV(cv=10).fit(x,y)
C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\linear_model\coordinate_descent.py:418: UserWarning: Objective did not converge. You might want to increase the number of iterations
' to increase the number of iterations')
>>> cv=LassoCV(cv=10).fit(x,y)
>>>
It's because, by default, the Python warnings filter is set to show a particular warning only the first time it is caught.
If you want to get all the warnings, just add this:
import warnings
warnings.simplefilter("always")
because the "objective did not converge". The maximum iterations are by default 1000 and you are not setting them. Try setting the max_iter parameter to a higher value to avoid the warning.