I wanted to use ddeint in my project. I copied the two examples provided on
https://pypi.org/project/ddeint/
and only the second one works. When I run the first one:
from pylab import cos, linspace, subplots
from ddeint import ddeint
def model(Y, t):
    return -Y(t - 3 * cos(Y(t)) ** 2)

def values_before_zero(t):
    return 1
tt = linspace(0, 30, 2000)
yy = ddeint(model, values_before_zero, tt)
fig, ax = subplots(1, figsize=(4, 4))
ax.plot(tt, yy)
ax.figure.savefig("variable_delay.jpeg")
The following error occurs:
Traceback (most recent call last):
File "C:\Users\piobo\PycharmProjects\pythonProject\main.py", line 14, in <module>
yy = ddeint(model, values_before_zero, tt)
File "C:\Users\piobo\PycharmProjects\pythonProject\venv\lib\site-packages\ddeint\ddeint.py", line 145, in ddeint
return np.array([g(tt[0])] + results)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2000,) + inhomogeneous part.
I'm using Python 3.9. Could anyone advise me on what I'm doing wrong? I didn't modify the code in any way.
Reproduction
Could not reproduce - the code runs when using the following versions:
Python 3.6.9 (python3 --version)
ddeint 0.2 (pip3 show ddeint)
Numpy 1.18.3 (pip3 show numpy)
Upgrading numpy to 1.19
Then I got the following warning:
/.local/lib/python3.6/site-packages/ddeint/ddeint.py:145: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array([g(tt[0])] + results)
But the output JPEG was created successfully.
Using Python 3.8 with the latest numpy
Using Python 3.8 with a fresh install of ddeint using numpy 1.24.0:
Python 3.8
ddeint 0.2
Numpy 1.24.0
Now I could reproduce the error.
Hypotheses
Since this example does not run successfully out-of-the-box in the question's environment, I assume it is an issue with numpy versions.
Issue with versions
See Numpy 1.19 deprecation warning · Issue #9 · Zulko/ddeint on GitHub, which seems related to this code line that we see in the error stacktrace:
return np.array([g(tt[0])] + results)
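The underlying numpy behavior can be reproduced without ddeint (a minimal sketch; which versions warn versus raise follows numpy's deprecation notes):

import numpy as np

# A ragged nested sequence: on recent numpy (>= 1.24) this raises the same
# "inhomogeneous shape" ValueError, while numpy ~1.19-1.23 only emits a
# VisibleDeprecationWarning and builds an object array.
np.array([1.0, np.array([1.0, 2.0])])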
Directly using numpy
I suppose the tt value is the issue here. It is returned by pylab's linspace() function call (below written with module prefix):
tt = pylab.linspace(0, 30, 2000)
In Matplotlib's pylab docs there is a warning:
Since heavily importing into the global namespace may result in unexpected behavior, the use of pylab is strongly discouraged. Use matplotlib.pyplot instead.
Furthermore, the pylab module is described as a mixed bag:
pylab is a module that includes matplotlib.pyplot, numpy, numpy.fft, numpy.linalg, numpy.random, and some additional functions, all within a single namespace. Its original purpose was to mimic a MATLAB-like way of working by importing all functions into the global namespace. This is considered bad style nowadays.
Maybe you can use the numpy.linspace() function directly.
Attention: there was a change to the dtype default:
The type of the output array. If dtype is not given, the data type is inferred from start and stop. The inferred dtype will never be an integer; float is chosen even if the arguments would produce an array of integers.
Since the start and stop arguments here are given as the integers 0 and 30, one might have expected an integer dtype (as in the previous numpy version 1.19), but a float dtype is inferred.
Note the breaking change:
Since 1.20.0
Values are rounded towards -inf instead of 0 when an integer dtype is specified. The old behavior can still be obtained with np.linspace(start, stop, num).astype(int)
So, we could replace the tt-assignment line with:
import numpy as np
tt = np.linspace(0, 30, 2000).astype(int)
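As a quick sanity check of the dtype behavior (an illustrative snippet, not from the original example):

import numpy as np

print(np.linspace(0, 30, 5))               # floats: 0.0, 7.5, 15.0, 22.5, 30.0
print(np.linspace(0, 30, 5).astype(int))   # truncated to ints: 0, 7, 15, 22, 30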
Related
I want to plot the Poisson distribution, but I get negative probabilities for lambda >= 9.
This code generates plots for different lambdas:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import factorial
for lambda_val in range(1, 12, 2):
    plt.figure()
    k = np.arange(0, 20)
    y = np.power(lambda_val, k)*np.exp(-lambda_val)/factorial(k)
    plt.bar(k, y)
    plt.title('lambda = ' + str(lambda_val))
    plt.xlabel('k')
    plt.ylabel('probability')
    plt.ylim([-0.1, 0.4])
    plt.grid()
    plt.show()
Please see these two plots:
Lambda = 5 looks fine in my opinion.
Lambda = 9 does not.
I'm quite sure it has something to do with np.power because
np.power(11, 9)
gives me: -1937019605, whereas
11**9
gives me: 2357947691 (same in WolframAlpha).
But if I avoid np.power and use
y = (lambda_val**k)*math.exp(-lambda_val)/factorial(k)
for calculating the probability, I get negative values as well. I am totally confused. Can anybody explain the effect to me, or tell me what I am doing wrong? Thanks in advance. :)
Your problem is due to 32-bit integer overflows. This happens because Numpy is sometimes compiled with 32-bit integers even though the platform (OS + processor) is a 64-bit one. The overflow occurs because Numpy automatically converts the unbounded integers of the Python interpreter to the native np.int_ type. You can check whether this type is a 64-bit one using np.int_ is np.int64. AFAIK, the default Numpy binary packages compiled for Windows available on pip use 32-bit integers, while the Linux packages use 64-bit integers (assuming you are on a 64-bit platform).
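For example, a quick check of the default integer type (an illustrative snippet, not part of the original answer):

import numpy as np

print(np.int_ is np.int64)    # True on typical 64-bit Linux builds, often False on Windows
print(np.iinfo(np.int_).max)  # largest value the default integer type can hold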
The issue can be easily reproduced using:
In [546]: np.power(np.int32(11), np.int32(9))
Out[546]: -1937019605
It can also be solved using:
In [547]: np.power(np.int64(11), np.int64(9))
Out[547]: 2357947691
In the second expression, you use k, which is of type np.int_ by default, and this is certainly why you get the same problem. Fortunately, you can tell Numpy that the integers should be bigger. Note that Numpy has some implicit rules to avoid overflows, but it is hard to avoid them in all cases without strongly impacting performance. Here is a fixed formula:
k = np.arange(0, 20, dtype=np.int64)
y = np.power(lambda_val, k) * np.exp(-lambda_val) / factorial(k)
The rule of thumb is to be very careful about implicit conversions when you get unexpected results.
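As a further option (my addition, not from the original answer): since the probabilities are floats anyway, the powers can be computed in floating point from the start, which side-steps integer overflow entirely:

import numpy as np
from scipy.special import factorial

lambda_val = 11
k = np.arange(0, 20)
# Casting the base to float makes np.power work in float64, so no integer overflow occurs
y = np.power(float(lambda_val), k) * np.exp(-lambda_val) / factorial(k)
print((y >= 0).all())   # True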
Hi, I am new to computer vision and Stack Overflow, and I have a problem with my Python 3 program on Windows: the cv2.findContours() function returns 2 values instead of three as in the documentation. I unpacked 2 return values to work around this; the type of the first (image) is a list and that of the second (cnts) is an int32, but neither of them can be used in cv2.drawContours() without an error. Here I pass image as the parameter because it is the only list returned, so I guess it is the contours list that cv2.drawContours() expects. So here is the code:
#This is the program for a document scanner so as to extract a document
#from any image and apply perspective transform to show it as final result
import numpy as np
import cv2
import imutils
from matplotlib import pyplot as plt
cap=cv2.VideoCapture(0)
ret,img=cap.read()
img1=img.copy()
cv2.imshow('Image',img1)
img1=cv2.cvtColor(img1,cv2.COLOR_BGR2GRAY)
img1=cv2.bilateralFilter(img1,7,17,17)
img1=cv2.Canny(img1,30,200)
image,cnts=cv2.findContours(img1,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
#cnts=np.asarray(cnts,dtype='uint8')
cnts=np.array(cnts)
cv2.imshow('Edge',img1)
print('cnts var data type',cnts.dtype)
#print("Hi")
img=cv2.drawContours(img,[image],-1,(255,255,0),3)
Here is the Python IDLE shell output appearing now:
cnts var data type is int32
Traceback (most recent call last):
File "D:\PyCharm Projects\Test_1_docscanner.py", line 20, in <module>
img=cv2.drawContours(img,[image],-1,(255,255,0),3)
TypeError: contours is not a numpy array, neither a scalar
I finally got it working. Here is what I did:
First, I had previously messed up most of my environment variables, having suppressed some system variables. With the help of a friend, I retrieved as many as I could and deleted those I had mistakenly created.
Secondly, I uninstalled all other Python versions (at least I tried, though I still see their icons around; they seem to be "undeletable"), including the one I was using (Python 3.7.3). I then installed Python 3.7.4.
Thirdly, and this must be the actual answer: I added the line cnts=imutils.grab_contours(cnts) before the cv2.drawContours() call, as sketched below. I got this from Adrian Rosebrock's imutils package on GitHub. My code now works because that line parses the contours for whatever OpenCV version you are using, thereby avoiding version conflicts between the output of cv2.findContours() and the input expected by cv2.drawContours().
In conclusion, I had tried imutils.grab_contours() before these changes on Python 3.7.3 and it did not work, but I believe the combination of cnts=imutils.grab_contours(cnts) and the update to Python 3.7.4 is what solved the issue.
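A minimal sketch of the relevant change applied to the question's code (assuming imutils is installed; img and img1 are the variables already set up in the question's script):

import cv2
import imutils

# ... same setup as in the question: capture a frame into img, preprocess a copy into img1 ...
cnts = cv2.findContours(img1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)   # returns the contour list for both the 2-value and 3-value return forms
img = cv2.drawContours(img, cnts, -1, (255, 255, 0), 3)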
Hope this is helpful
I am trying to run some code (which is not mine) that uses 'stack' from the numpy library.
Looking into the documentation, stack really does exist in numpy:
https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.stack.html
but when I run the code, I get:
AttributeError: 'module' object has no attribute 'stack'
Any idea how to fix this?
code extract:
s_t = np.stack((x_t, x_t, x_t, x_t), axis = 2)
Do I need some older libraries?
Thanks.
EDIT:
For some reason, Python uses an older version of the numpy library. pip2 freeze prints "numpy==1.10.4". I've also reinstalled numpy and got "Successfully installed numpy-1.10.4", but printing np.version.version in the code gives me 1.8.2.
The function numpy.stack is new; it appeared in numpy 1.10.0. If you can't get that version running on your system, the code can be found (near the end of the file) at:
https://github.com/numpy/numpy/blob/f4cc58c80df5202a743bddd514a3485d5e4ec5a4/numpy/core/shape_base.py
I need to examine it a bit more, but the working part of the function is:
sl = (slice(None),) * axis + (_nx.newaxis,)
expanded_arrays = [arr[sl] for arr in arrays]
return _nx.concatenate(expanded_arrays, axis=axis)
So it adds an np.newaxis to each array, and then concatenates on that axis. So, like vstack, hstack and dstack, it adjusts the dimensions of the inputs and then uses np.concatenate. Nothing particularly new or magical.
So if x is (2,3) shape, x[:,np.newaxis] is (2,1,3), x[:,:,np.newaxis] is (2,3,1) etc.
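For instance, a quick shape check (illustration only):

import numpy as np

x = np.ones((2, 3))
print(x[:, np.newaxis].shape)      # (2, 1, 3)
print(x[:, :, np.newaxis].shape)   # (2, 3, 1)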
If x_t is 2d, then
np.stack((x_t, x_t, x_t, x_t), axis = 2)
is probably the equivalent of
np.dstack((x_t, x_t, x_t, x_t))
creating a new array that has size 4 on axis 2.
Or:
tmp = x_t[:,:,None]
np.concatenate((tmp,tmp,tmp,tmp), axis=2)
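If upgrading numpy is not an option, a minimal backport along the lines of the snippet above could look like this (a sketch for equal-shaped arrays and a non-negative axis, not a drop-in replacement for numpy.stack):

import numpy as np

def stack_backport(arrays, axis=0):
    # Insert a new axis at the requested position in each array, then concatenate there.
    sl = (slice(None),) * axis + (np.newaxis,)
    expanded = [np.asarray(arr)[sl] for arr in arrays]
    return np.concatenate(expanded, axis=axis)

x_t = np.zeros((4, 5))
s_t = stack_backport((x_t, x_t, x_t, x_t), axis=2)
print(s_t.shape)   # (4, 5, 4)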
You likely have 2 numpy libraries: one in your system libraries, and the other in your Python site-packages, which is maintained by pip. You have a few options to fix this.
You should reorder the libraries in sys.path so that your pip-installed numpy library comes before the native numpy library. Check this out to fix your path permanently.
Also look into virtualenv or Anaconda, which will allow you to work with specific versions of a package when you have multiple versions on your system.
Here's another suggestion about how to ensure pip installs the library on your user path (System Library).
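A quick way to check which copy of numpy is actually being imported (a diagnostic sketch, not from the original answers):

import sys
import numpy as np

print(np.version.version)   # the version that actually got imported
print(np.__file__)          # where it was loaded from; compare with pip's site-packages path
print(sys.path)             # the import search order decides which copy wins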
I have the following simple, erroneous code:
from numpy import random, sqrt
points = random.randn(20,3);
points = points / sqrt(sum(points**2,1))
In IPython (with %autoreload 2), if I copy and paste it into the terminal, I get a ValueError as one would expect. If I save this as a file and use %run, then it runs without error (it shouldn't).
What's going on here?
I just figured it out, but since I had already written the question, it might be useful to someone else.
It is caused by the difference between numpy's sum and the native sum. Changing the first line to
from numpy import random, sqrt, sum
fixes it, as %run uses the native version by default (at least with my settings). The native sum does not take an axis parameter, but it does not throw an error either, because the second argument is a start parameter, which is in effect just an offset added to the sum. So,
>>> sum([1,2,3],10000)
10006
for the native version. And "axis out of bounds" for the numpy one.
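To illustrate the difference between the two sums (my sketch; the shapes follow from numpy broadcasting):

import numpy as np

points = np.random.randn(20, 3)
print(np.sum(points**2, 1).shape)   # (20,): per-row sums, because 1 is the axis argument
print(sum(points**2, 1).shape)      # (3,): the builtin sum adds the rows together, starting from 1
# Dividing a (20, 3) array by a (3,) array broadcasts silently, which is why %run did not raise,
# while dividing it by a (20,) array raises the expected ValueError.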
I'm a Python newbie coming from using MATLAB extensively. I was converting some code that uses log2 in MATLAB and I used the NumPy log2 function and got a different result than I was expecting for such a small number. I was surprised since the precision of the numbers should be the same (i.e. MATLAB double vs NumPy float64).
MATLAB Code
a = log2(64);
--> a=6
Base Python Code
import math
a = math.log2(64)
--> a = 6.0
NumPy Code
import numpy as np
a = np.log2(64)
--> a = 5.9999999999999991
Modified NumPy Code
import numpy as np
a = np.log(64) / np.log(2)
--> a = 6.0
So the native NumPy log2 function gives a result that causes the code to fail a test since it is checking that a number is a power of 2. The expected result is exactly 6, which both the native Python log2 function and the modified NumPy code give using the properties of the logarithm. Am I doing something wrong with the NumPy log2 function? I changed the code to use the native Python log2 for now, but I just wanted to know the answer.
No. There is nothing wrong with the code; it is just that floating-point numbers cannot be represented perfectly on our computers. Always use an epsilon value to allow a range of error when checking float values. Read The Floating Point Guide and this post to learn more.
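For example (a sketch of the usual tolerance-based check):

import numpy as np

a = np.log2(64)
print(a == 6)             # may be False, depending on how log2 was computed
print(np.isclose(a, 6))   # True: comparison within a small tolerance
print(abs(a - 6) < 1e-9)  # True: manual epsilon check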
EDIT - As cgohlke has pointed out in the comments:
Depending on the compiler used to build numpy, np.log2(x) is either computed by the C library or as 1.442695040888963407359924681001892137*np.log(x). See this link.
This may be the reason for the erroneous output.