compatibility between sage and numpy - python

Here are two lines of code for the purpose of generating a random permutation of size 4:
from numpy import random
t = random.permutation(4)
This can be executed in Python, but not in Sage, which gives the following error:
TypeError Traceback (most recent call last)
<ipython-input-3-033ef4665637> in <module>()
1 from numpy import random
----> 2 t = random.permutation(Integer(4))
mtrand.pyx in mtrand.RandomState.permutation (numpy/random/mtrand/mtrand.c:34842)()
mtrand.pyx in mtrand.RandomState.shuffle (numpy/random/mtrand/mtrand.c:33796)()
TypeError: len() of unsized object
Why?
A bit more detail: I executed the code in Python 3, and mtrand is also in the Python 3 directory, which should rule out the possibility that Sage is calling the Python 2 version of numpy.

To escape Sage's preparser, you can also append the letter r (for "raw") to numerical input.
from numpy import random
t = random.permutation(4r)
The advantage of 4r over int(4) is that 4r bypasses the
preparser, while int(4) is preparsed as int(Integer(4)) so that
the Python integer is transformed into a Sage integer and then
transformed back into a Python integer.
In the same way, 1.5r will give you a pure Python float rather than
a Sage "real number".
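As a quick illustration, here is what a Sage session might show (a sketch; the exact type names vary between Sage versions):
sage: type(4)
<class 'sage.rings.integer.Integer'>
sage: type(4r)
<class 'int'>
sage: type(1.5r)
<class 'float'>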

The reason this doesn't work in Sage is that Sage preparses its input, turning "4" from a Python int to a Sage Integer. In Sage, this will work:
from numpy import random
t = random.permutation(int(4))
Or you can turn the preparser off:
preparser(False)
t = random.permutation(4)

Related

ddeint - problem with reproducing examples provided in pypi

I wanted to use ddeint in my project. I copied the two examples provided at
https://pypi.org/project/ddeint/
and only the second one works. When I'm running the first one:
from pylab import cos, linspace, subplots
from ddeint import ddeint

def model(Y, t):
    return -Y(t - 3 * cos(Y(t)) ** 2)

def values_before_zero(t):
    return 1

tt = linspace(0, 30, 2000)
yy = ddeint(model, values_before_zero, tt)
fig, ax = subplots(1, figsize=(4, 4))
ax.plot(tt, yy)
ax.figure.savefig("variable_delay.jpeg")
The following error occurs:
Traceback (most recent call last):
File "C:\Users\piobo\PycharmProjects\pythonProject\main.py", line 14, in <module>
yy = ddeint(model, values_before_zero, tt)
File "C:\Users\piobo\PycharmProjects\pythonProject\venv\lib\site-packages\ddeint\ddeint.py", line 145, in ddeint
return np.array([g(tt[0])] + results)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2000,) + inhomogeneous part.
I'm using Python 3.9. Could anyone advise me on what I'm doing wrong? I didn't modify the code in any way.
Reproduction
Could not reproduce - the code runs when using the following versions:
Python 3.6.9 (python3 --version)
ddeint 0.2 (pip3 show ddeint)
Numpy 1.18.3 (pip3 show numpy)
Upgrading numpy to 1.19
Then I got the following warning:
/.local/lib/python3.6/site-packages/ddeint/ddeint.py:145: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array([g(tt[0])] + results)
But the output JPEG was created successfully.
Using Python 3.8 with the latest numpy
Using Python 3.8 with a fresh install of ddeint using numpy 1.24.0:
Python 3.8
ddeint 0.2
Numpy 1.24.0
Now I could reproduce the error.
Hypotheses
Since this example does not run successfully out-of-the-box in the question's environment, I assume it is an issue with numpy versions.
Issue with versions
See Numpy 1.19 deprecation warning · Issue #9 · Zulko/ddeint · GitHub which seems related to this code line that we see in the error stacktrace:
return np.array([g(tt[0])] + results)
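As a minimal sketch of the underlying change (with made-up values, not the actual ddeint data): if g(tt[0]) returns a scalar while the solver results are arrays, numpy is asked to build an array from a ragged sequence, which older versions only warned about but numpy 1.24 rejects.
import numpy as np

# Hypothetical ragged input: a scalar followed by an array,
# shaped like [g(tt[0])] + results when the shapes don't match.
ragged = [1.0, np.array([2.0, 3.0])]

# numpy <= 1.23: VisibleDeprecationWarning, produces an object array.
# numpy >= 1.24: ValueError: setting an array element with a sequence.
# Passing dtype=object explicitly keeps it working on both:
arr = np.array(ragged, dtype=object)
print(arr)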
Directly using numpy
I suppose the tt value is the issue here. It is returned by pylab's linspace() function call (below written with module prefix):
tt = pylab.linspace(0, 30, 2000)
In Matplotlib's pylab docs there is a warning:
Since heavily importing into the global namespace may result in unexpected behavior, the use of pylab is strongly discouraged. Use matplotlib.pyplot instead.
Furthermore, the module pylab is described as a mixed bag:
pylab is a module that includes matplotlib.pyplot, numpy, numpy.fft, numpy.linalg, numpy.random, and some additional functions, all within a single namespace. Its original purpose was to mimic a MATLAB-like way of working by importing all functions into the global namespace. This is considered bad style nowadays.
Maybe you can use the numpy.linspace() function directly.
Attention: there was a change to the dtype default:
The type of the output array. If dtype is not given, the data type is inferred from start and stop. The inferred dtype will never be an integer; float is chosen even if the arguments would produce an array of integers.
Since the start and stop arguments are given here as the integers 0 and 30, you might have expected an integer dtype (as with the previous numpy version 1.19), but the inferred dtype will be float.
Note the breaking change:
Since 1.20.0
Values are rounded towards -inf instead of 0 when an integer dtype is specified. The old behavior can still be obtained with np.linspace(start, stop, num).astype(int)
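A small sketch of that rounding difference (illustrative values, unrelated to ddeint):
import numpy as np

# numpy >= 1.20 rounds towards -inf when an integer dtype is given:
print(np.linspace(-1, 1, 4, dtype=int))   # [-1 -1  0  1]
# The old behavior (rounding towards 0) via astype:
print(np.linspace(-1, 1, 4).astype(int))  # [-1  0  0  1]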
So, we could replace the tt-assignment line with:
import numpy as np
tt = np.linspace(0, 30, 2000).astype(int)

Unexpected result with Numpy ctypes data_as pointer arrays

I am getting an unexpected result with two NumPy arrays when I represent them as ctypes pointers. I have created a minimal example that reproduces the problem I am running into:
import numpy as np
from ctypes import c_float, POINTER
c_float_p = POINTER(c_float)
a = np.array([1], dtype=c_float).ctypes.data_as(c_float_p)
b = np.array([2], dtype=c_float).ctypes.data_as(c_float_p)
print('a: {}, b: {}'.format(a.contents, b.contents))
When I run this I get the following output:
a: c_float(2.0), b: c_float(2.0)
Clearly the contents of the first array have been overwritten with the contents of the second. Hence, it seems that the two pointers point to the same location. How can I prevent this from happening?
Note: I am using Python 3.6, Numpy 1.15.4.
It turns out the above example runs as expected after updating NumPy to 1.16.0.
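A likely explanation (my assumption; the thread doesn't confirm it) is that before NumPy 1.16, the pointer returned by data_as did not keep the temporary array alive, so its buffer could be freed and reused by the second allocation. Keeping explicit references to the arrays avoids the problem on older versions too:
import numpy as np
from ctypes import c_float, POINTER

c_float_p = POINTER(c_float)

# Bind the arrays to names so they stay alive while the
# pointers are in use, instead of creating temporaries.
a_arr = np.array([1], dtype=c_float)
b_arr = np.array([2], dtype=c_float)
a = a_arr.ctypes.data_as(c_float_p)
b = b_arr.ctypes.data_as(c_float_p)
print('a: {}, b: {}'.format(a.contents, b.contents))  # a: c_float(1.0), b: c_float(2.0)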

Python. How to call a function that expect ctypes array. Strand/Straus7

I'm using Python scripting for Strand/Straus7, importing its DLL.
I'm trying to call a function to set the units, called St7SetUnits, following the manual (Img.1) and looking at the .py script of the DLL that I imported (Img.2). The function expects a c_long and a ctypes.POINTER(c_long), as specified in the script (Img.3).
Here is the complete manual: strand7.com/downloads/Strand7%20R246%20API%20Manual%20TOC.pdf
and here the .py scripting https://www.dropbox.com/s/88plz2funjqy1vb/St7API.py?dl=0
As specified at the beginning of the manual, I have to convert the list into a ctypes array (Img.4).
The function I call is the same as in the example, but I can't call it correctly.
I write
import St7API as SA
import ctypes
SA.St7Init()
unitsArray = ctypes.c_int * SA.kLastUnit
units = unitsArray()
units[0] = 0
units[1] = 1
units[2] = 3
units[3] = 1
units[4] = 1
units[5] = 2
SA.St7SetUnits(1, units)
But it returns the error
expected c_long, got c_long_Array_6
If I try something else, for example passing an int from the array
SA.St7SetUnits(1, units[0])
the error changes to
expected LP_c_long, got int
I tried many solutions, but none of them work.
Can anyone help me?
Thanks a lot
I know it's been a while, but this works for me:
units_type = ctypes.c_long * 6
Units = units_type(0, 1, 3, 1, 1, 2)
St7API.St7SetUnits(1, Units)
This presumably works because St7SetUnits is declared with a POINTER(c_long) argument, and ctypes will convert a c_long array (but not a c_int array) to that pointer type automatically.
From your screenshots it looks like you might be using Grasshopper. If so, you might need to explicitly cast the units array to a pointer by adding this line to the top of your script:
PI = ctypes.POINTER(ctypes.c_long)
And do this whenever you pass an array from IronPython to St7API:
SA.St7SetUnits(1, PI(units))
This answer has slightly more.

IndexError: failed to coerce slice entry of type TensorVariable to integer

I ran "ipython debugf.py" and it gave me the error message below:
IndexError Traceback (most recent call last)
/home/ml/debugf.py in <module>()
8 fff = theano.function(inputs=[index],
9 outputs=cost,
---> 10 givens={x: train_set_x[index: index+1]})
IndexError: failed to coerce slice entry of type TensorVariable to integer
I searched the forum with no luck. Can someone help?
Thanks!
debugf.py:
import theano.tensor as T
import theano
import numpy
index = T.lscalar()
x = T.dmatrix()
cost = x + index
train_set_x = numpy.arange(100).reshape([20, 5])
fff = theano.function(inputs=[index],
                      outputs=cost,
                      givens={x: train_set_x[index: index+1]})  # <--- Error here
Change the train_set_x variable to a theano.shared variable, and the code is OK.
I don't know the reason, but it works! Hope this post can help others.
The corrected code is below:
import theano.tensor as T
import theano
import numpy
index = T.lscalar()
x = T.dmatrix()
cost = x + index
train_set_x = numpy.arange(100.).reshape([20, 5])  # <--- change to float, because
                                                   # shared must be floatX type
shared_x = theano.shared(train_set_x)              # <--- change to a shared variable
fff = theano.function(inputs=[index],
                      outputs=cost,
                      givens={x: shared_x[index: index+1]})  # <--- change to shared_x
The reason this occurs is that index is a tensor symbolic variable (a long scalar, as you can see on line 4). So when Python tries to build the dictionary that Theano needs for its givens input, it tries to slice the numpy array using the symbolic variable, which it obviously can't do because it doesn't have a value yet (it is only set when you input something to the function).
As you've realised, passing the data through theano.shared is the best approach. This means all the training data can be offloaded to the GPU, and then sliced/indexed on the fly to run each example.
However, you might find that you have too much training data to fit in your GPU's memory, or for some other reason don't want to use a shared variable. Then you could just change your function definition (note that index must remain an input, since cost depends on it):
data = T.matrix()
fff = theano.function(inputs=[index, data],
                      outputs=cost,
                      givens={x: data})
Then instead of writing, e.g.,
fff(0)
you write
fff(0, train_set_x[0:1])
Be warned: the process of moving data onto the GPU is slow, so it's much better to minimise the number of transfers if possible.

Python NumPy log2 vs MATLAB

I'm a Python newbie coming from using MATLAB extensively. I was converting some code that uses log2 in MATLAB and I used the NumPy log2 function and got a different result than I was expecting for such a small number. I was surprised since the precision of the numbers should be the same (i.e. MATLAB double vs NumPy float64).
MATLAB Code
a = log2(64);
--> a=6
Base Python Code
import math
a = math.log2(64)
--> a = 6.0
NumPy Code
import numpy as np
a = np.log2(64)
--> a = 5.9999999999999991
Modified NumPy Code
import numpy as np
a = np.log(64) / np.log(2)
--> a = 6.0
So the native NumPy log2 function gives a result that causes the code to fail a test since it is checking that a number is a power of 2. The expected result is exactly 6, which both the native Python log2 function and the modified NumPy code give using the properties of the logarithm. Am I doing something wrong with the NumPy log2 function? I changed the code to use the native Python log2 for now, but I just wanted to know the answer.
No, there is nothing wrong with the code; it is just that floating-point numbers cannot be represented perfectly on our computers. Always use an epsilon value to allow a margin of error when comparing float values. Read The Floating Point Guide and this post to learn more.
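For example, a power-of-two check along these lines (a sketch; the tolerance is arbitrary) accepts either result:
import numpy as np

def is_power_of_two(x, eps=1e-9):
    # Compare the exponent to the nearest integer within a
    # tolerance instead of testing for exact equality.
    exponent = np.log2(x)
    return abs(exponent - round(exponent)) < eps

print(is_power_of_two(64))  # True, even if log2 returns 5.999...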
EDIT - As cgohlke has pointed out in the comments,
Depending on the compiler used to build numpy np.log2(x) is either computed by the C library or as 1.442695040888963407359924681001892137*np.log(x) See this link.
This may be a reason for the erroneous output.
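You can check what your build produces (a quick sketch; the result depends on your platform and numpy build):
import numpy as np

# Compare the C library's log2 with the multiply-by-constant fallback.
print(repr(np.log2(64.0)))
print(repr(np.log(64.0) * 1.442695040888963407359924681001892137))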
