I've come across a bit of a strange bug, and I'm really not sure what's causing it.
I have a list containing lambda functions, and I have made this list a global variable, as shown below. The global variable is then referenced inside the residual function with the required function inputs.
from numpy import array
from math import exp
from guppy import hpy
hp = hpy()
hp.setrelheap()
rate_array = [[0, "4.91*(10**-22)*(T_m**4)"],
              [1, "1.4*(10**-18)*(T_m**0.928)*exp(-T_m/16200)"]]

global k
k = [0, 0]
for i in range(0, 2):
    k[i] = eval('lambda T_m,T_r,Z_initial: ' + rate_array[i][1])
def residual(t, y, yd):
    Z_initial = 10
    res_0 = -k[1](y[2], y[3], Z_initial)*y[0]
    res_1 = -k[0](y[2], y[3], Z_initial)*y[1]
    return array([res_0, res_1])
y = [0, 0, 0, 0]
yd = [0, 0, 0, 0]
t = 0
residual(t, y, yd)
print("\nMemory statistics are as follows:\n")
print hp.heap()
Running the residual function seems to give an invalid pointer error or a segmentation fault. Every now and then the code runs, but most of the time it does not. I don't see anything wrong with the code, so I'm not sure what's going on. Is there anything glaringly obvious?
EXTRA: I realize this is a strange way to go about it, but to explain: the rate_array contains strings because it is pickled/unpickled depending on whether or not the user wants to edit it.
Also, the residual function is integrated by third-party software that only allows the three inputs shown, so I can't just pass the array into the function as a normal extra argument. I also cannot append the rate_array to one of the existing inputs, as the software won't accept lambda functions as a valid type.
Normally the lambda functions/expressions could just live in the residual function itself as extra lines of code, but then it wouldn't be possible for the user to edit them before the code runs (hence the pickle round-trip sketched below). It's such a mess!
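For reference, a minimal sketch of that round-trip, with a hypothetical file name and helper functions:

import pickle

def save_rates(rates, path="rates.pkl"):
    # only the plain strings ever get pickled, never the lambdas themselves
    with open(path, "wb") as f:
        pickle.dump(rates, f)

def load_rates(path="rates.pkl"):
    with open(path, "rb") as f:
        rates = pickle.load(f)
    # rebuild the callables after unpickling, exactly as in the loop above
    return [eval("lambda T_m,T_r,Z_initial: " + expr) for _, expr in rates]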
EDIT: Apologies, I heavily cut down the code to try and make it simpler to explain, but in the process typed stuff that was plain wrong; this is now corrected.
The error code:
*** glibc detected *** python: free(): invalid pointer: 0x0000000002a7aa50 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fd857ddbb96]
/usr/local/lib/python2.7/dist-packages/guppy/sets/setsc.so(+0xbeeb)[0x7fd841bd1eeb]
/usr/local/lib/python2.7/dist-packages/guppy/sets/setsc.so(+0xbf73)[0x7fd841bd1f73]
python[0x48abf9]
python[0x48a9de]
python(PyEval_EvalFrameEx+0x52c)[0x45fb2c]
python(PyEval_EvalFrameEx+0xcb7)[0x4602b7]
python(PyEval_EvalCodeEx+0x199)[0x467209]
python(PyEval_EvalCode+0x32)[0x4d0242]
python[0x5102bb]
python(PyRun_FileExFlags+0x9a)[0x44a466]
python(PyRun_SimpleFileExFlags+0x2bc)[0x44a97a]
python(Py_Main+0xb36)[0x44b6bc]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fd857d7e76d]
python[0x4ce0ad]
Related
I've been getting strange results in a numerical problem I've been working on, and since I only started programming in Python recently, I would like to see if you can help me.
Basically, the program has a function that is minimized for different values in a nested loop. I'll skip the details of the function for simplicity, but I checked it several times and it is working correctly. The code looks like this:
def function(mins, args):
    # I'll skip those details for simplicity
    return  # value
ranges = ((0+np.pi/100),(np.pi/2-np.pi/100),np.pi/100)
while Ri[0] < R:
    Ri[1] = 0; Ri[n-1] = R - (sum(Ri) - Ri[n-1])
    while Ri[1] < (R - Ri[0]):
        res = opt.brute(function, ranges, args=[args], finish=None)
        F = function(res, args)
        print(f'F = {F}')
        Ri[1] += dR
        Ri[2] = R - (sum(Ri) - Ri[n-1])
    Ri[0] += dR
So, ignoring the meaning of Ri[] in the loop (which is a variable of the function), for every increment of Ri[] the program minimizes over mins with scipy.optimize.brute, obtaining res as the answer; it should then evaluate the function with this answer and print the result F. The problem is that it always prints the same answer, no matter what parameters come out of the minimization (which is working fine, I checked). It's strange because if I take the values from the minimization (which is an n-sized array, with n being an input), create a new program that runs the same function, and just print the result, it returns the right answer.
Can anyone give me a hint as to why I'm getting this? If I didn't make myself clear, please say so and I can provide more details about the function and variables. Thanks in advance!
I am trying to minimize a function of 3 input variables using SciPy. The function reads like so:

from scipy.optimize import minimize

def myfunc(x):
    a = x[0]
    b = x[1]
    c = x[2]
    n = f(a, b, c)
    return n

bound1 = (80, 100)
bound2 = (10, 20)
bound3 = (312, 740)
guess = [a0, b0, c0]
bds = (bound1, bound2, bound3)
result = minimize(myfunc, guess, method='L-BFGS-B', bounds=bds)
The function I am currently trying to run reaches its minimum at a=100, b=10, c=740, which is at the end of the bounds.
The minimize function keeps trying to iterate past the end of bound3 (it gets to a c value of 740.0000000149012 on its last iteration).
Is there any way to stop this from happening, i.e. stop the iteration at the actual end of my bound?
This happens because of numerical differentiation, which is needed not only to infer the step direction and size, but also to reason about termination.
In general you can't do much without being very careful about whichever solver is being used (and there are many backend solvers). The basic idea is to replace the automatic numerical differentiation with one provided by you: yours can then respect those bounds, but it must also be careful about the solver's internals, e.g. how it reasons about termination at such an endpoint. A hedged sketch of the idea follows.
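For example, a minimal sketch reusing myfunc, guess and bds from the question (safe_grad and the one-sided stepping rule are my own illustrative choices, not something SciPy provides):

import numpy as np
from scipy.optimize import minimize

def safe_grad(x, eps=1e-8):
    # Forward differences, except we step backwards whenever a forward step
    # would leave the feasible box, so myfunc is never evaluated out of bounds.
    g = np.empty_like(x, dtype=float)
    f0 = myfunc(x)
    for i, (_, hi) in enumerate(bds):
        step = eps if x[i] + eps <= hi else -eps
        xs = np.array(x, dtype=float)
        xs[i] += step
        g[i] = (myfunc(xs) - f0) / step
    return g

result = minimize(myfunc, guess, method='L-BFGS-B', bounds=bds, jac=safe_grad)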
Fix A:
Your problem should vanish automatically when using pull request #10673, which touches your configuration: L-BFGS-B.
It seems this PR is not part of the current release, SciPy 1.4.1 (which came out roughly 2 months before the PR).
See also #6026, where a milestone of 1.5.0 is mentioned in regards to some changes, including respecting bounds in the numerical differentiation.
For the above PR, you will need to install SciPy from source, which is quite doable on Linux (and maybe OS X), but not something you should try on Windows. Trust me...
See the documentation if needed.
Fix B:
Apart from that, since you are doing unconstrained optimization with variable bounds (where more solver backends are available than for constrained optimization), you might try another solver, trust-constr, which has explicit support for this; see #9098.
Be careful to recognize that you need to signal this explicitly when setting up the bounds (see the sketch below)!
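A minimal sketch of this, again reusing myfunc and guess from the question; the keep_feasible flag on the Bounds object is what carries that explicit signal (the numeric values are simply the question's bounds):

from scipy.optimize import minimize, Bounds

# keep_feasible asks the solver never to evaluate outside the box
bds = Bounds([80, 10, 312], [100, 20, 740], keep_feasible=True)
result = minimize(myfunc, guess, method='trust-constr', bounds=bds)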
I have a piece of code that is supposed to calculate a simple matrix product in Python (using Theano). The matrix that I intend to multiply with is a shared variable.
The example below is the smallest one that demonstrates my problem.
I have made use of two helper functions: floatX converts its input to something of type theano.config.floatX, and init_weights generates a random matrix (of type floatX) with the given dimensions.
The last line causes the code to crash. In fact, it produces so much output on the command line that I can't even scroll to the top of it anymore.
So, can anyone tell me what I'm doing wrong?
import numpy
import theano
import theano.tensor as T

def floatX(x):
    return numpy.asarray(x, dtype=theano.config.floatX)

def init_weights(shape):
    return floatX(numpy.random.randn(*shape))

a = init_weights([3, 3])
b = theano.shared(value=a, name="b")
x = T.matrix()
y = T.dot(x, b)
f = theano.function([x], y)
This works for me, so my guess is that you have a problem with your BLAS installation. Make sure to use the Theano development version:
http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions
It has better defaults for some configurations. If that does not fix the problem, look at the error message. The main part comes after the code dump, after the stack trace; that is normally the most useful part.
You can disable Theano's direct linking to BLAS with this Theano flag: blas.ldflags=
This can cause a slowdown, but it is a quick check to confirm that the problem is BLAS (one way to pass the flag is shown below).
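For example, assuming you launch your script from a shell (the script name here is a placeholder), either of these should be equivalent ways to set that flag:

THEANO_FLAGS='blas.ldflags=' python your_script.py

or, in ~/.theanorc:

[blas]
ldflags =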
If you want more help, dump the error message to a text file, put it on the web, and link to it from here.
This is my first attempt at using IPython.parallel so please bear with me.
I read this question
Parfor for Python
and am having trouble implementing a simple example as follows:
import gmpy2 as gm
import numpy as np
from IPython.parallel import Client
rc = Client()
lview = rc.load_balanced_view()
lview.block = True
a = 1
def L2(ii, jj):
    out = []
    out.append(gm.fac(ii+jj+a))
    return out
Nloop = 100
ii = range(Nloop)
jj = range(Nloop)
R2 = lview.map(L2, zip(ii, jj))
The problems I have are:
1. a is defined outside the loop, and I think I need to do something like "push", but I am a bit confused by that. Do I need to "pull" afterwards?
2. there are two arguments required by the function, and I don't know how to pass them correctly. I tried things like zip(ii, jj) but got some errors.
Also, I assume the fact that I'm using a random library (gmpy2) shouldn't affect things. Is this correct? Do I need to do anything special for it?
Ideally I would like your help getting this simple example to run error-free.
If you think it would be beneficial for me to post my failed attempts at #2, let me know. I'm in the dark with #1.
I found two ways that make this work:
One is pushing the variable to the engines. There is no need to pull it afterwards; the variable will simply be defined in the namespace of each engine process.

rc[:].push({'a': a})
R2 = lview.map(L2, ii, jj)
The other way is to redefine L2 to take a as an input and pass an array of a's to the map function:

def L2(ii, jj, a):
    out = []
    out.append(gm.fac(ii+jj+a))
    return out

R2 = lview.map(L2, ii, jj, [a]*Nloop)
With regards to the import, as per this page:
http://ipython.org/ipython-doc/dev/parallel/parallel_multiengine.html#non-blocking-execution
you simply import the required libraries inside the function:
Note the import inside the function. This is a common model, to ensure that the appropriate modules are imported where the task is run. You can also manually import modules into the engine(s) namespace(s) via view.execute('import numpy').
Or you can do as per this link
http://ipython.org/ipython-doc/dev/parallel/parallel_multiengine.html#remote-imports
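Putting the two points together, a small sketch (illustrative only; I have not run it on a live cluster) with the import inside the function and a pushed to the engines:

def L2(ii, jj):
    import gmpy2 as gm        # imported where the task actually runs
    return [gm.fac(ii + jj + a)]

rc[:].push({'a': a})
R2 = lview.map(L2, ii, jj)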
Alright, so a couple days ago I decided to try and write a primitive wrapper for the PARI library. Ever since then I've been playing with ctypes library in loading the dll and accessing the functions contained using code similar to the following:
from ctypes import *

libcyg = CDLL("<path>/cygwin1.dll")   # It needs cygwin to be loaded. Not sure why.
pari = CDLL("<path>/libpari-gmp-2.4.dll")

print pari.fibo   # fibonacci function
# prints something like "<_FuncPtr object at 0x00BA5828>"
So the functions are there and they can potentially be accessed, but I always receive an access violation no matter what I try. For example:
pari.fibo(5) #access violation
pari.fibo(c_int(5)) #access violation
pari.fibo.argtypes = [c_long] #setting arguments manually
pari.fibo.restype = long #set the return type
pari.fibo(byref(c_int(5))) #access violation reading 0x04 consistently
and any variation on that, including setting argtypes to receive pointers.
The Pari .dll is written in C and the fibonacci function's syntax within the library is GEN fibo(long x).
Could it be the return type that's causing these errors, as it is not a standard int or long but a GEN type, which is unique to the PARI library? Any help would be appreciated. If anyone is able to successfully load the library and use ANY function from within Python, please tell me; I've been at this for hours now.
EDIT: It seems as though I was simply forgetting to initialize the library. After a quick pari.pari_init(4000000, 500000) it stopped erroring. Now my problem lies in the fact that it returns a GEN object, which is fine, but whenever I try to reference the address to which it points, it's always 33554435, which I presume is still an address. I'm trying further commands and I'll update if I succeed in getting the correct value of something.
You have two problems here: one, give fibo the correct return type, and two, convert the GEN return value to the value you are looking for.
Poking around the source code a bit, you'll find that GEN is defined as a pointer to a long. Also, it looks like the library provides some functions for converting/printing GENs. I focused on GENtostr since it would probably be the safest choice across all of the PARI functions.
import ctypes

pari = ctypes.CDLL("./libpari.so.2.3.5")   # I did this under linux
pari.fibo.restype = ctypes.POINTER(ctypes.c_long)
pari.GENtostr.restype = ctypes.POINTER(ctypes.c_char)

pari.pari_init(4000000, 500000)
x = pari.fibo(100)
y = pari.GENtostr(x)
ctypes.string_at(y)
Results in:
'354224848179261915075'