In the example below, there is a 3-D NumPy array of shape (4, 3, 3), plus a solution for computing the pinv of each of those four 3x3 matrices in NumPy. I also tried the same function in Theano, hoping it is implemented the same way, but it failed. Any idea how to do this in Theano?
import numpy as np
import theano
import theano.tensor as T

dt = np.dtype(np.float32)
a=[[[12,3,1],
[2,4,1],
[2,4,2],],
[[12,3,3],
[2,4,4],
[2,4,5],],
[[12,3,6],
[2,4,5],
[2,4,4],],
[[12,3,3],
[2,4,5],
[2,4,6]]]
a=np.asarray(a,dtype=dt)
print(a.shape)
apinv=np.zeros((4,3,3))
print(np.linalg.pinv(a[0,:,:]).shape)
#numpy solution
apinv = list(map(lambda n: np.linalg.pinv(n), a))
apinv = np.asarray(apinv, dtype=dt)
#theano solution (not working)
at=T.tensor3('a')
apinvt = map(lambda n: T.nlinalg.pinv(n), at)
The error is:
Traceback (most recent call last):
File "pydevd.py", line 2403, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "pydevd.py", line 1794, in run
launch(file, globals, locals) # execute the script
File "exp_thn_pinv_map.py", line 35, in <module>
apinvt = map(lambda n: T.nlinalg.pinv(n), at)
File "theano/tensor/var.py", line 549, in __iter__
raise TypeError(('TensorType does not support iteration. '
TypeError: TensorType does not support iteration. Maybe you are using builtin.sum instead of theano.tensor.sum? (Maybe .max?)
This is occurring because, as the error message indicates, the symbolic variable at is not iterable.
The fundamental problem here is that you're incorrectly mixing immediately executed Python code with delayed execution Theano symbolic code.
You need to use a symbolic loop, not a Python loop. The correct solution is to use Theano's scan operator:
at=T.tensor3('a')
apinvt, _ = theano.scan(lambda n: T.nlinalg.pinv(n), at, strict=True)
f = theano.function([at], apinvt)
print(np.allclose(f(a), apinv))
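As an aside, recent NumPy versions broadcast np.linalg.pinv over stacks of matrices, so on the NumPy side the map can likely be replaced by a single call (a sketch, assuming a sufficiently new NumPy):
apinv = np.linalg.pinv(a)  # shape (4, 3, 3): pinv of each 3x3 slice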
I have the following problem. I am following this example about spatial regression in Python:
import numpy
import libpysal
import spreg
import pickle
# Read spatial data
ww = libpysal.io.open(libpysal.examples.get_path("baltim_q.gal"))
w = ww.read()
ww.close()
w_name = "baltim_q.gal"
w.transform = "r"
The example above works. But I would like to read my own spatial matrix, which I currently have as a list of lists. See my approach:
ww = libpysal.io.open(matrix)
But I got this error message:
Traceback (most recent call last):
File "/usr/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/home/vojta/Desktop/INTERNET_HANDEL/ZASILKOVNA/optimal-delivery-branches/venv/lib/python3.8/site-packages/libpysal/io/fileio.py", line 90, in __new__
cls.__registry[cls.getType(dataPath, mode, dataFormat)][mode][0]
File "/home/vojta/Desktop/INTERNET_HANDEL/ZASILKOVNA/optimal-delivery-branches/venv/lib/python3.8/site-packages/libpysal/io/fileio.py", line 105, in getType
ext = os.path.splitext(dataPath)[1]
File "/usr/lib/python3.8/posixpath.py", line 118, in splitext
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not list
This is what the matrix looks like:
[[0, 2, 1], [2, 0, 4], [1, 4, 0]]
EDIT:
If I try to insert my matrix into the GM_Lag like this:
model = spreg.GM_Lag(
y,
X,
w=matrix,
)
I get the following error:
warn("w must be API-compatible pysal weights object")
Traceback (most recent call last):
File "/usr/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 2, in <module>
File "/home/vojta/Desktop/INTERNET_HANDEL/ZASILKOVNA/optimal-delivery-branches/venv/lib/python3.8/site-packages/spreg/twosls_sp.py", line 469, in __init__
USER.check_weights(w, y, w_required=True)
File "/home/vojta/Desktop/INTERNET_HANDEL/ZASILKOVNA/optimal-delivery-branches/venv/lib/python3.8/site-packages/spreg/user_output.py", line 444, in check_weights
if w.n != y.shape[0] and time == False:
AttributeError: 'list' object has no attribute 'n'
EDIT 2:
This is how I read the list of lists:
import pickle
with open("weighted_matrix.pkl", "rb") as f:
matrix = pickle.load(f)
How can I pass a list of lists to spreg.GM_Lag? Thanks
Why do you want to pass it to the libpysal.io.open method? If I understand this code correctly, you first open a file and then read it (the read method seems to return the weights object). So in your case, where you already have the matrix, you don't need to open or read any file.
What will be needed, though, is whatever w is supposed to look like after w = ww.read(). If it is a simple matrix, then you can initialize w = matrix. If the read method also formats the data in a certain way, you'll need to do that another way. If you could describe the expected behavior of the read method (e.g. what the input file contains, and what is returned), it would be useful.
As mentioned, since the data needs to be formatted into a libpysal.weights object, you must build one yourself. This can supposedly be done with libpysal.weights.W (I read the doc too fast earlier).
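A minimal sketch of that conversion, assuming your list of lists is a full adjacency matrix and that libpysal's full2W helper is available:
import numpy as np
from libpysal.weights import full2W

matrix = [[0, 2, 1], [2, 0, 4], [1, 4, 0]]

# Build a pysal W object from the dense matrix, then row-standardize
# as in the original example.
w = full2W(np.asarray(matrix))
w.transform = "r"

model = spreg.GM_Lag(y, X, w=w)  # y and X as in your existing code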
While performing distributed training, I have the following code:
training_sampler = DistributedSampler(training_set, num_replicas=2, rank=0)
training_generator = data.DataLoader(training_set, **params, sampler=training_sampler)
for x, y, z in training_generator: # Error occurs here.
...
Overall, I get the following message:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/ubuntu/VC/ppg_training_extraction/ppg_training_scripts/train_ASR_trim_scp.py", line 336, in train
for local_batch_src, local_batch_tgt, lengths in dataloaders[phase]:
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 352, in __iter__
return self._get_iterator()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 827, in __init__
self._reset(loader, first_iter=True)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 857, in _reset
self._try_put_index()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1091, in _try_put_index
index = self._next_index()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
for idx in self.sampler:
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/distributed.py", line 97, in __iter__
indices = torch.randperm(len(self.dataset), generator=g).tolist() # type: ignore
RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
Now at that line, I ran the following instructions in pdb:
(Pdb) g = torch.Generator()
(Pdb) g.manual_seed(0)
<torch._C.Generator object at 0x7ff7f8143110>
(Pdb) indices = torch.randperm(4556, generator=g).tolist()
(Pdb) indices = torch.randperm(455604, generator=g).tolist()
*** RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
Why am I getting the runtime error when the upper-bound integer is high, but not when it's low enough?
Note that I ran the following in a clean Python session:
>>> import torch
>>> g = torch.Generator()
>>> g.manual_seed(0)
<torch._C.Generator object at 0x7f9d2dfb39f0>
>>> indices = torch.randperm(455604, generator=g).tolist()
It worked fine. Is it some configuration issue in how I'm handling distributed training across multiple GPUs? Any insights would be appreciated!
I just ran into the same problem using a DataLoader, and I found that the following helps without removing torch.set_default_tensor_type('torch.cuda.FloatTensor'):
data.DataLoader(..., generator=torch.Generator(device='cuda'))
This avoids manually adding .to('cuda') for tons of tensors in the code.
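Applied to the snippet from the question, that suggestion would look roughly like this (a sketch; params stands for the asker's existing DataLoader keyword arguments):
training_sampler = DistributedSampler(training_set, num_replicas=2, rank=0)
training_generator = data.DataLoader(
    training_set,
    **params,
    sampler=training_sampler,
    generator=torch.Generator(device='cuda'),  # generator on the same device as the default tensor type
)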
I had the same issue. It looks like somebody found the source of the problem: the line
torch.set_default_tensor_type('torch.cuda.FloatTensor')
I fixed my problem by getting rid of this line in my code and manually calling .to(device) instead.
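In other words, something like this (a sketch of the explicit-device style; model, x, y stand in for your own objects):
# torch.set_default_tensor_type('torch.cuda.FloatTensor')  # removed

device = torch.device('cuda')
model = model.to(device)            # move the model once
x, y = x.to(device), y.to(device)   # move each batch as it is consumed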
While trying to create an optimisation algorithm for work, I ran into a particular problem.
Here is some basic information about the code:
LZ is a nested list.
M is a numpy array converted from the nested list.
Here is the code:
for i in range(len(LZ)):
    for j in range(len(LZ[i])):
        constraints1 = lambda MA, i=i, j=j: MAXQ - abs(M[i][j] - MA[i][j])
        print(M[i][j])
        if j < len(LZ[i]) - 1:
            constraints2 = lambda MA, i=i, j=j: PENTEMAX + ((MA[i][j] - MA[i][j+1]) / LL[i][j])
            constraints3 = lambda MA, i=i, j=j: PENTEMAX - ((MA[i][j] - MA[i][j+1]) / LL[i][j])
        cons.append({'type': 'ineq', 'fun': constraints1})
        cons.append({'type': 'ineq', 'fun': constraints2})
        cons.append({'type': 'ineq', 'fun': constraints3})
x0 = M
sol = minimize(objective, x0, method='SLSQP', constraints=cons)
I run the code, and here is what I get. It prints M[i][j] just fine; the output is long, so I didn't copy it here:
Traceback (most recent call last):
File "D:/Opti Assainissement/VOIRIE5.py", line 118, in <module>
sol = minimize(objective,x0,method='SLSQP',constraints=cons)
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\optimize\_minimize.py", line 611, in minimize
constraints, callback=callback, **options)
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\optimize\slsqp.py", line 315, in _minimize_slsqp
for c in cons['ineq']]))
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\optimize\slsqp.py", line 315, in <listcomp>
for c in cons['ineq']]))
File "D:/Opti Assainissement/VOIRIE5.py", line 101, in <lambda>
constraints1 = lambda MA, i=i,j=j: cdt(MA,i,j)
File "D:/Opti Assainissement/VOIRIE5.py", line 98, in cdt
return MAXQ - abs(M[i][j]-MA[i][j])
IndexError: invalid index to scalar variable.
My first guess was that SciPy doesn't recognise MA as an array, but I can't tell whether it's related to SciPy, to the lambda construction, or to my lack of knowledge in the matter. I'd be glad to get some help from the community!
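One note that may explain the traceback: SLSQP flattens x0 to a 1-D vector before calling the constraint functions, so inside each lambda MA is 1-D, MA[i] is a scalar, and MA[i][j] raises exactly this IndexError. A sketch of one way around it, assuming M is a rectangular array, is to reshape MA back inside each constraint:
shape = M.shape  # remember the original 2-D shape

constraints1 = lambda MA, i=i, j=j: MAXQ - abs(M[i][j] - MA.reshape(shape)[i][j])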
I have the following
from sympy import Symbol, integrate

x = Symbol('x', commutative=False)
y = Symbol('y', commutative=False)
expr = 2*x + 87*x*y + 7*y
Now, this works
integrate(expr,y,manual=True)
because it gives
2*x*y + 87*x*y**2/2 + 7*y**2/2
but the same exact thing with x fails:
integrate(expr,x,manual=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/sympy/integrals/integrals.py", line 1295, in integrate
risch=risch, manual=manual)
File "/usr/local/lib/python2.7/dist-packages/sympy/integrals/integrals.py", line 486, in doit
conds=conds)
File "/usr/local/lib/python2.7/dist-packages/sympy/integrals/integrals.py", line 774, in _eval_integral
poly = f.as_poly(x)
File "/usr/local/lib/python2.7/dist-packages/sympy/core/basic.py", line 706, in as_poly
poly = Poly(self, *gens, **args)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polytools.py", line 113, in __new__
opt = options.build_options(gens, args)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polyoptions.py", line 731, in build_options
return Options(gens, args)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polyoptions.py", line 154, in __init__
preprocess_options(args)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polyoptions.py", line 152, in preprocess_options
self[option] = cls.preprocess(value)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polyoptions.py", line 293, in preprocess
raise GeneratorsError("non-commutative generators: %s" % str(gens))
sympy.polys.polyerrors.GeneratorsError: non-commutative generators: (x,)
Why is SymPy so weird? How can I fix this?
You seem satisfied with
integrate(2*x + 87*x*y + 7*y, y, manual=True)
returning
2*x*y + 87*x*y**2/2 + 7*y**2/2
But the first term of this answer could also be 2*y*x. Or x*y + y*x. And these are all different answers. So, is the notion of an integral with noncommutative symbols well-defined to begin with? Maybe it's not that SymPy is weird, but the question you are asking it is.
The concrete reason for this behavior is that manual integration is based on matching certain patterns, such as the "constant times something" pattern:
coeff, f = integrand.as_independent(symbol)
The method as_independent splits the product as independent * possibly_dependent, in this order. So,
(x*y).as_independent(y) # returns (x, y)
(x*y).as_independent(x) # returns (1, x*y)
As a result, constant factors are recognized only in front of the expression, when the product is noncommutative.
I don't think this can be fixed without rewriting the core method as_independent to support noncommutative products (possibly returning independent * dependent * independent2), which looks like a lot of work to me. Before doing that work, I'd want to know whether the objective (an antiderivative with noncommuting variables) is well defined.
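For reference, the split described above can be reproduced directly with nothing beyond the question's own symbols:
from sympy import Symbol

x = Symbol('x', commutative=False)
y = Symbol('y', commutative=False)

print((x*y).as_independent(y))  # (x, y): x is recognized as a constant factor
print((x*y).as_independent(x))  # (1, x*y): nothing independent precedes x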
When I try to run this code with keras-2.1.3 and theano-1.0.1:
https://github.com/marcellacornia/sam/blob/master/attentive_convlstm.py
def get_initial_states(self, x):
    initial_state = K.sum(x, axis=1)
    initial_state = K.conv2d(initial_state, K.zeros((self.nb_filters_out, self.nb_filters_in, 1, 1)), border_mode='same')
    initial_states = [initial_state for _ in range(len(self.states))]
    return initial_states
I get the following traceback:
Traceback (most recent call last):
File "main.py", line 63, in <module>
m = Model(input=[x, x_maps], output=sam_resnet([x, x_maps]))
File "E:\sam-master\models.py", line 136, in sam_resnet
nb_cols=3, nb_rows=3)(att_convlstm)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 617, in __call__
output = self.call(inputs, **kwargs)
File "E:\sam-master\attentive_convlstm.py", line 143, in call
initial_states = self.get_initial_states(x)
File "E:\sam-master\attentive_convlstm.py", line 42, in get_initial_states
initial_state = K.conv2d(initial_state, K.zeros((self.nb_filters_out, self.nb_filters_in, 1, 1)), border_mode='same')
TypeError: conv2d() got an unexpected keyword argument 'border_mode'
Well, there is no border_mode argument in Keras 2.
There is padding='valid' or padding='same'.
Always check the documentation to use the layers and functions properly.
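For the snippet above, the Keras 2 call would look roughly like this (a sketch; note that, as far as I know, Keras 2 backends also expect kernels in (rows, cols, in_channels, out_channels) layout, so the zero kernel's shape changes as well):
initial_state = K.conv2d(
    initial_state,
    K.zeros((1, 1, self.nb_filters_in, self.nb_filters_out)),  # HWIO layout, an assumption
    padding='same',  # replaces border_mode='same'
)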
Just as a side note, in case anyone still has this issue: this is a backward-compatibility issue with Keras 1.x. border_mode used to exist as an argument to the Convolution2D class in (at least) Keras 1.1.0, and as such it is still hanging around in a lot of older code.
In the Keras 2 series (I referenced 2.3.1), almost the entire API of Convolution2D has changed, so if you are porting a repository that uses an older version of Keras, I doubt border_mode would be the biggest of your concerns.