I have a list of n matrices where n = 5:
[matrix([[ 3.62425112,  0.00953506],
         [ 0.00953506,  1.05054417]]),
 matrix([[ 4.15808905e+00,  9.27845937e-04],
         [ 9.27845937e-04,  9.88509628e-01]]),
 matrix([[ 3.90560856,  0.0504297 ],
         [ 0.0504297 ,  0.92587046]]),
 matrix([[ 3.87347073, -0.12430547],
         [-0.12430547,  1.09071475]]),
 matrix([[ 3.87697392, -0.00475038],
         [-0.00475038,  1.01439917]])]
I want to do element-wise addition of these matrices. I am trying this:
np.add(S_list[0], S_list[1], S_list[2], S_list[3], S_list[4])
It works, but I don't want to hard-code n = 5.
Can anyone please help? Thank you.
According to the documentation, np.add adds only two arrays; a third positional argument is interpreted as the out parameter, so the five-argument call above does not do what you expect. However, np.add.reduce(S_list) or simply sum(S_list) will give you what you want.
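For example, a minimal sketch (assuming S_list is the list of matrices above):
import numpy as np

total = np.add.reduce(S_list)  # stacks the n matrices and sums along the first axis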
You could just use Python's built-in function sum
sum(S_list)
Output:
[[19.43839338 -0.06816324]
[-0.06816324 5.07003818]]
Are you sure that np.add(S_list[0], S_list[1], S_list[2], S_list[3], S_list[4]) works? np.add() takes two arrays as its input arguments. Anyway, the following code does the job if you want to use np.add():
total = np.add(S_list[0], S_list[1])  # avoid shadowing the built-in sum
for i in range(len(S_list) - 2):
    total = np.add(total, S_list[i + 2])  # accumulate the remaining matrices
print(total)
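Equivalently, the loop collapses to a one-liner with the standard library (a sketch, not part of the original answer):
from functools import reduce

total = reduce(np.add, S_list)  # pairwise np.add over the whole list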
My outputs have too many decimal points, but I want the results as floats with 2 decimal places. Can you help me?
Example: 42.44468745 -> 42.44
y_pred = ml.predict(x_test)
print(y_pred)
Output:
[42.44468745 18.38280575 7.75539511 19.05326276 11.87002186 26.89180941
18.97589775 22.01291508 9.08079557 6.72623692 21.81657224 22.51415263
24.46456776 13.75392096 21.57583275 25.73401908 30.95880457 11.38970094
7.28188274 21.98202474 17.24708345 38.7390475 12.68345506 11.2247757
5.32814356 10.41623796 7.30681434]
Since you didn't post all of your code, I can only give you a general answer.
There are several ways to format a number to two decimal places.
For example:
num = 1.223362719
print('{:.2f}'.format(num))
print('%.2f' % num)
print(round(num, 2))
print(f"{num:.2f}")
You will get 1.22 as the result of any of these.
------------------Update------------------
Thanks for commenting; I have updated the answer to address your case.
NumPy can help here, since the OP's data is an ndarray.
You can use np.around(a, n), where a is the data to be rounded and n is the number of decimal places to keep.
Example:
import numpy as np
data = np.around(1.223362719, 2)
You will also get 1.22 as the result.
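Applied to the prediction array from the question (a sketch, assuming y_pred is the ndarray printed above):
y_pred_rounded = np.around(y_pred, 2)  # rounds every element, e.g. 42.44468745 -> 42.44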
Use round(num, round_decimal)
For example:
num = 42.44468745
print(round(num, 2))
Output:
42.44
I have a question. Under a specific variable that for simplicity we call a, I have the following arrays, written in this way:
[-6.396736847188359, -6.154559100742114, -6.211476547612676]
[-8.006589632001111, -7.826171257487284, -7.71335303949824]
[-6.456557174187878, -6.262447971939394, -6.38657184063457]
[-7.487923068341583, -7.189375715312779, -7.252991999097159]
[-7.532980499994895, -7.44329050097094, -7.529773039725542]
[-7.429923219897081, -6.960840780894108, -7.173489030350187]
[-7.194082458487091, -6.909676564074833, -6.944666159195248]
[-7.734357883680035, -7.512036612219159, -7.607808831503251]
[-7.734008421702387, -7.164880777772352, -7.709697714174302]
[-8.3156235828106, -8.486948182913475, -8.612390113851397]
How can I apply the scipy function logsumexp to each column? I tried logsumexp(a[0]), but it doesn't work; I also tried to iterate over a[0], but I got an error about float64.
Thanks to all.
Use the axis parameter: logsumexp(a, axis=0) reduces down each column (one result per column), while axis=1 reduces across each row.
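A minimal sketch (using just the first two rows from the question, for brevity):
import numpy as np
from scipy.special import logsumexp

a = np.array([[-6.396736847188359, -6.154559100742114, -6.211476547612676],
              [-8.006589632001111, -7.826171257487284, -7.713353039498240]])
print(logsumexp(a, axis=0))  # three values, one per column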
Hi, I am trying to vectorise the QR decomposition in numpy as the documentation suggests, but I keep getting dimension issues. I am confused as to what I am doing wrong, since I believe the following follows the documentation. Does anyone know what is wrong with this:
import numpy as np
X = np.random.randn(100,50,50)
vecQR = np.vectorize(np.linalg.qr)
vecQR(X)
From the doc: "By default, pyfunc is assumed to take scalars as input and output.".
So you need to give it a signature:
vecQR = np.vectorize(np.linalg.qr, signature='(m,n)->(m,p),(p,n)')
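With the signature set, the call vectorises over the leading axis (a quick check, assuming X from the question):
Q, R = vecQR(X)  # Q.shape == (100, 50, 50), R.shape == (100, 50, 50)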
How about just mapping np.linalg.qr over the first axis of the array?
In [35]: np.array(list(map(np.linalg.qr, X)))
Out[35]:
array([[[[-3.30595447e-01, -2.06613421e-02,  2.50135751e-01, ...,
           2.45828025e-02,  9.29150994e-02, -5.02663489e-02],
         [-1.04193390e-01, -1.95327811e-02,  1.54158438e-02, ...,
           2.62127499e-01, -2.21480958e-02,  1.94813279e-01],
         [ 1.62712767e-01, -1.28304663e-01, -1.50172509e-01, ...,
           1.73740906e-01,  1.31272690e-01, -2.47868876e-01],
         ...]]])  # output truncated; the full result has shape (100, 2, 50, 50), stacking Q and R
I have to read data from multiple csv files, and when I try to invert a matrix built from the csv data, I get this:
numpy.linalg.linalg.LinAlgError: singular matrix
The process gets stuck on this section:
J = np.mat([dtdx,dtdy,dtdz]).transpose()
dd = np.mat(ttcal-tt)
dm = (scipy.linalg.inv(J.transpose()*J))*((J.transpose())*(dd.transpose()))
and the data in J looks like this:
[[-6.81477651e-03 -7.90320450e-03 6.50533437e-05]
[-6.71080644e-03 -6.00135428e-03 6.50533437e-05]]
and the data in dd looks like this:
[[0.00621772 0.00537531]]
I have checked the data and found the following. Computing
tes = J.transpose() * J
gives this result:
[[ 9.14761019e-05 9.41324993e-05 -8.79884397e-07]
[ 9.41324993e-05 9.84768945e-05 -9.04538042e-07]
[-8.79884397e-07 -9.04538042e-07 8.46387506e-09]]
I need to invert this matrix, but it is singular. I have tried the same computation in MATLAB R2017b and it runs fine. I need to solve this problem in Python.
Have you tried using the pseudo-inverse numpy.linalg.pinv instead? It is meant to deal with these situations:
B = np.linalg.pinv(a)
But I would suggest checking that you really computed your matrix correctly and that a singular matrix is genuinely expected. Note that J has shape 2x3, so J.transpose()*J is a 3x3 matrix of rank at most 2 and is therefore always singular; in floating point it may still appear invertible (which is probably why MATLAB seems to run fine), but the resulting values are meaningless.
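A minimal sketch of the update step using the pseudo-inverse, plus a more stable direct least-squares solve (assuming J and dd as defined above; the lstsq variant is an alternative formulation, not the OP's code):
import numpy as np

dm = np.linalg.pinv(J.T @ J) @ (J.T @ dd.T)  # pseudo-inverse of the normal equations
# equivalent minimum-norm least-squares solution, without forming J.T @ J:
dm_ls, *_ = np.linalg.lstsq(np.asarray(J), np.asarray(dd).ravel(), rcond=None)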
If you are sure your calculations are correct and a singular matrix is what you expect, then you can go for a pseudo-inverse of that matrix. It can be done in Python like this (note the very large entries, around 1e16, in the result below: they are another sign that the matrix is numerically singular, so treat the pseudo-inverse with care):
mat = np.array([[ 9.14761019e-05, 9.41324993e-05, -8.79884397e-07],
[ 9.41324993e-05, 9.84768945e-05, -9.04538042e-07],
[-8.79884397e-07, -9.04538042e-07, 8.46387506e-09]])
p_inv = np.linalg.pinv(mat)
print(p_inv)
# output
array([[-1.00783988e+13, 5.50963966e+11, -9.88844703e+14],
[ 5.50963966e+11, -3.01194390e+10, 5.40580308e+13],
[-9.88844703e+14, 5.40580308e+13, -9.70207468e+16]])
I'm trying to count the number of pairs and save them in two different histograms: one stores the pairs split by parent object, and the other just stores the total. That means I have a loop that looks like this:
for k in range(N_parents):
    pair_hist[k, bin] += 1
    total_pair_hist[bin] += 1
where pair_hist and total_pair_hist are defined as
pair_hist = np.zeros((N_parents, bins.shape[0]), dtype=np.uint64)
total_pair_hist = np.zeros(bins.shape[0], dtype=np.uint64)
I'd expect that summing the elements of pair_hist across all parents (axis=0) would give the total histogram. The funny thing is, if I take the sum of pair_hist:
onehalo_sum_ind = np.sum(pair_hist, axis=0)
I don't get exactly total_pair_hist, but something slightly different:
total_pair_hist = [ 287248245 448773033 695820015 1070797576 1634146741 2466680801
3667159080 5334307986 7524739978 10206208064 13237161068 16466436715
19231751113 20949333183 21254336387 19497450101 16459529579 13038604111
9783826702 7006904025 4813946458 3207605915 2097437543 1355158303
869077173 555036759 353732683 225171870 143179912 0]
onehalo_sum_ind = [ 287267022 448887401 696415932 1073435699 1644677789 2503693266
3784008845 5665555755 8380564635 12201977310 17382403650 23929909625
31103373709 36859534246 38146287402 33454446858 25689430007 18142721164
12224099624 8035266046 5211441720 3353187036 2147027818 1370663213
873519714 556182465 353995293 225224668 143189173 0]
Any idea of what's going on? Thank you in advance :)
Sorry for the late reply; I didn't have time to work on it before. The problem was caused by numba: I was using it with the parallel=True flag to parallelise one of the loops, and the unsynchronised += updates to the shared histogram arrays from multiple threads became a race condition, so some increments were silently lost.
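A minimal sketch of a race-free pattern under parallel=True (the function name fill_hists and the bin_idx layout are hypothetical, not from the original code): each parallel iteration writes only to its own row, and the total is derived afterwards, outside the parallel region.
import numpy as np
from numba import njit, prange  # assumes numba is installed

@njit(parallel=True)
def fill_hists(bin_idx, n_bins):
    # bin_idx[k, j] is the histogram bin of pair j of parent k (hypothetical layout)
    n_parents, n_pairs = bin_idx.shape
    pair_hist = np.zeros((n_parents, n_bins), dtype=np.int64)  # int64 avoids uint64/int64 mixing in numba
    for k in prange(n_parents):  # iteration k writes only to row k, so there is no race
        for j in range(n_pairs):
            pair_hist[k, bin_idx[k, j]] += 1
    return pair_hist

bin_idx = np.random.randint(0, 30, size=(8, 1000))  # toy data
pair_hist = fill_hists(bin_idx, 30)
total_pair_hist = pair_hist.sum(axis=0)  # summed outside the parallel loop: always consistent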