I am trying to create a CSR matrix with m rows and n columns, filled with zeroes and ones (at most one per column). I have a numpy array idcs holding the indices where my 1s are located, ranging from 0 to m.
My first approach to create the ROW_INDEX vector was something like:
ROW_INDEX = np.zeros(m + 1)
for i in idcs:
    ROW_INDEX[i + 1:] += 1
Unsurprisingly though, this is rather slow. I then tried the good old space-for-speed swap:
ROW_INDEX = np.fromfunction(lambda i, j: i > idcs[j], (m + 1, n), dtype='uintc')
ROW_INDEX = np.sum(ROW_INDEX, 1)
However, m and n are both around 10^5, so the above code raises a MemoryError - even though the large matrix is technically only boolean.
I feel like I'm missing something obvious here. Does anyone have a smarter solution, or should I just increase memory?
End purpose is to create a PETSc.Mat, hopefully in parallel, starting from something like B = PETSc.Mat().createAIJ([m, n], csr=[ROW_INDEX, COL_INDEX, V]). I've found little documentation on the subject, so any help on that front would also be welcome.
I think you're looking for something like this?
# Integer dtype, since this becomes the CSR indptr
ROW_INDEX = np.zeros(m + 1, dtype=np.intp)
np.add.at(ROW_INDEX, idcs + 1, 1)    # one count per nonzero, shifted by one row
np.cumsum(ROW_INDEX, out=ROW_INDEX)  # running total -> cumulative row offsets
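Regarding the createAIJ part of the question, here's a hedged end-to-end sketch. It assumes idcs[j] holds the row of the single 1 in column j (one per column), and uses petsc4py's PETSc.IntType / PETSc.ScalarType as the array dtypes; a stable argsort of idcs groups the column indices by row, which is exactly the CSR layout:

import numpy as np
from petsc4py import PETSc

# indptr: per-row counts, then a running total (as above)
ROW_INDEX = np.zeros(m + 1, dtype=PETSc.IntType)
np.add.at(ROW_INDEX, idcs + 1, 1)
np.cumsum(ROW_INDEX, out=ROW_INDEX)

# Columns sorted by their row give the CSR column-index array
COL_INDEX = np.argsort(idcs, kind='stable').astype(PETSc.IntType)
V = np.ones(len(idcs), dtype=PETSc.ScalarType)

B = PETSc.Mat().createAIJ([m, n], csr=(ROW_INDEX, COL_INDEX, V))
B.assemble()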
I am fairly new to using TensorFlow, so it is possible there is a very obvious solution to my problem that I am missing. I currently have a 3-dimensional array filled with integer values. The specific values are not important, so I have put in a smaller array with filler values for the sake of this question:
Array = tf.constant([[[0,0,1000,0],[3000,3000,3000,3000],[0,2500,0,0]],
                     [[100,200,300,400],[0,0,0,100],[300,300,400,300]]]).eval()
So the array looks like this when printed, I believe:
[[[0,0,1000,0],
  [3000,3000,3000,3000],
  [0,2500,0,0]],
 [[100,200,300,400],
  [0,0,0,100],
  [300,300,400,300]]]
In reality this array has 23 2-D arrays stacked on top of each other. What I want to do is create an array (or 3 separate arrays) that contains the range of values in each row, at each level of the 3-D array.
Something like
Xrange = tf.constant([Array[0,0,:].range(),Array[1,0,:].range(),Array[2,0,:].range()...,Array[22,0,:].range()])
Firstly, I am having trouble finding a set of TensorFlow ops that, strung together, let me find the range of a row. I know how to do this easily in numpy, but have yet to find any way to do it here. Secondly, assuming there is a way to do the above, is there a way to consolidate the code without having to write it out 23 times within one line, once for each unique row? I know that could simply be done with a for loop, but I would also like to avoid a solution that requires a loop. Is there a good way to do this, or is more information needed? Also, please let me know if I'm screwing up my syntax, since I'm still fairly new to both Python and TensorFlow.
So, as I expected, my question has a reasonably simple answer. All that was necessary was to use the tf.reduce_max and tf.reduce_min commands.
The code I finally ended up with looks like:
Range = tf.subtract(tf.reduce_max(tf.constant(Array), axis=2, keep_dims=True),
                    tf.reduce_min(tf.constant(Array), axis=2, keep_dims=True))
This produced:
[[[1000]
[ 0]
[2500]]
[[ 300]
[ 100]
[ 100]]]
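For anyone on a newer TensorFlow: keep_dims was later renamed keepdims, and eager execution removes the need for .eval(). A minimal TF 2.x version of the same computation (in plain numpy, np.ptp(Array, axis=2, keepdims=True) does the same thing in one call):

import tensorflow as tf

arr = tf.constant([[[0, 0, 1000, 0], [3000, 3000, 3000, 3000], [0, 2500, 0, 0]],
                   [[100, 200, 300, 400], [0, 0, 0, 100], [300, 300, 400, 300]]])

# max - min along the last axis, keeping it as a size-1 dimension
rng = tf.reduce_max(arr, axis=2, keepdims=True) - tf.reduce_min(arr, axis=2, keepdims=True)
print(rng.numpy())   # same values as the output above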
Hello, I am new to Python, and I need to create a very special matrix (see above). It just repeats 7 different values per row, followed by zeros to the end of the row. After every row, two zeros are inserted and the pattern is repeated. When the pattern reaches the end, it continues from the start until h0(2) is at index [x,0]. After that, another h starts in the same way.
I think the naive way is to use nested for loops with counters and breaks.
In this post a similar question has already been asked:
Creating a special matrix in numpy
but it's not exactly what I need.
Is there a smarter way to create this instead of nested loops like in the previous post, or is there even a function / name for this kind of matrix?
I would focus on repeated patterns, and try to build the array from blocks.
For example I see 3 sets of rows, with h_0, h_1 and h_2 elements.
Within each of those I see a Hs = [h(0)...h(6)] sequence repeated.
It almost looks like you could concatenate [Hs, zeros(n), Hs, zeros(n),...] in one long 1d array, and reshape it into the (a,b) rows.
Or you could create a A = np.zeros((a,b)) array, and repeatedly insert Hs into the right places. Use A.flat[x:y]=Hs if Hs wraps around to the next line. In other words, even if A is 2d, you can insert Hs values as though it were 1d (which is true of its data buffer).
Your example is too complex to give you an exact answer in this short time - and my attention span isn't long enough to work out the details. But this might give you some ideas to work with. Look for repeated patterns and slices.
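A rough sketch of both ideas, with placeholder values and sizes (Hs stands in for the seven h values; a gap of 2 zeros and a row length of 30 are assumed purely for illustration):

import numpy as np

Hs = np.arange(1.0, 8.0)                    # stand-in for h(0)..h(6)
block = np.concatenate([Hs, np.zeros(2)])   # 7 values followed by 2 zeros

# Idea 1: build one long 1d array of repeats and reshape it into rows
A = np.tile(block, 10).reshape(-1, 30)      # 90 values -> 3 rows of 30

# Idea 2: preallocate and insert through .flat, which lets a block wrap
# across row boundaries even though B is 2d
B = np.zeros((3, 30))
for k in range(10):
    B.flat[k * 9 : k * 9 + 7] = Hs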
An example of what I want to do, instead of what is shown below:
Z_old = [[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0]]
for each_axes in range(len(Z_old)):
    for each_point in range(len(Z_old[each_axes])):
        Z_old[len(Z_old)-1-each_axes][each_point] = arbitrary_function(each_point, each_axes)
I now want to not initialize the Z_old array with zeroes, but rather fill it up with values while iterating through it. It would be something like what is written below; its syntax is horribly wrong, but that's what I want to reach in the end.
Z = np.zeros((len(x_list), len(y_list))) for Z[len(x_list) -1 - counter_1][counter_2] is equal to power_at_each_point(counter_1, counter_2] for counter_1 in range(len(x_list)) and counter_2 in range(len(y_list))]
As I explained in my answer to your previous question, you really need to vectorize arbitrary_function.
You can do this by just calling np.vectorize on the function, something like this:
Z = np.vectorize(arbitrary_function)(np.arange(3), np.arange(5).reshape(5, 1))
But that will only give you a small speedup. In your case, since arbitrary_function is doing a huge amount of work (including opening and parsing an Excel spreadsheet), it's unlikely to make enough difference to even notice, much less to solve your performance problem.
The whole point of using NumPy for speedups is to find the slow part of the code that operates on one value at a time, and replace it with something that operates on the whole array (or at least a whole row or column) at once. You can't do that by looking at the very outside loop, you need to look at the very inside loop. In other words, at arbitrary_function.
In your case, what you probably want to do is read the Excel spreadsheet into a global array, structured in such a way that each step in your process can be written as an array-wide operation on that array. Whether that means multiplying by a slice of the array, indexing the array using your input values as indices, or something completely different, it has to be something NumPy can do for you in C, or NumPy isn't going to help you.
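As a hedged illustration of that idea (every name below is made up, since the real power_at_each_point isn't shown): parse the spreadsheet once into an array up front, then write the per-point computation as whole-array math:

import numpy as np

# Hypothetical: the spreadsheet, exported once to CSV and parsed up front
table = np.loadtxt("power_table.csv", delimiter=",")

# Stand-in formula: if power_at_each_point reduces to "look up a cell and
# scale it", the whole grid becomes a single array expression
gain = 2.0
Z = (table * gain)[::-1]   # [::-1] mirrors the len(...)-1-each_axes row reversal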
If you can't figure out how to do that, you may want to consider not using NumPy, and instead compiling your inner loop with Cython, or running your code under PyPy. You'll still almost certainly need to move the "open and parse a whole Excel spreadsheet" outside of the inner loop, but at least you won't have to figure out how to rethink your problem in terms of vectorized operations, so it may be easier for you.
import numpy

rows = 10
cols = 10
# The outer comprehension loop runs over each_axes, so that axis comes first
Z = numpy.array([arbitrary_function(each_point, each_axes)
                 for each_axes in range(cols)
                 for each_point in range(rows)]).reshape((cols, rows))
maybe?
I'm having an error in my code; I hope you can help me!
When I paste the code, something weird happens (not all of it is formatted like code), but here we go:
I want to call linalg.solve(A, Res). The first one (A) has 10 rows and 10 columns, i.e., matrix([10 arrays, 10 elements]), and the second one has 10 rows and 1 column, i.e., matrix([1 array, 10 elements]).
When I execute the code, it throws the following error:
Singular Matrix
I don't know what to do. When I don't call linalg.solve but just print both matrices, both look fine: 10 equations, 10 variables. So I don't know what's going on. Please help!
If you need me to paste the code (as horrible as it looks) I can do it.
Thank you
A singular matrix is a matrix that cannot be inverted, or, equivalently, that has determinant zero. For this reason, you cannot solve a system of equations using a singular matrix (it may have no solution or multiple solutions, but in any case no unique solution). So better make sure your matrix is non-singular (i.e., has non-zero determinant), since numpy.linalg.solve requires non-singular matrices.
Here is some decent explanation of what's going on for 2 x 2 matrices (the generalization to N x N is straightforward).
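A quick way to check what's going on, sketched with a stand-in system (A and Res below are random placeholders for the question's actual matrices):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))    # placeholder coefficient matrix
Res = rng.standard_normal((10, 1))   # placeholder right-hand side

# A singular (or nearly singular) matrix has rank < 10 and a huge condition number
print(np.linalg.matrix_rank(A), np.linalg.cond(A))

if np.linalg.matrix_rank(A) == A.shape[0]:
    x = np.linalg.solve(A, Res)                     # unique solution exists
else:
    x, *rest = np.linalg.lstsq(A, Res, rcond=None)  # least-squares fallback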
Given a large 2d numpy array, I would like to remove a range of rows, say rows 10000:10010 efficiently. I have to do this multiple times with different ranges, so I would like to also make it parallelizable.
Using something like numpy.delete() is not efficient, since it needs to copy the array, taking too much time and memory. Ideally I would want to do something like create a view, but I am not sure how I could do this in this case. A masked array is also not an option since the downstream operations are not supported on masked arrays.
Any ideas?
Because of the strided data structure that defines a numpy array, what you want will not be possible without using a masked array. Your best option might be to use a masked array (or perhaps your own boolean array) to mask the deleted rows, and then do a single real delete operation of all the rows to be deleted before passing the array downstream.
There isn't really a good way to speed up the delete operation; as you've already alluded to, this kind of deleting requires the data to be copied in memory. The one thing you can do, as suggested by @WarrenWeckesser, is combine multiple delete operations and apply them all at once. Here's an example:
ranges = [(10, 20), (25, 30), (50, 100)]
mask = np.ones(len(array), dtype=bool)
# Update the mask with all the rows you want to delete
for start, stop in ranges:
    mask[start:stop] = False
# Apply all the changes at once
new_array = array[mask]
It doesn't really make sense to parallelize this: you're just copying stuff in memory, so it will be memory-bound anyway, and adding more CPUs will not help.
I don't know how fast this is relative to the above, but say you have a list L of the row indices of the rows you wish to keep from array A (by "rows" I mean the first index, for higher-dimensional arrays). All other rows will be deleted. We'll let A hold the result.
A = A[np.ix_(L)]
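A small demonstration of that keep-list idea (shapes made up for illustration); for a single 1d list of indices, plain A[L] does the same thing:

import numpy as np

A = np.arange(50).reshape(10, 5)
L = [i for i in range(len(A)) if not (3 <= i < 6)]  # keep everything but rows 3:6

A = A[np.ix_(L)]   # equivalent to A = A[L] here
print(A.shape)     # (7, 5)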