I need to slice an array in a way that assumes an index of zero for every dimension except the first.
Given an array:
x = numpy.zeros((3,3,3))
I would like the following behavior, but without needing to know the number of dimensions beforehand:
y = x[:,0,0]
Essentially I am looking for something that would take the place of Ellipsis, but instead of expanding to the needed number of : objects, it would expand into the needed number of zeros.
Is there anything built in for this? If not, what is the best way to get the functionality that I need?
Edit:
One way to do this is to use:
y = x.ravel(order='F')[0:x.shape[0]]
This works fine; however, in some cases (such as mine) ravel will need to create a copy of the array instead of returning a view. Since I am working with large arrays, I want a more memory-efficient way of doing this.
You could create an indexing tuple, like this:
import numpy as np
x = np.arange(3*3*3).reshape(3, 3, 3)
s = (slice(None),) + (0,) * (x.ndim - 1)
print(x[s])        # [ 0  9 18]
print(x[:, 0, 0])  # [ 0  9 18]
I guess you could also do:
x.transpose().flat[:3]
but I prefer the first approach, since it works for any number of dimensions (rather than being hard-coded to this case), and it's just as efficient as writing x[:,0,0] directly, since it's only a different syntax for the same indexing operation.
I usually use tom10's method, but here's another:
for i in range(x.ndim - 1):
    x = x[..., 0]
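A quick sanity check of the loop approach (a minimal sketch, assuming the same x as above):
import numpy as np
x = np.arange(27).reshape(3, 3, 3)
y = x
for i in range(y.ndim - 1):
    y = y[..., 0]  # each step takes a basic-indexing view, so no data is copied
print(y)                              # [ 0  9 18]
print(np.array_equal(y, x[:, 0, 0]))  # True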
I am trying to randomly select a set of integers in numpy and am encountering a strange error. If I define a numpy array with two sets of different sizes, np.random.choice chooses between them without issue:
Set1 = np.array([[1, 3, 5], [2, 4]])
In: np.random.choice(Set1)
Out: [2, 4]
However, once the sets in the numpy array are the same size, I get a ValueError:
Set2 = np.array([[1, 3, 5], [2, 4, 6]])
In: np.random.choice(Set2)
ValueError: a must be 1-dimensional
Could be user error, but I've checked several times and the only difference is the size of the sets. I realize I can do something like:
Chosen = np.random.choice(N, k)
Selection = Set[Chosen]
where N is the number of sets and k is the number of samples, but I'm wondering whether there is a better way, and specifically what I am doing wrong that raises a ValueError when the sets are the same size.
Printout of Set1 and Set2 for reference:
In: Set1
Out: array([list([1, 3, 5]), list([2, 4])], dtype=object)
In: type(Set1)
Out: numpy.ndarray
In: Set2
Out:
array([[1, 3, 5],
[2, 4, 6]])
In: type(Set2)
Out: numpy.ndarray
Your issue is caused by a misunderstanding of how numpy arrays work. The first example cannot "really" be turned into an array, because numpy does not support ragged arrays. You end up with a 1-D array of object references that point to two Python lists. The second example is a proper 2xN numerical array. I can think of two types of solutions here.
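A small illustration of the difference (a hedged sketch using the values from the question; dtype=object is passed explicitly, which newer numpy versions require for ragged input):
import numpy as np
ragged = np.array([[1, 3, 5], [2, 4]], dtype=object)  # 1-D array holding two list objects
square = np.array([[1, 3, 5], [2, 4, 6]])             # proper 2x3 integer array
print(ragged.shape, ragged.dtype)  # (2,) object
print(square.shape, square.dtype)  # (2, 3) int64 (platform dependent)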
The obvious approach (which would work in both cases, by the way) would be to choose the index instead of the sublist. Since you are sampling with replacement, you can just generate the index and use it directly:
Set[np.random.randint(N, size=k)]
This is the same as
Set[np.random.choice(N, k)]
If you want to choose without replacement, your best bet is to use np.random.choice with replace=False. This is similar to, but less efficient than, shuffling. In either case, you can write a one-liner for the index:
Set[np.random.choice(N, k, replace=False)]
Or:
index = np.arange(Set.shape[0])
np.random.shuffle(index)
Set[index[:k]]
The nice thing about np.random.shuffle, though, is that you can apply it to Set directly, whether it is a one- or many-dimensional array. Shuffling will always happen along the first axis, so you can just take the top k elements afterwards:
np.random.shuffle(Set)
Set[:k]
The downside of the shuffle-based approaches is that shuffling only works in place, so you have to write them out the long way, and they are less efficient for large arrays, since the entire index range (or the entire array) has to be created and shuffled up front, no matter how small k is.
The other solution is to turn the second example into an array of list objects like the first one. I do not recommend this solution unless the only reason you are using numpy is for the choice function. In fact, I wouldn't recommend it at all, since you can, and probably should, use Python's standard random module at that point. Disclaimers aside, you can coerce the datatype of the second array to object. This removes any benefits of using numpy, and it can't be done directly: simply passing dtype=object would still create a 2D array, just one storing references to Python int objects instead of primitives. You have to do something like this:
Set = np.zeros(2, dtype=object)  # one slot per sublist
Set[0] = [1, 3, 5]
Set[1] = [2, 4, 6]
You will now get an object array essentially equivalent to the one in the first example, and you can therefore apply np.random.choice to it directly.
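For example (a minimal sketch, assuming the Set built above):
picked = np.random.choice(Set, size=3)  # samples whole sublists, with replacement
print(picked)  # e.g. [list([2, 4, 6]) list([1, 3, 5]) list([2, 4, 6])]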
Note
I show the legacy np.random methods here out of personal inertia if nothing else. The correct way, as the numpy documentation now recommends, is to use the new Generator API. This is especially true for the choice method, which is much more efficient in the new implementation. The usage is not any more difficult:
Set[np.random.default_rng().choice(N, k, replace=False)]
There are additional advantages, like the fact that you can now choose directly, even from a multidimensional array:
np.random.default_rng().choice(Set2, k, replace=False)
The same goes for shuffle, which, like choice, now allows you to select the axis you want to rearrange:
np.random.default_rng().shuffle(Set)
Set[:k]
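In practice you would usually create one Generator and reuse it; a minimal sketch, assuming Set2 and k from the question (the seed is only there for reproducibility):
rng = np.random.default_rng(seed=0)
rows = rng.choice(Set2, k, replace=False)  # samples whole rows (axis=0 by default)
rng.shuffle(Set2, axis=0)                  # in-place shuffle of the rows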
In Python, when using np.empty(), for example np.empty((3,1)), we get an array that has shape (3,1) but, in reality, is not empty: it contains tiny garbage values (e.g., 1.7e-315). Is it possible to create an array that really is empty / has no values, but has the given dimensions/shape?
I'd suggest using np.full to choose the fill value directly...
x = np.full((3, 1), None, dtype=object)
... of course, the dtype you choose is part of what defines what you mean by "empty".
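For reference, printing the result of that call gives:
print(x)
# [[None]
#  [None]
#  [None]]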
I am guessing that by empty, you mean an array filled with zeros.
Use np.zeros() to create an array filled with zeros. np.empty() just allocates the array, so the numbers in there are garbage; it is provided as a way to avoid even the cost of setting the values to zero. But it is generally safer to use np.zeros().
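A quick comparison of the two (a minimal sketch):
import numpy as np
a = np.zeros((3, 1))  # guaranteed to contain 0.0 in every slot
b = np.empty((3, 1))  # only allocates memory; the contents are whatever was already there
print(a)  # always zeros
print(b)  # arbitrary values, often tiny denormals, sometimes zeros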
I suggest using np.nan, as shown below:
yourdata = np.empty((3,1)) * np.nan
Or
you can use np.zeros((3,1)), but it will fill every value with zero, which does not read as "empty". I feel that using np.nan is the better practice.
It all comes down to your requirements.
I know that this question has partly been answered, but I am looking specifically at numpy and scipy. Say I have a grid
lGrid = linspace(0.1, 8, 50)
and I want to find the index that corresponds best to 2, I do
index = abs(lGrid-2).argmin()
lGrid[index]
2.034
However, what if I have a whole matrix of values instead of just 2 here? I guess iteration is pretty slow, and abs(lGrid-[2,4]) will fail due to shape issues. I need a solution that is easily extendable to N-dimensional matrices. What is the best course of action in this environment?
You can use broadcasting:
from numpy import arange, linspace, argmin, newaxis
vals = arange(30).reshape(2, 5, 3)  # your N-dimensional input, e.g. array([2, 4])
lGrid = linspace(0.1, 8, 50)
result = argmin(abs(lGrid - vals[..., newaxis]), axis=-1)
for example, with input vals = array([2,4]), you obtain result = array([12, 24]) and lGrid[result]=array([ 2.03469388, 3.96938776])
You "guess that Iteration is pretty slow", but I guess it isn't. So I would just just iterate over the "whole Matrix of values instead of 2". Perhaps:
for val in BigArray.flatten():
index = abs(lGrid-val).argmin()
yield lGrid[index]
If lGrid is fairly large, then the overhead of iterating in a Python for loop is probably small compared to the vectorised operation happening inside it.
There might be a way to use broadcasting and reshaping to do the whole thing in one giant operation, but it would be complicated, and you might accidentally allocate such a huge intermediate array that your machine slows to a crawl.
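If the giant intermediate array is the concern, one middle ground is to process the values in chunks, so the temporary |lGrid - vals| array stays bounded. A hedged sketch (nearest_indices_chunked and the chunk size are illustrative, not part of numpy):
import numpy as np

def nearest_indices_chunked(lGrid, vals, chunk=10000):
    flat = vals.ravel()
    out = np.empty(flat.shape, dtype=np.intp)
    for start in range(0, flat.size, chunk):
        stop = start + chunk
        # builds a (chunk, len(lGrid)) temporary instead of one giant array
        out[start:stop] = np.abs(lGrid - flat[start:stop, None]).argmin(axis=1)
    return out.reshape(vals.shape)

# e.g. nearest_indices_chunked(lGrid, np.array([2.0, 4.0])) -> array([12, 24])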
Can anyone recommend a way to do a reverse cumulative sum on a numpy array?
Where 'reverse cumulative sum' is defined as below (I welcome any corrections on the name for this procedure):
if
x = np.array([0,1,2,3,4])
then
np.cumsum(x)
gives
array([0,1,3,6,10])
However, I would like to get
array([10, 10, 9, 7, 4])
Can anyone suggest a way to do this?
This does it:
np.cumsum(x[::-1])[::-1]
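For the example in the question:
x = np.array([0, 1, 2, 3, 4])
np.cumsum(x[::-1])[::-1]  # array([10, 10,  9,  7,  4])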
You can use .flipud() for this as well, which is equivalent to [::-1]
https://docs.scipy.org/doc/numpy/reference/generated/numpy.flipud.html
In [0]: x = np.array([0,1,2,3,4])
In [1]: np.flipud(np.flipud(x).cumsum())
Out[1]: array([10, 10,  9,  7,  4])
.flip() is new as of NumPy 1.12 and combines the functionality of .flipud() and .fliplr() into one API.
https://docs.scipy.org/doc/numpy/reference/generated/numpy.flip.html
This is equivalent, and has fewer function calls:
np.flip(np.flip(x, 0).cumsum(), 0)
The answers given so far all seem inefficient if you want the result stored in the original array. Also, if you want a copy, keep in mind that flipping returns a view, not a contiguous array, so np.ascontiguousarray() is still needed.
How about
view=np.flip(x, 0)
np.cumsum(view, 0, out=view)
#x contains the reverse cumsum result and remains contiguous and unflipped
This accumulates through the flipped view of x, which writes the data back into the original x variable in the proper reverse order. It leaves no non-contiguous views behind at the end of execution and is about as speed-efficient as possible. I am guessing numpy will never add a dedicated reverse-cumsum method, precisely because the technique described here is so trivially and efficiently possible, although an explicit method might be ever so slightly faster.
Otherwise, if a copy is desired, the extra flip is required, plus conversion back to a contiguous array, especially if the result will be used in many vector operations afterwards. This is a tricky corner of numpy, but views and contiguity are things to be careful with if you are seriously interested in performance.
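A quick check of this approach with the array from the question (a minimal sketch):
import numpy as np
x = np.array([0, 1, 2, 3, 4])
view = np.flip(x, 0)
np.cumsum(view, 0, out=view)    # accumulate through the reversed view, writing in place
print(x)                        # [10 10  9  7  4]
print(x.flags['C_CONTIGUOUS'])  # True: x itself was never reordered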
I can use array.resize(shape) to resize my array and have zeros added to the indices that don't yet have a value. If my array is [1,2,3,4] and I use array.resize(5), I get [1,2,3,4,0]. How can I pad the zeros onto the front instead, yielding [0,1,2,3,4]?
I am doing this dynamically - trying to use:
array.resize(arrayb.shape)
I would like to avoid (at all costs) making an in-memory copy of the array, which is what reversing the array, resizing, and reversing again would do. Working with a view would be ideal.
You could try working on an array with negative strides (though you can never be sure that resize will not have to make a copy anyway):
import numpy as np

_a = np.empty(0)  # the original array
a = _a[::-1]      # the reversed view you actually work with...

# now, instead of a, resize the original _a;
# a must be deleted first, otherwise resize will demand refcheck=False,
# which would be dangerous!
del a
_a.resize(5)

# and rebind a to a reversed view of the resized array
a = _a[::-1]
But I would really suggest making the array large enough up front if at all possible. This does not seem very elegant, but I think it is the only way short of copying data around. Your array will also have a negative stride, so it won't be contiguous, and if that means some function you use on it must make a copy, you are out of luck.
Also, if you take further slices of a or _a, you have to either make copies or make sure you delete them before resizing. While you can pass refcheck=False, that seems to invalidate the data.
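To illustrate the contiguity point (a minimal sketch):
import numpy as np
_a = np.zeros(5)
a = _a[::-1]
print(a.strides)                # a negative stride, e.g. (-8,) for float64
print(a.flags['C_CONTIGUOUS'])  # False, so some operations will fall back to copying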
I believe you can use slice assignment to do this. I see no reason why numpy would need to make a copy for an operation like this, as long as it does the necessary checks for overlaps (though of course as others have noted, resize may itself have to allocate a new block of memory). I tested this method with a very large array, and I saw no jump in memory usage.
>>> a = numpy.arange(10)
>>> a.resize(15)
>>> a[5:] = a[:10]
>>> a[0:5] = 0
>>> a
array([0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
The following showed no jump in memory usage for the assignment operation:
>>> a = numpy.arange(100000000)
>>> a.resize(150000000)
>>> a[50000000:] = a[:100000000]
I don't know of a better way, and this is just a conjecture. Let me know if it doesn't work.