Hello, I am new to Python, and I need to create a very special matrix (see above). It just repeats 7 different values per row, followed by zeros to the end of the row. After every row, two zeros are filled in and the pattern is repeated. When the pattern reaches the end of a row, it continues from the start of the next one, until h0(2) is at index [x,0]. After that, another h starts in the same way.
I think the naive way is to use nested loops with counters and breaks.
A similar question has already been asked in this post:
Creating a special matrix in numpy
but it's not exactly what I need.
Is there a smarter way to create this than the nested loops in the previous post, or is there even a function or a name for this kind of matrix?
I would focus on repeated patterns, and try to build the array from blocks.
For example I see 3 sets of rows, with h_0, h_1 and h_2 elements.
Within each of those I see a Hs = [h(0)...h(6)] sequence repeated.
It almost looks like you could concatenate [Hs, zeros(n), Hs, zeros(n),...] in one long 1d array, and reshape it into the (a,b) rows.
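For instance, a minimal sketch of that idea; Hs, the padding length n, and the target shape (a, b) are all stand-in values here, not the asker's real ones:

import numpy as np

Hs = np.arange(1, 8)                        # stand-in for [h(0), ..., h(6)]
n = 2                                       # zeros padded after each Hs
a, b = 6, 9                                 # target shape, chosen so the sizes divide evenly
unit = np.concatenate([Hs, np.zeros(n)])    # one Hs block plus its padding
flat = np.tile(unit, (a * b) // len(unit))  # one long 1d array of repeated units
A = flat.reshape(a, b)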
Or you could create a A = np.zeros((a,b)) array, and repeatedly insert Hs into the right places. Use A.flat[x:y]=Hs if Hs wraps around to the next line. In other words, even if A is 2d, you can insert Hs values as though it were 1d (which is true of its data buffer).
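A sketch of that flat-insertion variant, again with made-up sizes (b is chosen here so the insertions actually wrap across row boundaries):

import numpy as np

Hs = np.arange(1, 8)                        # stand-in for [h(0), ..., h(6)]
n, a, b = 2, 6, 10
A = np.zeros((a, b))
step = len(Hs) + n
for x in range(0, a * b - len(Hs) + 1, step):
    A.flat[x:x + len(Hs)] = Hs              # 1d insertion into the 2d buffer, wrapping across rows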
Your example is too complex to give you an exact answer in this short time - and my attention span isn't long enough to work out the details. But this might give you some ideas to work with. Look for repeated patterns and slices.
I found this solution, concatenate empty array, but I don't believe it fully addresses my issue; I want a more general approach that avoids adding an if statement to every instance of concatenation. Several functions within a MATLAB script that I'm transcribing into Python come after an if statement that initializes FixedDictionaryElement, a 2-D array.
if (param.preserveDCAtom > 0)
    FixedDictionaryElement(1:size(Data,1),1) = 1/sqrt(size(Data,1));
else
    FixedDictionaryElement = [];
end
If this condition is met, a 2-D array is initialized, filled with data, and later concatenated with another 2-D array in several different places. If the condition isn't met, an empty array is initialized (FixedDictionaryElement = []), but it's still concatenated in the same places, as in the example below. I assume MATLAB simply concatenates the empty array, which ends up being like multiplying a number by 1: the filled array is unaffected by the empty array, and the program continues unabated. FixedDictionaryElement is the empty array in this case.
if (param.errorFlag == 0)
    CoefMatrix = OMP([FixedDictionaryElement, Dictionary], Data, param.L);   % the concatenation in question
Assume FixedDictionaryElement = [] and Dictionary = 34x80.
From looking at the MATLAB code, I assume the empty array is initialized to allow for the concatenation to be done throughout the script, irrespective of the result of the if statement. Otherwise, you'd get an error that FixedDictionaryElement is undefined without the empty array.
How can I generalize the solution given in the above link without putting a new if statement at every instance of concatenation?
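One way to mirror MATLAB's harmless [] concatenation in numpy is to make the "empty" branch a zero-width 2-D array with the matching row count; np.hstack then concatenates it as a no-op, with no if statement needed at the call sites. A minimal sketch; the sizes and the preserve_dc_atom flag are illustrative stand-ins, not the actual script:

import numpy as np

rows = 34
preserve_dc_atom = False                           # stand-in for param.preserveDCAtom > 0
if preserve_dc_atom:
    FixedDictionaryElement = np.full((rows, 1), 1 / np.sqrt(rows))
else:
    FixedDictionaryElement = np.empty((rows, 0))   # zero columns: concatenating it changes nothing

Dictionary = np.random.rand(rows, 80)
combined = np.hstack([FixedDictionaryElement, Dictionary])   # shape (34, 80) or (34, 81)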
I have built a Python program processing the probability of various datasets. I input various mean values and standard deviations manually, and that works; however, I need to automate it so that I can upload all my data through a text or csv file. I've got so far, but now I have a nested for loop problem, I think with indices; some background follows.
My code works for a small dataset where I can manually key in 6-8 parameters, but now I need to automate it and upload inputs of unknown sizes by csv / text file. I am copying my existing code and amending it where appropriate, but I have run into a problem.
I have a 2-D numpy array in which the probabilities in each row have been reverse sorted. I have a second array which gives me the value of 68.3% of each row's total, and I want to trim off the low-value 31.7% of the data.
I need a solution which can handle an unspecified number of rows.
My pre-existing code, which worked for a single one-dimensional array, was:
prob_combine_sum = np.sum(prob_combine)
#Reverse sort the probabilities
prob_combine_sorted = sorted(prob_combine, reverse=True)
#Calculate 1 SD from peak Prob by multiplying Total Prob by 68.3%
sixty_eight_percent = prob_combine_sum*0.68269
#Loop over the sorted list and append the 1SD data into a list
#onesd_prob_combine
onesd_prob_combine = []
for i in prob_combine_sorted:
    onesd_prob_combine.append(i)
    if sum(onesd_prob_combine) > sixty_eight_percent:
        break
That worked. However, now I have a multi-dimensional array, and I want to take the 1 standard deviation data from that multi-dimensional array and stick it in another.
There's probably more than one way of doing this, but I thought I would stick with the for loop; it's just more complicated now because of the indices. I need to preserve the data structure, and I need to be able to handle unlimited numbers of rows in the future.
I simulated some data, and if I can get this to work with it, I should be able to put it in my program.
sorted_probabilities = np.asarray([[9,8,7,6,5,4,3,2,1],
                                   [87,67,54,43,32,22,16,14,2],
                                   [100,99,78,65,45,43,39,22,3],
                                   [67,64,49,45,42,40,28,23,17]])
sd_test = np.asarray([30.7215,230.0699,306.5323,256.0125])
target_array = np.zeros(4).reshape(4,1)
#Task: transfer data from sorted_probabilities to target_array on
#condition that the value in each target row is less than the value
#in the sd_test array.
#Ignore the problem that data transferred won't add up to 68.3%.
#My real data sample is very big. I just need a way of trimming
#and transferring.
for row in sorted_probabilities:
    for element in row:
        target_array[row].append[i]
        if sum(target[row]) > sd_test[row]:
            break
Error: IndexError: index 9 is out of bounds for axis 0 with size 4
I know it's not a very good attempt. My problem is that I need a solution which will work for any 2D array, not just one with 4 rows.
I'd be really grateful for any help.
Thank you
Edit:
Can someone help me out with this? I am struggling.
I think the reason my loop will not work is that the 'index' row I am using is not a number but, in this case, a whole row. I will have a think about this. In the meantime, has anyone got a solution?
Thanks
I tried the following code after reading the comments:
for counter, value in enumerate(sorted_probabilities):
    for i, element in enumerate(value):
        target_array[counter] = sorted_probabilities[counter][element]
        if target_array[counter] > sd_test[counter]:
            break
I get an error: IndexError: index 9 is out of bounds for axis 0 with size 9
I think it's because I am trying to add to a numpy array of pre-determined dimensions? I am not sure. I am going to try another tack now, as I cannot do this with this approach. It's having to maintain the rows in the target array that is making it difficult: each row relates to an object, and if I lose the structure it will be pointless.
I recommend you use pandas. You can read the csv directly into a dataframe and do multiple operations on columns and such, clean and neat.
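A minimal sketch of that, assuming the csv holds one row of probabilities per object (the file name and layout here are made up):

import pandas as pd

df = pd.read_csv('probabilities.csv', header=None)   # hypothetical file and layout
sorted_probabilities = df.to_numpy()                  # hand the values to the existing numpy code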
You are mixing numpy arrays with Python lists. Better to use only one of these (numpy is preferred). Also try to debug your code, because it has both syntax and logic errors: you don't have a variable i, though you're using it as an index, and you are using row as an index while it is a numpy array, not an integer.
I strongly recommend that you:
0) debug your code (at least with prints);
1) use enumerate to create both of your for loops;
2) replace append with plain assignment, because you've already created an empty vector (target_array), or initialize your target_array as an empty list and append into it;
3) if you want to use your solution for any 2-D array, wrap your code in a function.
Try this:
import numpy as np

sorted_probabilities = np.asarray([[9,8,7,6,5,4,3,2,1],
                                   [87,67,54,43,32,22,16,14,2],
                                   [100,99,78,65,45,43,39,22,3],
                                   [67,64,49,45,42,40,28,23,17]])
sd_test = np.asarray([30.7215,230.0699,306.5323,256.0125])
target_array = np.zeros(4).reshape(4,1)

for counter, value in enumerate(sorted_probabilities):
    for i, element in enumerate(value):
        target_array[counter] = element  # Here I removed the code that produced the error
        if target_array[counter] > sd_test[counter]:
            break
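If a loop-free version is useful later, here is a vectorized sketch of the trimming logic, reusing sorted_probabilities and sd_test from above. It keeps each element while the running total before it is still under the row's threshold, which matches the append-then-break loop; rows can keep different counts, so the result is a list of arrays rather than one rectangular array:

import numpy as np

running_before = np.cumsum(sorted_probabilities, axis=1) - sorted_probabilities
mask = running_before < sd_test[:, None]    # a prefix of each row survives
trimmed = [row[keep] for row, keep in zip(sorted_probabilities, mask)]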
I am fairly new to using tensorflow, so it is possible there is a very obvious solution to my problem that I am missing. I currently have a 3-dimensional array filled with integer values. The specific values are not important, so I have put in a smaller array with filler values for the sake of this question.
Array = tf.constant([[[0,0,1000,0],[3000,3000,3000,3000],[0,2500,0,0]],
                     [[100,200,300,400],[0,0,0,100],[300,300,400,300]]]).eval()
So the array looks like this when printed, I believe:
[[[0,0,1000,0],
  [3000,3000,3000,3000],
  [0,2500,0,0]],
 [[100,200,300,400],
  [0,0,0,100],
  [300,300,400,300]]]
In reality this array has 23 2-D arrays stacked on top of each other. What I want to do is to create an array or 3 separate arrays that contain the range of values in each row of different levels of the 3-D array.
Something like
Xrange = tf.constant([Array[0,0,:].range(), Array[1,0,:].range(), Array[2,0,:].range(), ..., Array[22,0,:].range()])
Firstly, I am having trouble finding a working combination of tensorflow commands that lets me find the range of a row. I know how to do this easily in numpy but have yet to find a way here. Secondly, assuming there is a way to do the above, is there a way to consolidate the code without having to write it out 23 times in one line, once for each unique row? I know that could simply be done with a for loop, but I would also like to avoid a solution that requires a loop. Is there a good way to do this, or is more information needed? Also, please let me know if I'm screwing up my syntax, since I'm still fairly new to both Python and tensorflow.
So, as I expected, my question has a reasonably simple answer. All that was necessary was to use the tf.reduce_max and tf.reduce_min commands.
The code I finally ended with looks like:
Range = tf.subtract(tf.reduce_max(tf.constant(Array), axis=2, keep_dims=True),
                    tf.reduce_min(tf.constant(Array), axis=2, keep_dims=True))
This produced:
[[[1000]
[ 0]
[2500]]
[[ 300]
[ 100]
[ 100]]]
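For comparison, if the same data lives in a plain numpy array, np.ptp ("peak to peak") gives max minus min along an axis in one call; keepdims mirrors the keep_dims behaviour above:

import numpy as np

Array = np.array([[[0,0,1000,0],[3000,3000,3000,3000],[0,2500,0,0]],
                  [[100,200,300,400],[0,0,0,100],[300,300,400,300]]])
Range = np.ptp(Array, axis=2, keepdims=True)   # same values as the tensorflow result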
An example what I want to do is instead of doing what is shown below:
Z_old = [[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0]]
for each_axes in range(len(Z_old)):
    for each_point in range(len(Z_old[each_axes])):
        Z_old[len(Z_old)-1-each_axes][each_point] = arbitrary_function(each_point, each_axes)
What I want now is to not initialize the Z_old array with zeroes but rather fill it with values while iterating through it, which is going to be something like what is written below. Its syntax is horribly wrong, but that's what I want to reach in the end.
Z = np.zeros((len(x_list), len(y_list))) for Z[len(x_list) -1 - counter_1][counter_2] is equal to power_at_each_point(counter_1, counter_2] for counter_1 in range(len(x_list)) and counter_2 in range(len(y_list))]
As I explained in my answer to your previous question, you really need to vectorize arbitrary_function.
You can do this by just calling np.vectorize on the function, something like this:
Z = np.vectorize(arbitrary_function)(np.arange(3), np.arange(5).reshape(5, 1))
But that will only give you a small speedup. In your case, since arbitrary_function is doing a huge amount of work (including opening and parsing an Excel spreadsheet), it's unlikely to make enough difference to even notice, much less to solve your performance problem.
The whole point of using NumPy for speedups is to find the slow part of the code that operates on one value at a time, and replace it with something that operates on the whole array (or at least a whole row or column) at once. You can't do that by looking at the very outside loop, you need to look at the very inside loop. In other words, at arbitrary_function.
In your case, what you probably want to do is read the Excel spreadsheet into a global array, structured in such a way that each step in your process can be written as an array-wide operation on that array. Whether that means multiplying by a slice of the array, indexing the array using your input values as indices, or something completely different, it has to be something NumPy can do for you in C, or NumPy isn't going to help you.
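As a rough illustration of that pattern (the file name, the loader, and the lookup scheme are guesses for the sake of the sketch, not the asker's actual code):

import numpy as np

table = np.loadtxt('lookup_table.csv', delimiter=',')   # parse the spreadsheet once, up front

def power_at_each_point(points, axes):
    # a whole-array lookup in place of per-call file parsing
    return table[axes, points]

Z = power_at_each_point(np.arange(5), np.arange(3).reshape(3, 1))   # one vectorized call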
If you can't figure out how to do that, you may want to consider not using NumPy, and instead compiling your inner loop with Cython, or running your code under PyPy. You'll still almost certainly need to move the "open and parse a whole Excel spreadsheet" outside of the inner loop, but at least you won't have to figure out how to rethink your problem in terms of vectorized operations, so it may be easier for you.
rows = 10
cols = 10
Z = numpy.array([arbitrary_function(each_point, each_axes)
                 for each_axes in range(cols)
                 for each_point in range(rows)]).reshape((rows, cols))
maybe?
I have a pandas Series and a function that I want to apply to each element of the Series. The function has an additional argument too. So far so good; see, for example, this question:
python pandas: apply a function with arguments to a series
Update:
What about if the argument varies by itself running over a given list?
I had to face this problem in my code, and I found a straightforward solution, but it is quite specific and (even worse) does not use the apply method.
Here is a toy model code:
a = pd.DataFrame({'x': [1, 2]})
t = [10, 20]
I want to multiply the elements in a['x'] by the elements in t. Here the function is quite simple, and len(t) matches len(a['x'].index), so I could just do:
a['t']=t
a['x*t']=a['x']*a['t']
But what about if the function is more elaborate or the two lengths do not match?
What I would like is a command line like:
a['x'].apply(lambda x, y: x*y, args=t)
The point is that this specific line exits with an error, because args in that case will accept only a tuple of len=1. I do not see any 'place' to put the various elements of t.
What you're looking for is similar to what R calls "recycling", where operations on arrays of unequal length loop through the smaller array over and over, as many times as needed to match the length of the longer array.
I'm not aware of any simple, built-in way to do this with numpy or pandas. What you can do is use np.tile to repeat your smaller array. Something like:
a.x * np.tile(t, len(a) // len(t))
This will only work if the longer array's length is a simple multiple of the shorter one's.
The behavior you want is somewhat unusual. Depending on what you're doing, there may be a better way to handle it. Relying on the values to match up in the desired way just by repetition is a little fragile. If you have some way to match up the values in each array that you want to multiply, you could use the .map method of Series to select the right "other value" to multiply each element of your Series with.
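A sketch of both options together, extending the toy data to four rows so the recycling is visible (the column names are just for illustration):

import numpy as np
import pandas as pd

a = pd.DataFrame({'x': [1, 2, 3, 4]})
t = [10, 20]

# recycling via np.tile: len(a) must be a multiple of len(t)
a['x*t'] = a['x'] * np.tile(t, len(a) // len(t))

# .map alternative: pair each row with a partner value through an explicit key
key = pd.Series(a.index % len(t), index=a.index)     # here: 0, 1, 0, 1
a['x*t2'] = a['x'] * key.map(dict(enumerate(t)))     # same result, explicit matching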