Sum values in given Index order consecutively in pandas dataframe - python

I have a list of index values that represent the order of the coordinates I should be visiting on a map. Each index represents (x,y).
Out[4]: [0, 8, 11, 6, 10, 3, 4, 2, 14, 1, 7, 12, 13, 15, 5, 9]
Using these coordinates I generated a distance matrix. My aim is to find the total travel distance.
I managed to achieve this using iloc, consecutively summing the index values by hand. However, I need an automated way of doing this, as I have a large amount of data to go through. Is there an easier way to do this?
In[6]: dmatrix.iloc[0][8]+dmatrix.iloc[8][11]+dmatrix.iloc[11][6]+dmatrix.iloc[6][10]+dmatrix.iloc[10][3]+dmatrix.iloc[3][4]+dmatrix.iloc[4][2]+dmatrix.iloc[2][14]+dmatrix.iloc[14][1]+dmatrix.iloc[1][7]+dmatrix.iloc[7][12]+dmatrix.iloc[12][13]+dmatrix.iloc[13][15]+dmatrix.iloc[15][5]+dmatrix.iloc[5][9]+dmatrix.iloc[9][0]

You can create a loop and iterate through your list of indexes:
total = 0  # renamed from `sum` to avoid shadowing the built-in
for i in range(len(yourList) - 1):
    total += dmatrix.iloc[yourList[i]][yourList[i + 1]]
total += dmatrix.iloc[yourList[-1]][yourList[0]]  # return leg back to the start, as in your manual sum
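If you want to avoid the Python-level loop entirely, here is a vectorized sketch (my addition, assuming dmatrix is a square, positionally indexed DataFrame and order is your list of stops):
import numpy as np

order = [0, 8, 11, 6, 10, 3, 4, 2, 14, 1, 7, 12, 13, 15, 5, 9]
src = np.array(order)
dst = np.roll(src, -1)  # pair each stop with the next one, wrapping back to the start
total = dmatrix.to_numpy()[src, dst].sum()  # fancy indexing picks every leg at once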

Related

Efficient ways to aggregate and replicate values in a numpy matrix

In my work I often need to aggregate and expand matrices of various quantities, and I am looking for the most efficient ways to do these actions. E.g., I'll have an NxN matrix that I want to aggregate down to PxP, where P < N. This is done using a correspondence between the larger dimensions and the smaller dimensions. Usually, P will be around 100 or so.
For example, I'll have a hypothetical 4x4 matrix like this (though in practice, my matrices will be much larger, around 1000x1000)
m=np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
>>> m
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]])
and a correspondence like this (schematically):
0 -> 0
1 -> 1
2 -> 0
3 -> 1
that I usually store in a dictionary. This means that indices 0 and 2 (for rows and columns) both get allocated to new index 0 and indices 1 and 3 (for rows and columns) both get allocated to new index 1. The matrix could be anything at all, but the correspondence is always many-to-one when I want to compress.
If the input matrix is A and the output matrix is B, then cell B[0, 0] would be the sum of A[0, 0] + A[0, 2] + A[2, 0] + A[2, 2] because new index 0 is made up of original indices 0 and 2.
The aggregation process here would lead to:
array([[ 1+3+9+11,  2+4+10+12],
       [ 5+7+13+15, 6+8+14+16]])
= array([[24, 28],
         [40, 44]])
I can do this by making an empty matrix of the right size and looping over all 4x4=16 cells of the initial matrix and accumulating in nested loops, but this seems to be inefficient and the vectorised nature of numpy is always emphasised by people. I have also done it by using np.ix_ to make sets of indices and use m[row_indices, col_indices].sum(), but I am wondering what the most efficient numpy-like way to do it is.
Conversely, what is the sensible and efficient way to expand a matrix using the correspondence the other way? For example with the same correspondence but in reverse I would go from:
array([[ 1, 2 ],
[ 3, 4 ]])
to
array([[ 1, 2, 1, 2 ],
[ 3, 4, 3, 4 ],
[ 1, 2, 1, 2 ],
[ 3, 4, 3, 4 ]])
where the values simply get replicated into the new cells.
In my attempts so far at the aggregation, I have used pandas methods: groupby on the index and columns, then extracting the final matrix with, e.g., df.values. However, I don't know the equivalent way to expand a matrix without using a lot of things like unstack and join. And I often see people say that using pandas is not time-efficient.
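For concreteness, a minimal sketch of that pandas groupby route (my reconstruction, not the poster's actual code; groupby with a dict maps labels through the correspondence):
import numpy as np
import pandas as pd

m = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
big2small = {0: 0, 1: 1, 2: 0, 3: 1}

df = pd.DataFrame(m)
agg = df.groupby(big2small).sum().T.groupby(big2small).sum().T  # rows first, then columns
print(agg.values)
# [[24 28]
#  [40 44]]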
Edit 1: I was asked in a comment about exactly how the aggregation should be done. This is how it would be done if I were using nested loops and a dictionary lookup between the original dimensions and the new dimensions:
>>> m = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
>>> mnew = np.zeros((2, 2))
>>> big2small = {0: 0, 1: 1, 2: 0, 3: 1}
>>> for i in range(4):
...     inew = big2small[i]
...     for j in range(4):
...         jnew = big2small[j]
...         mnew[inew, jnew] += m[i, j]
...
>>> mnew
array([[24., 28.],
       [40., 44.]])
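For what it's worth, a vectorized equivalent of this nested loop is possible with np.add.at (a sketch of mine, not from the original post; np.add.at accumulates correctly even when target indices repeat):
import numpy as np

m = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
idx = np.array([0, 1, 0, 1])  # big2small as an array: idx[i] = big2small[i]
mnew = np.zeros((2, 2))
# broadcast (4,1) x (1,4) index pairs so cell (i, j) lands in (idx[i], idx[j])
np.add.at(mnew, (idx[:, None], idx[None, :]), m)
print(mnew)  # [[24. 28.] [40. 44.]]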
Edit 2: Another comment asked for the aggregation example towards the start to be made more explicit, so I have done so.
Assuming your indices don't have a regular structure, I would try sparse matrices.
import scipy.sparse as ss
import numpy as np
# your current array of indices
g=np.array([[0,0],[1,1],[2,0],[3,1]])
# a sparse matrix of (data=ones, (row_ind=g[:,0], col_ind=g[:,1]))
# it is one for every pair (g[i,0], g[i,1]), zero elsewhere
u=ss.csr_matrix((np.ones(len(g)), (g[:,0], g[:,1])))
Aggregate
m=np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
u.T @ m @ u
Expand
m2 = np.array([[1,2],[3,4]])
u @ m2 @ u.T
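Putting it together, a quick end-to-end check (my addition; the outputs match the aggregation and expansion examples above):
import numpy as np
import scipy.sparse as ss

g = np.array([[0, 0], [1, 1], [2, 0], [3, 1]])
u = ss.csr_matrix((np.ones(len(g)), (g[:, 0], g[:, 1])))

m = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
print(u.T @ m @ u)   # aggregate: [[24. 28.], [40. 44.]]

m2 = np.array([[1, 2], [3, 4]])
print(u @ m2 @ u.T)  # expand: rows [1. 2. 1. 2.] and [3. 4. 3. 4.] alternating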

numpy - column-wise and row-wise sums of a given 2d matrix

I have this numpy matrix (ndarray).
array([[ 1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10],
       [11, 12, 13, 14, 15],
       [16, 17, 18, 19, 20],
       [21, 22, 23, 24, 25]])
I want to calculate the column-wise and row-wise sums.
I know this is done by calling respectively
np.sum(mat, axis=0) ### column-wise sums
np.sum(mat, axis=1) ### row-wise sums
but I cannot understand these two calls.
Why is axis 0 giving me the sums column-by-column?!
Shouldn't it be the other way around?
I thought the rows are axis 0, and the columns are axis 1.
What I am seeing as a behavior here looks counter-intuitive
(but I am sure it's OK, I guess I am just missing something important).
I am just looking for some intuitive explanation here.
Intuition around arrays and axes
I want to offer 3 types of intuitions here.
Graphical (How to imagine them visually)
Physical (How they are physically stored)
Logical (How to work with them logically)
Graphical intuition
Consider a numpy array as an n-dimensional object. This n-dimensional object contains elements along each of its directions.
Axes in this representation are the directions of the tensor. So, a 2D matrix has only 2 axes, while a 4D tensor has 4 axes.
Sum in a given axis can be essentially considered as a reduction in that direction. Imagine a 3D tensor being squashed in such a way that it becomes flat (a 2D tensor). The axis tells us which direction to squash or reduce it in.
Physical intuition
Numpy stores its ndarrays as contiguous blocks of memory. Each element is stored sequentially, a fixed number of bytes after the previous one.
(The original answer illustrates this with images referenced from another excellent SO post: a 3D array, and the flat sequence it becomes in memory.) However an array looks logically, it is stored in memory as a single flat, contiguous run of elements.
When retrieving an element (or a block of elements), NumPy calculates how many strides (bytes) it needs to traverse to get the next element in that direction/axis. So, in the pictured example, for axis=2 it has to traverse 8 bytes (depending on the datatype), for axis=1 it has to traverse 8*4 bytes, and for axis=0 it needs 8*8 bytes.
Axes in this representation is basically the series of next elements after a given stride. Consider the following array -
X = np.array([[0, 2, 1, 4, 0, 0, 0],
              [5, 0, 0, 0, 0, 0, 0],
              [8, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0, 0]])  # reconstructed from the printed output below
print(X)
print(X.strides)
[[0 2 1 4 0 0 0]
[5 0 0 0 0 0 0]
[8 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 1 0 0 0 0]
[0 0 0 1 0 0 0]]
#Strides (bytes) required to traverse in each axis.
(56, 8)
In the above array, every element 56 bytes after a given element is the next element along axis=0, and every element 8 bytes after is the next element along axis=1 (except at the end of each axis).
Sum or reduction in this regard means taking a sum of every element in that strided series. So, sum over axis=0 means that I need to sum [0,5,8,0,0,0], [2,0,0,0,0,0], ... and sum over axis=1 means just summing [0 2 1 4 0 0 0], [5 0 0 0 0 0 0], ...
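Continuing with the X defined above, a quick check of both reductions (my addition):
print(X.sum(axis=0))  # [13  2  2  5  0  0  0] -- one sum per strided column series
print(X.sum(axis=1))  # [7 5 8 0 1 1] -- one sum per row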
Logical intuition
This interpretation has to do with element groupings. NumPy stores an ndarray as groups of groups of groups ... of elements. The innermost groups of elements form the last axis (axis=-1). Another grouping over them creates the axis before it (axis=-2), and the final, outermost grouping is axis=0.
For example, an array of shape (3, 2, 5) is 3 groups of 2 groups of 5 elements.
Similarly, the shape of a NumPy array is also determined by the same.
arr_1d = [1, 2, 3]
arr_2d = [[1, 2, 3]]
arr_3d = [[[1, 2, 3]]]
...
Axes in this representation are the group in which elements are stored. The outermost group is axis=0 and the innermost group is axis=-1.
Sum or reduction in this regard means reducing the elements across that specific group or axis. So, sum over axis=-1 means I sum over the innermost groups. Consider a (6, 5, 8) dimensional tensor. When I say I want a sum over some axis, I want to reduce the elements lying in that grouping / direction to a single value that is equal to their sum.
So,
np.sum(arr, axis=-1) will reduce the innermost groups (of length 8) into a single value each, returning shape (6, 5) (or (6, 5, 1) with keepdims=True).
np.sum(arr, axis=-2) will reduce the elements that lie in the 1st axis (or -2nd axis) direction, returning shape (6, 8) (or (6, 1, 8) with keepdims=True).
np.sum(arr, axis=0) will similarly reduce the tensor to shape (5, 8) (or (1, 5, 8) with keepdims=True).
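A quick shape check of these three reductions (my addition):
import numpy as np

arr = np.ones((6, 5, 8))
print(np.sum(arr, axis=-1).shape)                # (6, 5)
print(np.sum(arr, axis=-2).shape)                # (6, 8)
print(np.sum(arr, axis=0).shape)                 # (5, 8)
print(np.sum(arr, axis=0, keepdims=True).shape)  # (1, 5, 8)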
Hope these 3 intuitions are beneficial to anyone trying to understand how axes and NumPy tensors work in general and how to build an intuitive understanding to work better with them.
Let's start with a one dimensional example:
a, b, c, d, e = 0, 1, 2, 3, 4
arr = np.array([a, b, c, d, e])
If you do,
arr.sum(0)
Output
10
That is the sum of the elements of the array
a + b + c + d + e
Now, before moving on to a 2-dimensional example, let's clarify that in numpy the sum of two 1-dimensional arrays is done element-wise, for example:
a = np.array([1, 2, 3, 4, 5])
b = np.array([6, 7, 8, 9, 10])
print(a + b)
Output
[ 7 9 11 13 15]
Now if we change our initial variables to arrays, instead of scalars, to create a two dimensional array and do the sum
a = np.array([1, 2, 3, 4, 5])
b = np.array([6, 7, 8, 9, 10])
c = np.array([11, 12, 13, 14, 15])
d = np.array([16, 17, 18, 19, 20])
e = np.array([21, 22, 23, 24, 25])
arr = np.array([a, b, c, d, e])
print(arr.sum(0))
Output
[55 60 65 70 75]
The output is the same as for the 1 dimensional example, i.e. the sum of the elements of the array:
a + b + c + d + e
It's just that now the elements of the array are 1-dimensional arrays, and the sum of those elements is applied. Now, before explaining the results for axis=1, let's consider an alternative notation to the notation across axis=0, basically:
np.array([arr[0, :], arr[1, :], arr[2, :], arr[3, :], arr[4, :]]).sum(0) # [55 60 65 70 75]
That is, we took full slices in all other indices that were not the first dimension. If we swap to:
res = np.array([arr[:, 0], arr[:, 1], arr[:, 2], arr[:, 3], arr[:, 4]]).sum(0)
print(res)
Output
[ 15 40 65 90 115]
We get the result of the sum along axis=1. So, to sum it up, you are always summing elements of the array; the axis indicates how these elements are constructed.
Intuitively, 'axis 0' goes from top to bottom and 'axis 1' goes from left to right. Therefore, when you sum along 'axis 0' you get the column sum, and along 'axis 1' you get the row sum.
As you go along 'axis 0', the row number increases. As you go along 'axis 1' the column number increases.
Think of a 1-dimension array:
mat = np.array([1, 2, 3, 4, 5])
Its items are accessed as mat[0], mat[1], etc.
If you do:
np.sum(mat, axis=0)
it will return 15
In the background, it sums all items with mat[0], mat[1], mat[2], mat[3], mat[4]
meaning the first index (axis=0)
Now consider a 2-D array:
mat = np.array([[ 1,  2,  3,  4,  5],
                [ 6,  7,  8,  9, 10],
                [11, 12, 13, 14, 15],
                [16, 17, 18, 19, 20],
                [21, 22, 23, 24, 25]])
When you ask for
np.sum(mat, axis=0)
it will again sum all items based on the first index (axis=0), keeping all the rest the same. This means that
mat[0][1], mat[1][1], mat[2][1], mat[3][1], mat[4][1]
will give one sum
mat[0][2], mat[1][2], mat[2][2], mat[3][2], mat[4][2]
will give another one, etc
If you consider a 3-D array, the logic will be the same. Every sum will be calculated on the same axis (index), keeping all the rest the same. Sums on axis=0 will be produced by:
mat[0][1][1], mat[1][1][1], mat[2][1][1], mat[3][1][1], mat[4][1][1]
etc
Sums on axis=2 will be produced by:
mat[2][3][0], mat[2][3][1], mat[2][3][2], mat[2][3][3], mat[2][3][4]
etc
I hope you understand the logic. To keep things simple in your mind, consider axis = the position of the index in a chain of indexes, e.g. axis=3 on a 7-dimensional array will be:
mat[0][0][0][this is our axis][0][0][0]
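To check this chain-index picture on a small 3-D example (my addition):
import numpy as np

mat = np.arange(24).reshape(2, 3, 4)
# summing over axis=0 fixes the other indices and varies only the first one
print(mat.sum(axis=0)[1][2] == mat[0][1][2] + mat[1][1][2])  # True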

Eliminating Consecutive Numbers

If you have a range of numbers from 1-49 with 6 numbers to choose from, there are nearly 14 million combinations. Using my current script, I currently have only 7.2 million combinations remaining. Of the 7.2 million remaining combinations, I want to eliminate all runs of 3, 4, 5, and 6 consecutive numbers, as well as double and triple separate consecutive pairs.
Example:
3 consecutive: 1, 2, 3, x, x, x
4 consecutive: 3, 4, 5, 6, x, x
5 consecutive: 4, 5, 6, 7, 8, x
6 consecutive: 5, 6, 7, 8, 9, 10
double separate consecutive: 1, 2, 5, 6, 14, 18
triple separate consecutive: 1, 2, 9, 10, 22, 23
Note: combinations such as 1, 2, 12, 13, 14, 15 must also be eliminated, or else they conflict with the rule that double and triple consecutive combinations are to be eliminated.
I'm looking to find how many of the 7.2 million remaining combinations have zero consecutive numbers (all mixed) or only 1 consecutive pair.
import functools

_MIN_SUM = 120
_MAX_SUM = 180
_MIN_NUM = 1
_MAX_NUM = 49
_NUM_CHOICES = 6
_MIN_ODDS = 2
_MAX_ODDS = 4

@functools.lru_cache(maxsize=None)
def f(n, l, s=0, odds=0):
    if s > _MAX_SUM or odds > _MAX_ODDS:
        return 0
    if n == 0:
        return int(s >= _MIN_SUM and odds >= _MIN_ODDS)
    return sum(f(n - 1, i + 1, s + i, odds + i % 2) for i in range(l, _MAX_NUM + 1))

result = f(_NUM_CHOICES, _MIN_NUM)
print('Number of choices = {}'.format(result))
While my answer should work, I think someone might be able to offer a faster solution.
Consider the following code:
not_allowed = []
for x in range(48):
    not_allowed.append([x, x+1, x+2])
# not_allowed = [ [0,1,2], [1,2,3], ... [11,12,13], ... [47,48,49] ]

my_numbers = [[1, 2, 5, 9, 11, 33], [1, 3, 7, 8, 9, 31], [12, 13, 14, 15, 23, 43]]

kept = []
for x in my_numbers:
    # keep x only if no forbidden triple is a subset of it,
    # e.g. [12,13,14] is a subset of [12,13,14,15,23,43], so that combination is dropped
    if not any(set(y) <= set(x) for y in not_allowed):
        kept.append(x)
This code will remove every combination that contains a run of three consecutive numbers, which is all you really need to check for here, because runs of four, five, and six all contain a run of three. Try implementing this and let me know how it works.
The easiest approach is probably to generate and filter. I used numpy to try to vectorize as much of this as I could:
import numpy as np
from itertools import combinations
combos = np.array(list(combinations(range(1, 50), 6))) # build all combos
# combos is shape (13983816, 6)
filt = np.where(np.bincount(np.where(np.abs(
np.subtract(combos[:, :-1], combos[:, 1:])) == 1)[0]) <= 1)[0] # magic!
filtered = combos[filt]
# filtered is shape (12489092, 6)
Breaking down that "magic" line
First we subtract the first five items in the list from the last five items to get the differences between them. We do this for the entire set of combinations in one shot with np.subtract(combos[:, :-1], combos[:, 1:]). Note that itertools.combinations produces sorted combinations, on which this depends.
Next we take the absolute value of these differences to make sure we only look at positive distances between numbers with np.abs(...).
Next we grab the indices from this operation for the entire dataset that indicate a difference of 1 (consecutive numbers) with np.where(... == 1)[0]. Note that np.where returns a tuple where the first item is all of the row indices, and the second item is all of the corresponding column indices for our condition. This is important because any row value that shows up more than once tells us that we have more than one consecutive number in that row!
So we count how many times each row shows up in our results with np.bincount(...), which will return something like [5, 4, 4, 4, 3, 2, 1, 0] indicating how many consecutive pairs are in each row of our combinations dataset.
Finally we grab only the row numbers where there are 0 or 1 consecutive values with np.where(... <= 1)[0].
I am returning way more combinations than you seem to indicate, but I feel fairly confident that this is working. By all means, poke holes in it in the comments and I will see if I can find fixes!
Bonus, because it's all vectorized, it's super fast!
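One caveat worth flagging: np.bincount only counts up to the largest row index that actually contains a consecutive pair, so trailing rows with no pairs would be silently dropped; passing minlength guards against that. A small sanity-check sketch of the same logic (my addition):
import numpy as np
from itertools import combinations

small = np.array(list(combinations(range(1, 8), 3)))      # all 3-of-7 combos
diffs = np.abs(np.subtract(small[:, :-1], small[:, 1:]))  # gaps between neighbours
rows = np.where(diffs == 1)[0]                            # one row index per consecutive pair
counts = np.bincount(rows, minlength=len(small))          # consecutive pairs per row
keep = small[counts <= 1]                                 # at most one consecutive pair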

How to sum a slice from a pandas dataframe

I'm trying to sum a portion of the sessions in my dictionary so I can get totals for the current and previous week.
I've converted the JSON into a pandas dataframe in one test. I'm summing the total of the sessions using the .sum() function in pandas. However, I also need to know the total sessions from this week and the week prior. I've tried a few methods to sum slices of values ((-1:-7) and (-8:-15)), but I'm pretty sure I need to use .iloc.
IN:
response = requests.get("url")
data = response.json()
df=pd.DataFrame(data['DailyUsage'])
total_sessions = df['Sessions'].sum()
current_week= df['Sessions'].iloc[-1:-7]
print(current_week)
total_sessions =['current_week'].sum
OUT:
Series([], Name: Sessions, dtype: int64)
AttributeError 'list' object has no attribute 'sum'
Note: I've tried this with and without pd.to_numeric and also with variations on the syntax of the slice and sum methods. Pandas doesn't feel very Pythonic and I'm out of ideas as to what to try next.
Assuming that df['Sessions'] holds each day, and you are comparing current and previous week only, you can use reshape to create a weekly sum for the last 14 values.
weekly_matrix = df['Sessions'][:-15:-1].values.reshape((2, 7))
Then, you can sum each row to get the weekly sums; the most recent week will be the first element.
import numpy as np
weekly_sum = np.sum(weekly_matrix, axis=1)
current_week = weekly_sum[0]
previous_week = weekly_sum[1]
EDIT: how the code works
Let's take the 1D-array which is accessed by the values attribute of the pandas Series. It contains the last 14 days, which is ordered from most recent to the oldest. I will call it x.
x = np.array([14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1])
The array's reshape function is then called on x to split this data into a 2D-array (matrix) with 2 rows and 7 columns.
The default behavior of the reshape function is to first fill all columns in a row before moving to the next row. Therefore, x[0] will be the element (1,1) in the reshaped array, x[1] will be the element (1,2), and so on. After the element (1,7) is filled with x[6] (ending the current week), the next element x[7] will then be placed in (2,1). This continues until finishing the reshape operation, with the placement of x[13] in (2,7).
This results in placing the first 7 elements of x (current week) in the first row, and the last 7 elements of x (previous week) in the second row. This was called weekly_matrix.
weekly_matrix = x.reshape((2, 7))
# weekly_matrix = array([[14, 13, 12, 11, 10, 9, 8],
# [ 7, 6, 5, 4, 3, 2, 1]])
Since now we have the values of each week organized in a matrix, we can use numpy.sum function to finish our operation. numpy.sum can take an axis argument, which will control how the value is computed:
if axis=None, all elements are added in a grand total.
if axis=0, all rows in each column will be added. In the case of weekly_matrix, this will result in a 7-element 1D-array ([21, 19, 17, 15, 13, 11, 9]), which is not the result we want, as we are actually adding equivalent days of each week.
if axis=1 (as in the solution), all columns in each row will be added, producing a 2-element 1D-array in the case of weekly_matrix. The order of this result array follows the order of the rows in the matrix (i.e., element 0 is the total of the first row, and element 1 is the total of the second row). Since we know that the first row is the current week and the second row is the previous week, we can extract the information using these indexes:
# weekly_sum = array([77, 28])
current_week = weekly_sum[0] # sum of [14, 13, 12, 11, 10, 9, 8] = 77
previous_week = weekly_sum[1] # sum of [ 7, 6, 5, 4, 3, 2, 1] = 28
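For reference, the original attempt df['Sessions'].iloc[-1:-7] comes back empty because a backwards slice needs a negative step; if the rows are ordered oldest to newest, plain iloc slices avoid the reshape entirely (a sketch of mine):
current_week = df['Sessions'].iloc[-7:].sum()      # last 7 days
previous_week = df['Sessions'].iloc[-14:-7].sum()  # the 7 days before that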
To group and sum by a fixed number of values, for instance with daily data and weekly aggregation, consider groupby. You can do this forwards or backwards by slicing your series as appropriate:
np.random.seed(0)
df = pd.DataFrame({'col': np.random.randint(0, 10, 21)})
print(df['col'].values)
# array([5, 0, 3, 3, 7, 9, 3, 5, 2, 4, 7, 6, 8, 8, 1, 6, 7, 7, 8, 1, 5])
# forwards groupby
res = df['col'].groupby(df.index // 7).sum()
# 0 30
# 1 40
# 2 35
# Name: col, dtype: int32
# backwards groupby
df['col'].iloc[::-1].reset_index(drop=True).groupby(df.index // 7).sum()
# 0 35
# 1 40
# 2 30
# Name: col, dtype: int32

Pandas iloc complex slice every nth row

I have a dataframe with a periodicity of 14 in the rows, i.e. there are 14 lines of data per record (means, sdev, etc.), and I want to extract the 2nd, 4th, 7th and 9th lines, repeatedly for every record (14 lines). My code is:
Mean_df = df.iloc[[1,3,6,8]::14,:].copy()
which does not work
TypeError: cannot do slice indexing on <class 'pandas.core.indexes.range.RangeIndex'> with these indexers [[1, 3, 6, 8]] of <class 'list'>
I got help with the code from here, which has been useful, but not on the multi-row selections --
Pandas every nth row
I can extract as several different slices and combine, but it feels like there may be a more elegant solution.
Any ideas?
Using np.isin to keep only the positional rows whose index modulo 14 is in the wanted set:
df[np.isin(np.arange(len(df)) % 14, np.array([1, 3, 6, 8]))]
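A self-contained sketch of how that line behaves, using a hypothetical 42-row frame (my addition):
import numpy as np
import pandas as pd

df = pd.DataFrame({'val': np.arange(42)})  # hypothetical frame: 3 records of 14 rows
mask = np.isin(np.arange(len(df)) % 14, [1, 3, 6, 8])
Mean_df = df[mask].copy()
print(Mean_df.index.tolist())
# [1, 3, 6, 8, 15, 17, 20, 22, 29, 31, 34, 36]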
You can build a tuple of slice objects and pass it to np.r_:
arr = np.arange(14*3)
slices = tuple(slice(i, len(arr), 14) for i in (1, 3, 6, 8))
res = np.r_[slices]
print(res)
array([ 1, 15, 29, 3, 17, 31, 6, 20, 34, 8, 22, 36])
In this example, indexing dataframe rows with 1::14 is equivalent to indexing with slice(1, df.shape[0], 14).
This is fairly generic, you can define any tuple of slice objects and pass to np.r_.
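Applied to the dataframe problem, that could look like this (a sketch with a hypothetical df; np.sort restores row order, since np.r_ concatenates the slices group by group):
import numpy as np
import pandas as pd

df = pd.DataFrame({'val': np.arange(42)})  # hypothetical frame: 3 records of 14 rows
slices = tuple(slice(i, len(df), 14) for i in (1, 3, 6, 8))
Mean_df = df.iloc[np.sort(np.r_[slices])].copy()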
