While loop for iterating all combinations between two values - python

I want to create a loop that loads all the combinations of two variables into a dataframe in separate columns. I want variable "a" to hold values between 0 and 1 in 0.1 increments, and the same for variable "b". In other words, there should be 121 combinations (11 x 11) when complete, starting with 0 & 0 and ending with 1 & 1.
I've tried the following code:
data = [['Decile 1', 10], ['Decile_2', 15], ['Decile_3', 14]]
staging_table = pd.DataFrame(data, columns=['Decile', 'Volume'])
profile_table = pd.DataFrame(columns=['Decile', 'Volume'])
a = 0
b = 0
finished = False
while not finished:
    if b != 1:
        if a != 1:
            a = a + 0.1
            staging_table['CAM1_Modifier'] = a
            staging_table['CAM2_Modifier'] = b
            profile_table = profile_table.append(staging_table)
        else:
            b = b + 0.1
    else:
        finished = True
profile_table

You can use itertools.product to get all the combinations:
import itertools
import pandas as pd
x = [i / 10 for i in range(11)]
df = pd.DataFrame(
    list(itertools.product(x, x)),
    columns=["a", "b"]
)
#        a    b
# 0    0.0  0.0
# 1    0.0  0.1
# 2    0.0  0.2
# ..   ...  ...
# 118  1.0  0.8
# 119  1.0  0.9
# 120  1.0  1.0
#
# [121 rows x 2 columns]

itertools is your friend.
from itertools import product

for a, b in product(map(lambda x: x / 10, range(11)),
                    map(lambda x: x / 10, range(11))):
    ...
range(11) gives us the integers from 0 to 10 (regrettably, range does not work with floats). We divide those values by 10 to get your range from 0 to 1, then take the Cartesian product of that iterable with itself to get every combination.
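A minimal sketch of how that loop could build the profile table described in the question (assuming staging_table is defined as in the question; pd.concat is used because DataFrame.append has been removed from recent pandas versions):

import pandas as pd
from itertools import product

data = [['Decile 1', 10], ['Decile_2', 15], ['Decile_3', 14]]
staging_table = pd.DataFrame(data, columns=['Decile', 'Volume'])

chunks = []
for a, b in product((i / 10 for i in range(11)), repeat=2):
    # one copy of the staging table per (a, b) combination
    chunk = staging_table.copy()
    chunk['CAM1_Modifier'] = a
    chunk['CAM2_Modifier'] = b
    chunks.append(chunk)

profile_table = pd.concat(chunks, ignore_index=True)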


How to search for a string in pandas dataframe and match with another?

I'm trying to compare two string columns from two different pandas dataframes (A and B), and if part of the string matches, I would like to assign the value of one column in dataframe A to dataframe B.
This is my code:
import numpy as np
import pandas as pd
A = ['DF-PI-05', 'DF-PI-09', 'DF-PI-10', 'DF-PI-15', 'DF-PI-16',
     'DF-PI-19', 'DF-PI-89', 'DF-PI-92', 'DF-PI-93', 'DF-PI-94',
     'DF-PI-95', 'DF-PI-96', 'DF-PI-25', 'DF-PI-29', 'DF-PI-30',
     'DF-PI-34', 'DF-PI-84']
B = ['PI-05', 'PI-10', 'PI-89', 'PI-90', 'PI-93', 'PI-94', 'PI-95',
     'PI-96', 'PI-09', 'PI-15', 'PI-16', 'PI-19', 'PI-91A', 'PI-91b',
     'PI-92', 'PI-25-CU', 'PI-29', 'PI-30', 'PI-34', 'PI-84-CU-S1',
     'PI-84-CU-S2']
import random
sample_size = len(A)
Group = [random.randint(0,1) for _ in range(sample_size)]
A = pd.DataFrame(list(zip(A,Group)),columns=['ID','Group'])
B = pd.DataFrame(B,columns=['Name'])
clus_tx = np.array([])
for date, row in B.iterrows():
    for date2, row2 in A.iterrows():
        if row2['ID'] in row['Name']:
            clus = row['Group']
        else:
            clus = 999
        clus_tx = np.append(clus_tx, clus)
B['Group'] = clus_tx
What I would like to have is an np.array clus_tx with the length of B: if an element of B matches a string in A ('PI-xx'), take the value of the 'Group' column from A and assign it to B; if there is no match, assign 999 to the 'Group' column in B.
I think I'm doing the loop wrong because the size of clus_tx is not what I expected... My real dataset is huge, so I can't do this manually.
First, the reason the size of clus_tx is not what you want is that you put clus_tx = np.append(clus_tx, clus) in the innermost loop, which has no break, so the length of clus_tx will always be len(A) x len(B).
Second, the logic of the if block is not what you want.
I've changed the code a bit, hope it helps:
import numpy as np
import pandas as pd
A = ['DF-PI-05', 'DF-PI-09', 'DF-PI-10', 'DF-PI-15', 'DF-PI-16',
     'DF-PI-19', 'DF-PI-89', 'DF-PI-92', 'DF-PI-93', 'DF-PI-94',
     'DF-PI-95', 'DF-PI-96', 'DF-PI-25', 'DF-PI-29', 'DF-PI-30',
     'DF-PI-34', 'DF-PI-84']
B = ['PI-05', 'PI-10', 'PI-89', 'PI-90', 'PI-93', 'PI-94', 'PI-95',
     'PI-96', 'PI-09', 'PI-15', 'PI-16', 'PI-19', 'PI-91A', 'PI-91b',
     'PI-92', 'PI-25-CU', 'PI-29', 'PI-30', 'PI-34', 'PI-84-CU-S1',
     'PI-84-CU-S2']
import random
sample_size = len(A)
Group = [random.randint(0,1) for _ in range(sample_size)]
A = pd.DataFrame(list(zip(A,Group)),columns=['ID','Group'])
B = pd.DataFrame(B,columns=['Name'])
clus_tx = np.array([])
for date, row_B in B.iterrows():
    clus = 999
    for date2, row_A in A.iterrows():
        if row_B['Name'] in row_A['ID']:
            clus = row_A['Group']
            break
    clus_tx = np.append(clus_tx, clus)
B['Group'] = clus_tx
print(B)
The print output of B looks like:
           Name  Group
0         PI-05    0.0
1         PI-10    0.0
2         PI-89    1.0
3         PI-90  999.0
4         PI-93    0.0
5         PI-94    1.0
6         PI-95    1.0
7         PI-96    0.0
8         PI-09    1.0
9         PI-15    0.0
10        PI-16    1.0
11        PI-19    1.0
12       PI-91A  999.0
13       PI-91b  999.0
14        PI-92    1.0
15     PI-25-CU  999.0
16        PI-29    0.0
17        PI-30    1.0
18        PI-34    0.0
19  PI-84-CU-S1  999.0
20  PI-84-CU-S2  999.0
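If performance matters on the real dataset, one way to avoid iterrows entirely is to build the match lookup once and then map it over B. A rough sketch under the same substring rule as the loop above (note it keeps the last matching A row rather than the first):

# build a Name -> Group dictionary from every (Name, ID) pair that matches
mapping = {name: group
           for name in B['Name']
           for id_, group in zip(A['ID'], A['Group'])
           if name in id_}
# unmatched names map to NaN, which we replace with 999
B['Group'] = B['Name'].map(mapping).fillna(999)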

How can I fill NaN values in a dataframe with the average of the values above it?

I'm looking to make it so that NaN values in a dataframe are filled in by the mean of all the values up to that point, as such:
     A
0    1
1    2
2    3
3    4
4    5
5  NaN
6  NaN
7   11
8  NaN
Would become
    A
0   1
1   2
2   3
3   4
4   5
5   3
6   3
7  11
8   4
You can solve it by running the following code:
import numpy as np
import pandas as pd
df = pd.DataFrame({
    "A": [1, 2, 3, 4, 5, pd.NA, pd.NA, 11, pd.NA]
})

for idx in df[pd.isna(df["A"])].index:
    df.loc[idx, "A"] = np.mean(df.loc[:idx, "A"])
It iterates over each NaN and fills it with the mean of the previous values, including previously filled NaNs.
At the end you will have:
>>> df
    A
0   1
1   2
2   3
3   4
4   5
5   3
6   3
7  11
8   4
EDIT
As stated by RichieV, performance may be an issue with this solution (its runtime complexity is O(N^2)) when there are many NaNs, and we should also avoid Python-level iteration, since it is slow compared to native pandas/numpy calls.
Here is an optimized version:
last_idx = None
cumsum = 0
cumnum = 0
for idx in df[pd.isna(df["A"])].index:
    prev_values = df.loc[last_idx:idx, "A"]
    # .loc slicing includes the end label, so drop idx itself
    prev_values = prev_values[:-1]
    cumsum += prev_values.sum()
    cumnum += len(prev_values)
    df.loc[idx, "A"] = int(cumsum / cumnum)
    last_idx = idx
Result:
>>> df
    A
0   1
1   2
2   3
3   4
4   5
5   3
6   3
7  11
8   4
Since in the worst case the script passes over the dataframe twice, the runtime complexity is now O(N).
Marco's answer works fine but it can be optimized with incremental average formulas, from math.stackexchange.com
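The underlying identity is that a running mean can be updated in constant time from the previous mean, m_k = m_{k-1} + (x_k - m_{k-1}) / k, so there is no need to re-read all earlier values at each step.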
Here is an adaptation of that other question (not the exact formula, just the concept).
cumsum = 0
expanding_mean = []
for i, xi in enumerate(df['A']):
    if pd.isna(xi):
        mean = cumsum / i  # divide by number of items up to previous row
        expanding_mean.append(mean)
        cumsum += mean
    else:
        cumsum += xi
df.loc[df['A'].isna(), 'A'] = expanding_mean
The main advantage of this code is not having to read all items up to the current index on each iteration to get the mean.
This option still uses a Python loop, which is not the best choice with pandas, but there seems to be no way around it for this use case (hopefully someone will get inspired by this and find such a method without a loop).
Performance tests
Three alternative functions were defined:
incremental: My answer.
from_origin: Marco's original answer.
incremental_pandas: Marco's updated answer.
Tests were done with the timeit module, using 3 repetitions on random samples with a 0.4 probability of NaN.
Full code for testing:
import pandas as pd
import numpy as np
import timeit
import collections
from matplotlib import pyplot as plt


def incremental(df: pd.DataFrame):
    # error handling
    if pd.isna(df.iloc[0, 0]):
        df.iloc[0, 0] = 0
    cumsum = 0
    expanding_mean = []
    for i, xi in enumerate(df['A']):
        if pd.isna(xi):
            mean = cumsum / i  # divide by number of items up to previous row
            expanding_mean.append(mean)
            cumsum += mean
        else:
            cumsum += xi
    df.loc[df['A'].isna(), 'A'] = expanding_mean
    return df


def incremental_pandas(df: pd.DataFrame):
    # error handling
    if pd.isna(df.iloc[0, 0]):
        df.iloc[0, 0] = 0
    last_idx = None
    cumsum = 0
    cumnum = 0
    for idx in df[pd.isna(df["A"])].index:
        prev_values = df.loc[last_idx:idx, "A"]
        # .loc slicing includes the end label, so drop idx itself
        prev_values = prev_values[:-1]
        cumsum += prev_values.sum()
        cumnum += len(prev_values)
        df.loc[idx, "A"] = cumsum / cumnum
        last_idx = idx
    return df


def from_origin(df: pd.DataFrame):
    # error handling
    if pd.isna(df.iloc[0, 0]):
        df.iloc[0, 0] = 0
    for idx in df[pd.isna(df["A"])].index:
        df.loc[idx, "A"] = np.mean(df.loc[:idx, "A"])
    return df


def get_random_sample(n, p):
    np.random.seed(123)
    return pd.DataFrame({'A':
        np.random.choice(list(range(10)) + [np.nan],
                         size=n, p=[(1 - p) / 10] * 10 + [p])})


r = 3
p = 0.4  # portion of NaNs

# check result from all functions
results = []
for func in [from_origin, incremental, incremental_pandas]:
    random_df = get_random_sample(1000, p)
    new_df = random_df.copy(deep=True)
    results.append(func(new_df))
print('Passed' if all(np.allclose(r, results[0]) for r in results[1:])
      else 'Failed', 'implementation test')

timings = {}
for n in np.geomspace(10, 10000, 10):
    random_df = get_random_sample(int(n), p)
    timings[n] = collections.defaultdict(float)
    results = {}
    for func in ['incremental', 'from_origin', 'incremental_pandas']:
        timings[n][func] = (
            timeit.timeit(f'{func}(random_df.copy(deep=True))',
                          number=r, globals=globals())
            / r
        )

timings = pd.DataFrame(timings).T
print(timings)

timings.plot()
plt.xlabel('size of array')
plt.ylabel('avg runtime (s)')
plt.ylim(0)
plt.grid(True)
plt.tight_layout()
plt.show()
plt.close('all')

2D Gaussian oversampling over large dataframe

I currently have a dataframe in the following format:
step  tag_id  x_pos  y_pos
   1       1      5      3
   1       2      3      4
   2       1      2      2
   2       3      1      6
 ...     ...    ...    ...
   N       1      5      7
For each row in the df, I am aiming to add an additional m rows oversampled from a Gaussian distribution for the x and y values (independent). Thus, a df of N = 100 and m = 10 would result in a df of length 1100, including the original and oversampled values.
The code I have for this works, but it is extremely slow over a large dataset (N > 100k). There are many operations (creating new arrays/dfs, use of itertuples, etc.) that I'm sure are hampering performance; I would appreciate any help on how to improve the performance so I can generate higher m values over the whole dataset. For instance: the input data comes from a pandas dataframe, but the multivariate normal function operates on numpy arrays. Is there a more natural way to implement this through pandas without copying between numpy arrays and dataframes? Thanks!
Reproducible example:
import pandas as pd
import numpy as np
import random
def gaussianOversample2(row, n):
    sigma = 2
    mean_x = float(getattr(row, 'x_pos'))
    mean_y = float(getattr(row, 'y_pos'))
    step = getattr(row, 'step')
    tag_id = getattr(row, 'tag_id')
    sigma = np.array([1, 1])
    cov = np.diag(sigma ** 2)
    x, y = np.random.multivariate_normal([mean_x, mean_y], cov, n).T
    x = np.concatenate(([mean_x], x))
    y = np.concatenate(([mean_y], y))
    steps = np.empty(n + 1)
    tags = np.empty(n + 1)
    steps.fill(step)
    tags.fill(tag_id)
    return x, y, steps, tags

def oversampleDf(df, n):
    oversampled_arr = np.empty((0, 4), float)
    # with input df with step, tag_id, x_pos, y_pos
    data = pd.DataFrame(columns=df.columns)
    count = 0
    for row in df.itertuples(index=False):
        count = count + 1
        temp = np.zeros((len(row), n + 1))
        oversample_x, oversample_y, steps, tags = gaussianOversample2(row, n)
        temp[0] = steps
        temp[1] = tags
        temp[2] = oversample_x
        temp[3] = oversample_y
        temp = pd.DataFrame(temp.T, columns=df.columns)
        data = data.append(temp)
        if count % 1000 == 0:
            print("Row: ", count)
    return data
df = pd.DataFrame([[1, 1, 5, 3], [1, 2, 3, 4], [2, 1, 2, 2], [2, 3, 1, 6]],
                  columns=['step', 'tag_id', 'x_pos', 'y_pos'])
res = oversampleDf(df, 20)
"""
# Result should be:
    step  tag_id     x_pos     y_pos
0    1.0     1.0  5.000000  3.000000
1    1.0     1.0  3.423492  3.886602
2    1.0     1.0  5.404581  2.177559
3    1.0     1.0  4.023274  2.883737
4    1.0     1.0  3.390710  3.038782
..   ...     ...       ...       ...
16   2.0     3.0  1.894151  5.510321
17   2.0     3.0  1.110932  5.281578
18   2.0     3.0  1.623538  4.529825
19   2.0     3.0 -0.576756  7.476872
20   2.0     3.0 -0.866123  5.898048
"""
This is the solution I have found for myself; it is more of a workaround than a technique using quicker methods: I write out to a csv file instead, which I then read back in once complete, like so:
def gaussianOversample3(row, n):
    mean_x = float(getattr(row, 'x_pos'))
    mean_y = float(getattr(row, 'y_pos'))
    step = getattr(row, 'step')
    tag_id = getattr(row, 'tag_id')
    sigma = np.array([1, 1])
    cov = np.diag(sigma ** 2)
    x, y = np.random.multivariate_normal([mean_x, mean_y], cov, n).T
    x = np.concatenate(([mean_x], x))
    y = np.concatenate(([mean_y], y))
    steps = np.empty(n + 1)
    tags = np.empty(n + 1)
    steps.fill(step)
    tags.fill(tag_id)
    pd.DataFrame(data=np.column_stack((steps, tags, x, y))).to_csv(
        "oversample.csv", mode='a', header=False)

def oversampleDf2(df, n):
    filename = "oversample.csv"
    d = pd.DataFrame(list())
    d.to_csv(filename)
    #count = 0
    for row in df.itertuples(index=False):
        #count = count + 1
        gaussianOversample3(row, n)
        #if count % 10000 == 0:
        #    print("Row: ", count)
Because of how it is reading the file, I have to do the following:
oversampleDf2(defensive_df2, num_oversamples)
oversampled_df = pd.read_csv("oversample_10.csv", sep= ' ')
oversampled_df.columns = ['col']
oversampled_df = oversampled_df.col.str.split(",",expand=True)
oversampled_df.columns = ['temp', 'step', 'tag_id', 'x_pos', 'y_pos']
oversampled_df = oversampled_df.drop(['temp'], axis = 1)
oversampled_df = oversampled_df.astype(float)
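For reference, here is a rough fully vectorized sketch of my own (assuming independent Gaussian noise with a common sigma on x_pos and y_pos, as in the question's diagonal covariance) that avoids the per-row loop by repeating every row n times and drawing all the noise in a single call:

import numpy as np
import pandas as pd

def oversample_vectorized(df, n, sigma=1.0):
    # repeat every original row n times, then perturb x/y with Gaussian noise in one shot
    noisy = df.loc[df.index.repeat(n)].copy()
    noise = np.random.normal(scale=sigma, size=(len(noisy), 2))
    noisy['x_pos'] = noisy['x_pos'].to_numpy() + noise[:, 0]
    noisy['y_pos'] = noisy['y_pos'].to_numpy() + noise[:, 1]
    # a stable sort on the original index keeps each original row
    # immediately followed by its n samples, as in the expected output
    return pd.concat([df, noisy]).sort_index(kind='mergesort').reset_index(drop=True)

Since all the random draws and additions happen in single numpy operations, this should scale far better than building a DataFrame per row with itertuples and append.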

is it possible to use numpy to calculate on recursive data [duplicate]

I have a time-series A holding several values. I need to obtain a series B that is defined algebraically as follows:
B[t] = a * A[t] + b * B[t-1]
where we can assume B[0] = 0, and a and b are real numbers.
Is there any way to do this type of recursive computation in Pandas? Or do I have no choice but to loop in Python as suggested in this answer?
As an example of input:
> A = pd.Series(np.random.randn(10,))
0   -0.310354
1   -0.739515
2   -0.065390
3    0.214966
4   -0.605490
5    1.293448
6   -3.068725
7   -0.208818
8    0.930881
9    1.669210
As I noted in a comment, you can use scipy.signal.lfilter. In this case (assuming A is a one-dimensional numpy array), all you need is:
B = lfilter([a], [1.0, -b], A)
Here's a complete script:
import numpy as np
from scipy.signal import lfilter

np.random.seed(123)
A = np.random.randn(10)
a = 2.0
b = 3.0

# Compute the recursion using lfilter.
# [a] and [1, -b] are the coefficients of the numerator and
# denominator, resp., of the filter's transfer function.
B = lfilter([a], [1, -b], A)
print(B)

# Compare to a simple loop.
B2 = np.empty(len(A))
for k in range(0, len(B2)):
    if k == 0:
        B2[k] = a*A[k]
    else:
        B2[k] = a*A[k] + b*B2[k-1]
print(B2)

print("max difference:", np.max(np.abs(B2 - B)))
The output of the script is:
[ -2.17126121e+00 -4.51909273e+00 -1.29913212e+01 -4.19865530e+01
-1.27116859e+02 -3.78047705e+02 -1.13899647e+03 -3.41784725e+03
-1.02510099e+04 -3.07547631e+04]
[ -2.17126121e+00 -4.51909273e+00 -1.29913212e+01 -4.19865530e+01
-1.27116859e+02 -3.78047705e+02 -1.13899647e+03 -3.41784725e+03
-1.02510099e+04 -3.07547631e+04]
max difference: 0.0
Another example, in IPython, using a pandas DataFrame instead of a numpy array:
If you have
In [12]: df = pd.DataFrame([1, 7, 9, 5], columns=['A'])
In [13]: df
Out[13]:
   A
0  1
1  7
2  9
3  5
and you want to create a new column, B, such that B[k] = A[k] + 2*B[k-1] (with B[k] == 0 for k < 0), you can write
In [14]: df['B'] = lfilter([1], [1, -2], df['A'].astype(float))
In [15]: df
Out[15]:
   A   B
0  1   1
1  7   9
2  9  27
3  5  59
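As a side note, if you ever need the recursion to start from a nonzero B[-1] instead of 0, lfilter can take an initial filter state built with scipy.signal.lfiltic. A small sketch, reusing a, b and A from the script above (c here is a hypothetical previous output value):

from scipy.signal import lfilter, lfiltic

c = 5.0                                   # hypothetical B[-1]
zi = lfiltic([a], [1.0, -b], y=[c])       # filter state equivalent to B[-1] = c
B, _ = lfilter([a], [1.0, -b], A, zi=zi)  # with zi, lfilter returns (output, final state)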

Using values of a list stored in DataFrame cell in Pandas

I have a CSV file where each cell value is a two-element list (pair).
  |     0      |      1     |  2  |
  ---------------------------------
0 | [87, 1.03] | [30, 4.05] | NaN |
1 | [34, 2.01] | NaN        | NaN |
2 | [83, 0.2]  | [18, 3.4]  | NaN |
How do I access the elements of these, separately? The first element of each pair acts as an index for another CSV table.
I have done something like this, but it keeps running into one problem or another.
links = pd.read_csv('buslinks.csv', header=None)

a_list = []
for i in range(0, 100):
    l = []
    a_list.append(l)

for j in range(0, 100):
    a = busStops.iloc[j]
    df = pd.DataFrame(columns=['id', 'Distance'])
    l = links.iloc[j]
    for i in l:
        if(pd.isnull(i)):
            continue
        else:
            x = int(i[0])
            d = busStops.iloc[x-1]
            id = d['id']
            dist = distance(d['xCoordinate'], a['xCoordinate'], d['yCoordinate'], a['yCoordinate'])
            df.loc[i] = [id, dist]
    a_list[j] = (df.sort('Distance', ascending=True)).tolist()
This approach worked when each cell contained only one element. In that case, np.isnan() was used instead of pd.isnull()
The read CSV file was created as:
a_list = []
for i in range(0, 100):
    l = []
    a_list.append(l)

for i in range(0, 100):
    while(len(a_list[i]) < 3):
        x = random.randint(1, 100)
        if(x-1 == i):
            continue
        a = busStops.iloc[i]
        b = busStops.iloc[x-1]
        dist = distance(a['xCoordinate'], b['xCoordinate'], a['yCoordinate'], b['yCoordinate'])
        if dist > 3:
            continue
        if x in a_list[i]:
            continue
        a_list[i].append([b['id'], dist])
        a_list[x-1].append([a['id'], dist])
    for j in range(0, 3):
        y = random.randint(0, 1)
        while (y == 0):
            x = random.randint(1, 100)
            if(x-1 == i):
                continue
            a = busStops.iloc[i]
            b = busStops.iloc[x-1]
            dist = distance(a['xCoordinate'], b['xCoordinate'], a['yCoordinate'], b['yCoordinate'])
            if dist > 3:
                continue
            if x in a_list[i]:
                continue
            a_list[i].append([b['id'], dist])
            a_list[x-1].append([a['id'], dist])
            y = 1

dfLinks = pd.DataFrame(a_list)
dfLinks
dfLinks.to_csv('buslinks.csv', index=False, header=False)
busStops is yet another CSV file that contains id, xCoordinate, yCoordinate, Population and Priority as columns.
First of all, beware that storing lists in DataFrames dooms you to Python-speed loops. To take advantage of fast pandas/NumPy routines, you need to use native NumPy dtypes such as np.float64 (lists, in contrast, require the generic "object" dtype).
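For instance, a column of pairs is stored with the generic object dtype, while a plain numeric column gets a native dtype:
import pandas as pd
pd.Series([[87, 1.03], [34, 2.01]]).dtype   # object (Python lists, slow to process)
pd.Series([1.03, 2.01]).dtype               # dtype('float64') (native, fast)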
That being said, here is a short example just to show how to access the elements, so you can use something like it in your code:
import pandas as pd
table = pd.DataFrame(columns=['col1', 'col2', 'col3'])
table.loc[0] = [1, 2,3]
table.loc[1] = [1, [2,3], 4]
table.loc[1].iloc[1] # returns [2, 3]
table.loc[1].iloc[1][0] # returns 2
You shouldn't be putting lists in pd.Series objects. It's inefficient and you lose all vectorised functionality. If, however, you are determined that this must be your starting point, you can unravel the lists into multiple columns in a couple of steps.
Setup
df = pd.DataFrame({0: [[87, 1.03], [34, 2.01], [83, 0.2]],
                   1: [[30, 4.05], np.nan, [18, 3.4]],
                   2: [np.nan, np.nan, np.nan]})
Step 1: ensure lists have same size
# messy way to ensure all values have length 2
df[1] = np.where(df[1].isnull(), pd.Series([[np.nan, np.nan]]*len(df[1])), df[1])
print(df)
            0           1   2
0  [87, 1.03]  [30, 4.05] NaN
1  [34, 2.01]  [nan, nan] NaN
2   [83, 0.2]   [18, 3.4] NaN
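A slightly cleaner way to do the same padding (a sketch; either version gives the frame above):
df[1] = df[1].apply(lambda v: v if isinstance(v, list) else [np.nan, np.nan])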
Step 2: concatenate dataframes of split series
# create list of dataframes
L = [pd.DataFrame(df[col].values.tolist()) for col in df]
# concatenate dataframes in list
df_new = pd.concat(L, axis=1, ignore_index=True)
print(df_new)
    0     1     2     3   4
0  87  1.03  30.0  4.05 NaN
1  34  2.01   NaN   NaN NaN
2  83  0.20  18.0  3.40 NaN
You can then access values as you would normally, e.g. df_new[2].
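If, as in the question, the first element of each pair is an index into another table, those columns can then be pulled out on their own (column numbers follow the layout shown above):
ids = df_new[[0, 2]].astype('Int64')  # nullable integer dtype keeps the missing entries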
