How do I select the remaining rows of a data frame after a random selection?
This gives me 80% of the data, but I also want the remaining 20%:
df.sample(frac=0.8)
You can use:
df_sample = df.sample(frac=0.8)
and then:
df_remains = df[~df.index.isin(df_sample.index)]
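If the index of df is unique, an equivalent shortcut (just a sketch of an alternative, assuming no duplicate index labels) is to drop the sampled labels directly:
df_sample = df.sample(frac=0.8)
df_remains = df.drop(df_sample.index)  # drops exactly the sampled rows when the index is unique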
Since NumPy is installed as a Pandas dependency, you can also build a boolean mask:
import numpy as np
p = .8
msk = np.random.rand(len(df)) < p
sample = df[msk]
remains = df[~msk]
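A side note, assuming reproducibility matters to you: if you need the same split on every run, seed the generator first (or pass random_state to df.sample):
np.random.seed(0)  # any fixed seed makes the mask reproducible
msk = np.random.rand(len(df)) < p
sample = df[msk]
remains = df[~msk]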
I need to analyse the peak number and width of a signal (in my case a calcium signal from epidermis cells) that I have stored in an Excel sheet. Each column holds all the values for one cell (600 values).
To analyse the peaks, which I will be doing with the scipy.signal.find_peaks() and scipy.signal.peak_widths() functions, I put each individual column into a 1D numpy array containing all the 601 values from that column.
I did this by saving the individual columns (columns are named A, B, C, D, etc. in the Excel sheet) into their own dataframes (df_A, df_B) and then putting them into arrays:
import numpy as np
import pandas as pd
df = pd.read_excel('test.xlsx')
df_A = df.loc[:,'A']
df_B = df.loc[:,'B']
arrA = np.array(df_A)
arrB = np.array(df_B)
To calculate the peak number and width I used the following lines:
from scipy.signal import find_peaks, peak_widths
peaks_A, _ = find_peaks(arrA, height=7000, prominence=1)
results_peakwidth_A = peak_widths(arrA, peaks_A, rel_height=0.5)
Now, since I have not just one but more than 100 cells/signals to analyse, is there a simple way to do this for all the cells/arrays? This exceeds my capabilities, so I would gladly welcome any help.
The proposal would be as follows. In essence, you first select the required columns (however many there are). Then you create a function that takes in a column (no need to turn it into an array, unless scipy disagrees; in that case add column = column.values at the top of the process function).
Afterwards use apply, which will loop through each column in the dataframe and pass it into the function you defined.
import pandas as pd
from scipy.signal import find_peaks, peak_widths
df = pd.read_excel('test.xlsx')
df = ... # select all columns from A-Z into a single dataframe with the columns required.
# The shape here would be
# A B C
# 1 4 4.1
# 2 3 4.0
# ...
# define the function you want to apply to each column
def process(column):
    peaks, _ = find_peaks(column, height=7000, prominence=1)
    return peak_widths(column, peaks, rel_height=0.5)
new_columns = df.apply(process)  # apply process to every column and collect the results
As I'm unsure what the actual output should look like, you might want to keep both the peaks and the widths, in which case you could alter the process function slightly:
def process(column):
    peaks, _ = find_peaks(column, height=7000, prominence=1)
    width = peak_widths(column, peaks, rel_height=0.5)
    return pd.Series({"width": width, "peaks": peaks})
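As a rough sketch of how you would read the results back out (assuming a signal column named 'A' exists), df.apply then returns a DataFrame with one column per signal and rows 'width' and 'peaks':
results = df.apply(process)              # one result per signal column
peaks_A = results.loc["peaks", "A"]      # indices of the peaks found in column A
widths_A = results.loc["width", "A"][0]  # first array returned by peak_widths: the widths themselves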
I am using the Housing train.csv data from Kaggle to run a prediction.
https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data?select=train.csv
I am trying to generate a correlation and keep only the features whose correlation with SalePrice is between 0.5 and 0.9. I tried to use this function to filter some of them, but it only removes the correlation values that are above 0.9.
How would I update this function to only keep those specific features that I need to generate a correlation heat map?
data = train
corr = data.corr()
columns = np.full((corr.shape[0],), True, dtype=bool)
for i in range(corr.shape[0]):
    for j in range(i+1, corr.shape[0]):
        if corr.iloc[i,j] >= 0.9:
            if columns[j]:
                columns[j] = False
selected_columns = data.columns[columns]
data = data[selected_columns]
import pandas as pd
data = pd.read_csv('train.csv')
col = data.columns
c = [i for i in col if data[i].dtypes=='int64' or data[i].dtypes=='float64'] # keep only numeric columns (drop dtype == object)
main_col = ['SalePrice'] # column against which we compute the correlation
corr_saleprice = data[c].corr().filter(main_col).drop(main_col)
c1 =(corr_saleprice['SalePrice']>=0.5) & (corr_saleprice['SalePrice']<=0.9)
c2 =(corr_saleprice['SalePrice']>=-0.9) & (corr_saleprice['SalePrice']<=-0.5)
req_index= list(corr_saleprice[c1 | c2].index) # selecting column with given criteria
#req_index.append('SalePrice') #if you want SalePrice column in your final dataframe too , uncomment this line
data = data[req_index]
data
Also, using for loops is not very efficient; a direct implementation is preferable. I hope this is what you want!
For generating the heatmap, you can use the following code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
a =data.corr()
mask = np.triu(np.ones_like(a, dtype=bool))
plt.figure(figsize=(10,10))
_ = sns.heatmap(a,cmap=sns.diverging_palette(250, 20, n=250),square=True,mask=mask,annot=True,center=0.5)
Update: As mentioned in the comments, my indices weren't unique; I worked around it via a pivot table.
I have the following code to perform a clustering on a df. This df has approximately 80K rows (and is named 'Kmeans'). I then have another df that shares a column with 'Kmeans' (namely 'SKU_NR') and has slightly fewer than 80K rows (this df is named 'Historie'). I want to merge df 'Kmeans' with df 'Historie', but when I do this it gives me over 2M rows. I've done this before and it worked then. What's going wrong in the code?
#load in libraries
import pandas as pd
import numpy as np
pd.options.mode.chained_assignment = None
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#Load and prepare data
Historie = pd.read_excel("file.xlsx")
Kmeans = Historie[['SKU_NR','ORDER_ADV_CONS_UNITS_WK_PICK']]
Kmeans = Kmeans.dropna()
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3)
km.fit(Kmeans)
km.predict(Kmeans)
labels = km.labels_
Kmeans["Classification"] = labels
Kmeans = Kmeans[["SKU_NR","Classification"]]
Historie = Historie[['SKU_NR','WEEKNR','ORDER_ADV_CONS_UNITS_WK_PICK',
                     'FORECAST_NEC_STOCK_BASE']]
Historie = Historie.merge(Kmeans, on = "SKU_NR")
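As the update above notes, the row explosion comes from duplicate SKU_NR keys, which turn the merge into a many-to-many join. A minimal sketch of one workaround, assuming you want a single classification per SKU_NR, is to drop duplicate keys before merging:
# count duplicate keys on each side to confirm the cause
print(Kmeans['SKU_NR'].duplicated().sum(), Historie['SKU_NR'].duplicated().sum())
# keep one classification per SKU_NR, then merge as before
Kmeans_unique = Kmeans.drop_duplicates(subset='SKU_NR')
Historie = Historie.merge(Kmeans_unique, on='SKU_NR')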
I'm estimating an OLS model, as seen below. I need the coefficients on the categorical variable along with their values.
Here's my code:
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
np.random.seed(12345)
df = pd.DataFrame(np.random.randn(25, 1), columns=list('A'))
df['groupid'] = [1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,5,5,5,5,5,6,6,6,6,6]
df['groupid'] = df['groupid'].astype('int')
###Fixed effects models
FE_ols = smf.ols(formula = 'A ~ C(groupid) - 1', data=df).fit()
FE_coeffs = FE_ols.params #Save coeffs
FE_coeffs.GroupID = FE_coeffs.index #Extract value of GroupID
FE_coeffs.GroupID = FE_coeffs.GroupID.str.extract(r'(\d+)') #Parse number from string
I'm able to extract the coefficients on the dummy variables. I put them in a new data frame.
C(groupid)[1] 0.2329694463342642
C(groupid)[2] 0.7567034333090062
C(groupid)[3] 0.31355791920072623
C(groupid)[5] -0.05131898650395289
C(groupid)[6] 0.31757453138500547
However, I want the data frame to be like:
1 0.2329694463342642
2 0.7567034333090062
3 0.31355791920072623
5 -0.05131898650395289
6 0.31757453138500547
The code seems to work, including the parsing. When I run this in Jupyter, it even shows the correct output. But the change isn't saved to the data frame. There doesn't seem to be an inplace=True kind of command.
Will appreciate any help.
FE_coeffs is a Series, so adding a GroupID attribute as if you were adding a column is the wrong direction. Instead, just overwrite the index with the extracted integer values:
In [80]: FE_coeffs = FE_ols.params.copy()
In [81]: FE_coeffs.index = FE_coeffs.index.str.extract(r"(\d+)", expand=False).astype(int)
In [82]: FE_coeffs
Out[82]:
1 0.232969
2 0.756703
3 0.313558
5 -0.051319
6 0.317575
dtype: float64
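If you would rather have a two-column data frame than a Series, one sketch (the column names here are arbitrary, my own choice) is:
FE_df = FE_coeffs.rename_axis('groupid').reset_index(name='coef')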
Suppose I have a pandas data frame surveyData:
I want to normalize the data in each column by performing:
surveyData_norm = (surveyData - surveyData.mean()) / (surveyData.max() - surveyData.min())
This would work fine if my data table contained only the columns I want to normalize. However, I have some preceding columns containing string data, like:
Name State Gender Age Income Height
Sam CA M 13 10000 70
Bob AZ M 21 25000 55
Tom FL M 30 100000 45
I only want to normalize the Age, Income, and Height columns, but my above method does not work because of the string data in the Name, State, and Gender columns.
You can perform operations on a subset of rows or columns in pandas in a number of ways. One useful way is indexing:
# Assuming same lines from your example
cols_to_norm = ['Age','Height']
survey_data[cols_to_norm] = survey_data[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
This will apply it to only the columns you desire and assign the result back to those columns. Alternatively you could set them to new, normalized columns and keep the originals if you want.
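For instance, a minimal sketch of that second option, keeping the originals and writing the scaled values to new columns with a hypothetical '_norm' suffix:
for col in cols_to_norm:
    # keep the original column; store the min-max scaled values alongside it
    survey_data[col + '_norm'] = (survey_data[col] - survey_data[col].min()) / (survey_data[col].max() - survey_data[col].min())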
I think it's better to use sklearn.preprocessing in this case, which gives us many more scaling options.
Using StandardScaler, it would be:
from sklearn.preprocessing import StandardScaler
cols_to_norm = ['Age','Height']
surveyData[cols_to_norm] = StandardScaler().fit_transform(surveyData[cols_to_norm])
A simple and more efficient way: pre-calculate the mean, max, and min; dropna() avoids missing data.
mean_age = survey_data.Age.dropna().mean()
max_age = survey_data.Age.dropna().max()
min_age = survey_data.Age.dropna().min()
survey_data['Age'] = survey_data['Age'].apply(lambda x: (x - mean_age) / (max_age - min_age))
This way will work...
I think it's really nice to use the built-in functions:
# Assuming same lines from your example
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
cols_to_norm = ['Age','Height']
survey_data[cols_to_norm] = scaler.fit_transform(survey_data[cols_to_norm])
MinMax normalize all numeric columns with minmax_scale
import numpy as np
from sklearn.preprocessing import minmax_scale
# cols = ['Age', 'Height']
cols = df.select_dtypes(np.number).columns
df[cols] = minmax_scale(df[cols])
Note: this keeps the index, column names, and non-numerical columns unchanged.
import pandas as pd
import numpy as np
# let dataset here be your data
from sklearn.preprocessing import MinMaxScaler
minmax = MinMaxScaler()
for x in dataset.columns[dataset.dtypes == 'int64']:
    dataset[x] = minmax.fit_transform(np.array(dataset[x]).reshape(-1,1))