I am working in a notebook with Python/pandas, and I have:
a DataFrame, X (size 20550 rows × 18 columns), and
a Series (a single column), y (size 20550).
I want to merge (or concatenate, or append) the column y onto the end of X
and get an X_total with size 20550 rows × 19 columns.
This is probably very simple, but when I try to append or concatenate horizontally I end up with DataFrames of weird dimensions; in the best case I got a df with more rows (20551 rows × 20565 columns, or 20551 rows × 19 columns, full of NaNs).
EDIT:
I tried:
pd.concat([X,y], axis=1)
X.append(other=y)
dfsv=[X,y]
pd.concat([X,y], axis=1, join='outer', ignore_index=False)
X.append(y, ignore_index=True)
any thoughts?
cheers!
To append a Series as a column to a DataFrame, the Series must have a name, which will be used as the column name. At the same time, the index of the Series needs to match the index of the DataFrame. As such, you can do it this way:
y2 = pd.Series(y.values, name='y', index=X.index)
X.join(y2)
Here, we fulfill both prerequisites in one step by defining a Series y2 that takes the values of Series y, giving it the column name 'y' and setting its index to be the same as that of DataFrame X. Then, we can use .join() to join y2 onto the end of X.
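For illustration, here is a minimal self-contained sketch of the same pattern with toy data (the small X and y below are stand-ins for the real 20550-row objects):
import pandas as pd

X = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})   # stand-in for the 20550 × 18 frame
y = pd.Series([7, 8, 9], index=[10, 11, 12])         # unnamed Series with a misaligned index

y2 = pd.Series(y.values, name='y', index=X.index)    # name it and align its index with X
X_total = X.join(y2)
print(X_total.shape)                                 # (3, 3): same rows, one extra column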
Edit
Another, even simpler solution:
X['y'] = y.values
(Using .values bypasses index alignment, so the values are assigned by position.)
If X and Y have the same indices:
pd.concat([X, Y], axis=1)
If X and Y have different indices, align them first, e.g. by resetting both indices as sketched below. Note that X.append(Y, ignore_index=True) stacks Y below X as extra rows, which is exactly what produces the NaN-filled shapes described in the question.
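A minimal sketch of the alignment approach, assuming the rows should match by position rather than by index label:
# reset both indices so the rows pair up positionally, then concatenate side by side
X_total = pd.concat([X.reset_index(drop=True),
                     Y.reset_index(drop=True)], axis=1)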
You can either append or concat. It is important, though, to specify the axis to be columns:
>>> X = pd.concat([X, Y], axis=1)
I have the following data (the sample given here is just representational): columns named 200-1 to 200-4, 201-1 to 201-4, 202-1 to 202-4, and so on.
I want to do the following with this data:
I want to keep only the columns from the 201 group onwards,
i.e. I want to remove the 200-1 to 200-4 column data.
One way to do this is to read only the required columns from the Excel file, but I want to know how to filter the column names based on a particular pattern, since the 200-1 to 200-4 column names match the pattern 200-*.
I want to make new columns after 202-4 which store values in the following way:
201q1 = mean of (201-1 and 201-2)
201q2 = mean of (201-3 and 201-4)
Similarly, if 202-1 to 202-4 data is present, analogous columns (202q1, 202q2) should be formed.
Please help.
Thanks in advance for your support.
This is a rough example, but it will get you close. The example assumes that there are always four columns per group:
import numpy as np
import pandas as pd

# sample data
np.random.seed(1)
df = pd.DataFrame(np.random.randn(2,12), columns=['200-1','200-2','200-3','200-4', '201-1', '201-2', '201-3','201-4', '202-1', '202-2', '202-3','202-4'])
# remove 200-* columns
df2 = df[df.columns[~df.columns.str.contains('200-')]]
# use np.arange to create groups of two adjacent columns
new = df2.groupby(np.arange(len(df2.columns))//2, axis=1).mean()
# rename columns: pair each group prefix (e.g. '201') with q1/q2
new.columns = [f'{v}{k}' for v,k in zip([x[:3] for x in df2.columns[::2]], ['q1','q2']*int(len(df2.columns[::2])/2))]
# join
df2.join(new)
201-1 201-2 201-3 201-4 202-1 202-2 202-3 \
0 0.865408 -2.301539 1.744812 -0.761207 0.319039 -0.249370 1.462108
1 -0.172428 -0.877858 0.042214 0.582815 -1.100619 1.144724 0.901591
202-4 201q1 201q2 202q1 202q2
0 -2.060141 -0.718066 0.491802 0.034834 -0.299016
1 0.502494 -0.525143 0.312514 0.022052 0.702043
For step 1, you can get away with a list comprehension and the pandas drop function:
dropcols = [x for x in df.columns if '200-' in x]
df.drop(dropcols, axis=1, inplace=True)
Steps 3 and 4 are similar: you could calculate the rolling mean of the columns:
df2 = df.rolling(2, axis = 1).mean() # creates rolling mean
df2.columns = [x.replace('-', 'q') for x in df2.columns] # renames the columns
dfans = pd.concat([df, df2], axis = 1) # concatenate the columns together
Now, you just need to remove the columns that you don't want and rename them, as sketched below.
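For instance, a rough sketch of that cleanup, assuming four columns per group as above, so that only every second rolling mean covers a complete pair:
# keep only the rolling means computed over complete pairs
# (the 2nd and 4th column of each group), then renumber them q1/q2
df3 = df2.iloc[:, 1::2].copy()
df3.columns = [c[:3] + 'q' + str(i % 2 + 1) for i, c in enumerate(df3.columns)]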
I've several hundred dataframes that are appended in a list. All the dataframes have the same number of columns, but the numbers of rows are not the same. The column names are also the same.
So I want to take the mean, mad, and std of each column, and I'm doing something like this:
All the dataframes are appended to a list (lst):
import pandas as pd
from functools import reduce

lst = []
for filen, filen1 in zip(filelistn, filelist1):
    df1 = pd.read_table(path_to_files+filen, skiprows=0, usecols=(0,1,2,3,4,8), names=['wave','num','stlines','fwhm','EWs','MeasredWave'], delimiter=r'\s+')
    df2 = pd.read_table(path_to_files1+filen1, skiprows=0, usecols=(0,1,2,3,4,8), names=['wave','num','stlines','fwhm','EWs','MeasredWave'], delimiter=r'\s+')
    dfs = pd.merge(df1, df2, on='wave', how='inner')
    dfs = df1 - df2
    lst.append(dfs)

df = reduce(lambda x, y: pd.merge(x, y, on='wave', how='outer'), lst)
df = df.rename(columns=lambda x: x.split('_')[0]).T
df = df.groupby(df.index).agg(['mean','std','mad','median']).T
But the results that I'm getting are a bit weird; in the mad column there are values like 21, 65, 36, which is absurd:
wave mean median mad
0 4050.32 -0.016182 -0.011940 0.008885
1 4208.98 0.023707 0.007189 0.032585
2 4374.94 -0.001321 -0.001196 0.000378
3 4379.74 0.002778 0.003380 0.004685
4 6828.60 -10.604568 -0.000590 21.084799
5 6839.84 -0.003466 -0.001870 0.010169
6 6842.04 -32.751551 -0.002514 65.118329
7 6842.69 18.293519 -0.002158 36.385884
The column wave is the same in all the dataframes, but the numbers of rows are not. Does it have anything to do with that? Maybe it's taking the mean of the wrong rows?
Can anyone tell me how to solve this?
You can use pandas.concat to concatenate the sequence of data frames into one large data frame and calculate the statistics afterwards, like so:
import pandas as pd
# lst = [construct list of dataframes ...]
df = pd.concat(lst, axis=0)
means = df.mean()
stds = df.std()
Edit: if you would like the statistics broken down by some key, e.g. wave, you can use the following:
means = df.groupby('wave').mean()
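The same idea extends to several statistics at once; for example, mirroring the aggregations used in the question (with the question's stlines column as an illustration):
stats = df.groupby('wave')['stlines'].agg(['mean', 'std', 'mad', 'median'])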
What is the best way to create a pandas DataFrame as a function of the row index value and column name?
So for a DataFrame with index X and columns Y, each value would be some f(x, y) where x is in X and y is in Y (e.g. it could be the concatenation of the index and column names).
I know I can write a loop to do this, but I figure there's a quicker way in pandas?
Thanks!
You could use a list comprehension to prepare the values as a list of lists, and then pass the list of lists to pd.DataFrame:
import pandas as pd
rows = ['1','2','3']
cols = ['X','Y']
df = pd.DataFrame(([col+row for col in cols] for row in rows),
                  index=rows, columns=cols)
yields
X Y
1 X1 Y1
2 X2 Y2
3 X3 Y3
and of course you could replace col+row with a call to an arbitrary function f:
df = pd.DataFrame(([f(row, col) for col in cols] for row in rows),
                  index=rows, columns=cols)
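For concreteness, a hypothetical f (any two-argument function of the labels would do):
def f(row, col):
    # any function of the row and column labels; here, simple concatenation
    return col + row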
If rows and/or cols is large, then a list of lists could require a lot of memory, and calling f once for every cell could take some time. Depending on f, there might be a faster, less memory-intensive way to create df.
For example, to concatenate the row and column labels you could use np.char.add and np.meshgrid:
import numpy as np
rows = ['1','2','3']
cols = ['X','Y']
df = pd.DataFrame(np.char.add(*np.meshgrid(cols, rows, sparse=True, indexing='xy')),
                  index=rows, columns=cols)
yields the same result.
This creates the NumPy array without building a temporary list of lists, thus saving memory. Since np.char.add creates the resulting NumPy array in a vectorized way, when rows and cols are large the result is obtained faster than computing col+row (in Python) for each cell.
I have this code using Pandas in Python:
all_data = {}
for ticker in ['FIUIX', 'FSAIX', 'FSAVX', 'FSTMX']:
    all_data[ticker] = web.get_data_yahoo(ticker, '1/1/2010', '1/1/2015')

prices = DataFrame({tic: data['Adj Close'] for tic, data in all_data.iteritems()})
returns = prices.pct_change()
I know I can run a regression like this:
regs = sm.OLS(returns.FIUIX,returns.FSTMX).fit()
but how can I do this for each column in the dataframe? Specifically, how can I iterate over columns, in order to run the regression on each?
Specifically, I want to regress each other ticker symbol (FIUIX, FSAIX and FSAVX) on FSTMX, and store the residuals for each regression.
I've tried various versions of the following, but nothing I've tried gives the desired result:
resids = {}
for k in returns.keys():
    reg = sm.OLS(returns[k], returns.FSTMX).fit()
    resids[k] = reg.resid
Is there something wrong with the returns[k] part of the code? How can I use the k value to access a column? Or else is there a simpler approach?
Iterating over the DataFrame itself yields the column names, which you can use to access each column:
for column in df:
    print(df[column])
You can use iteritems() (renamed to items() in later pandas versions):
for name, values in df.iteritems():
    print('{name}: {value}'.format(name=name, value=values[0]))
This answer covers iterating over selected columns as well as all columns in a DF.
df.columns gives an Index holding all the column names in the DF. That isn't very helpful if you want to iterate over all the columns, but it comes in handy when you want to iterate over columns of your choosing only.
We can use Python's list slicing to slice df.columns according to our needs. For example, to iterate over all columns but the first one, we can do:
for column in df.columns[1:]:
    print(df[column])
Similarly, to iterate over all the columns in reversed order, we can do:
for column in df.columns[::-1]:
    print(df[column])
We can iterate over all the columns in a lot of cool ways using this technique. Also remember that you can get the indices of all columns easily using:
for ind, column in enumerate(df.columns):
    print(ind, column)
You can index dataframe columns by position using ix (note: ix is deprecated in newer pandas; iloc is the positional replacement).
df1.ix[:,1]
This returns the column at position 1 (the second column), for example.
df1.ix[0,]
This returns the first row.
This would be the value at the intersection of row 0 and column 1:
df1.ix[0,1]
And so on. So you can enumerate(returns.keys()) and use the number to index the dataframe.
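A minimal sketch of that idea, using iloc (the modern positional indexer) in place of ix:
for i, k in enumerate(returns.keys()):
    column = returns.iloc[:, i]   # the i-th column as a Series, paired with its name k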
A workaround is to transpose the DataFrame and iterate over the rows:
for column_name, column in df.transpose().iterrows():
    print(column_name)
Using a list comprehension, you can get all the column names (the header):
[column for column in df]
Based on the accepted answer, if an index corresponding to each column is also desired:
for i, column in enumerate(df):
    print(i, df[column])
The above df[column] type is Series, which can simply be converted into a NumPy ndarray:
import numpy as np

for i, column in enumerate(df):
    print(i, np.asarray(df[column]))
I'm a bit late, but here's how I did this. The steps:
Create a list of all columns
Use itertools to take x combinations
Append each result's R-squared value to a results dataframe, along with the list of excluded columns
Sort the results DF in descending order of R-squared to see which is the best fit.
This is the code I used on a DataFrame called aft_tmt. Feel free to extrapolate to your use case.
import pandas as pd
# setting options to print without truncating output
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', None)
import statsmodels.formula.api as smf
import itertools

# This section gets the column names of the DF and removes some columns which I don't want to use as predictors.
itercols = aft_tmt.columns.tolist()
itercols.remove("sc97")
itercols.remove("sc")
itercols.remove("grc")
itercols.remove("grc97")
print(itercols)
len(itercols)

# results DF
regression_res = pd.DataFrame(columns=["Rsq", "predictors", "excluded"])
# excluded cols
exc = []

# change 9 to the number of columns you want to combine from N columns.
# Possibly run an outer loop from 0 to N/2?
for x in itertools.combinations(itercols, 9):
    lmstr = "+".join(x)
    m = smf.ols(formula="sc ~ " + lmstr, data=aft_tmt)
    f = m.fit()
    exc = [item for item in itercols if item not in x]  # columns excluded from this combination
    regression_res = regression_res.append(pd.DataFrame([[f.rsquared, lmstr, "+".join(exc)]], columns=["Rsq", "predictors", "excluded"]))

regression_res.sort_values(by="Rsq", ascending=False)
I landed on this question as I was looking for a clean iterator of columns only (Series, no names).
Unless I am mistaken, there is no such thing, which, if true, is a bit annoying. In particular, one would sometimes like to assign a few individual columns (Series) to variables, e.g.:
x, y = df[['x', 'y']] # does not work
There is df.items() that gets close, but it gives an iterator of tuples (column_name, column_series). Interestingly, there is a corresponding df.keys() which returns df.columns, i.e. the column names as an Index, so a, b = df[['x', 'y']].keys() properly assigns a='x' and b='y'. But there is no corresponding df.values(), and for good reason, as df.values is a property and returns the underlying numpy array.
One (inelegant) way is to do:
x, y = (v for _, v in df[['x', 'y']].items())
but it's less pythonic than I'd like.
Most of these answers go via the column name rather than iterating the columns directly, and they will also have issues if there are multiple columns with the same name. If you want to iterate the columns directly, I'd suggest:
for series in (df.iloc[:, i] for i in range(df.shape[1])):
    ...
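For example, a small sketch of why positional access matters when column names repeat (toy frame, made-up data):
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'a'])   # duplicate column names
print(type(df['a']))          # DataFrame, not Series: name-based access is ambiguous
for series in (df.iloc[:, i] for i in range(df.shape[1])):
    print(series.tolist())    # each column individually: [1, 3] then [2, 4]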
Assuming an X feature matrix and y labels (possibly multi-column):
columns = [c for c in _df.columns if c in ['col1', 'col2', 'col3']]  # or '..c not in..'
_df.set_index(columns, inplace=True)
print(_df.index)
X, y = _df.iloc[:, :4].values, _df.index.values
I need to edit rows in a pandas.DataFrame by dividing each value by the row max().
What is the recommended way to do this?
I tried
df.xs('rowlabel') /= df.xs('rowlabel').max()
as I'd do on a numpy array, but it didn't work.
The syntax for a single row is:
df.ix['rowlabel'] /= df.ix['rowlabel'].max()
(ix is deprecated in newer pandas; df.loc['rowlabel'] works the same way here.)
If you want that done on every row in the dataframe, you can use apply (with axis=1 to select rows instead of columns):
df.apply(lambda x: x / x.max(), axis=1)
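As a vectorized alternative sketch (avoiding the Python-level lambda), the same row-wise division can be done with div and broadcasting:
# divide every row by its own maximum in one vectorized operation
df_normalized = df.div(df.max(axis=1), axis=0)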