Pandas Dataframe add header without replacing current header - python

How can I add a header to a DF without replacing the current one? In other words I just want to shift the current header down and just add it to the dataframe as another record.
*secondary question: How do I add tables (example dataframe) to stackoverflow question?
I have this (note the header and how it was just added as a row):
0.213231 0.314544
0 -0.952928 -0.624646
1 -1.020950 -0.883333
I need this (all other records are shifted down and a new record is added)
(Also: I couldn't read the CSV properly because I'm using s3_text_adapter for the import, and I couldn't figure out how to pass an argument that ignores the header, similar to pandas read_csv):
A B
0 0.213231 0.314544
1 -1.020950 -0.883333

Another option is to add it as an additional level of the column index, to make it a MultiIndex:
In [10]: from numpy.random import randn
In [11]: df = pd.DataFrame(randn(2, 2), columns=['A', 'B'])
In [12]: df
Out[12]:
A B
0 -0.952928 -0.624646
1 -1.020950 -0.883333
In [13]: df.columns = pd.MultiIndex.from_tuples(list(zip(['AA', 'BB'], df.columns)))
In [14]: df
Out[14]:
AA BB
A B
0 -0.952928 -0.624646
1 -1.020950 -0.883333
This has the benefit of keeping the correct dtypes for the DataFrame, so you can still do fast and correct calculations on your DataFrame, and allows you to access by both the old and new column names.
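For example, here is a quick sketch of the kinds of lookups this enables on the frame built above:
df['AA']                      # new top-level name -> sub-frame whose column is 'A'
df[('AA', 'A')]               # full tuple -> the single column as a Series
df.xs('A', axis=1, level=1)   # old name on the inner level -> sub-frame whose column is 'AA'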
For completeness, here's DSM's (deleted) answer, making the columns a row, which, as mentioned already, is usually not a good idea:
In [21]: df_bad_idea = df.T.reset_index().T
In [22]: df_bad_idea
Out[22]:
0 1
index A B
0 -0.952928 -0.624646
1 -1.02095 -0.883333
Note: the dtype may change (since these are column names rather than proper values), as it does in this case, so be careful if you actually plan to do any work on the result; it will likely be slower and may even fail:
In [23]: df.sum()
Out[23]:
A -1.973878
B -1.507979
dtype: float64
In [24]: df_bad_idea.sum() # doh!
Out[24]: Series([], dtype: float64)
If the column names are actually a row that was mistaken for a header row, then you should correct this when reading in the data (e.g. with read_csv, use header=None).
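A minimal sketch of that, assuming the file name and column names here are placeholders:
import pandas as pd

# Treat every row as data and supply the header yourself
df = pd.read_csv('file.csv', header=None, names=['A', 'B'])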

The key is to specify header=None and then assign the column names yourself:
import pandas as pd

df = pd.read_csv('file.csv', skiprows=2, header=None)  # skip blank rows if applicable; read_csv already returns a DataFrame
df = df.iloc[:, [0, 1]]   # keep the first two columns
df.columns = ['A', 'B']   # add the header

Related

How to change the column type of all columns except the first in Pandas?

I have a 6,000 column table that is loaded into a pandas DataFrame. The first column is an ID, the rest are numeric variables. All the columns are currently strings and I need to convert all but the first column to integer.
Many of the approaches I've found either don't allow passing a list of column names, or they drop the first column entirely.
You can do:
df.astype({col: int for col in df.columns[1:]})
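Note that astype returns a new DataFrame rather than modifying in place, so assign the result back. A minimal sketch with made-up data:
import pandas as pd

df = pd.DataFrame({'id': ['a', 'b'], 'x': ['1', '2'], 'y': ['3', '4']})
df = df.astype({col: int for col in df.columns[1:]})
print(df.dtypes)   # id: object, x: int64, y: int64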
An easy trick when you want to perform an operation on all columns but a few is to set the columns to ignore as index:
ignore = ['col1']
df = (df.set_index(ignore, append=True)
        .astype(float)
        .reset_index(ignore)
     )
This should work with any operation even if it doesn't support specifying on which columns to work.
Example input:
df = pd.DataFrame({'col1': list('ABC'),
                   'col2': list('123'),
                   'col3': list('456'),
                   })
output:
>>> df.dtypes
col1 object
col2 float64
col3 float64
dtype: object
Try something like:
df.loc[:, df.columns != 'ID'].astype(int)
Here is some code that can be used in the general case where you want to convert dtypes:
# select columns that need to be converted
cols = df.select_dtypes(include=['float64']).columns.to_list()
cols = ... # here exclude certain columns in cols e.g. the first col
df = df.astype({col:int for col in cols})
You can select str columns and exclude the first column in your case. The idea is basically the same.
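A rough sketch of that idea (the ID column is assumed to be the first column, as in the question): select the object-dtype columns, drop the ID column from the selection, then convert.
str_cols = df.select_dtypes(include='object').columns
cols = [c for c in str_cols if c != df.columns[0]]   # keep the first (ID) column as-is
df = df.astype({col: int for col in cols})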

How to filter for columns where the first row (not header) starts with string

I'm trying to filter a dataframe by the first row, but can't seem to figure out how to do it.
Here's a sample version of the data I'm working with:
In [11]: df = pd.DataFrame(
...: [['Open-Ended Response', 'Open-Ended Response', 'Response', 'Response'], [1, 2, 3, 4]],
...: columns=list('ABCD'),
...: )
In [12]: df
Out[12]:
A B C D
0 Open-Ended Response Open-Ended Response Response Response
1 1 2 3 4
What I want to do is filter for all columns that start with "Response" in the first non-header row. So in this case, I want just the last two columns in their own DataFrame.
I can easily filter the header with something like this:
respo = [col for col in df if col.startswith('Response')]
But it doesn't seem to work on the first non-header row. Importantly, I need to keep the current header after I filter.
Thank you.
First step is to select the values of the first row:
df.iloc[0] # selects the values in the first row
Then, use pandas's .str string-accessor methods, which work on data values rather than column names:
df.iloc[0].str.startswith('Response') # Test the result of the above line
This will give you a Series with True/False values indexed by column name. Finally, use this to select the columns from your dataframe based on the matched labels:
df.loc[:, df.iloc[0].str.startswith('Response')] # Select columns based on the test
This should do the trick!
See pandas's docs on Indexing and Selecting Data and the StringAccessor methods for more help.
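Putting the pieces together on the sample frame above (the commented output is roughly what to expect):
result = df.loc[:, df.iloc[0].str.startswith('Response')]
print(result)
#           C         D
# 0  Response  Response
# 1         3         4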

Appending Rows to CSV File

Excuse my being a total novice. I am writing several columns of data to a CSV file where I would like to maintain the headers every time I run the script to write new data to it.
I have successfully appended data to the CSV every time I run the script, but I cannot get the data to write in a new row. It tries to extend the data on the same row. I need it to have a line break.
df = pd.DataFrame([[date, sales_sum, qty_sum, orders_sum, ship_sum]], columns=['Date', 'Sales', 'Quantity', 'Orders', 'Shipping'])
df.to_csv(r'/profit.csv', header=None, index=None, sep=',', mode='a')
I would like the headers to be on the first row "Date, Sales, Quantity, Orders, Shipping"
Second row will display the actual values.
When running the script again, I would like the third row to be appended with the next day's values only. When passing headers it seems it wants to write the headers again, then write the data again below it. I prefer only one set of headers at the top of the CSV. Is this possible?
Thanks in advance.
Not sure if I completely understood what you are trying to do, but checking the documentation, it seems there is a header option that can be set to False:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html
header : bool or list of str, default True
Write out the column names. If a list of strings is given it is assumed to be
aliases for the column names.
Changed in version 0.24.0: Previously defaulted to False for Series.
Is this what you are looking for?
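One common pattern for getting a single header row at the top of an appended CSV (not shown in the answer above, so treat it as a sketch) is to write the header only when the file does not exist yet:
import os
import pandas as pd

# df is the one-row frame built in the question; the path is a placeholder
out_path = 'profit.csv'
df.to_csv(out_path, mode='a', index=False,
          header=not os.path.exists(out_path))   # header only on the first write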
You can define the main dataframe with the columns you want. Then, for each day, you create a dataframe of only the new rows and append it to the main one. Like this:
Main_df = pd.DataFrame(values, columns=columns)
New_rows = pd.DataFrame(new_values, columns=columns)
Main_df = Main_df.append(New_rows, ignore_index=True)
For example:
df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
print(df)
# A B
#0 1 2
#1 3 4
df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'))
df = df.append(df2, ignore_index=True)
print(df)
# A B
#0 1 2
#1 3 4
#2 5 6
#3 7 8
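Since append is deprecated in pandas 1.4 and removed in 2.0, the same example can be written with pd.concat (a sketch of the equivalent call):
df = pd.concat([df, df2], ignore_index=True)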

How to delete columns based on condition

I want to delete the columns that start with the word "TYPE" and do not contain _1.
df =
TYPE_1 TYPE_2 TYPE_3 COL1
aaa asb bbb 123
The result should be:
df =
TYPE_1 COL1
aaa 123
Currently I am deleting these columns manually, however this approach is not very efficient if the number of columns is big:
df = df.drop(["TYPE_2","TYPE_3"], axis=1)
A list comprehension can be used. Note: axis=1 denotes that we are dropping columns, and inplace=True can also be used, as per the pandas.DataFrame.drop docs.
droplist = [i for i in df.columns if i.startswith('TYPE') and '_1' not in i]
df.drop(droplist, axis=1, inplace=True)
This is the fifth answer, but I wanted to showcase the power of the DataFrame filter method, which filters by column names with a regex. This searches for columns that don't start with TYPE or that have _1 somewhere in them.
df.filter(regex='^(?!TYPE)|_1')
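Applied to the example frame from the question, this should keep only TYPE_1 and COL1:
print(df.filter(regex='^(?!TYPE)|_1'))
#   TYPE_1  COL1
# 0    aaa   123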
Easy:
unwanted = [column for column in df.columns
            if column.startswith("TYPE") and "_1" not in column]
df = df.drop(unwanted, axis=1)
t_cols = [c for c in df.columns.values if c.startswith('TYPE_') and c != 'TYPE_1']
df = df.drop(t_cols, axis=1)
Should do the job

Creating an empty Pandas DataFrame, and then filling it

I'm starting from the pandas DataFrame documentation here: Introduction to data structures
I'd like to iteratively fill the DataFrame with values in a time series kind of calculation. I'd like to initialize the DataFrame with columns A, B, and timestamp rows, all 0 or all NaN.
I'd then add initial values and go over this data calculating the new row from the row before, say row[A][t] = row[A][t-1]+1 or so.
I'm currently using the code as below, but I feel it's kind of ugly and there must be a way to do this with a DataFrame directly or just a better way in general.
Note: I'm using Python 2.7.
import datetime as dt
import pandas as pd
import scipy as s
if __name__ == '__main__':
    base = dt.datetime.today().date()
    dates = [base - dt.timedelta(days=x) for x in range(0, 10)]
    dates.sort()

    valdict = {}
    symbols = ['A', 'B', 'C']
    for symb in symbols:
        valdict[symb] = pd.Series(s.zeros(len(dates)), dates)

    for thedate in dates:
        if thedate > dates[0]:
            for symb in valdict:
                valdict[symb][thedate] = 1 + valdict[symb][thedate - dt.timedelta(days=1)]

    print valdict
NEVER grow a DataFrame row-wise!
TLDR; (just read the bold text)
Most answers here will tell you how to create an empty DataFrame and fill it out, but no one will tell you that it is a bad thing to do.
Here is my advice: Accumulate data in a list, not a DataFrame.
Use a list to collect your data, then initialise a DataFrame when you are ready. Either a list-of-lists or list-of-dicts format will work, pd.DataFrame accepts both.
data = []
for row in some_function_that_yields_data():
    data.append(row)

df = pd.DataFrame(data)
pd.DataFrame converts the list of rows (where each row can be a dict, a list, or a scalar value) into a DataFrame. If your function yields DataFrames instead, call pd.concat.
Pros of this approach:
It is always cheaper to append to a list and create a DataFrame in one go than it is to create an empty DataFrame (or one of NaNs) and append to it over and over again.
Lists also take up less memory and are a much lighter data structure to work with, append, and remove (if needed).
dtypes are automatically inferred (rather than assigning object to all of them).
A RangeIndex is automatically created for your data, instead of you having to take care to assign the correct index to the row you are appending at each iteration.
If you aren't convinced yet, this is also mentioned in the documentation:
Iteratively appending rows to a DataFrame can be more computationally
intensive than a single concatenate. A better solution is to append
those rows to a list and then concatenate the list with the original
DataFrame all at once.
*** Update for pandas >= 1.4: append is now DEPRECATED! ***
As of pandas 1.4, append has now been deprecated! Use pd.concat instead. See the release notes
These options are horrible
append or concat inside a loop
Here is the biggest mistake I've seen from beginners:
df = pd.DataFrame(columns=['A', 'B', 'C'])
for a, b, c in some_function_that_yields_data():
    df = df.append({'A': a, 'B': b, 'C': c}, ignore_index=True)  # yuck
    # or similarly,
    # df = pd.concat([df, pd.DataFrame([{'A': a, 'B': b, 'C': c}])], ignore_index=True)
Memory is re-allocated for every append or concat operation you have. Couple this with a loop and you have a quadratic complexity operation.
The other mistake associated with df.append is that users tend to forget append is not an in-place function, so the result must be assigned back. You also have to worry about the dtypes:
df = pd.DataFrame(columns=['A', 'B', 'C'])
df = df.append({'A': 1, 'B': 12.3, 'C': 'xyz'}, ignore_index=True)
df.dtypes
A object # yuck!
B float64
C object
dtype: object
Dealing with object columns is never a good thing, because pandas cannot vectorize operations on those columns. You will need to do this to fix it:
df.infer_objects().dtypes
A int64
B float64
C object
dtype: object
loc inside a loop
I have also seen loc used to append to a DataFrame that was created empty:
df = pd.DataFrame(columns=['A', 'B', 'C'])
for a, b, c in some_function_that_yields_data():
    df.loc[len(df)] = [a, b, c]
As before, you have not pre-allocated the amount of memory you need each time, so the memory is re-grown each time you create a new row. It's just as bad as append, and even more ugly.
Empty DataFrame of NaNs
And then, there's creating a DataFrame of NaNs, and all the caveats associated therewith.
df = pd.DataFrame(columns=['A', 'B', 'C'], index=range(5))
df
A B C
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
It creates a DataFrame of object columns, like the others.
df.dtypes
A object # you DON'T want this
B object
C object
dtype: object
Appending still has all the issues as the methods above.
for i, (a, b, c) in enumerate(some_function_that_yields_data()):
    df.iloc[i] = [a, b, c]
The Proof is in the Pudding
Timing these methods is the fastest way to see just how much they differ in terms of their memory and utility.
Benchmarking code for reference.
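A minimal timing sketch (the numbers will vary by machine and pandas version) contrasting list accumulation with growing a frame inside the loop:
import time
import pandas as pd

def with_list(n):
    # accumulate plain dicts, build the frame once
    data = [{'A': i, 'B': i * 2} for i in range(n)]
    return pd.DataFrame(data)

def with_concat_in_loop(n):
    # re-allocate the frame on every iteration (the anti-pattern)
    df = pd.DataFrame(columns=['A', 'B'])
    for i in range(n):
        df = pd.concat([df, pd.DataFrame([{'A': i, 'B': i * 2}])], ignore_index=True)
    return df

for fn in (with_list, with_concat_in_loop):
    start = time.perf_counter()
    fn(5_000)
    print(fn.__name__, round(time.perf_counter() - start, 3), 'seconds')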
Here's a couple of suggestions:
Use date_range for the index:
import datetime
import pandas as pd
import numpy as np
todays_date = datetime.datetime.now().date()
index = pd.date_range(todays_date-datetime.timedelta(10), periods=10, freq='D')
columns = ['A','B', 'C']
Note: we could create an empty DataFrame (with NaNs) simply by writing:
df_ = pd.DataFrame(index=index, columns=columns)
df_ = df_.fillna(0) # With 0s rather than NaNs
To do these type of calculations for the data, use a NumPy array:
data = np.array([np.arange(10)]*3).T
Hence we can create the DataFrame:
In [10]: df = pd.DataFrame(data, index=index, columns=columns)
In [11]: df
Out[11]:
A B C
2012-11-29 0 0 0
2012-11-30 1 1 1
2012-12-01 2 2 2
2012-12-02 3 3 3
2012-12-03 4 4 4
2012-12-04 5 5 5
2012-12-05 6 6 6
2012-12-06 7 7 7
2012-12-07 8 8 8
2012-12-08 9 9 9
If you simply want to create an empty data frame and fill it with some incoming data frames later, try this:
newDF = pd.DataFrame()  # creates a new dataframe that's empty
newDF = newDF.append(oldDF, ignore_index=True)  # ignoring the index is optional
# try printing some data from newDF
print(newDF.head())  # again optional
In this example I am using this pandas doc to create a new data frame and then using append to write to the newDF with data from oldDF.
If I have to keep appending new data into this newDF from more than one oldDF, I just use a for loop to iterate over pandas.DataFrame.append().
Note: append() is deprecated since version 1.4.0. Use concat()
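A sketch of the same pattern with concat (oldDF1 and oldDF2 are placeholders for however many frames you need to combine):
import pandas as pd

old_frames = [oldDF1, oldDF2]   # hypothetical frames to combine
newDF = pd.concat(old_frames, ignore_index=True)
print(newDF.head())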
Initialize empty frame with column names
import pandas as pd
col_names = ['A', 'B', 'C']
my_df = pd.DataFrame(columns = col_names)
my_df
Add a new record to a frame
my_df.loc[len(my_df)] = [2, 4, 5]
You also might want to pass a dictionary:
my_dic = {'A':2, 'B':4, 'C':5}
my_df.loc[len(my_df)] = my_dic
Append another frame to your existing frame
col_names = ['A', 'B', 'C']
my_df2 = pd.DataFrame(columns = col_names)
my_df = my_df.append(my_df2)
Performance considerations
If you are adding rows inside a loop, consider performance. For roughly the first 1,000 records, "my_df.loc" performs better, but it gradually becomes slower as the number of records in the loop grows.
If you plan to do this inside a big loop (say 10M records or so), you are better off using a mixture of the two:
fill a temporary dataframe with loc until its size gets to around 1,000, then append it to the original dataframe, and empty the temporary dataframe.
This would boost your performance by around 10 times.
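A rough sketch of that buffered approach (some_function_that_yields_data and the flush size of 1,000 are placeholders):
import pandas as pd

cols = ['A', 'B', 'C']
main_df = pd.DataFrame(columns=cols)
buffer = pd.DataFrame(columns=cols)

for row in some_function_that_yields_data():      # hypothetical generator of rows
    buffer.loc[len(buffer)] = row
    if len(buffer) >= 1000:                        # flush the buffer periodically
        main_df = pd.concat([main_df, buffer], ignore_index=True)
        buffer = pd.DataFrame(columns=cols)        # empty the temp frame again

main_df = pd.concat([main_df, buffer], ignore_index=True)  # flush the remainder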
Simply:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.zeros([rows, columns]))  # rows and columns are the desired dimensions
Then fill it.
Assume a dataframe with 19 rows:
index = range(0, 19)
columns = ['A']
test = pd.DataFrame(index=index, columns=columns)
Keeping column A as a constant:
test['A'] = 10
Keeping column b as a variable given by a loop:
for x in range(0, 19):
    test.loc[[x], 'b'] = pd.Series([x], index=[x])
You can replace the first x in pd.Series([x], index = [x]) with any value
This is my way to make a dynamic dataframe from several lists with a loop:
x = [1,2,3,4,5,6,7,8]
y = [22,12,34,22,65,24,12,11]
z = ['as','ss','wa', 'ss','er','fd','ga','mf']
names = ['Bob', 'Liz', 'chop']
A loop:
def dataF(x, y, z, names):
    res = []
    for t in zip(x, y, z):
        res.append(t)
    return pd.DataFrame(res, columns=names)
Result
dataF(x,y,z,names)
# import pandas library
import pandas as pd
# create a dataframe
my_df = pd.DataFrame({"A": ["shirt"], "B": [1200]})
# show the dataframe
print(my_df)
