I tried to merge two data frames by appending the first row of the second df to the first row of the first df. I also tried to concatenate them, but both attempts failed.
The format of the data is:
1,3,N0128,Durchm.,5.0,0.1,5.0760000000000005,0.076,-----****--
2,0.000,,,,,,,
3,3,N0129,Position,62.2,0.376,62.238,0.136,***---
4,76.1,-36.000,0.300,-36.057,,,,
5,2,N0130,Durchm.,5.0,0.1,5.067,0.067,-----***---
6,0.000,,,,,,,
The expected output format is:
1,3,N0128,Durchm.,5.0,0.1,5.0760000000000005,0.076,-----****--,0.000,,,,,,,
2,3,N0129,Position,62.2,0.376,62.238,0.136,***---,76.1,-36.000,0.300,-36.057,,,,
3,2,N0130,Durchm.,5.0,0.1,5.067,0.067,-----***---,0.000,,,,,,,
I have already split the dataframe above into two frames: the first contains only the odd indices and the second only the even ones.
My problem now is to merge/concatenate the two frames by appending the first row of the second df to the first row of the first df. I have already tried several merge/concatenate methods, but all of them failed. The print calls are not necessary; I only use them for a quick overview in the console.
The code I felt most comfortable with is:
import os
import pandas as pd

os.chdir(output)
csv_files = os.listdir('.')
for csv_file in csv_files:
    if csv_file.endswith(".asc.csv"):
        df = pd.read_csv(csv_file)
        keep_col = ['Messpunkt', 'Zeichnungspunkt', 'Eigenschaft', 'Position',
                    'Sollmass', 'Toleranz', 'Abweichung', 'Lage']
        new_df = df[keep_col]
        # drop the separator rows
        new_df = new_df[~new_df['Messpunkt'].isin(['**Teil'])]
        new_df = new_df[~new_df['Messpunkt'].isin(['**KS-Oben'])]
        new_df = new_df[~new_df['Messpunkt'].isin(['**KS-Unten'])]
        new_df = new_df[~new_df['Messpunkt'].isin(['**N'])]
        print(new_df)
        new_df.to_csv(output + csv_file)
        df1 = new_df[new_df.index % 2 == 1]
        df2 = new_df[new_df.index % 2 == 0]
        df1.reset_index()
        df2.reset_index()
        print(df1)
        print(df2)
        merge_df = pd.concat([df1, df2], axis=1)
        print(merge_df)
        merge_df.to_csv(output + csv_file)
I would highly appreciate some help.
With this code, the output is:
1,3,N0128,Durchm.,5.0,0.1,5.0760000000000005,0.076,-----****--,,,,,,,,
2,,,,,,,,,0.000,,,,,,,
3,3,N0129,Position,62.2,0.376,62.238,0.136,***---,,,,,,,,
4,,,,,,,,,76.1,-36.000,0.300,-36.057,,,,
5,2,N0130,Durchm.,5.0,0.1,5.067,0.067,-----***---,,,,,,,,
6,,,,,,,,,0.000,,,,,,,
I get the expected result when I use reset_index() so that both DataFrames have the same index.
You may also need drop=True to avoid adding the old index as a new column:
pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)
Minimal working example below.
I use io only to simulate a file in memory.
import io
import pandas as pd

text = '''1,3,N0128,Durchm.,5.0,0.1,5.0760000000000005,0.076,-----****--
2,0.000,,,,,,,
3,3,N0129,Position,62.2,0.376,62.238,0.136,***---
4,76.1,-36.000,0.300,-36.057,,,,
5,2,N0130,Durchm.,5.0,0.1,5.067,0.067,-----***---
6,0.000,,,,,,,'''

pd.options.display.max_columns = 20  # to display all columns

df = pd.read_csv(io.StringIO(text), header=None, index_col=0)
#print(df)

df1 = df[df.index % 2 == 1]  # .reset_index(drop=True)
df2 = df[df.index % 2 == 0]  # .reset_index(drop=True)
#print(df1)
#print(df2)

merge_df = pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)
print(merge_df)
Result:
1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8
0 3.0 N0128 Durchm. 5.0 0.100 5.076 0.076 -----****-- 0.0 NaN NaN NaN NaN NaN NaN NaN
1 3.0 N0129 Position 62.2 0.376 62.238 0.136 ***--- 76.1 -36.000 0.300 -36.057 NaN NaN NaN NaN
2 2.0 N0130 Durchm. 5.0 0.100 5.067 0.067 -----***--- 0.0 NaN NaN NaN NaN NaN NaN NaN
EDIT:
You may also need
merge_df.index = merge_df.index + 1
to correct the index so that numbering starts at 1, as in the expected output.
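Plugged into the loop from the question, a minimal sketch of the fix (reusing new_df, output and csv_file from the code above) could look like:
df1 = new_df[new_df.index % 2 == 1].reset_index(drop=True)
df2 = new_df[new_df.index % 2 == 0].reset_index(drop=True)
merge_df = pd.concat([df1, df2], axis=1)
merge_df.index = merge_df.index + 1  # start numbering at 1, as in the expected output
merge_df.to_csv(output + csv_file)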
I have a pandas DataFrame of 233 rows by 234 columns, and I need to evaluate every cell and return the corresponding column header if the cell is not NaN. So far I have written the following:
# First get a list of all column names (except column 0):
col_list = []
for column in df.columns[1:]:
    col_list.append(column)

# Then I try to iterate through every cell and evaluate it for NaN.
# A counter is also initialized to take the next col_name from col_list
# when count reaches 233.
for index, row in df.iterrows():
    count = 0
    for x in row[1:]:
        count = count + 1
        for col_name in col_list:
            if count >= 233:
                break
            elif str(x) != 'nan':
                print(col_name)
The code does not do exactly that. What do I need to change so that the code breaks after 233 rows and goes on to the next col_name?
Example:
Col_1 Col_2 Col_3
1 nan 13 nan
2 10 nan nan
3 nan 2 5
4 nan nan 4
output:
1 Col_2
2 Col_1
3 Col_2
4 Col_3
5 Col_3
I think you need stack (if the first column is the index): it removes all NaNs. Then get the values from the second level of the MultiIndex, either by reset_index and selecting the column, or with the Series constructor and Index.get_level_values:
s = df.stack().reset_index()['level_1'].rename('a')
print (s)
0 Col_2
1 Col_1
2 Col_2
3 Col_3
4 Col_3
Name: a, dtype: object
Or:
s = pd.Series(df.stack().index.get_level_values(1))
print (s)
0 Col_2
1 Col_1
2 Col_2
3 Col_3
4 Col_3
dtype: object
If need output as list:
L = df.stack().index.get_level_values(1).tolist()
print (L)
['Col_2', 'Col_1', 'Col_2', 'Col_3', 'Col_3']
Detail:
print (df.stack())
1 Col_2 13.0
2 Col_1 10.0
3 Col_2 2.0
Col_3 5.0
4 Col_3 4.0
dtype: float64
I'd use jezrael's stack solution.
However, if you're interested, here is the NumPy way, which is usually faster:
In [4889]: np.tile(df.columns, df.shape[0])[~np.isnan(df.values.ravel())]
Out[4889]: array(['Col_2', 'Col_1', 'Col_2', 'Col_3', 'Col_3'], dtype=object)
Timings
In [4913]: df.shape
Out[4913]: (100, 3)
In [4914]: %timeit np.tile(df.columns, df.shape[0])[~np.isnan(df.values.ravel())]
10000 loops, best of 3: 35.8 µs per loop
In [4915]: %timeit df.stack().index.get_level_values(1)
1000 loops, best of 3: 335 µs per loop
In [4905]: df.shape
Out[4905]: (100000, 3)
In [4907]: %timeit np.tile(df.columns, df.shape[0])[~np.isnan(df.values.ravel())]
100 loops, best of 3: 5.98 ms per loop
In [4908]: %timeit df.stack().index.get_level_values(1)
100 loops, best of 3: 11.7 ms per loop
Choose based on your needs (readability, speed, maintainability, etc.).
You can use dropna:
df.dropna(axis=1).columns
axis : {0 or ‘index’, 1 or ‘columns’}
how : {‘any’, ‘all’}
Basically you use dropna to remove the nulls: axis=1 drops columns, how="any" (the default) removes a column if at least one value in it is null, and .columns gets the remaining headers.
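Note that df.dropna(axis=1).columns keeps only the columns that contain no NaN at all; to get the non-NaN column names per row, as in the expected output above, a row-wise sketch (using apply on the example data from the question) could be:
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col_1': [np.nan, 10, np.nan, np.nan],
                   'Col_2': [13, np.nan, 2, np.nan],
                   'Col_3': [np.nan, np.nan, 5, 4]},
                  index=[1, 2, 3, 4])

# For each row, drop the NaNs and keep the surviving column labels.
s = df.apply(lambda row: row.dropna().index.tolist(), axis=1)
print(s)
# 1           [Col_2]
# 2           [Col_1]
# 3    [Col_2, Col_3]
# 4           [Col_3]
# dtype: object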
I am trying to assign the output from value_counts to a new df. My code follows.
import pandas as pd
import glob
df = pd.concat((pd.read_csv(f, names=['date','bill_id','sponsor_id']) for f in glob.glob('/home/jayaramdas/anaconda3/df/s11?_s_b')))
column_list = ['date', 'bill_id']
df = df.set_index(column_list, drop=True)
df = df['sponsor_id'].value_counts()
df.columns=['sponsor', 'num_bills']
print (df)
The value counts are not being assigned the column headers 'sponsor' and 'num_bills' that I specified. I'm getting the following output when printing the head:
1036 426
791 408
1332 401
1828 388
136 335
Name: sponsor_id, dtype: int64
Your column length doesn't match: you read 3 columns from the csv and then set the index to 2 of them. You then calculated value_counts, which produces a Series with the column values as the index and the counts as the values. You need to reset_index and then overwrite the column names:
df = df.reset_index()
df.columns=['sponsor', 'num_bills']
Example:
In [276]:
df = pd.DataFrame({'col_name':['a','a','a','b','b']})
df
Out[276]:
col_name
0 a
1 a
2 a
3 b
4 b
In [277]:
df['col_name'].value_counts()
Out[277]:
a 3
b 2
Name: col_name, dtype: int64
In [278]:
type(df['col_name'].value_counts())
Out[278]:
pandas.core.series.Series
In [279]:
df = df['col_name'].value_counts().reset_index()
df.columns = ['col_name', 'count']
df
Out[279]:
col_name count
0 a 3
1 b 2
Applying value_counts() to a multi-column dataframe:
df = pd.DataFrame({'C1':['A','B','A'],'C2':['A','B','A']})
vc_df = df.value_counts().to_frame('Count').reset_index()
display(df, vc_df)
C1 C2
0 A A
1 B B
2 A A
C1 C2 Count
0 A A 2
1 B B 1
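Applied to the original question, the same pattern would be (a sketch, assuming the sponsor_id column from the asker's code):
# value_counts() yields a Series: sponsor ids in the index, counts as values.
vc_df = (df['sponsor_id']
         .value_counts()
         .rename_axis('sponsor')          # name the index before resetting it
         .reset_index(name='num_bills'))  # the index becomes the 'sponsor' column
print(vc_df.head())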