How to join columns in CSV files using Pandas in Python

I have a CSV file that looks something like this:
# data.csv (this line is not there in the file)
Names, Age, Names
John, 5, Jane
Rian, 29, Rath
And when I read it through Pandas in Python I get something like this:
import pandas as pd
data = pd.read_csv("data.csv")
print(data)
And the output of the program is:
Names Age Names
0 John 5 Jane
1 Rian 29 Rath
Is there any way to get:
Names Age
0 John 5
1 Rian 29
2 Jane
3 Rath

First, I'd suggest giving each column a unique name: either edit the header row in the csv file itself, or rename the columns in pandas.
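For the pandas route, a minimal sketch: overwrite the column labels right after reading (if the header really repeats 'Names' exactly, read_csv already renames the second occurrence to 'Names.1' on its own):
import pandas as pd

df = pd.read_csv("data.csv")
# assign a fresh, explicitly unique set of labels
df.columns = ['Names', 'Age', 'Names2']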
Using 'Names2' as the header for the second occurrence of the duplicated column name, try this:
Starting from
datalist = [['John', 5, 'Jane'], ['Rian', 29, 'Rath']]
df = pd.DataFrame(datalist, columns=['Names', 'Age', 'Names2'])
We have
Names Age Names2
0 John 5 Jane
1 Rian 29 Rath
So, use:
# Series.append was removed in pandas 2.0, so the two name columns
# are stacked with pd.concat instead
dff = (pd.concat([pd.concat([df['Names'], df['Names2']]).reset_index(drop=True),
                  df.iloc[:, 1]],
                 ignore_index=True, axis=1)
         .fillna('')
         .rename(columns=dict(enumerate(['Names', 'Age']))))
to get your desired result.
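With the sample data, print(dff) should give something like this (Age shows as 5.0 and 29.0 because the two empty rows are NaN before fillna, which forces a float dtype):
Names Age
0 John 5.0
1 Rian 29.0
2 Jane
3 Rath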
From the inside out:
The inner pd.concat stacks the two name columns into one long Series, and reset_index(drop=True) gives it a fresh 0..3 index.
The outer pd.concat(..., axis=1) lines that Series up with the Age column (df.iloc[:, 1]).
To discover what the other calls do, I suggest removing them one by one and looking at the results.
Please forgive the formatting of dff; the expression is spread over several lines to make each step clear from an educational perspective.

You can use:
usecols, which reads only the selected columns.
low_memory=False, which makes pandas read the whole file in one go instead of internally processing it in chunks (the chunked default can produce mixed-dtype guesses).
import pandas as pd
data = pd.read_csv("data.csv", usecols=['Names', 'Age'], low_memory=False)
print(data)
Please make sure the column names in your CSV are unique.
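One caveat with the sample file above: the header row has spaces after the commas, so the parsed names would be ' Age' and ' Names', and usecols=['Names', 'Age'] would raise a ValueError. Passing skipinitialspace=True (a standard read_csv option) strips them:
import pandas as pd

data = pd.read_csv("data.csv", usecols=['Names', 'Age'],
                   low_memory=False, skipinitialspace=True)
print(data)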

Related

How to extract multiple tables from one PDF file using Pandas and tabula-py

Can someone help me extract multiple tables from ONE pdf file? I have 5 pages; every page has a table with the same header columns, for example:
Example table on every page:
student Score Rang
Alex 50 23
Julia 80 12
Mariana 94 4
I want to extract all these tables into one dataframe. First I did:
df = tabula.read_pdf(file_path,pages='all',multiple_tables=True)
But I got a messy output that looks like this:
[student Score Rang
Alex 50 23
Julia 80 12
Mariana 94 4 ,student Score Rang
Maxim 43 34
Nourah 93 5]
So I edited my code like this:
import pandas as pd
import tabula
file_path = "filePath.pdf"
# read my file
df1 = tabula.read_pdf(file_path,pages=1,multiple_tables=True)
df2 = tabula.read_pdf(file_path,pages=2,multiple_tables=True)
df3 = tabula.read_pdf(file_path,pages=3,multiple_tables=True)
df4 = tabula.read_pdf(file_path,pages=4,multiple_tables=True)
df5 = tabula.read_pdf(file_path,pages=5,multiple_tables=True)
This gives me a dataframe for each page, but I don't know how to combine them into one single dataframe, and I'd also like a solution that avoids repeating those lines of code.
According to the documentation of tabula, read_pdf returns a list when passed the multiple_table=True option.
Thus, you can use pandas.concat on its output to concatenate the dataframes:
df = pd.concat(tabula.read_pdf(file_path,pages='all',multiple_tables=True))
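As a fuller sketch (assuming the same file_path as above), ignore_index=True renumbers the rows 0..n-1 across all pages instead of restarting at 0 for each table:
import pandas as pd
import tabula

file_path = "filePath.pdf"
# read_pdf returns one DataFrame per table when multiple_tables=True
tables = tabula.read_pdf(file_path, pages='all', multiple_tables=True)
df = pd.concat(tables, ignore_index=True)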

Compare two columns in two csv files in python

I have two csv files with the same column names:
In file1 I have all the people who took a test, along with their status (passed/missed)
In file2 I only have those who missed the test
I'd like to compare file1.column1 and file2.column1
If they match, then compare file1.column4 and file2.column4
If those are different, remove that line from file2
I can't figure out how to do that.
I looked at pandas but I didn't manage to get anything working.
What I have is:
file1.csv:
name;DOB;service;test status;test date
Smith;12/12/2012;compta;Missed;01/01/2019
foo;02/11/1989;office;Passed;01/01/2019
bar;03/09/1972;sales;Passed;02/03/2018
Doe;25/03/1958;garage;Missed;02/04/2019
Smith;12/12/2012;compta;Passed;04/05/2019
file2.csv:
name;DOB;service;test status;test date
Smith;12/12/2012;compta;Missed;01/01/2019
Doe;25/03/1958;garage;Missed;02/04/2019
What I want to get is:
file1.csv:
name;DOB;service;test status;test date
Smith;12/12/2012;compta;Missed;01/01/2019
foo;02/11/1989;office;Passed;01/01/2019
bar;03/09/1972;sales;Passed;02/03/2018
Doe;25/03/1958;garage;Missed;02/04/2019
Smith;12/12/2012;compta;Passed;04/05/2019
file2.csv:
name;DOB;service;test status;test date
Doe;25/03/1958;garage;Missed;02/04/2019
So first you will have to read both files:
import pandas as pd
df1 = pd.read_csv('file1.csv',delimiter=';')
df2 = pd.read_csv('file2.csv',delimiter=';')
Next, clean up the dataframes, because of the whitespace found in the headers and values:
df1.columns = df1.columns.str.strip()
df2.columns = df2.columns.str.strip()
# Assuming every column holds strings
df1 = df1.apply(lambda column: column.str.strip())
df2 = df2.apply(lambda column: column.str.strip())
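If some columns were not strings (dates parsed as datetimes, say), the blanket apply would fail; a safer variant (a small sketch) strips only the object-dtype columns:
for col in df1.select_dtypes(include='object').columns:
    df1[col] = df1[col].str.strip()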
Now the expected solution, assuming that name is UNIQUE.
Merging the files:
new_merged_df = df2.merge(df1[['name', 'test status']], how='left', on=['name'], suffixes=('', 'file1'))
DataFrame Result:
name DOB service test status test date test statusfile1
0 Smith 12/12/2012 compta Missed 01/01/2019 Missed
1 Smith 12/12/2012 compta Missed 01/01/2019 Passed
2 Doe 25/03/1958 garage Missed 02/04/2019 Missed
Filter based on the requirements, removing the rows whose name appears in file1 with a different test status:
mask = new_merged_df['test status'] != new_merged_df['test statusfile1']
# Check whether any values differ
if len(new_merged_df[mask]) > 0:
    drop_names = list(new_merged_df[mask]['name'])
    # Remove the names we don't want
    new_merged_df = new_merged_df[~new_merged_df['name'].isin(drop_names)]
Remove the helper column and store:
# Saving as a file with the same schema as file2
new_merged_df.drop(columns=['test statusfile1'], inplace=True)
# note: DataFrame.to_csv takes sep=, not delimiter=
new_merged_df.to_csv('file2.csv', sep=';', index=False)
Result
name DOB service test status test date
2 Doe 25/03/1958 garage Missed 02/04/2019
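The same idea in a more compact form (a sketch using only the calls shown above): compute the offending names in one step and filter df2 directly, which avoids carrying the merged helper column around:
merged = df2.merge(df1[['name', 'test status']], how='left', on='name', suffixes=('', 'file1'))
bad_names = merged.loc[merged['test status'] != merged['test statusfile1'], 'name'].unique()
df2[~df2['name'].isin(bad_names)].to_csv('file2.csv', sep=';', index=False)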

How to concatenate sum on apply function and print dataframe as a table format within a file

I am trying to concatenate the 'count' value into the top row of my dataframe.
Here is an example of my starting data:
Name,IP,Application,Count
Tom,100.100.100,MsWord,5
Tom,100.100.100,Excel,10
Fred,200.200.200,Python,1
Fred,200.200.200,MsWord,5
df = pd.DataFrame(data, columns=['Name', 'IP', 'Application', 'Count'])
df_new = df.groupby(['Name', 'IP'])['Count'].apply(lambda x:x.astype(int).sum())
If I print df_new this produces the following output:
Name,IP,Application,Count
Tom,100.100.100,MsWord,15
................Excel,15
Fred,200.200.200,MsWord,6
................Python,6
As you can see, the count has correctly been calculated, for Tom it has added 5 to 10 and got an output of 15. However, this is displayed on every row of the group.
Is there any way to get the output as follows - so the count is only on the first line of the group:
Name,IP,Application,Count
Tom,100.100.100,MsWord,15
.................Excel
Fred,200.200.200,MsWord,6
.................Python
Is there any way to write df_new to a file in this nice format?
I would like the output to appear like a table and almost look like an Excel sheet with merged cells.
I have tried df_new.to_csv('path') but this removes the nice formatting I see when I print df_new to the console.
It is a bit of a challenge to make a DataFrame carry its own summary rows. Generally, a DataFrame lends itself to results that are not dependent on position; something like "the sum shown only on the first line of a group" can be done, but it is better to separate those concerns.
import pandas as pd
from io import StringIO
data = StringIO("""Name,IP,Application,Count
Tom,100.100.100,MsWord,5
Tom,100.100.100,Excel,10
Fred,200.200.200,Python,1
Fred,200.200.200,MsWord,5""")
#df = pd.DataFrame(data, columns=['Name', 'IP', 'Application', 'Count'])
#df_new = df.groupby(['Name', 'IP', 'Application'])['Count'].apply(lambda x:x.astype(int).sum())
df = pd.read_csv(data)
# sum Count per (Name, IP) group; selecting the column explicitly keeps
# the non-numeric Application column out of the aggregation
new_df = df.groupby(['Name', 'IP'])[['Count']].sum()
# flatten the MultiIndex produced by the groupby, then put the same
# (Name, IP) index on both frames so they can be joined on it
new_df.reset_index(inplace=True)
df.set_index(['Name', 'IP'], inplace=True)
new_df.set_index(['Name', 'IP'], inplace=True)
print(df)
Application Count
Name IP
Tom 100.100.100 MsWord 5
100.100.100 Excel 10
Fred 200.200.200 Python 1
200.200.200 MsWord 5
print(new_df)
Count
Name IP
Fred 200.200.200 6
Tom 100.100.100 15
print(new_df.join(df, lsuffix='_lsuffix', rsuffix='_rsuffix'))
Count_lsuffix Application Count_rsuffix
Name IP
Fred 200.200.200 6 Python 1
200.200.200 6 MsWord 5
Tom 100.100.100 15 MsWord 5
100.100.100 15 Excel 10
From here, you can use the multiindex to access the sum of the groups.
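As for writing this nicely formatted view to a file: to_csv flattens it, but DataFrame.to_string() renders the same aligned layout you see on the console, so you can write that string directly (a small sketch, with 'report.txt' as a made-up output path):
joined = new_df.join(df, lsuffix='_lsuffix', rsuffix='_rsuffix')
# to_string() keeps the column alignment and the grouped MultiIndex display
with open('report.txt', 'w') as fh:
    fh.write(joined.to_string())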

Shift down one row then rename the column

My data looks like this:
pd.read_csv('/Users/admin/desktop/007538839.csv').head()
105586.18
0 105582.910
1 105585.230
2 105576.445
3 105580.016
4 105580.266
I want to move that 105586.18 into row 0, because right now it is the column name. After that I want to name this column 'flux'. I've tried
pd.read_csv('/Users/admin/desktop/007538839.csv', sep='\t', names = ["flux"])
but it did not work, probably because the dataframe is not in the right format.
How can I achieve that?
For me, your code works fine:
import pandas as pd
from io import StringIO
temp=u"""105586.18
105582.910
105585.230
105576.445
105580.016
105580.266"""
# io.StringIO replaces pd.compat.StringIO, which was removed from pandas
# after testing, replace 'StringIO(temp)' with '/Users/admin/desktop/007538839.csv'
df = pd.read_csv(StringIO(temp), sep='\t', names=["flux"])
print (df)
flux
0 105586.180
1 105582.910
2 105585.230
3 105576.445
4 105580.016
5 105580.266
To overwrite the original file with the same data and the new flux header:
df.to_csv('/Users/admin/desktop/007538839.csv', index=False)
Try this:
df = pd.read_csv('/Users/admin/desktop/007538839.csv', header=None)
df.columns = ['flux']
header=None is your friend here.
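The two steps can also be combined into a single read_csv call (a small sketch; header=None keeps the first row as data while names assigns the label):
# header=None: treat row one as data; names=: set the column label
df = pd.read_csv('/Users/admin/desktop/007538839.csv', header=None, names=['flux'])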

pandas - write to csv only if column contains certain values

I have a pandas df that looks like this:
df = pd.DataFrame([[1,'hello,bye','USA','3/20/2016 7:00:17 AM'],[2,'good morning','UK','3/20/2016 7:00:20 AM']],columns=['id','text','country','datetime'])
id text country datetime
0 1 hello,bye USA 3/20/2016 7:00:17 AM
1 2 good morning UK 3/20/2016 7:00:20 AM
I want to write this output to csv, but only the rows where the country column contains 'USA'.
This is what I tried:
if 'USA' in df.country.values:
    df.to_csv('test.csv')
but it still writes the entire df to the test.csv file.
Here is a simple solution to your problem. The if condition only checks whether 'USA' appears anywhere in the column; it never filters the rows, so the whole dataframe gets written either way. Select the matching rows with boolean indexing first, then write those:
df = pd.DataFrame([[1,'hello,bye','USA','3/20/2016 7:00:17 AM'],[2,'good morning','UK','3/20/2016 7:00:20 AM']],columns=['id','text','country','datetime'])
usa_rows = df[df['country'] == 'USA']
# only create the file when there is something to write
if not usa_rows.empty:
    usa_rows.to_csv('test.csv', index=False)
Hope this helps you :)
