When I transpose a dataframe, it shows NaN values - Python

import pymysql
import pandas as pd

conn = pymysql.connect(host="localhost", port=3306, db="school",
                       user="root", password="#mit123")
print("Connection established successfully")
cursor = conn.cursor()
sql = "SELECT * FROM records"
cursor.execute(sql)
result = cursor.fetchall()
data = result
df = pd.DataFrame(data)
df1 = df.T
print(df)
print(df1)
df2 = pd.DataFrame(df1, index=["id", "name", "rollno.", "city"])
print(df2)
The following is the output. What could be causing the problem? Can't I transpose a data frame into another data frame?
Connection established successfully
0 1 2 3 4
0 1 amit 1 92 jorhat
1 2 subham 2 93 jorhat
2 3 ram 3 89 surat
3 4 anil 4 91 delhi
4 5 abdul 5 81 bhopal
5 6 joseph 6 90 sikkim
6 7 Ben 7 94 indore
7 8 tom 8 99 goa
0 1 2 3 4 5 6 7
0 1 2 3 4 5 6 7 8
1 amit subham ram anil abdul joseph Ben tom
2 1 2 3 4 5 6 7 8
3 92 93 89 91 81 90 94 99
4 jorhat jorhat surat delhi bhopal sikkim indore goa
0 1 2 3 4 5 6 7
id NaN NaN NaN NaN NaN NaN NaN NaN
name NaN NaN NaN NaN NaN NaN NaN NaN
rollno. NaN NaN NaN NaN NaN NaN NaN NaN
city NaN NaN NaN NaN NaN NaN NaN NaN
Process finished with exit code 0
This is my SQL table:
Also, when I pass an index to the data frame, it raises a shape error:
Shape of passed values is (5, 8), indices imply (4, 8)

I could reproduce the NaN error using my database, so I think the reason is that there are no column names there. Passing an existing DataFrame to pd.DataFrame together with a new index does not relabel the rows; it reindexes, i.e. selects rows by label, and since df1's index is just 0 to 4, the string labels match nothing and every cell comes back NaN.
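A minimal sketch of that alignment behaviour, on toy data rather than your table:
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})   # default index: 0, 1
print(pd.DataFrame(df, index=['x', 'y']))       # reindex by label: all NaN
relabeled = df.copy()
relabeled.index = ['x', 'y']                    # relabel instead: data kept
print(relabeled)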
So you can do the following:
import pymysql
import pandas as pd

conn = pymysql.connect(host="localhost",
                       port=3306,
                       db="school",
                       user="root",
                       password="#mit123")
print("Connection established successfully")
sql = "SELECT * FROM records"
df = pd.read_sql(con=conn, sql=sql)
df1 = df.T
print(df)
print(df1)
df2 = pd.DataFrame(df1, index=["id", "name", "roll_number", "city"])
print(df2)
This solves the NaN error.
The shape error may be because you are not passing the "percentage" column to the index: the transposed frame has five rows (id, name, roll number, percentage, city) but only four labels were supplied, which matches the shapes (5, 8) and (4, 8) in the message. I was unable to reproduce this error, though.
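If that is the cause, the fix is to supply one label per row of the transpose, five rather than four. A hedged sketch, assuming the fifth column really is called percentage:
df2 = df1.copy()
df2.index = ["id", "name", "rollno", "percentage", "city"]  # five labels for five rows
print(df2)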

Related

Insert rows from dataframeB to DataframeA with keys and without Merge

I have a dataframe with thousands of records:
ID to from Date price Type
1 69 18 2/2020 10 A
2 11 12 2/2020 5 A
3 18 10 3/2020 4 B
4 10 11 3/2020 10 A
5 12 69 3/2020 4 B
6 12 20 3/2020 3 B
7 69 21 3/2020 3 A
The output that I want is:
ID to from Date price Type ID to from Date price Type
1 69 18 2/2020 4 A 5 12 69 3/2020 4 B
1' 69 18 2/2020 6 A NaN NaN NaN NaN NaN NaN
2 11 12 2/2020 5 A NaN NaN NaN NaN NaN NaN
4 10 11 3/2020 4 A 3 18 10 3/2020 4 B
4' 10 11 3/2020 6 A NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN 6 12 20 3/2020 3 B
7 69 21 3/2020 3 A NaN NaN NaN NaN NaN NaN
The idea is to iterate over the rows: if the type is B, put the row next to the first type-A record whose to equals the B row's from.
If the prices are equal, that's fine; if not, split the row with the higher price, and the new row carries the subtracted remainder.
I divided the dataframe into types A and B, and I am trying to iterate over both of them:
grp = df.groupby('Type')
transformed_df_list = []
for idx, frame in grp:
    frame.reset_index(drop=True, inplace=True)
    transformed_df_list.append(frame.copy())
A = transformed_df_list[0]
B = transformed_df_list[1]
for i, row in A.iterrows():
    for j, row1 in B.iterrows():
        if row['to'] == row1['from']:
            if row['price'] == row1['price']:
                row_df = pd.DataFrame([row1])
output = pd.merge(A, B, how='left', left_on=['to'], right_on=['from'])
The problem is that with the merge function I get several duplicate rows, and I can't check the price to split the row. Is there a way to insert the B rows into the A dataframe without the merge function?
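One way to avoid merge entirely is to pair the rows by hand: walk the A rows, consume the first unused B row whose from matches the A row's to, and append the leftover B rows with NaNs on the A side. A minimal sketch of that pairing step on the sample data above; the price-splitting rule from the question is only marked with a comment, not implemented:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID':    [1, 2, 3, 4, 5, 6, 7],
    'to':    [69, 11, 18, 10, 12, 12, 69],
    'from':  [18, 12, 10, 11, 69, 20, 21],
    'Date':  ['2/2020', '2/2020', '3/2020', '3/2020', '3/2020', '3/2020', '3/2020'],
    'price': [10, 5, 4, 10, 4, 3, 3],
    'Type':  ['A', 'A', 'B', 'A', 'B', 'B', 'A'],
})
A = df[df['Type'] == 'A'].reset_index(drop=True)
B = df[df['Type'] == 'B'].reset_index(drop=True)

used, rows = set(), []
for _, a in A.iterrows():
    match = next((j for j, b in B.iterrows()
                  if j not in used and a['to'] == b['from']), None)
    if match is None:
        rows.append(list(a) + [np.nan] * len(B.columns))
    else:
        used.add(match)
        # If the prices differ, the question wants the higher-priced row
        # split and the remainder carried into a new row; that step is
        # omitted here.
        rows.append(list(a) + list(B.loc[match]))
for j, b in B.iterrows():        # leftover B rows: NaNs on the A side
    if j not in used:
        rows.append([np.nan] * len(A.columns) + list(b))

out = pd.DataFrame(rows, columns=list(A.columns) + list(B.columns))
print(out)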

Setting subset of a pandas DataFrame by a DataFrame

I feel like this question has been asked a million times before, but I just can't seem to get it to work or find an SO post answering my question.
So I am selecting a subset of a pandas DataFrame and want to change these values individually.
I am subselecting my DataFrame like this:
df.loc[df[key].isnull(), [keys]]
which works perfectly. If I try and set all values to the same value such as
df.loc[df[key].isnull(), [keys]] = 5
it works as well. But if I try and set it to a DataFrame it does not, however no error is produced either.
So for example I have a DataFrame:
data = [['Alex',10,0,0,2],['Bob',12,0,0,1],['Clarke',13,0,0,4],['Dennis',64,2],['Jennifer',56,1],['Tom',95,5],['Ellen',42,2],['Heather',31,3]]
df1 = pd.DataFrame(data,columns=['Name','Age','Amount_of_cars','cars_per_year','some_other_value'])
Name Age Amount_of_cars cars_per_year some_other_value
0 Alex 10 0 0.0 2.0
1 Bob 12 0 0.0 1.0
2 Clarke 13 0 0.0 4.0
3 Dennis 64 2 NaN NaN
4 Jennifer 56 1 NaN NaN
5 Tom 95 5 NaN NaN
6 Ellen 42 2 NaN NaN
7 Heather 31 3 NaN NaN
and a second DataFrame:
data = [[2/64,5],[1/56,1],[5/95,7],[2/42,5],[3/31,7]]
df2 = pd.DataFrame(data,columns=['cars_per_year','some_other_value'])
cars_per_year some_other_value
0 0.031250 5
1 0.017857 1
2 0.052632 7
3 0.047619 5
4 0.096774 7
and I would like to replace those NaNs with the second DataFrame:
df1.loc[df1['cars_per_year'].isnull(),['cars_per_year','some_other_value']] = df2
Unfortunately this does not work, as the indexes do not match. So how do I ignore the index when setting values?
Any help would be appreciated. Sorry if this has been posted before.
It is possible only if the number of missing values is the same as the number of rows in df2; then assign the underlying array to prevent index alignment:
df1.loc[df1['cars_per_year'].isnull(),['cars_per_year','some_other_value']] = df2.values
print (df1)
Name Age Amount_of_cars cars_per_year some_other_value
0 Alex 10 0 0.000000 2.0
1 Bob 12 0 0.000000 1.0
2 Clarke 13 0 0.000000 4.0
3 Dennis 64 2 0.031250 5.0
4 Jennifer 56 1 0.017857 1.0
5 Tom 95 5 0.052632 7.0
6 Ellen 42 2 0.047619 5.0
7 Heather 31 3 0.096774 7.0
If not, you get errors like:
# 4 rows assigned to 5 missing values
data = [[2/64,5],[1/56,1],[5/95,7],[2/42,5]]
df2 = pd.DataFrame(data,columns=['cars_per_year','some_other_value'])
df1.loc[df1['cars_per_year'].isnull(),['cars_per_year','some_other_value']] = df2.values
ValueError: shape mismatch: value array of shape (4,) could not be broadcast to indexing result of shape (5,)
Another idea is to set the index of df2 from the index of the filtered rows in df1:
df2 = df2.set_index(df1.index[df1['cars_per_year'].isnull()])
df1.loc[df1['cars_per_year'].isnull(),['cars_per_year','some_other_value']] = df2
print (df1)
Name Age Amount_of_cars cars_per_year some_other_value
0 Alex 10 0 0.000000 2.0
1 Bob 12 0 0.000000 1.0
2 Clarke 13 0 0.000000 4.0
3 Dennis 64 2 0.031250 5.0
4 Jennifer 56 1 0.017857 1.0
5 Tom 95 5 0.052632 7.0
6 Ellen 42 2 0.047619 5.0
7 Heather 31 3 0.096774 7.0
Just add .values, or .to_numpy() if using pandas v0.24+:
df1.loc[df1['cars_per_year'].isnull(),['cars_per_year','some_other_value']] = df2.values
Name Age Amount_of_cars cars_per_year some_other_value
0 Alex 10 0 0.000000 2.0
1 Bob 12 0 0.000000 1.0
2 Clarke 13 0 0.000000 4.0
3 Dennis 64 2 0.031250 5.0
4 Jennifer 56 1 0.017857 1.0
5 Tom 95 5 0.052632 7.0
6 Ellen 42 2 0.047619 5.0
7 Heather 31 3 0.096774 7.0
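For completeness, the same assignment spelled with .to_numpy(), which is the recommended form on pandas 0.24 or newer:
df1.loc[df1['cars_per_year'].isnull(), ['cars_per_year', 'some_other_value']] = df2.to_numpy()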

Use mapping between two columns to create chain in pandas dataframe

Here is a test dataframe. I want to use the relationship between EmpID and MgrID to further map the manager of MgrID in a new column.
Test_df = pd.DataFrame({'EmpID': ['1','2','3','4','5','6','7','8','9','10'],
                        'MgrID': ['4','4','4','6','8','8','10','10','10','12']})
Test_df
If I create a dictionary for the initial relationship, I will be able to create the first link of the chain, but I'm afraid I would need to loop through each of the new columns to create the next one.
ID_Dict = {'1': '4',
           '2': '4',
           '3': '4',
           '4': '6',
           '5': '8',
           '6': '8',
           '7': '10',
           '8': '10',
           '9': '10',
           '10': '12'}
Test_df['MgrID_L2'] = Test_df['MgrID'].map(ID_Dict)
Test_df
What is the most efficient way to do this?
Thank you!
Here's a way with a simple while loop. Note I changed the name of MgrID to MgrID_1.
Test_df = pd.DataFrame({'EmpID': ['1','2','3','4','5','6','7','8','9','10'],
                        'MgrID_1': ['4','4','4','6','8','8','10','10','10','12']})
d = Test_df.set_index('EmpID').MgrID_1.to_dict()
s = 2
while s:
    Test_df['MgrID_'+str(s)] = Test_df['MgrID_'+str(s-1)].map(d)
    if Test_df['MgrID_'+str(s)].isnull().all():
        Test_df = Test_df.drop(columns='MgrID_'+str(s))
        s = 0
    else:
        s += 1
Output: Test_df
EmpID MgrID_1 MgrID_2 MgrID_3 MgrID_4 MgrID_5
0 1 4 6 8 10 12
1 2 4 6 8 10 12
2 3 4 6 8 10 12
3 4 6 8 10 12 NaN
4 5 8 10 12 NaN NaN
5 6 8 10 12 NaN NaN
6 7 10 12 NaN NaN NaN
7 8 10 12 NaN NaN NaN
8 9 10 12 NaN NaN NaN
9 10 12 NaN NaN NaN NaN

combine data in pandas

I have a pandas dataframe like this:
index integer_2_x integer_2_y
0 49348 NaN
1 26005 NaN
2 5 NaN
3 NaN 26
4 26129 NaN
5 129 NaN
6 NaN 26
7 NaN 17
8 60657 NaN
9 17031 NaN
I want to make a third column that looks like this, taking the numeric value from whichever of the first two columns has one and eliminating the NaN. How do I do this?
index integer_2_z
0 49348
1 26005
2 5
3 26
4 26129
5 129
6 26
7 17
8 60657
9 17031
One way is to use the update function.
import pandas as pd
import numpy as np
# some artificial data
# ========================
df = pd.DataFrame({'X':[10,20,np.nan,40,np.nan], 'Y':[np.nan,np.nan,30,np.nan,50]})
print(df)
X Y
0 10 NaN
1 20 NaN
2 NaN 30
3 40 NaN
4 NaN 50
# processing
# =======================
df['Z'] = df['X']
# for every missing value in column Z, replace it with value in column Y
df['Z'].update(df['Y'])
print(df)
X Y Z
0 10 NaN 10
1 20 NaN 20
2 NaN 30 30
3 40 NaN 40
4 NaN 50 50
I used http://pandas.pydata.org/pandas-docs/stable/basics.html#general-dataframe-combine
import pandas as pd
import numpy as np
df = pd.read_csv("data", sep=r"\s+")  # cut and pasted your data into 'data' file
df["integer_2_z"] = df["integer_2_x"].combine(df["integer_2_y"], lambda x, y: np.where(pd.isnull(x), y, x))
Output
index integer_2_x integer_2_y integer_2_z
0 0 49348 NaN 49348
1 1 26005 NaN 26005
2 2 5 NaN 5
3 3 NaN 26 26
4 4 26129 NaN 26129
5 5 129 NaN 129
6 6 NaN 26 26
7 7 NaN 17 17
8 8 60657 NaN 60657
9 9 17031 NaN 17031
Maybe you can simply use the fillna function.
# Creating the DataFrame
df = pd.DataFrame({'integer_2_x': [49348, 26005, 5, np.nan, 26129, 129, np.nan, np.nan, 60657, 17031],
                   'integer_2_y': [np.nan, np.nan, np.nan, 26, np.nan, np.nan, 26, 17, np.nan, np.nan]})
# Using fillna to fill a new column
df['integer_2_z'] = df['integer_2_x'].fillna(df['integer_2_y'])
# Printing the result below, you can also drop x and y columns if they are no more required
print(df)
integer_2_x integer_2_y integer_2_z
0 49348 NaN 49348
1 26005 NaN 26005
2 5 NaN 5
3 NaN 26 26
4 26129 NaN 26129
5 129 NaN 129
6 NaN 26 26
7 NaN 17 17
8 60657 NaN 60657
9 17031 NaN 17031

How to process excel file headers using pandas/python

I am trying to read https://www.whatdotheyknow.com/request/193811/response/480664/attach/3/GCSE%20IGCSE%20results%20v3.xlsx using pandas.
Having saved it, my script is:
import sys
import pandas as pd
inputfile = sys.argv[1]
xl = pd.ExcelFile(inputfile)
# print(xl.sheet_names)
df = xl.parse(xl.sheet_names[0])
print(df.head())
However, this does not seem to process the headers properly, as it gives:
GCSE and IGCSE1 results2,3 in selected subjects4 of pupils at the end of key stage 4 Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4 Unnamed: 5 Unnamed: 6 Unnamed: 7 Unnamed: 8 Unnamed: 9 Unnamed: 10
0 Year: 2010/11 (Final) NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 Coverage: England NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 1. Includes International GCSE, Cambridge Inte... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 2. Includes attempts and achievements by these... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
All of this should be treated as comments.
If you load the spreadsheet into libreoffice, for example, you can see that the column headings are correctly parsed and appear in row 15 with drop down menus to let you select the items you want.
How can you get pandas to automatically detect where the column headers are just as libreoffice does?
pandas is (are?) processing the file correctly, and exactly the way you asked it (them?) to. You didn't specify a header value, which means that it defaults to picking up the column names from the 0th row. The first few rows of cells aren't comments in some fundamental way, they're just not cells you're interested in.
Simply tell parse you want to skip some rows:
>>> xl = pd.ExcelFile("GCSE IGCSE results v3.xlsx")
>>> df = xl.parse(xl.sheet_names[0], skiprows=14)
>>> df.columns
Index([u'Local Authority Number', u'Local Authority Name', u'Local Authority Establishment Number', u'Unique Reference Number', u'School Name', u'Town', u'Number of pupils at the end of key stage 4', u'Number of pupils attempting a GCSE or an IGCSE', u'Number of students achieving 8 or more GCSE or IGCSE passes at A*-G', u'Number of students achieving 8 or more GCSE or IGCSE passes at A*-A', u'Number of students achieving 5 A*-A grades or more at GCSE or IGCSE'], dtype='object')
>>> df.head()
Local Authority Number Local Authority Name \
0 201 City of london
1 201 City of london
2 202 Camden
3 202 Camden
4 202 Camden
Local Authority Establishment Number Unique Reference Number \
0 2016005 100001
1 2016007 100003
2 2024104 100049
3 2024166 100050
4 2024196 100051
School Name Town \
0 City of London School for Girls London
1 City of London School London
2 Haverstock School London
3 Parliament Hill School London
4 Regent High School London
Number of pupils at the end of key stage 4 \
0 105
1 140
2 200
3 172
4 174
Number of pupils attempting a GCSE or an IGCSE \
0 104
1 140
2 194
3 169
4 171
Number of students achieving 8 or more GCSE or IGCSE passes at A*-G \
0 100
1 108
2 SUPP
3 22
4 0
Number of students achieving 8 or more GCSE or IGCSE passes at A*-A \
0 87
1 75
2 0
3 7
4 0
Number of students achieving 5 A*-A grades or more at GCSE or IGCSE
0 100
1 123
2 0
3 34
4 SUPP
[5 rows x 11 columns]
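The hardcoded skiprows=14 works for this particular file. If you want pandas to find the header row automatically, the way LibreOffice appears to, one heuristic sketch is to read the sheet without headers and take the first fully populated row as the header; that assumption holds for this file but is not a general rule:
import pandas as pd

def detect_header_row(path, sheet=0):
    # Read with no header, then return the index of the first row
    # that has no empty cells; treat everything above it as preamble.
    raw = pd.read_excel(path, sheet_name=sheet, header=None)
    for i, row in raw.iterrows():
        if row.notna().all():
            return i
    return 0

path = "GCSE IGCSE results v3.xlsx"
df = pd.read_excel(path, header=detect_header_row(path))
print(df.columns)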
