Breaking up a dataframe column into two columns - python

I'm having a problem splitting a column into two separate columns. When I run my code it completes without any errors, but the dataframe is still unchanged. I'm not sure where my mistake is.
import pandas as pd
import requests

url = 'https://www.teamrankings.com/nba/player/clint-capela/game-log'
html = requests.get(url).content
get_log = pd.read_html(html)
players_log = get_log[0]
game_log = players_log.head()
players_log.info()
players_log.join(players_log['FGM-FGA'].str.split('-', 1, expand=True).rename(columns={0:'A', 1:'B'}))
players_log.info()
print(game_log)

Pandas operations generally don't modify the original dataframe; they return a new object, so
players_log = players_log.join(players_log['FGM-FGA'].str.split('-', n=1, expand=True).rename(columns={0: 'A', 1: 'B'}))
should do it. (Two side notes: in recent pandas versions the n argument of str.split is keyword-only, so passing it positionally as in the question raises a TypeError; and game_log was captured with head() before the join, so print(game_log) will still show the old columns either way.)
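If you want two named columns instead of the generic 0 and 1, the split result can also be assigned directly; a minimal sketch (the names FGM and FGA are just illustrative):
players_log[['FGM', 'FGA']] = players_log['FGM-FGA'].str.split('-', n=1, expand=True)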

Related

How to delete specific values from a column in a dataset (Python)?

I have a data set with a 'Grad Intention' column.
I want to remove the rows where 'Grad Intention' is 'Undecided'. For this, I created a copy DataFrame and used the following code:
df_copy = df_copy.drop([df_copy['Grad Intention'] == 'Undecided'], axis=1)
However, this is giving me an error.
How can I remove the rows with 'Undecided'? Also, what's wrong with my code?
You could simply use:
df = df[df['Grad Intention'] != 'Undecided']
or
df.drop(df[df['Grad Intention'] == 'Undecided'].index, inplace=True)
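As for what's wrong with the original code: axis=1 tells drop to remove columns by label, and a boolean Series wrapped in a list is not a valid column label, hence the error. To drop rows by a condition, pass the matching index labels instead; a minimal sketch:

import pandas as pd

df = pd.DataFrame({'Grad Intention': ['Yes', 'Undecided', 'No']})
mask = df['Grad Intention'] == 'Undecided'  # boolean mask of the rows to remove
df = df.drop(df[mask].index)                # drops rows; axis=0 is the default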

Fill specific columns in pandas dataframe rows with values from another dataframe

I am trying to replace some missing and incorrect values in my master dataset by filling them in with correct values from two other datasets.
I created a miniature version of the full dataset like so (note: the real dataset is several thousand rows long):
import pandas as pd

data = {'From': ['GA0251', 'GA5201', 'GA5551', 'GA510A', 'GA5171', 'GA5151'],
        'To': ['GA0201_T', 'GA5151_T', 'GA5151_R', 'GA5151_V', 'GA5151_P', 'GA5171_B'],
        'From_Latitude': [55.86630869, 0, 55.85508787, 55.85594626, 55.85692217, 55.85669934],
        'From_Longitude': [-4.27138731, 0, -4.24126866, -4.24446585, -4.24516129, -4.24358251],
        'To_Latitude': [55.86614756, 0, 55.85522197, 55.85593762, 55.85693878, 0],
        'To_Longitude': [-4.271040979, 0, -4.241466534, -4.244607602, -4.244905037, 0]}
dataset_to_correct = pd.DataFrame(data)
However, some values in the From lat/long and the To lat/long are incorrect. I have one correction table each for From and To (shown below), and for each matching row I would like to substitute the corrected pair of values into the main table.
Table of corrected To lat/long (the Site codes here appear in the To column):
data = {'Site': ['GA5151_T', 'GA5171_B'],
        'Correct_Latitude': [55.85952791, 55.87044558],
        'Correct_Longitude': [55.85661767, -4.24358251]}
correct_to_coords = pd.DataFrame(data)
I would like to match this table on the To column and then replace To_Latitude and To_Longitude with the correct values.
Table of corrected From lat/long (the Site codes here appear in the From column):
data = {'Site': ['GA5201', 'GA0251'],
        'Correct_Latitude': [55.857577, 55.86616756],
        'Correct_Longitude': [-4.242770, -4.272140979]}
correct_from_coords = pd.DataFrame(data)
I would like to match this table on the From column and then replace From_Latitude and From_Longitude with the correct values.
Is there a way to match the site in each table to the corresponding From or To column and then replace only the values in the respective columns?
I have tried using code from this answer (Elegant way to replace values in pandas.DataFrame from another DataFrame) but it seems to have no effect on the dataframe.
(correct_to_coords.set_index('Site').rename(columns={'Correct_Latitude': 'To_Latitude'}).combine_first(dataset_to_correct.set_index('To')))
@zswqa's answer below produces the right result; @Anurag Dabas's doesn't (its dual merge fills the From_ columns from the To merge and vice versa).
Another possible solution; it is a bit faster than the merge method shown below, although both are correct.
dataset_to_correct.set_index("To",inplace=True)
correct_to_coords.set_index("Site",inplace=True)
dataset_to_correct.loc[correct_to_coords.index, "To_Latitude"] = correct_to_coords["Correct_Latitude"]
dataset_to_correct.loc[correct_to_coords.index, "To_Longitude"] = correct_to_coords["Correct_Longitude"]
dataset_to_correct.reset_index(inplace=True)
dataset_to_correct.set_index("From",inplace=True)
correct_from_coords.set_index("Site",inplace=True)
dataset_to_correct.loc[correct_from_coords.index, "From_Latitude"] = correct_from_coords["Correct_Latitude"]
dataset_to_correct.loc[correct_from_coords.index, "From_Longitude"] = correct_from_coords["Correct_Longitude"]
dataset_to_correct.reset_index(inplace=True)
# Left-join the To corrections, overwrite the matched rows, then drop the helper columns.
# The To == Site mask is True only for rows the left join matched (unmatched rows get NaN
# in Site), so rows without a correction keep their original coordinates.
merge = dataset_to_correct.merge(correct_to_coords, left_on='To', right_on='Site', how='left')
merge.loc[merge.To == merge.Site, 'To_Latitude'] = merge.Correct_Latitude
merge.loc[merge.To == merge.Site, 'To_Longitude'] = merge.Correct_Longitude
merge = merge.drop(columns=['Site', 'Correct_Latitude', 'Correct_Longitude'])  # or del merge['Site'] etc.
# Same again for the From corrections.
merge = merge.merge(correct_from_coords, left_on='From', right_on='Site', how='left')
merge.loc[merge.From == merge.Site, 'From_Latitude'] = merge.Correct_Latitude
merge.loc[merge.From == merge.Site, 'From_Longitude'] = merge.Correct_Longitude
merge = merge.drop(columns=['Site', 'Correct_Latitude', 'Correct_Longitude'])
merge
Let's try a dual merge using merge() + pop() + fillna() + drop() (note: drop('Site', 1) with a positional axis was removed in pandas 2.0, so drop(columns='Site') is used here):
dataset_to_correct = dataset_to_correct.merge(correct_to_coords, left_on='To', right_on='Site', how='left').drop(columns='Site')
dataset_to_correct['From_Latitude'] = dataset_to_correct.pop('Correct_Latitude').fillna(dataset_to_correct['From_Latitude'])
dataset_to_correct['From_Longitude'] = dataset_to_correct.pop('Correct_Longitude').fillna(dataset_to_correct['From_Longitude'])
dataset_to_correct = dataset_to_correct.merge(correct_from_coords, left_on='From', right_on='Site', how='left').drop(columns='Site')
dataset_to_correct['To_Latitude'] = dataset_to_correct.pop('Correct_Latitude').fillna(dataset_to_correct['To_Latitude'])
dataset_to_correct['To_Longitude'] = dataset_to_correct.pop('Correct_Longitude').fillna(dataset_to_correct['To_Longitude'])
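Per the comment above, this answer pairs each merge with the wrong columns. A corrected sketch of the same approach, swapping the column pairs so each merge fills its own side:
dataset_to_correct = dataset_to_correct.merge(correct_to_coords, left_on='To', right_on='Site', how='left').drop(columns='Site')
dataset_to_correct['To_Latitude'] = dataset_to_correct.pop('Correct_Latitude').fillna(dataset_to_correct['To_Latitude'])
dataset_to_correct['To_Longitude'] = dataset_to_correct.pop('Correct_Longitude').fillna(dataset_to_correct['To_Longitude'])
dataset_to_correct = dataset_to_correct.merge(correct_from_coords, left_on='From', right_on='Site', how='left').drop(columns='Site')
dataset_to_correct['From_Latitude'] = dataset_to_correct.pop('Correct_Latitude').fillna(dataset_to_correct['From_Latitude'])
dataset_to_correct['From_Longitude'] = dataset_to_correct.pop('Correct_Longitude').fillna(dataset_to_correct['From_Longitude'])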

Converting an ipysheet.sheet to a DataFrame while keeping manual changes

The aim is to create an interactive dataframe whose cell values I can edit without coding.
To me it seems it should work in the following way:
creating an ipysheet.sheet
editing cells manually
converting it to a pandas dataframe
The problem is:
after creating an ipysheet.sheet I manually changed the values of some cells and then converted it to a pandas dataframe, but the changes are not reflected in this dataframe; if you just display the sheet without converting, you can see the changes.
import ipysheet
import pandas as pd

d = {'col1': [2, 8], 'col2': [3, 6]}
df = pd.DataFrame(data=d)
sheet1 = ipysheet.sheet(rows=len(df.columns) + 1, columns=3)
first_col = df.columns.to_list()
first_col.insert(0, 'Attribute')
column = ipysheet.column(0, value=first_col, row_start=0)
cell_value1 = ipysheet.cell(0, 1, 'Format')
cell_value2 = ipysheet.cell(0, 2, 'Description')
sheet1
creating a sheet1
ipysheet.to_dataframe(sheet1)
converting to pd.DataFrame
Solved by predefining all empty cells as np.nan. You can then edit them manually and the changes carry over to the DataFrame when converting.
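A minimal sketch of that fix (a fresh sheet; the 3x3 size and cell positions are just illustrative): pre-create every editable cell with np.nan so the widget tracks it, then convert after editing.

import ipysheet
import numpy as np

sheet2 = ipysheet.sheet(rows=3, columns=3)
# Pre-create the editable cells; manual edits to these are picked up on conversion.
cells = [ipysheet.cell(r, c, value=np.nan, type='numeric') for r in range(3) for c in range(3)]
sheet2  # edit cells in the notebook UI here
df_out = ipysheet.to_dataframe(sheet2)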

Append Data Frame in Python

I have multiple sheets that are identical in column headers but not in the number of rows. I want to combine the sheets to make one master sheet.
At the moment this is the code that I have; the output is blank and I end up with data equal to that in the last sheet.
I decided to use a for loop iterating through data_sheetnames, which is a list.
Below is the code I have used:
combineddata = pd.DataFrame()
for club in data_sheetnames:
    data = pd.read_excel(r'C:\Users\me\Desktop\Data.xlsx', header=1, index_col=2, sheet_name=club)
    combineddata.append(data)
If I were to change combineddata to a blank list then I get a dictionary of dataframes.
The solution is that append does not work in place; it returns the appended DataFrame. Therefore
combineddata = pd.DataFrame()
for club in data_sheetnames:
    data = pd.read_excel(r'C:\Users\me\Desktop\Data.xlsx', header=1, index_col=2, sheet_name=club)
    combineddata = combineddata.append(data)
should solve the issue. (Note that DataFrame.append was deprecated and removed in pandas 2.0; on current versions use the concat approach below.)
An easier way is just to do this:
combined_data = pd.concat([pd.read_excel(r'C:\Users\me\Desktop\Data.xlsx', header=1, index_col=2, sheet_name=club) for club in data_sheetnames])
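If every sheet in the workbook is wanted, read_excel can also load them all in one call; a small sketch using the same path and options as above: sheet_name=None returns a dict of DataFrames keyed by sheet name.

import pandas as pd

sheets = pd.read_excel(r'C:\Users\me\Desktop\Data.xlsx', header=1, index_col=2, sheet_name=None)
combined_data = pd.concat(sheets.values())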

Extracting specific columns from pandas.dataframe

I'm trying to use Python to read my csv file, extract specific columns into a pandas.DataFrame, and display that dataframe. However, I don't see the dataframe; I receive Series([], dtype: object) as output. Below is the code that I'm working with:
My document consists of:
product sub_product issue sub_issue consumer_complaint_narrative
company_public_response company state zipcode tags
consumer_consent_provided submitted_via date_sent_to_company
company_response_to_consumer timely_response consumer_disputed?
complaint_id
I want to extract:
sub_product issue sub_issue consumer_complaint_narrative
import pandas as pd
df=pd.read_csv("C:\\....\\consumer_complaints.csv")
df=df.stack(level=0)
df2 = df.filter(regex='[B-F]')
df[df2]
import pandas as pd

input_file = "C:\\....\\consumer_complaints.csv"
df = pd.read_csv(input_file)
cols = [1, 2, 3, 4]
df = df[df.columns[cols]]
Specify the column numbers you want to select in cols = []. DataFrame columns start at index 0.
You can also select columns by name. Just use the following line:
df = df[["Column Name", "Column Name2"]]
A simple way to achieve this would be as follows:
df = pd.read_csv("C:\\....\\consumer_complaints.csv")
df2 = df.loc[:, 'B':'F']
Note that .loc slices by label, so this assumes the columns are literally named 'B' through 'F'; with the column names above it would be df2 = df.loc[:, 'sub_product':'consumer_complaint_narrative']. Hope that helps.
This worked for me, using positional slicing with iloc:
df = pd.read_csv("C:\\....\\consumer_complaints.csv")
df1 = df.iloc[:, n1:n2]
where n1 < n2 are the positions of the columns in the range, e.g.
if you want columns 3-5, use
df1 = df.iloc[:, 3:5]
For the first column, use
df1 = df.iloc[:, 0]
A discontinuous set of columns can be selected by passing a list, e.g. df.iloc[:, [0, 3, 5]].
We can also use .iloc. Given data in dataset2:
dataset2.iloc[:3, [1, 2]]
will return the top 3 rows of the second and third columns (remember numbering starts at 0).
