I have a data frame that looks like this:
data = {'State': ['24', '24', '24',
'24','24','24','24','24','24','24','24','24'],
'County code': ['001', '001', '001',
'001','002','002','002','002','003','003','003','003'],
'TT code': ['123', '123', '123',
'123','124','124','124','124','125','125','125','125'],
'BLK code': ['221', '221', '221',
'221','222','222','222','222','223','223','223','223'],
'Age Code': ['1', '1', '2', '2','2','2','2','2','2','1','2','1']}
df = pd.DataFrame(data)
Essentially, I want to keep only the rows for TT codes where every Age Code is 2 and there are no 1's. So I just want the data frame where:
'State': ['24', '24', '24', '24'],
'County code': ['002','002','002','002',],
'TT code': ['124','124','124','124',],
'BLK code': ['222','222','222','222'],
'Age Code': ['2','2','2','2']
is there a way to do this?
IIUC, you want to keep only the TT groups whose Age Code values are all '2'?
You can use a groupby.transform('all') on the boolean Series:
df[df['Age Code'].eq('2').groupby(df['TT code']).transform('all')]
output:
State County code TT code BLK code Age Code
4 24 002 124 222 2
5 24 002 124 222 2
6 24 002 124 222 2
7 24 002 124 222 2
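Equivalently, the same idea can be written with groupby.filter, which reads more directly but is typically slower on large frames (a sketch using the data from the question):

```python
import pandas as pd

data = {'State': ['24'] * 12,
        'County code': ['001'] * 4 + ['002'] * 4 + ['003'] * 4,
        'TT code': ['123'] * 4 + ['124'] * 4 + ['125'] * 4,
        'BLK code': ['221'] * 4 + ['222'] * 4 + ['223'] * 4,
        'Age Code': ['1', '1', '2', '2', '2', '2', '2', '2', '2', '1', '2', '1']}
df = pd.DataFrame(data)

# keep only the TT groups where every Age Code equals '2'
out = df.groupby('TT code').filter(lambda g: g['Age Code'].eq('2').all())
print(out)
```

Only the '124' group survives, matching the four rows in the desired output.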
This should work:
df111['Age Code'] = "2"
As a side note, I am just wondering why strings were chosen for values that are really integers.
Lets say I have the following dataframe:
fix_id lg home_team away_team
9887 30 Leganes Alaves
9886 30 Valencia Las Palmas
9885 30 Celta Vigo Real Sociedad
9884 30 Girona Atletico Madrid
and I run an apply function over all the rows of the dataframe. The output of the apply function is the following pandas series:
9887 ({'defense': '74', 'midfield': '75', 'attack': '74', 'overall': '75'},
{'defense': '74', 'midfield': '75', 'attack': '77', 'overall': '75'}),
9886 ({'defense': '80', 'midfield': '80', 'attack': '80', 'overall': '80'},
{'defense': '75', 'midfield': '74', 'attack': '77', 'overall': '75'}),
...
How could I add the output dictionaries as new columns to my dataframe? I want to add all eight of them to the same row.
I would be glad to get any guidance, not necessarily code. Maybe just point me in the right direction and I will try?
Thanks.
Supposing your output is stored in Series s, you can do the following:
pd.concat([df, s.apply(pd.Series)[0].apply(pd.Series), s.apply(pd.Series)[1].apply(pd.Series)], axis=1)
Example
df = pd.DataFrame({'lg': {9887: 30, 9886: 30, 9885: 30, 9884: 30}, 'home_team': {9887: 'Leganes', 9886: 'Valencia', 9885: 'Celta Vigo', 9884: 'Girona'}, 'away_team': {9887: 'Alaves', 9886: 'Las Palmas', 9885: 'Real Sociedad', 9884: 'Atletico Madrid'}})
s = pd.Series({9887: ({'defense': '74', 'midfield': '75', 'attack': '74', 'overall': '75'}, {'defense': '74', 'midfield': '75', 'attack': '77', 'overall': '75'}), 9886: ({'defense': '80', 'midfield': '80', 'attack': '80', 'overall': '80'}, {'defense': '75', 'midfield': '74', 'attack': '77', 'overall': '75'})})
print(df)
# lg home_team away_team
#9887 30 Leganes Alaves
#9886 30 Valencia Las Palmas
#9885 30 Celta Vigo Real Sociedad
#9884 30 Girona Atletico Madrid
print(s)
#9887 ({'defense': '74', 'midfield': '75', 'attack':...
#9886 ({'defense': '80', 'midfield': '80', 'attack':...
#dtype: object
df = pd.concat([df, s.apply(pd.Series)[0].apply(pd.Series), s.apply(pd.Series)[1].apply(pd.Series)], axis=1)
# lg home_team away_team defense ... defense midfield attack overall
#9884 30 Girona Atletico Madrid NaN ... NaN NaN NaN NaN
#9885 30 Celta Vigo Real Sociedad NaN ... NaN NaN NaN NaN
#9886 30 Valencia Las Palmas 80 ... 75 74 77 75
#9887 30 Leganes Alaves 74 ... 74 75 77 75
[4 rows x 11 columns]
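A shorter alternative is to expand each side of the tuple into its own DataFrame and concatenate. This is a sketch assuming every element of s is a 2-tuple of dicts with the same keys; the home_/away_ prefixes are my own choice to disambiguate the duplicate column names:

```python
import pandas as pd

df = pd.DataFrame({'lg': {9887: 30, 9886: 30},
                   'home_team': {9887: 'Leganes', 9886: 'Valencia'},
                   'away_team': {9887: 'Alaves', 9886: 'Las Palmas'}})
s = pd.Series({9887: ({'defense': '74', 'midfield': '75', 'attack': '74', 'overall': '75'},
                      {'defense': '74', 'midfield': '75', 'attack': '77', 'overall': '75'}),
               9886: ({'defense': '80', 'midfield': '80', 'attack': '80', 'overall': '80'},
                      {'defense': '75', 'midfield': '74', 'attack': '77', 'overall': '75'})})

# Build one DataFrame per tuple position, prefix the columns, and glue side by side
home = pd.DataFrame([t[0] for t in s], index=s.index).add_prefix('home_')
away = pd.DataFrame([t[1] for t in s], index=s.index).add_prefix('away_')
out = pd.concat([df, home, away], axis=1)
print(out)
```

The prefixes also avoid the duplicate 'defense'/'midfield'/... labels that the concat in the answer above produces.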
Try something like this:
def mymethod(row):
    # Whatever operation you have in mind; for example, summing two columns of the row:
    return row['A'] + row['B']

df['newCol'] = df.apply(mymethod, axis=1)
df.merge(df.textcol.apply(lambda s: pd.Series({'feature1': s+1, 'feature2': s-1})),
         left_index=True, right_index=True)
I am new to Python; kindly help me understand so I can move forward with my Python learning. Find below the sample data:
Country Age Sal OnWork
USA 52 12345 No
UK 23 1142 Yes
MAL 25 4456 No
I would like to find the mean value of the Sal column where OnWork is No.
Let's say your data looks like the following:
{'Country': 'USA', 'Age': '52', 'Sal': '12345', 'OnWork': 'No'}
{'Country': 'UK', 'Age': '23', 'Sal': '1142', 'OnWork': 'Yes'}
{'Country': 'MAL', 'Age': '25', 'Sal': '4456', 'OnWork': 'No'}
The code below should help in your case:
df = your_dataframe
df[df["OnWork"]=="No"]["Sal"].mean()
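One caveat: in the sample above Sal is stored as strings, and calling mean on strings raises a TypeError in recent pandas versions. A minimal sketch that converts first, using the sample rows from the question:

```python
import pandas as pd

df = pd.DataFrame([{'Country': 'USA', 'Age': '52', 'Sal': '12345', 'OnWork': 'No'},
                   {'Country': 'UK', 'Age': '23', 'Sal': '1142', 'OnWork': 'Yes'},
                   {'Country': 'MAL', 'Age': '25', 'Sal': '4456', 'OnWork': 'No'}])

# Convert Sal to numbers, then average only the rows where OnWork is "No"
mean_sal = pd.to_numeric(df['Sal'])[df['OnWork'] == 'No'].mean()
print(mean_sal)  # (12345 + 4456) / 2 = 8400.5
```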
What I'm trying to achieve is kind of the reverse of a pivot_table.
Basically, I'm starting with this Dataframe:
The code to generate this is:
df = pd.DataFrame({'bus ticket type':['student', 'student', 'student', 'senior', 'senior', 'senior'],
'distance (km)':['5', '10', '15', '5', '10', '15'],
'bus fare':['100', '120', '130', '90', '100', '110']})
You see how there are 3 unique values in 'distance (km)': 5, 10, and 15 km? I'm trying to make the unique values in this column the index of the dataframe.
So I want to transform it into:
The code to generate the 2nd dataframe is:
df2 = pd.DataFrame({'distance (km)':['5', '10', '15'],
'student_bus_fare':['100', '120', '130'],
'senior_bus_fare':['90', '100', '110']})
I'm not trying to calculate mean or sum scores for either the 'student' or 'senior' category, nor am I trying to use any similar aggfunc based on distance.
I purely want to reshape it so that the unique values in distance are the index, with all the original fare values left intact.
Use .pivot:
df = pd.DataFrame({'bus ticket type':['student', 'student', 'student', 'senior', 'senior', 'senior'],
'distance (km)':['5', '10', '15', '5', '10', '15'],
'bus fare':['100', '120', '130', '90', '100', '110']})
df2 = df.pivot(index='distance (km)', columns='bus ticket type', values='bus fare')
yields
bus ticket type senior student
distance (km)
10 100 120
15 110 130
5 90 100
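To match the target frame exactly (the _bus_fare column names and the 5/10/15 row order shown in the question), you can reindex, rename, and reset the index; a sketch building on the pivot above:

```python
import pandas as pd

df = pd.DataFrame({'bus ticket type': ['student', 'student', 'student', 'senior', 'senior', 'senior'],
                   'distance (km)': ['5', '10', '15', '5', '10', '15'],
                   'bus fare': ['100', '120', '130', '90', '100', '110']})

df2 = df.pivot(index='distance (km)', columns='bus ticket type', values='bus fare')
# the distances are strings, so reorder them explicitly rather than lexically
df2 = df2.reindex(['5', '10', '15']).add_suffix('_bus_fare').reset_index()
df2.columns.name = None
print(df2)
```

Note the explicit reindex: with string labels, sorting would otherwise put '10' and '15' before '5'.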
I have a pandas frame. When I print the columns (shown below), it turns out that my columns are out of order. Is there a way to sort only the first 30 columns so they are in order (30, 60, 90, ..., 900)?
[in] df.columns
[out] Index(['120', '150', '180', '210', '240', '270', '30', '300', '330', '360',
'390', '420', '450', '480', '510', '540', '570', '60', '600', '630',
'660', '690', '720', '750', '780', '810', '840', '870', '90', '900',
'Item', 'Price', 'Size', 'Time', 'Type', 'Unnamed: 0'],
dtype='object')
The fixed frame would be as follows:
[out] Index(['30','60','90','120', '150', '180', '210', '240', '270','300', '330', '360',
'390', '420', '450', '480', '510', '540', '570','600', '630',
'660', '690', '720', '750', '780', '810', '840', '870','900',
'Item', 'Price', 'Size', 'Time', 'Type', 'Unnamed: 0'],
dtype='object')
If you know that the columns will be named 30 through 900 in multiples of 30, you can generate that explicitly like this:
c = [str(i) for i in range(30, 901, 30)]
Then add it to the other columns:
c = c + ['Item', 'Price', 'Size', 'Time', 'Type', 'Unnamed: 0']
Then you should be able to access it as df[c]
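Putting that together as a runnable sketch (abbreviated to a handful of columns; df[c] reorders by label and raises a KeyError if any name is missing):

```python
import numpy as np
import pandas as pd

cols = ['120', '150', '30', '60', '90', 'Item', 'Price']  # abbreviated example
df = pd.DataFrame(np.arange(7).reshape(1, -1), columns=cols)

# generate the numeric labels in order, then append the named columns
c = [str(i) for i in range(30, 151, 30)]   # '30', '60', '90', '120', '150'
c = c + ['Item', 'Price']
df = df[c]
print(list(df.columns))  # ['30', '60', '90', '120', '150', 'Item', 'Price']
```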
You need to select the first 30 column names, convert them to int, and sort. Then convert back to str if necessary and use reindex (reindex_axis is deprecated and has been removed in modern pandas):
np.sort(df.columns[:30].astype(int)).astype(str).tolist() +
df.columns[30:].tolist()
Sample:
df = pd.DataFrame(np.arange(36).reshape(1,-1),
columns=['120', '150', '180', '210', '240', '270', '30', '300',
'330', '360','390', '420', '450', '480', '510', '540', '570', '60', '600', '630',
'660', '690', '720', '750', '780', '810', '840', '870', '90', '900',
'Item', 'Price', 'Size', 'Time', 'Type', 'Unnamed: 0'])
print (df)
120 150 180 210 240 270 30 300 330 360 ... 840 870 90 \
0 0 1 2 3 4 5 6 7 8 9 ... 26 27 28
900 Item Price Size Time Type Unnamed: 0
0 29 30 31 32 33 34 35
[1 rows x 36 columns]
df = df.reindex(np.sort(df.columns[:30].astype(int)).astype(str).tolist() +
                df.columns[30:].tolist(), axis=1)
print (df)
30 60 90 120 150 180 210 240 270 300 ... 810 840 870 \
0 6 17 28 0 1 2 3 4 5 7 ... 25 26 27
900 Item Price Size Time Type Unnamed: 0
0 29 30 31 32 33 34 35
[1 rows x 36 columns]
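A shorter equivalent (a sketch assuming, as above, that the leading labels are all numeric strings) is to sort those labels with an integer key:

```python
import numpy as np
import pandas as pd

cols = ['120', '30', '90', '60', 'Item', 'Price']  # abbreviated: 4 numeric + 2 named
df = pd.DataFrame(np.arange(6).reshape(1, -1), columns=cols)

# sort the numeric labels by their integer value, keep the rest in place
df = df[sorted(df.columns[:4], key=int) + df.columns[4:].tolist()]
print(list(df.columns))  # ['30', '60', '90', '120', 'Item', 'Price']
```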
I'm trying to scrape the content from this URL which contains multiple tables. The desired output would be:
NAME FG% FT% 3PM REB AST STL BLK TO PTS SCORE
Team Jackson (0-8) .4313 .7500 21 71 34 11 12 15 189 1-8-0
Team Keyrouze (4-4) .4441 .8090 31 130 71 18 13 45 373 8-1-0
Nutz Vs. Draymond Green (4-4) .4292 .8769 30 86 66 15 9 28 269 3-6-0
Team Pauls 2 da Wall (3-5) .4784 .8438 40 123 64 18 20 30 316 6-3-0
Team Noey (2-6) .4350 .7679 21 125 62 20 9 33 278 7-2-0
YOU REACH, I TEACH (2-5-1) .4810 .7432 20 114 56 30 7 50 277 2-7-0
Kris Kaman His Pants (5-3) .4328 .8000 20 74 59 20 5 27 238 3-6-0
Duke's Balls In Daniels Face (3-4-1) .5000 .7045 42 139 38 27 22 30 303 6-3-0
Knicks Tape (5-3) .5000 .8152 34 143 92 12 9 47 397 4-5-0
Suck MyDirk (5-3) .4734 .8814 29 106 86 22 17 40 435 5-4-0
In Porzingod We Trust (4-4) .4928 .7222 27 180 95 16 16 46 423 7-2-0
Team Aguilar (6-1-1) .4718 .7053 28 177 65 12 35 48 413 2-7-0
Team Li (7-0-1) .4714 .8118 35 134 74 17 17 47 368 6-3-0
Team Iannetta (4-4) .4527 .7302 22 125 90 20 13 44 288 3-6-0
If it's too difficult to format the tables like that, I'd like to know how I can scrape all the tables? My code to scrape all rows is like this:
tableStats = soup.find('table', {'class': 'tableBody'})
rows = tableStats.findAll('tr')
for row in rows:
print(row.string)
But it only prints the value "TEAM" and nothing else... Why doesn't it contain all the rows in the table?
Thanks.
Instead of looking for the table tag, you should look for the rows directly with a more dependable class, such as linescoreTeamRow. This snippet does the trick:
from bs4 import BeautifulSoup
import requests
a = requests.get("http://games.espn.com/fba/scoreboard?leagueId=224165&seasonId=2017")
soup = BeautifulSoup(a.text, 'lxml')
# search for the rows directly
rows = soup.findAll('tr', {'class': 'linescoreTeamRow'})
# you will need to isolate elements in each row to rebuild the table
for row in rows:
    print(row.text)
I found a way to get exactly the 2-D matrix I specified in the question; it's stored in the list teams.
Code:
from bs4 import BeautifulSoup
import requests
source_code = requests.get("http://games.espn.com/fba/scoreboard?leagueId=224165&seasonId=2017")
plain_text = source_code.text
soup = BeautifulSoup(plain_text, 'lxml')
teams = []
rows = soup.findAll('tr', {'class': 'linescoreTeamRow'})
# Build a 2-D matrix: one list per team row.
for row in rows:
    team_row = []
    columns = row.findAll('td')
    for column in columns:
        team_row.append(column.getText())
    print(team_row)
    # Add each team to the teams matrix.
    teams.append(team_row)
Output:
['Team Jackson (0-10)', '', '.4510', '.8375', '41', '135', '101', '23', '11', '50', '384', '', '5-4-0']
['YOU REACH, I TEACH (3-6-1)', '', '.4684', '.7907', '22', '169', '103', '22', '10', '32', '342', '', '4-5-0']
['Nutz Vs. Draymond Green (4-6)', '', '.4552', '.8372', '30', '157', '68', '15', '16', '39', '356', '', '2-7-0']
["Jesse's Blue Balls (4-5-1)", '', '.4609', '.7576', '47', '158', '71', '30', '20', '38', '333', '', '7-2-0']
['Team Noey (4-6)', '', '.4763', '.8261', '42', '164', '70', '25', '29', '44', '480', '', '5-4-0']
['Suck MyDirk (6-3-1)', '', '.4733', '.8403', '54', '160', '132', '23', '11', '47', '544', '', '4-5-0']
['Kris Kaman His Pants (5-5)', '', '.4569', '.8732', '53', '138', '105', '27', '21', '53', '465', '', '6-3-0']
['Team Aguilar (6-3-1)', '', '.4433', '.7229', '40', '202', '68', '30', '22', '54', '452', '', '3-6-0']
['Knicks Tape (6-3-1)', '', '.4406', '.8824', '52', '172', '108', '24', '13', '49', '513', '', '6-3-0']
['Team Iannetta (4-6)', '', '.5321', '.6923', '24', '146', '94', '32', '16', '60', '428', '', '3-6-0']
['In Porzingod We Trust (6-4)', '', '.4694', '.6364', '37', '216', '133', '31', '21', '77', '468', '', '4-5-0']
['Team Keyrouze (6-4)', '', '.4705', '.8854', '51', '135', '108', '25', '17', '43', '550', '', '5-4-0']
['Team Li (8-1-1)', '', '.4369', '.8182', '57', '203', '130', '34', '22', '54', '525', '', '6-3-0']
['Team Pauls 2 da Wall (5-5)', '', '.4780', '.5970', '27', '141', '47', '19', '25', '28', '263', '', '3-6-0']
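As an aside, when a page's tables are plain HTML (which is an assumption to verify; this won't work if the site renders the tables with JavaScript), pandas.read_html can parse all of them in one call. A minimal sketch with a literal stand-in for one scoreboard table:

```python
from io import StringIO

import pandas as pd

# A minimal stand-in for one scoreboard table; against the real page you
# would pass the HTML fetched with requests instead of this literal string.
html = """
<table class="tableBody">
  <tr><th>NAME</th><th>FG%</th><th>PTS</th></tr>
  <tr class="linescoreTeamRow"><td>Team Jackson (0-8)</td><td>.4313</td><td>189</td></tr>
  <tr class="linescoreTeamRow"><td>Team Keyrouze (4-4)</td><td>.4441</td><td>373</td></tr>
</table>
"""

# read_html returns a list of DataFrames, one per <table> found
tables = pd.read_html(StringIO(html))
print(tables[0])
```

This skips the manual td-by-td loop entirely, at the cost of less control over per-cell cleanup.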