I have several tables imported from an Excel file:
df = pd.read_excel(ffile, 'Constraints', header=None, names=range(13))
table_names = ['A', ..., 'W']
groups = df[0].isin(table_names).cumsum()
tables = {g.iloc[0, 0]: g.iloc[1:] for _, g in df.groupby(groups)}
This is the first time I've tried to read multiple tables from a single sheet, so I'm not sure if this is the best approach. If I print them like this:
for k, v in tables.items():
    print("table:", k)
    print(v)
    print()
The output is:
table: A
0 1 2 ... 10 11 12
2 Sxxxxxx Dxxx 21 20 ... 22 19 22
3 Rxxx Sxxxx / Lxxx Cxxxxxxxxxxx 7 7 ... 7 7 7
4 AVG Sxxxx per xxx # xx% Pxxxxxxxxxxxx 5 X 5.95 5.95 ... 5.95 5.95 5.95
...
...
...
table: W
0 1 2 ... 10 11 12
6 Sxxxxxx Dxxx 21 20 ... 22 19 22
7 Rxxx Sxxxx / Lxxx Cxxxxxxxxxxx 30 30 ... 30 30 30
8 AVG Sxxxx per xxx # xx% Pxxxxxxxxxxxx 5 x 28.5 28.5 ... 28.5 28.5 28.5
I tried to combine them all into one DataFrame using dfa = pd.DataFrame(tables['A'])
for each table, and then fdf = pd.concat([dfa, ..., dfw], keys=['A', ..., 'W']).
The keys are hierarchically placed, but the autonumbered index column inserts itself after the keys and before the first column:
0 1 2 ... 10 11 12
A 2 Sxxxxxx Dxxx 21 20 ... 22 19 22
3 Rxxx Sxxxx / Lxxx Cxxxxxxxxxxx 7 7 ... 7 7 7
4 AVG Sxxxx per xxx # xx% Pxxxxxxxxxxxx 5 X 5.95 5.95 ... 5.95 5.95 5.95
I would like to convert the keys to an actual column and have it swap places with the pandas numbered index, but I'm not sure how to do that. I've tried reset_index() in various configurations, but I'm wondering whether I constructed the tables wrong in the first place.
If any of this information is not necessary, please let me know and I will remove it. I'm trying to follow the MCVE guidelines and am not sure how much people need to know.
After you get your tables, just do:
pd.concat(tables)
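If you also want the table key as a real column rather than the outer index level (which is what the question asks for), a minimal sketch, assuming tables is the dict built above:

fdf = pd.concat(tables)
# name the outer index level, then move it into a regular column
fdf = fdf.rename_axis(['table', None]).reset_index(level='table')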
I have the following dataframe:
time bk1_lvl0_id bk2_lvl0_id pr_ss order_upto_level initial_inventory leadtime1 leadtime2 adjusted_leadtime
0 2020 1000 3 16 18 17 3 0.100000 1
1 2020 10043 3 65 78 72 12 0.400000 1
2 2020 1005 3 0 1 1 9 0.300000 1
3 2020 1009 3 325 363 344 21 0.700000 1
4 2020 102 3 0 1 1 7 0.233333 1
I want a function that returns the pr_ss for a given pair, for example (bk1_lvl0_id=1000, bk2_lvl0_id=3).
This is the code I've tried, but it takes too long:
def get_safety_stock(df, bk1, bk2):
    ## a function that returns the safety stock for any given (bk1, bk2)
    for index, row in df.iterrows():
        if (row["bk1_lvl0_id"] == bk1) and (row["bk2_lvl0_id"] == bk2):
            return int(row["pr_ss"])
If your dataframe has no duplicate values based on bk1_lvl0_id and bk2_lvl0_id, you can write the function as follows:
def get_safety_stock(df, bk1, bk2):
    return df.loc[df.bk1_lvl0_id.eq(bk1) & df.bk2_lvl0_id.eq(bk2), 'pr_ss'].iloc[0]
Note that this accesses the first value of the filtered Series positionally, which shouldn't be an issue if there are no duplicates in the data. If you want all matches, just remove the .iloc[0] from the end and it will return the whole Series. It can be called as follows:
get_safety_stock(df, 1000,3)
>>>16
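For many repeated lookups on a large frame, one further option (my own sketch, not part of the answer above) is to build a MultiIndex once and then look up by label, which avoids scanning the whole frame on every call:

# build the index once; assumes (bk1_lvl0_id, bk2_lvl0_id) pairs are unique
indexed = df.set_index(['bk1_lvl0_id', 'bk2_lvl0_id']).sort_index()

def get_safety_stock_fast(indexed_df, bk1, bk2):
    return int(indexed_df.loc[(bk1, bk2), 'pr_ss'])

get_safety_stock_fast(indexed, 1000, 3)  # -> 16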
I have code that outputs the number of times a product is bought in a specific month across all stores; however, I was wondering how I could sum under several conditions, so that Python adds up the products from a specific month and a specific store.
This is my code so far:
df = df.groupby(['Month_Bought'])['Amount_Bought'].sum()
print(df)
Output:
01-2020 27
02-2020 26
03-2020 24
04-2020 23
05-2020 31
06-2020 33
07-2020 26
08-2020 30
09-2020 33
10-2020 26
11-2020 30
12-2020 30
I need to separate the data to make the dataframe look like this:
Store1 Store2
01-2020 3 24
02-2020 4 22
03-2020 8 16
04-2020 4 19
05-2020 10 21
06-2020 11 21
07-2020 12 14
08-2020 10 20
09-2020 3 30
10-2020 14 12
11-2020 21 9
12-2020 9 21
Assuming your data is in long format (i.e. a column records which store each product was purchased in), you can group by store and month:
import pandas as pd

records = [
    {'Month_Bought': '01-2020', 'Amount_Bought': 1, 'Store': 'Store1'},
    {'Month_Bought': '01-2020', 'Amount_Bought': 2, 'Store': 'Store2'},
    {'Month_Bought': '02-2020', 'Amount_Bought': 2, 'Store': 'Store1'},
    {'Month_Bought': '02-2020', 'Amount_Bought': 4, 'Store': 'Store2'},
]
df = pd.DataFrame.from_records(records)

# Initial dataframe:
#   Month_Bought  Amount_Bought   Store
# 0      01-2020              1  Store1
# 1      01-2020              2  Store2
# 2      02-2020              2  Store1
# 3      02-2020              4  Store2

# Now group by store and month
df_agg = df.groupby(['Store', 'Month_Bought'], as_index=False)['Amount_Bought'].sum()

# Convert from long to wide:
df_agg_pivot = df_agg.pivot(index='Month_Bought', columns='Store', values='Amount_Bought')

# Reformat
df_agg_pivot = df_agg_pivot.reset_index()
df_agg_pivot.columns.name = None

# Final result:
#   Month_Bought  Store1  Store2
# 0      01-2020       1       2
# 1      02-2020       2       4
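As a one-step alternative (my own sketch, using the same df as above), you can group on both keys and unstack the store level into columns:

# group on month and store, then pivot the store level into columns
df_wide = (df.groupby(['Month_Bought', 'Store'])['Amount_Bought']
             .sum()
             .unstack('Store', fill_value=0)
             .reset_index())
df_wide.columns.name = None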
I have a dataframe that contains YouTube video view counts, and I want to scale these values into the range 1-10.
Below is a sample of the values. How do I normalize them into the range 1-10, or is there a more efficient way to do this?
rating
4394029
274358
473691
282858
703750
255967
3298456
136643
796896
2932
220661
48688
4661584
2526119
332176
7189818
322896
188162
157437
1153128
788310
1307902
One possibility is scaling by the maximum:
1 + df / df.max() * 9
rating
0 6.500315
1 1.343433
2 1.592952
3 1.354073
4 1.880933
5 1.320412
6 5.128909
7 1.171046
8 1.997531
9 1.003670
10 1.276217
11 1.060946
12 6.835232
13 4.162121
14 1.415808
15 10.000000
16 1.404192
17 1.235536
18 1.197075
19 2.443451
20 1.986783
21 2.637193
Similar solution by Wen (now deleted):
1 + (df - df.min()) * 9 / (df.max() - df.min())
rating
0 6.498887
1 1.339902
2 1.589522
3 1.350546
4 1.877621
5 1.316871
6 5.126922
7 1.167444
8 1.994266
9 1.000000
10 1.272658
11 1.057299
12 6.833941
13 4.159739
14 1.412306
15 10.000000
16 1.400685
17 1.231960
18 1.193484
19 2.440368
20 1.983514
21 2.634189
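Both formulas generalize to an arbitrary target range; a small helper along these lines (the function name is mine, and it assumes max > min so the denominator is nonzero):

def rescale(x, lo=1.0, hi=10.0):
    # min-max scale a Series or DataFrame into [lo, hi]
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

rescale(df)  # same result as the second formula above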
I have 10 million rows to go through, and it will take many hours to process; I must be doing something wrong.
I assigned my df columns to shorter names for ease of typing:
Close=df['Close']
eqId=df['eqId']
date=df['date']
IntDate=df['IntDate']
expiry=df['expiry']
delta=df['delta']
ivMid=df['ivMid']
conf=df['conf']
The code below works fine, it is just ungodly slow; any suggestions?
print(datetime.datetime.now().time())
for i in range(2, 1000):
    if delta[i] == 90:
        if delta[i-1] == 50:
            if delta[i-2] == 10:
                if expiry[i] == expiry[i-2]:
                    df.Skew[i] = ivMid[i] - ivMid[i-2]
print(datetime.datetime.now().time())
14:02:11.014396
14:02:13.834275
df.head(100)
Close eqId date IntDate expiry delta ivMid conf Skew
0 37.380005 7 2008-01-02 39447 1 50 0.3850 0.8663
1 37.380005 7 2008-01-02 39447 1 90 0.5053 0.7876
2 36.960007 7 2008-01-03 39448 1 50 0.3915 0.8597
3 36.960007 7 2008-01-03 39448 1 90 0.5119 0.7438
4 35.179993 7 2008-01-04 39449 1 50 0.4055 0.8454
5 35.179993 7 2008-01-04 39449 1 90 0.5183 0.7736
6 33.899994 7 2008-01-07 39452 1 50 0.4464 0.8400
7 33.899994 7 2008-01-07 39452 1 90 0.5230 0.7514
8 31.250000 7 2008-01-08 39453 1 10 0.4453 0.7086
9 31.250000 7 2008-01-08 39453 1 50 0.4826 0.8246
10 31.250000 7 2008-01-08 39453 1 90 0.5668 0.6474 0.1215
11 30.830002 7 2008-01-09 39454 1 10 0.4716 0.7186
12 30.830002 7 2008-01-09 39454 1 50 0.4963 0.8479
13 30.830002 7 2008-01-09 39454 1 90 0.5735 0.6704 0.1019
14 31.460007 7 2008-01-10 39455 1 10 0.4254 0.6737
15 31.460007 7 2008-01-10 39455 1 50 0.4929 0.8218
16 31.460007 7 2008-01-10 39455 1 90 0.5902 0.6411 0.1648
17 30.699997 7 2008-01-11 39456 1 10 0.4868 0.7183
18 30.699997 7 2008-01-11 39456 1 50 0.4965 0.8411
19 30.639999 7 2008-01-14 39459 1 10 0.5117 0.7620
20 30.639999 7 2008-01-14 39459 1 50 0.4989 0.8804
21 30.639999 7 2008-01-14 39459 1 90 0.5887 0.6845 0.077
22 29.309998 7 2008-01-15 39460 1 10 0.4956 0.7363
23 29.309998 7 2008-01-15 39460 1 50 0.5054 0.8643
24 30.080002 7 2008-01-16 39461 1 10 0.4983 0.6646
At this rate it will take 7.77 hrs to process
Basically, the whole point of numpy & pandas is to avoid loops like the plague and do things in a vectorized way. As you noticed, without that, the speed is gone.
Let's break your problem into steps.
The Conditions
Here, your first condition can be written like this:
df.delta == 90
(Note how this compares the entire column at once. This is much, much faster than your loop!)
and the second one can be written like this (using shift):
df.delta.shift(1) == 50
The rest of your conditions are similar.
Note that to combine conditions, you need to use parentheses. So, the first two conditions, together, should be written as:
(df.delta == 90) & (df.delta.shift(1) == 50)
You should be able to now write an expression combining all your conditions. Let's call it cond, i.e.,
cond = (df.delta == 90) & (df.delta.shift(1) == 50) & ...
The assignment
To assign things to a new column, use
df['skew'] = ...
We just need to figure out what to put on the right-hand side.
The Right Hand Side
Since we have cond, we can write the right-hand-side as
np.where(cond, df.ivMid - df.ivMid.shift(2), 0)
What this says is: where the condition is true, take the second argument; where it's not, take the third (in this case I used 0, but use whatever you like).
By combining all of this, you should be able to write a very efficient version of your code.
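Putting the pieces together, a sketch of the full vectorized replacement, assuming the column names shown in the df.head(100) output above:

import numpy as np

cond = (
    (df.delta == 90)
    & (df.delta.shift(1) == 50)
    & (df.delta.shift(2) == 10)
    & (df.expiry == df.expiry.shift(2))
)
# where the 10/50/90 pattern matches within one expiry, take the 10-to-90 skew
df['Skew'] = np.where(cond, df.ivMid - df.ivMid.shift(2), 0)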
I feel that extracting data from HTML tables is extremely difficult and requires a custom build for each site. I would very much like to be proved wrong here.
Is there a simple Pythonic way to extract strings and numbers from a website given just the URL and the XPath of the table of interest?
Example:
url_str = 'http://www.fdmbenzinpriser.dk/searchprices/5/'
xpath_str = '//*[@id="sortabletable"]'
I once had a script that could fetch data from this site, but I lost it. As I recall, I was using the tag '' and some string logic... not very pretty.
I know that sites like thingspeak can do these things.
There is a fairly general pattern which you could use to parse many, though not all, tables.
import lxml.html as LH
import requests
import pandas as pd

def text(elt):
    # normalize non-breaking spaces to regular spaces
    return elt.text_content().replace(u'\xa0', u' ')

url = 'http://www.fdmbenzinpriser.dk/searchprices/5/'
r = requests.get(url)
root = LH.fromstring(r.content)

for table in root.xpath('//table[@id="sortabletable"]'):
    header = [text(th) for th in table.xpath('.//th')]          # 1
    data = [[text(td) for td in tr.xpath('td')]
            for tr in table.xpath('.//tr')]                     # 2
    data = [row for row in data if len(row) == len(header)]     # 3
    data = pd.DataFrame(data, columns=header)                   # 4
    print(data)
You can use table.xpath('.//th') to find the column names.
table.xpath('.//tr') returns the rows, and for each row, tr.xpath('td')
returns the elements representing the cells of that row.
Sometimes you may need to filter out certain rows, such as in this case, rows
with fewer values than the header.
What you do with the data (a list of lists) is up to you. Here I use Pandas for presentation only:
Pris Adresse Tidspunkt
0 8.04 Brovejen 18 5500 Middelfart 3 min 38 sek
1 7.88 Hovedvejen 11 5500 Middelfart 4 min 52 sek
2 7.88 Assensvej 105 5500 Middelfart 5 min 56 sek
3 8.23 Ejby Industrivej 111 2600 Glostrup 6 min 28 sek
4 8.15 Park Alle 125 2605 Brøndby 25 min 21 sek
5 8.09 Sletvej 36 8310 Tranbjerg J 25 min 34 sek
6 8.24 Vindinggård Center 29 7100 Vejle 27 min 6 sek
7 7.99 * Søndergade 116 8620 Kjellerup 31 min 27 sek
8 7.99 * Gertrud Rasks Vej 1 9210 Aalborg SØ 31 min 27 sek
9 7.99 * Sorøvej 13 4200 Slagelse 31 min 27 sek
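As an aside, for tables this regular, pandas.read_html can often do the whole job in one call; a sketch, assuming the page keeps the structure shown above:

import pandas as pd

# read_html returns a list of DataFrames, one per matching <table>
frames = pd.read_html('http://www.fdmbenzinpriser.dk/searchprices/5/',
                      attrs={'id': 'sortabletable'})
data = frames[0]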
If you mean all the text:
import requests
from bs4 import BeautifulSoup

url_str = 'http://www.fdmbenzinpriser.dk/searchprices/5/'
r = requests.get(url_str).content
print([x.text for x in BeautifulSoup(r, "html.parser").find_all("table", attrs={"id": "sortabletable"})])
['Pris\nAdresse\nTidspunkt\n\n\n\n\n* Denne pris er indberettet af selskabet Indberet pris\n\n\n\n\n\n\xa08.24\n\xa0Gladsaxe Møllevej 33 2860 Søborg\n7 min 4 sek \n\n\n\n\xa08.89\n\xa0Frederikssundsvej 356 2700 Brønshøj\n9 min 10 sek \n\n\n\n\xa07.98\n\xa0Gartnerivej 1 7500 Holstebro\n14 min 25 sek \n\n\n\n\xa07.99 *\n\xa0Søndergade 116 8620 Kjellerup\n15 min 7 sek \n\n\n\n\xa07.99 *\n\xa0Gertrud Rasks Vej 1 9210 Aalborg SØ\n15 min 7 sek \n\n\n\n\xa07.99 *\n\xa0Sorøvej 13 4200 Slagelse\n15 min 7 sek \n\n\n\n\xa08.08 *\n\xa0Tørholmsvej 95 9800 Hjørring\n15 min 7 sek \n\n\n\n\xa08.09 *\n\xa0Nordvej 6 9900 Frederikshavn\n15 min 7 sek \n\n\n\n\xa08.09 *\n\xa0Skelmosevej 89 6980 Tim\n15 min 7 sek \n\n\n\n\xa08.09 *\n\xa0Højgårdsvej 2 4000 Roskilde\n15 min 7 sek']