How to trim down a Pandas data frame rows? - python

I'm trying so hard to shorten this awful lot of rows from an XML sitemap but I can't find a solution to trim it down.
import advertools as adv
import pandas as pd
site = "https://www.halfords.com/sitemap_index.xml"
sitemap = adv.sitemap_to_df(site)
sitemap = sitemap.dropna(subset=["loc"]).reset_index(drop=True)
# Some sitemaps keeps urls with "/" on the end, some is with no "/"
# If there is "/" on the end, we take the second last column as slugs
# Else, the last column is the slug column
slugs = sitemap['loc'].dropna()[~sitemap['loc'].dropna().str.endswith('/')].str.split('/').str[-2].str.replace('-', ' ')
slugs2 = sitemap['loc'].dropna()[~sitemap['loc'].dropna().str.endswith('/')].str.split('/').str[-1].str.replace('-', ' ')
# Merge two series
slugs = list(slugs) + list(slugs2)
# adv.word_frequency automatically removes the stop words
word_counts_onegram = adv.word_frequency(slugs)
word_counts_twogram = adv.word_frequency(slugs, phrase_len=2)
competitor = pd.concat([word_counts_onegram, word_counts_twogram])\
.rename({'abs_freq':'Count','word':'Ngram'}, axis=1)\
.sort_values('Count', ascending=False)
competitor.to_csv('competitor.csv',index=False)
competitor
competitor.shape
(67758, 2)
(67758, 2)
I've been raveling around several blogs included resources on Stack Overflow but nothing seemed to work.
This is definitely something going on with my zero expertise in coding I suppose

Two things:
You can use adv.url_to_df to split URLs and get the slugs (there should be a column called last_dir:
urldf = adv.url_to_df(sitemap['loc'].dropna())
urldf
url
scheme
netloc
path
query
fragment
dir_1
dir_2
dir_3
dir_4
dir_5
dir_6
dir_7
dir_8
dir_9
last_dir
0
https://www.halfords.com/cycling/cycling-technology/helmet-cameras/removu-k1-4k-camera-and-stabiliser-694977.html
https
www.halfords.com
/cycling/cycling-technology/helmet-cameras/removu-k1-4k-camera-and-stabiliser-694977.html
nan
nan
cycling
cycling-technology
helmet-cameras
removu-k1-4k-camera-and-stabiliser-694977.html
nan
nan
nan
nan
nan
removu-k1-4k-camera-and-stabiliser-694977.html
1
https://www.halfords.com/technology/bluetooth-car-kits/jabra-drive-bluetooth-speakerphone---white-695094.html
https
www.halfords.com
/technology/bluetooth-car-kits/jabra-drive-bluetooth-speakerphone---white-695094.html
nan
nan
technology
bluetooth-car-kits
jabra-drive-bluetooth-speakerphone---white-695094.html
nan
nan
nan
nan
nan
nan
jabra-drive-bluetooth-speakerphone---white-695094.html
2
https://www.halfords.com/tools/power-tools-and-accessories/power-tools/stanley-fatmax-v20-18v-combi-drill-kit-695102.html
https
www.halfords.com
/tools/power-tools-and-accessories/power-tools/stanley-fatmax-v20-18v-combi-drill-kit-695102.html
nan
nan
tools
power-tools-and-accessories
power-tools
stanley-fatmax-v20-18v-combi-drill-kit-695102.html
nan
nan
nan
nan
nan
stanley-fatmax-v20-18v-combi-drill-kit-695102.html
3
https://www.halfords.com/technology/dash-cams/mio-mivue-c450-695262.html
https
www.halfords.com
/technology/dash-cams/mio-mivue-c450-695262.html
nan
nan
technology
dash-cams
mio-mivue-c450-695262.html
nan
nan
nan
nan
nan
nan
mio-mivue-c450-695262.html
4
https://www.halfords.com/technology/dash-cams/mio-mivue-818-695270.html
https
www.halfords.com
/technology/dash-cams/mio-mivue-818-695270.html
nan
nan
technology
dash-cams
mio-mivue-818-695270.html
nan
nan
nan
nan
nan
nan
mio-mivue-818-695270.html
There are options that pandas provides, which you can change. For example:
pd.options.display.max_rows
60
# change it to display more/fewer rows:
pd.options.display.max_rows = 100
As you did, you can easily create onegrams and bigrams, combine them, and display them:
text_list = urldf['last_dir'].str.replace('-', ' ').dropna()
one_grams = adv.word_frequency(text_list, phrase_len=1)
bigrams = adv.word_frequency(text_list, phrase_len=2)
print(pd.concat([one_grams, bigrams])
.sort_values('abs_freq', ascending=False)
.head(15) # <-- change this to 100 for example
.reset_index(drop=True))
word
abs_freq
0
halfords
2985
1
car
1430
2
bike
922
3
kit
829
4
black
777
5
laser
686
6
set
614
7
wheel
540
8
pack
524
9
mats
511
10
car mats
478
11
thule
453
12
paint
419
13
4
413
14
spray
382
Hope that helps?

Related

Pandas dataframe merge row by addition

I want to create a dataframe from census data. I want to calculate the number of people that returned a tax return for each specific earnings group.
For now, I wrote this
census_df = pd.read_csv('../zip code data/19zpallagi.csv')
sub_census_df = census_df[['zipcode', 'agi_stub', 'N02650', 'A02650', 'ELDERLY', 'A07180']].copy()
num_of_returns = ['Number_of_returns_1_25000', 'Number_of_returns_25000_50000', 'Number_of_returns_50000_75000',
'Number_of_returns_75000_100000', 'Number_of_returns_100000_200000', 'Number_of_returns_200000_more']
for i, column_name in zip(range(1, 7), num_of_returns):
sub_census_df[column_name] = sub_census_df[sub_census_df['agi_stub'] == i]['N02650']
I have 6 groups attached to a specific zip code. I want to get one row, with the number of returns for a specific zip code appearing just once as a column. I already tried to change NaNs to 0 and to use groupby('zipcode').sum(), but I get 50 million rows summed for zip code 0, where it seems that only around 800k should exist.
Here is the dataframe that I currently get:
zipcode agi_stub N02650 A02650 ELDERLY A07180 Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more Amount_1_25000 Amount_25000_50000 Amount_50000_75000 Amount_75000_100000 Amount_100000_200000 Amount_200000_more
0 0 1 778140.0 10311099.0 144610.0 2076.0 778140.0 NaN NaN NaN NaN NaN 10311099.0 NaN NaN NaN NaN NaN
1 0 2 525940.0 19145621.0 113810.0 17784.0 NaN 525940.0 NaN NaN NaN NaN NaN 19145621.0 NaN NaN NaN NaN
2 0 3 285700.0 17690402.0 82410.0 9521.0 NaN NaN 285700.0 NaN NaN NaN NaN NaN 17690402.0 NaN NaN NaN
3 0 4 179070.0 15670456.0 57970.0 8072.0 NaN NaN NaN 179070.0 NaN NaN NaN NaN NaN 15670456.0 NaN NaN
4 0 5 257010.0 35286228.0 85030.0 14872.0 NaN NaN NaN NaN 257010.0 NaN NaN NaN NaN NaN 35286228.0 NaN
And here is what I want to get:
zipcode Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more
0 0 778140.0 525940.0 285700.0 179070.0 257010.0 850.0
here is one way to do it using groupby and sum the desired columns
num_of_returns = ['Number_of_returns_1_25000', 'Number_of_returns_25000_50000', 'Number_of_returns_50000_75000',
'Number_of_returns_75000_100000', 'Number_of_returns_100000_200000', 'Number_of_returns_200000_more']
df.groupby('zipcode', as_index=False)[num_of_returns].sum()
zipcode Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more
0 0 778140.0 525940.0 285700.0 179070.0 257010.0 0.0
This question needs more information to actually give a proper answer. For example you leave out what is meant by certain columns in your data frame:
- `N1: Number of returns`
- `agi_stub: Size of adjusted gross income`
According to IRS this has the following levels.
Size of adjusted gross income "0 = No AGI Stub
1 = ‘Under $1’
2 = '$1 under $10,000'
3 = '$10,000 under $25,000'
4 = '$25,000 under $50,000'
5 = '$50,000 under $75,000'
6 = '$75,000 under $100,000'
7 = '$100,000 under $200,000'
8 = ‘$200,000 under $500,000’
9 = ‘$500,000 under $1,000,000’
10 = ‘$1,000,000 or more’"
I got the above from https://www.irs.gov/pub/irs-soi/16incmdocguide.doc
With this information, I think what you want to find is the number of
people who filed a tax return for each of the income levels of agi_stub.
If that is what you mean then, this can be achieved by:
import pandas as pd
data = pd.read_csv("./data/19zpallagi.csv")
## select only the desired columns
data = data[['zipcode', 'agi_stub', 'N1']]
## solution to your problem?
df = data.pivot_table(
index='zipcode',
values='N1',
columns='agi_stub',
aggfunc=['sum']
)
## bit of cleaning up.
PREFIX = 'agi_stub_level_'
df.columns = [PREFIX + level for level in df.columns.get_level_values(1).astype(str)]
Here's the output.
In [77]: df
Out[77]:
agi_stub_level_1 agi_stub_level_2 ... agi_stub_level_5 agi_stub_level_6
zipcode ...
0 50061850.0 37566510.0 ... 21938920.0 8859370.0
1001 2550.0 2230.0 ... 1420.0 230.0
1002 2850.0 1830.0 ... 1840.0 990.0
1005 650.0 570.0 ... 450.0 60.0
1007 1980.0 1530.0 ... 1830.0 460.0
... ... ... ... ... ...
99827 470.0 360.0 ... 170.0 40.0
99833 550.0 380.0 ... 290.0 80.0
99835 1250.0 1130.0 ... 730.0 190.0
99901 1960.0 1520.0 ... 1030.0 290.0
99999 868450.0 644160.0 ... 319880.0 142960.0
[27595 rows x 6 columns]

Rolling window produces no effect on dataframe

So I have to perform a rolling window to a set of rows inside a dataframe. The problem is that when I do full_df = full_df.rolling(window=5).mean() the output of full_df.head(2000) shows all NaN values. Does anyone know why this happens? I have to perform a time series exercise with this.
This is the dataset: https://github.com/plotly/datasets/blob/master/all_stocks_5yr.csv
This is what I have:
df = pd.read_csv('all_stocks_5yr.csv', usecols=["date", "close",
"Name"])
gp = df.groupby("Name")
my_dict = {key: group['close'].to_numpy() for key, group in gp}
full_df = pd.DataFrame.from_dict(my_dict, orient='index')
for i in full_df:
full_df = full_df.rolling(window=5).mean()
An image of the output:
First off, your loop for i in full_df is not doing what you think; instead of running the rolling mean in each row, you're running it over and over again on the whole dataframe, averaging along columns.
If we just do the rolling average once the way you're implemting it:
full_df = full_df.rolling(window=5).mean()
print(full_df)
0 1 2 3 ... 1255 1256 1257 1258
A NaN NaN NaN NaN ... NaN NaN NaN NaN
AAL NaN NaN NaN NaN ... NaN NaN NaN NaN
AAP NaN NaN NaN NaN ... NaN NaN NaN NaN
AAPL NaN NaN NaN NaN ... NaN NaN NaN NaN
ABBV 48.56684 48.37228 47.95056 48.07312 ... 102.590 98.768 101.212 100.510
... ... ... ... ... ... ... ... ... ...
XYL 45.58400 45.60000 45.74000 45.96200 ... 64.504 61.854 61.596 61.036
YUM 51.14200 51.01800 51.17400 51.28400 ... 66.902 64.420 63.914 63.668
ZBH 48.59000 48.49200 48.57000 48.75000 ... 75.154 73.112 72.704 72.436
ZION 44.84400 44.76600 44.89400 45.08200 ... 73.972 71.734 71.516 71.580
ZTS 45.08600 45.02600 45.27400 45.39200 ... 83.002 80.224 80.000 80.116
[505 rows x 1259 columns]
The first four rows are all NaN because the rolling mean isn't defined for fewer than 5 rows.
If we do it again (making a total of two times):
full_df = full_df.rolling(window=5).mean()
print(full_df.head(9))
0 1 2 ... 1256 1257 1258
A NaN NaN NaN ... NaN NaN NaN
AAL NaN NaN NaN ... NaN NaN NaN
AAP NaN NaN NaN ... NaN NaN NaN
AAPL NaN NaN NaN ... NaN NaN NaN
ABBV NaN NaN NaN ... NaN NaN NaN
ABC NaN NaN NaN ... NaN NaN NaN
ABT NaN NaN NaN ... NaN NaN NaN
ACN NaN NaN NaN ... NaN NaN NaN
ADBE 49.619072 49.471424 49.192048 ... 108.3420 110.4848 110.4976
You can see the first 8 rows are all NaN since the fourth row reaches down to the eighth in the rolling mean. Given the size of your data frame (505 rows) if you ran the rolling mean 127 times, the entire df would be consumed withNaN values, and your for loop is doing it even more times than that, which is why your df is filled with NaN values.
Also, note that you're averaging across different stock tickers, which doesn't make sense. What I believe you want to be doing is averaging the rows, not the columns in which case you simply need to do
full_df = full_df.rolling(axis = 'columns', window=5).mean()
print(full_df)
0 1 2 3 4 5 ... 1253 1254 1255 1256 1257 1258
A NaN NaN NaN NaN 44.72600 44.1600 ... 73.926 73.720 73.006 71.744 70.836 69.762
AAL NaN NaN NaN NaN 14.42600 14.3760 ... 53.142 53.308 53.114 52.530 52.248 51.664
AAP NaN NaN NaN NaN 78.74000 78.7600 ... 120.742 120.016 118.074 115.468 114.054 112.642
AAPL NaN NaN NaN NaN 67.32592 66.9025 ... 168.996 168.330 166.128 163.834 163.046 161.468
ABBV NaN NaN NaN NaN 35.87200 36.1380 ... 116.384 117.992 116.384 113.824 112.888 113.168
... ... ... ... ... ... ... ... ... ... ... ... ... ...
XYL NaN NaN NaN NaN 27.84600 28.0840 ... 73.278 73.598 73.848 73.698 73.350 73.256
YUM NaN NaN NaN NaN 64.58000 64.3180 ... 85.504 85.168 84.454 83.118 82.316 81.424
ZBH NaN NaN NaN NaN 75.85600 75.8660 ... 126.284 126.974 126.886 126.044 125.316 124.048
ZION NaN NaN NaN NaN 24.44200 24.4820 ... 53.838 54.230 54.256 53.748 53.466 53.464
ZTS NaN NaN NaN NaN 33.37400 33.5600 ... 78.720 78.434 77.772 76.702 75.686 75.112
Again, your first four columns are not managed here.
To correct for that, we add one more term:
full_df = full_df.rolling(axis = 'columns', window=5, min_periods = 1).mean()
print(full_df)
0 1 2 3 4 5 ... 1253 1254 1255 1256 1257 1258
A 45.0800 44.8400 44.766667 44.7625 44.72600 44.1600 ... 73.926 73.720 73.006 71.744 70.836 69.762
AAL 14.7500 14.6050 14.493333 14.5350 14.42600 14.3760 ... 53.142 53.308 53.114 52.530 52.248 51.664
AAP 78.9000 78.6450 78.630000 78.7150 78.74000 78.7600 ... 120.742 120.016 118.074 115.468 114.054 112.642
AAPL 67.8542 68.2078 67.752800 67.4935 67.32592 66.9025 ... 168.996 168.330 166.128 163.834 163.046 161.468
ABBV 36.2500 36.0500 35.840000 35.6975 35.87200 36.1380 ... 116.384 117.992 116.384 113.824 112.888 113.168
... ... ... ... ... ... ... ... ... ... ... ... ... ...
XYL 27.0900 27.2750 27.500000 27.6900 27.84600 28.0840 ... 73.278 73.598 73.848 73.698 73.350 73.256
YUM 65.3000 64.9250 64.866667 64.7525 64.58000 64.3180 ... 85.504 85.168 84.454 83.118 82.316 81.424
ZBH 75.8500 75.7500 75.646667 75.7350 75.85600 75.8660 ... 126.284 126.974 126.886 126.044 125.316 124.048
ZION 24.1400 24.1750 24.280000 24.3950 24.44200 24.4820 ... 53.838 54.230 54.256 53.748 53.466 53.464
ZTS 33.0500 33.1550 33.350000 33.4000 33.37400 33.5600 ... 78.720 78.434 77.772 76.702 75.686 75.112
In the above data frame the first column is just the value at time 0, the second is the average of times 0 and 1, the third is the average of times 0, 1, and 2, etc. The window size continues growing until you get to your value of window=5, at which point the window moves along with your rolling average. Note that you can also center the rolling mean if you want to rather than have a trailing window. You can see the documentation here.
I'm not quite sure what you are trying to do. Could you explain in more detail, what the goal of your operation is? I assume you try to build a moinving (rolling) average with a 5 day intervall across each asset and calculate the mean prices for each intervall.
But first, let me answer why you see all the NaNs:
What you are doing with this code below, is thare you are just doing the same operation over and over again and the result of it is always NaNs. That is, because you are doing something weird with the dict and the first rows all have NaNs so average will also be NaNs. And since you overwrite the variable full_df by the result of this computation, your dataframe shows only NaNs.
for i in full_df:
full_df = full_df.rolling(window=5).mean()
Let me explain in more detail. You were (probably) trying to iterate over the dataframe (using a window of 5 days) and compute the mean. The function full_df.rolling(window=5).mean() already does exactly that, and the output is a new dataframe, with the mean of each window over the entire datafrane full_df. By running this function in a loop, without additional indexing you are only running the same function across the entire dataframe over an over again.
Maybe this will get you what you want:
import pandas as pd
df = pd.read_csv("all_stocks_5yr.csv", index_col=[0,6])
means = df.rolling(window=5).mean()

Is there a possibility to use a bigger List in phython?

For school I have to make a project about wifisignals and I am trying put the data in a dataframe.
There are 208.000 rows of data.
And when it comes to the code below, the code does not complete. The code is like it is stuck in an infinite loop.
But when I use only a 1000 rows my program works. So I think that my list are to small if that is possible.
Do bigger Lists exist in phython? Or is it because I use bad coding?
Thanks in advance.
edit 1:
(data is the original dataframe and wifiinfo is a column of that)
i have this format:
df = pd.DataFrame(columns=['Sender','Time','Date','Place','X','Y','Bezetting','SSID','BSSID','Signal'])
And i am trying to fill SSID, BSSID and Signal from the Column WifiInfo for this i have to split the data.
this is how 1 WifiInfo looks like:
ODISEE#88-1d-fc-41-dc-50:-83,ODISEE#88-1d-fc-2c-c0-00:-72,ODISEE#88-1d-fc-41-d2-d0:-82,CiscoC5976#58-6d-8f-19-14-38:-78,CiscoC5959#58-6d-8f-19-13-f4:-93,SNB#c8-d7-19-6f-be-b7:-99,ODISEE#88-1d-fc-2c-c5-70:-94,HackingDemo#58-6d-8f-19-11-48:-156,ODISEE#88-1d-fc-30-d4-40:-85,ODISEE#88-1d-fc-41-ac-50:-100
My current approach looks like:
for index, row in data.iterrows():
bezettingList = list()
ssidList = list()
bssidList = list()
signalList = list()
#WifiInfo splitting
wifis = row.WifiInfo.split(',')
for wifi in wifis:
#split wifi and add to List
ssid, bssid = wifi.split('#')
bssid, signal = bssid.split(':')
ssidList.append(ssid)
bssidList.append(bssid)
signalList.append(int(signal))
#add bezettingen to List
bezettingen = row.Bezetting.split(',')
for bezetting in bezettingen:
bezettingList.append(bezetting)
#add list to dataframe
df.loc[index,'SSID'] = ssidList
df.loc[index,'BSSID'] = bssidList
df.loc[index,'Signal'] = signalList
df.loc[index,'Bezetting'] = bezettingList
df.head()
IIUC, you need to first explode the row by commas so that this:
SSID BSSID Signal WifiInfo
0 NaN NaN NaN ODISEE#88-1d-fc-41-dc-50:-83,ODISEE#88- ...
becomes this:
SSID BSSID Signal WifiInfo
0 NaN NaN NaN ODISEE#88-1d-fc-41-dc-50:-83
1 NaN NaN NaN ODISEE#88-1d-fc-2c-c0-00:-72
2 NaN NaN NaN ODISEE#88-1d-fc-41-d2-d0:-82
3 NaN NaN NaN CiscoC5976#58-6d-8f-19-14-38:-78
4 NaN NaN NaN CiscoC5959#58-6d-8f-19-13-f4:-93
5 NaN NaN NaN SNB#c8-d7-19-6f-be-b7:-99
6 NaN NaN NaN ODISEE#88-1d-fc-2c-c5-70:-94
7 NaN NaN NaN HackingDemo#58-6d-8f-19-11-48:-156
8 NaN NaN NaN ODISEE#88-1d-fc-30-d4-40:-85
9 NaN NaN NaN ODISEE#88-1d-fc-41-ac-50:-100
# use `.explode`
data = data.assign(WifiInfo=data.WifiInfo.str.split(',')).explode('WifiInfo')
Now you could use .str.extract:
data['SSID'] = data['WifiInfo'].str.extract(r'(.*)#')
data['BSSID'] = data['WifiInfo'].str.extract(r'#(.*):')
data['Signal'] = data['WifiInfo'].str.extract(r':(.*)')
SSID BSSID Signal WifiInfo
0 ODISEE 88-1d-fc-41-dc-50 -83 ODISEE#88-1d-fc-41-dc-50:-83
1 ODISEE 88-1d-fc-2c-c0-00 -72 ODISEE#88-1d-fc-2c-c0-00:-72
2 ODISEE 88-1d-fc-41-d2-d0 -82 ODISEE#88-1d-fc-41-d2-d0:-82
3 CiscoC5976 58-6d-8f-19-14-38 -78 CiscoC5976#58-6d-8f-19-14-38:-78
4 CiscoC5959 58-6d-8f-19-13-f4 -93 CiscoC5959#58-6d-8f-19-13-f4:-93
5 SNB c8-d7-19-6f-be-b7 -99 SNB#c8-d7-19-6f-be-b7:-99
6 ODISEE 88-1d-fc-2c-c5-70 -94 ODISEE#88-1d-fc-2c-c5-70:-94
7 HackingDemo 58-6d-8f-19-11-48 -156 HackingDemo#58-6d-8f-19-11-48:-156
8 ODISEE 88-1d-fc-30-d4-40 -85 ODISEE#88-1d-fc-30-d4-40:-85
9 ODISEE 88-1d-fc-41-ac-50 -100 ODISEE#88-1d-fc-41-ac-50:-100
If you want to keep data grouped after column explosion, I'd assign an ID for each group of entries first:
data['Group'] = pd.factorize(data['WifiInfo'])[0]+1
SSID BSSID Signal WifiInfo Group
0 NaN NaN NaN ODISEE#88-1d-fc-41-dc-50:-83,ODISEE#88- ... 1
1 NaN NaN NaN ASD#22-1d-fc-41-dc-50:-83,QWERTY#88- ... 2
# after you explode the column
SSID BSSID Signal WifiInfo Group
ODISEE 88-1d-fc-41-dc-50 -83 ODISEE#88-1d-fc-41-dc-50:-83 1
ODISEE 88-1d-fc-2c-c0-00 -72 ODISEE#88-1d-fc-2c-c0-00:-72 1
...
...
ASD 22-1d-fc-41-dc-50 -83 ASD#88-1d-fc-41-dc-50:-83 2
QWERTY 88-1d-fc-2c-c0-00 -72 QWERTY#88-1d-fc-2c-c0-00:-72 2

How to handle Cells containing only NaN values in pandas?

I am setting up a stock price prediction data set,in that while applying the following code for Ichimoku Cloud Indicator:
from datetime import timedelta
high_9 = df['High'].rolling(window= 9).max()
low_9 = df['Low'].rolling(window= 9).min()
df['tenkan_sen'] = (high_9 + low_9) /2
high_26 = df['High'].rolling(window= 26).max()
low_26 = df['Low'].rolling(window= 26).min()
df['kijun_sen'] = (high_26 + low_26) /2
# this is to extend the 'df' in future for 26 days
# the 'df' here is numerical indexed df
# the problem is here
last_index = df.iloc[-1:].index[0]
last_date = df['Date'].iloc[-1].date()
for i in range(26):
df.loc[last_index+1 +i, 'Date'] = last_date + timedelta(days=i)
df['senkou_span_a'] = ((df['tenkan_sen'] + df['kijun_sen']) / 2).shift(26)
high_52 = df['High'].rolling(window= 52).max()
low_52 = df['Low'].rolling(window= 52).min()
df['senkou_span_b'] = ((high_52 + low_52) /2).shift(26)
# most charting softwares dont plot this line
df['chikou_span'] = df['Close'].shift(-26)
The above code works great but the problem is while extending to the next 26 time steps(rows) in 'senoku span a' and 'b' columns it turns other rest columns row's values to NaN.
So i need the help to make 'Senoku span a' & 'Senoku span b' predicted rows in my data set without making other rows vlaues to NaN.
The current output is:
Date Open High Low Close Senoku span a Senoku span b
2019-03-16 50 51 52 53 56.0 55.82
2019-03-17 NaN NaN NaN NaN 55.0 56.42
2019-03-18 NaN NaN NaN NaN 54.0 57.72
2019-03-19 NaN NaN NaN NaN 53.0 58.12
2019-03-20 NaN NaN NaN NaN 52.0 59.52
The expected output is:
Date Open High Low Close Senoku span a Senoku span b
2019-03-16 50 51 52 53 56.0 55.82
2019-03-17 55.0 56.42
2019-03-18 54.0 57.72
2019-03-19 53.0 58.12
2019-03-20 52.0 59.52

Append records to bottom of dataframe

I have two dataframes like the ones sampled below. I'm trying to append the records from one of the dataframes to the bottom of the first. So the final data frame should only have two columns. Instead I seem to be appending the columns from one dataframe on to the right side of the first. Does anyone see what I'm doing wrong?
Code:
appendDf=df1.append(df2)
df1
28343 \
0 42267
1 157180
2 186320
https://s.m.com/is/ime/M/ts/mized/5_fpx.tif
0 https://sl.com/is/i/M/...
1 https://sl.com/is/i/M/…
2 https://sl.com/is/im/M/...
df2
454 \
0 223
1 155
2 334
https://s.m.com/is/ime/M/ts/mized/5.tif
0 https://slret.com/is/i/M/...
1 https://slfdsd.com/is/i/M/…
2 https://slfd.com/is/im/M/...
appendDf.head()
28343 https://s.m.com/is/ime/M/ts/mized/5_fpx.tif 454 https://s.m.com/is/ime/M/ts/mized/5.tif
Your DataFrames do not seem to have column headers (I imagine the first row of your data is being used as the column headers), which is likely the root of your issue. When you append the second DataFrame, the program doesn't know which columns the data correspond to, so it adds them as new columns. See the following example:
import pandas as pd
df1 = pd.DataFrame([[28343, 'http://link1'], [42267, 'http://link2'],
[157180, 'http://link3'], [186320, 'http://link4']], columns=['ID','Link'])
df2 = pd.DataFrame([[454, 'http://link5'], [223, 'http://link6'],
[155, 'http://link7'], [334, 'http://link8']])
appendedDF = df1.append(df2)
Yields:
ID Link 0 1
0 28343.0 http://link1 NaN NaN
1 42267.0 http://link2 NaN NaN
2 157180.0 http://link3 NaN NaN
3 186320.0 http://link4 NaN NaN
0 NaN NaN 454.0 http://link5
1 NaN NaN 223.0 http://link6
2 NaN NaN 155.0 http://link7
3 NaN NaN 334.0 http://link8
Correct implementation:
import pandas as pd
df1 = pd.DataFrame([[28343, 'http://link1'], [42267, 'http://link2'],
[157180, 'http://link3'], [186320, 'http://link4']], columns=['ID','Link'])
df2 = pd.DataFrame([[454, 'http://link5'], [223, 'http://link6'],
[155, 'http://link7'], [334, 'http://link8']], columns=['ID','Link'])
appendedDF = df1.append(df2).reset_index(drop=True)
Yields:
ID Link
0 28343 http://link1
1 42267 http://link2
2 157180 http://link3
3 186320 http://link4
4 454 http://link5
5 223 http://link6
6 155 http://link7
7 334 http://link8

Categories

Resources