So, I am trying to create a Python program that reads a password-protected Excel file. The program is intended to report any names expiring between 90 and 105 days from today. The problem I am running into right now is getting the program to read multiple rows; I've been using xlrd, and I was hoping that 'counter' would change the row being read, but only the first row is ever read.
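For reference, here is a minimal sketch of the row loop I was attempting with xlrd (note that xlrd cannot decrypt password-protected workbooks, and versions 2.0+ dropped .xlsx support, so this assumes the file can be opened at all):
import xlrd

book = xlrd.open_workbook('Practice Inventory.xlsx')
sheet = book.sheet_by_index(0)
# iterate over every row instead of reading one fixed row;
# start at 1 to skip the header row
for counter in range(1, sheet.nrows):
    print(sheet.row_values(counter))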
Edit: Solved. I was able to use the code below to get my program to display entries that are expiring within my time field.
import pandas as pd
from datetime import date, timedelta

today = date.today()
# window boundaries, 90 and 105 days out, kept as dates so the
# comparison below is chronological rather than string-based
ninety_days = today + timedelta(days=90)
hundred_five_days = today + timedelta(days=105)

wkbk = pd.read_excel('Practice Inventory.xlsx', 'Sheet1')
mask = (wkbk['Expiration'] >= pd.Timestamp(ninety_days)) & (wkbk['Expiration'] <= pd.Timestamp(hundred_five_days))
wkbk = wkbk.loc[mask]
print(wkbk)
Use Pandas!
import pandas as pd

df = pd.read_excel('Practice Inventory.xlsx')
# combine both bounds into a single boolean mask
mask = (df['days to expiration'] >= 90) & (df['days to expiration'] <= 120)
final_df = df[mask]
The final_df will hold all the rows whose days to expiration fall between 90 and 120, inclusive.
I am working on a personal project collecting data on Covid-19 cases. The data set only shows the cumulative total of Covid-19 cases per state. I would like to add a column that contains the new cases added each day. This is what I have so far:
import pandas as pd
from datetime import date
from datetime import timedelta
import numpy as np
#read the CSV from github
hist_US_State = pd.read_csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv")
#some code to get yesterday's date and the day before which is needed later.
today = date.today()
yesterday = today - timedelta(days = 1)
yesterday = str(yesterday)
day_before_yesterday = today - timedelta(days = 2)
day_before_yesterday = str(day_before_yesterday)
#Extracting yesterday's and the day before cases and combine them in one dataframe
yesterday_cases = hist_US_State[hist_US_State["date"] == yesterday]
day_before_yesterday_cases = hist_US_State[hist_US_State["date"] == day_before_yesterday]
total_cases = pd.DataFrame()
total_cases = day_before_yesterday_cases.append(yesterday_cases)
#Adding a new column called "new_cases" and this is where I get into trouble.
total_cases["new_cases"] = yesterday_cases["cases"] - day_before_yesterday_cases["cases"]
Can you please point out what I am doing wrong?
Because you defined total_cases as a concatenation (via append) of yesterday_cases and day_before_yesterday_cases, its number of rows equals the sum of the rows of the other two dataframes. It looks like yesterday_cases and day_before_yesterday_cases both have 55 rows, so total_cases has 110 rows. Your last line is therefore trying to assign 55 values to a column of 110 rows, and because pandas aligns on the index rather than on the state, the subtraction itself produces NaN wherever the two indexes don't share labels.
You may either want to reshape your data so that each date is its own column, or align the two days on a shared key such as the state before subtracting; a sketch of the latter follows.
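A minimal sketch of that second option, assuming the goal is the per-state increase between the two days (column names taken from the NYT csv above):
import pandas as pd
from datetime import date, timedelta

hist_US_State = pd.read_csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv")
yesterday = str(date.today() - timedelta(days=1))
day_before_yesterday = str(date.today() - timedelta(days=2))
# index both days by state so the subtraction aligns on state names
# rather than on the original row positions
y = hist_US_State[hist_US_State["date"] == yesterday].set_index("state")
d = hist_US_State[hist_US_State["date"] == day_before_yesterday].set_index("state")
y["new_cases"] = y["cases"] - d["cases"]
print(y[["date", "cases", "new_cases"]])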
I am fairly new to python and coding in general.
I have a big data file that provides daily data for the period 2011-2018 for roughly 300 stock tickers.
The data is a .csv file with circa 150k rows and looks as follows (short example):
Date,Symbol,ShortExemptVolume,ShortVolume,TotalVolume
20110103,AAWW,0.0,28369,78113.0
20110103,AMD,0.0,3183556,8095093.0
20110103,AMRS,0.0,14196,18811.0
20110103,ARAY,0.0,31685,77976.0
20110103,ARCC,0.0,177208,423768.0
20110103,ASCMA,0.0,3930,26527.0
20110103,ATI,0.0,193772,301287.0
20110103,ATSG,0.0,23659,72965.0
20110103,AVID,0.0,7211,18896.0
20110103,BMRN,0.0,21740,213974.0
20110103,CAMP,0.0,2000,11401.0
20110103,CIEN,0.0,625165,1309490.0
20110103,COWN,0.0,3195,24293.0
20110103,CSV,0.0,6133,25394.0
I have a function that allows me to filter for a specific symbol and get 10 observations before and after a specified date (could be any date between 2011 and 2018).
import pandas as pd

def get_data(issue_date, stock_ticker):
    df = pd.read_csv(r'D:\Project\Data\Short_Interest\exampledata.csv')
    df['Date'] = pd.to_datetime(df['Date'], format="%Y%m%d")
    short = df.loc[df.Symbol.eq(stock_ticker)]
    # get the index label of the row of interest
    ix = short[short.Date.eq(issue_date)].index[0]
    # translate that label into a positional index
    iloc_ix = short.index.get_loc(ix)
    # slice the +/-10 surrounding rows (+11 because slice ends are exclusive),
    # i.e. 10 trading days on either side
    short_data = short.iloc[iloc_ix-10: iloc_ix+11]
    return short_data
I want to create a script that iterates over a list of 'issue_dates' and 'stock_tickers'. The list (a .csv) looks as follows:
ARAY,07/08/2017
ARAY,24/04/2014
ACETQ,16/11/2015
ACETQ,16/11/2015
NVLNA,15/08/2014
ATSG,29/09/2017
ATI,24/05/2016
MDRX,18/06/2013
MDRX,18/06/2013
AMAGX,10/05/2017
AMAGX,14/02/2014
AMD,14/09/2016
To break down my problem and question I would like to know how to do the following:
First, how do I load the inputs?
Second, how do I call the function on each input?
And last, how do I accumulate all the function returns in one dataframe?
To load the inputs and call the function for each row, iterate over the csv file, pass each row's values to the function, and accumulate the results in a list.
I modified your function a bit: removed the DataFrame creation so it is only done once, and added a try/except block to account for missing dates or tickers (your example data didn't match up too well). The dates in the second csv look like they are day/month/year, so I parsed them with that format.
import csv
import datetime

import pandas as pd

def get_data(df, issue_date, stock_ticker):
    '''Return the rows for the ticker centered on the issue date.'''
    short = df.loc[df.Symbol.eq(stock_ticker)]
    try:
        # get the index label of the row of interest
        ix = short[short.Date.eq(issue_date)].index[0]
        # translate that label into a positional index
        iloc_ix = short.index.get_loc(ix)
        # slice the +/-10 surrounding rows (+11 because slice ends are exclusive),
        # i.e. 10 trading days on either side
        short_data = short.iloc[iloc_ix-10: iloc_ix+11]
    except IndexError:
        msg = f'no data for {stock_ticker} on {issue_date}'
        #log.info(msg)
        print(msg)
        short_data = None
    return short_data

df = pd.read_csv(datafile)  # datafile: path to your daily short-interest csv
df['Date'] = pd.to_datetime(df['Date'], format="%Y%m%d")

results = []
with open('issues.csv') as issues:
    for ticker, date in csv.reader(issues):
        day, month, year = map(int, date.split('/'))
        # dt = datetime.datetime.strptime(date, r'%d/%m/%Y')
        date = datetime.date(year, month, day)
        s = get_data(df, date, ticker)
        results.append(s)
        # print(s)
Creating a single DataFrame or table for all that info may be problematic, especially since the date ranges are all different; you should probably ask a separate question about that, and its minimal reproducible example should just include a few small slices with a couple of different date ranges and tickers. One possible starting point is sketched below.
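A hedged sketch of one way to combine the collected results, assuming each successful lookup returned a DataFrame slice: pd.concat with keys keeps every slice identifiable even though the date ranges differ.
import pandas as pd

# drop the None placeholders left by failed lookups
frames = [r for r in results if r is not None]
# the 'query' index level records which lookup each block of rows came from
combined = pd.concat(frames, keys=range(len(frames)), names=['query', 'row'])
print(combined.head(25))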
I have an efficiency question for you. I wrote some code to analyze a report that holds over 70k records across more than 400 unique organizations, so that my supervisor can enter the year/month/date they are interested in and have the matching information pop out.
The beginning of my code is:
import pandas as pd
import numpy as np
import datetime
main_data = pd.read_excel("UpdatedData.xlsx", encoding= 'utf8')
#column names from DF
epi_expose = "EpitheliumExposureSeverity"
sloughing = "EpitheliumSloughingPercentageSurface"
organization = "OrgName"
region = "Region"
date = "DeathOn"
#list storage of definitions
sl_list = ["",'None','Mild','Mild to Moderate']
epi_list= ['Moderate','Moderate to Severe','Severe']
#Create DF with four columns
df = main_data[[region, organization, epi_expose, sloughing, date]]
#filter it down to months
starting_date = datetime.date(2017,2,1)
ending_date = datetime.date(2017,2,28)
df = df[(df[date] > starting_date) & (df[date] < ending_date)]
I am then performing conditional filtering below to get counts by region and organization. It works, but it is slow. Is there a more efficient way to query my DataFrame and build one that ONLY has the rows whose dates fall between the two bounds? Or is this the most efficient way without altering how the underlying database is set up?
I can provide more of my code, but if I filter the data down to one month before exporting to Excel, the code runs in a matter of seconds, so I am not concerned about speed beyond getting the correct date fields.
Thank you!
If this question is unclear, I am very open to constructive criticism.
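A possible speedup sketch, assuming the DeathOn column parses cleanly as datetimes: convert it once with pd.to_datetime, then filter with Series.between, which is vectorized and inclusive of both endpoints (note the exclusive > / < comparison above silently drops the first and last day of the month).
import pandas as pd

main_data = pd.read_excel("UpdatedData.xlsx")
main_data["DeathOn"] = pd.to_datetime(main_data["DeathOn"])
# keep only February 2017, endpoints included
df = main_data[main_data["DeathOn"].between("2017-02-01", "2017-02-28")]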
I have an Excel table with about 50 rows of data, with the first column in each row being a date. I need to access all the data for only one date, and that date appears about 1-5 times. It is the most recent date, so I've already sorted the table by date with the most recent at the top.
So my goal is to store that date in a variable and then have Python look only for that variable (that date) and take only the rows corresponding to it. I need to use this code on hundreds of other Excel files as well, so it would need to arbitrarily take the most recent date (always at the top, though).
My current code below simply takes the first 5 rows because I know that's how many times this date occurs.
import os
import pandas as pd

path = 'Z:\\folderwithcsvfile'
for filename in os.listdir(path):
    file_path = os.path.join(path, filename)
    if os.path.isfile(file_path):
        broken_df = pd.read_csv(file_path)
        df3 = broken_df['DATE']
        df4 = broken_df['TRADE ID']
        df5 = broken_df['AVAILABLE STOCK']
        df6 = broken_df['AMOUNT']
        df7 = broken_df['SALE PRICE']
        print(df3)
        #print(df3.head(6))
        print(df4.head(6))
        print(df5.head(6))
        print(df6.head(6))
        print(df7.head(6))
This is a relatively simple filtering operation. You want only the rows that carry the latest date, so I assume that an acceptable result will be a filtered DataFrame with just those rows.
Here's a simple CSV that is similar to your structure:
DATE,TRADE ID,AVAILABLE STOCK
10/11/2016,123,123
10/11/2016,123,123
10/10/2016,123,123
10/9/2016,123,123
10/11/2016,123,123
Note that I mixed up the dates a little bit, because it's hacky and error-prone to just assume that the latest dates will be on the top. The following script will filter it appropriately:
import pandas as pd

df = pd.read_csv('data.csv')
# convert the DATE column to datetimes
df['DATE'] = pd.to_datetime(df['DATE'])
# find the latest datetime
latest_date = df['DATE'].max()
# use boolean filtering to keep only the rows that equal the latest date
latest_rows = df[df['DATE'] == latest_date]
print(latest_rows)
# now you can perform your operations on latest_rows
In my example, this will print:
DATE TRADE ID AVAILABLE STOCK
0 2016-10-11 123 123
1 2016-10-11 123 123
4 2016-10-11 123 123
I've used:
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
in pandas (Python) to get Yahoo's historical data, but it cannot show today's price (the market has not yet closed). How can I resolve this problem? Thanks in advance.
import pandas
import pandas.io.data
import datetime
import urllib2
import csv

YAHOO_TODAY = "http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=sd1ohgl1vl1"

def get_quote_today(symbol):
    response = urllib2.urlopen(YAHOO_TODAY % symbol)
    reader = csv.reader(response, delimiter=",", quotechar='"')
    for row in reader:
        if row[0] == symbol:
            return row

## main ##
symbol = "TSLA"

history = pandas.io.data.DataReader(symbol, "yahoo", start="2014/1/1")
print history.tail(2)

today = datetime.date.today()
df = pandas.DataFrame(index=pandas.DatetimeIndex(start=today, end=today, freq="D"),
                      columns=["Open", "High", "Low", "Close", "Volume", "Adj Close"],
                      dtype=float)
row = get_quote_today(symbol)
df.ix[0] = map(float, row[2:])

history = history.append(df)
print "today is %s" % today
print history.tail(2)
Just to complete perigee's answer: it took me quite some time to find a way to append the data. The output looks like this:
Open High Low Close Volume Adj Close
Date
2014-02-04 180.7 181.60 176.20 178.73 4686300 178.73
2014-02-05 178.3 180.59 169.36 174.42 7268000 174.42
today is 2014-02-06
Open High Low Close Volume Adj Close
2014-02-05 178.30 180.59 169.36 174.420 7268000 174.420
2014-02-06 176.36 180.11 176.00 178.793 5199297 178.793
The workaround is to fetch the data with urllib from:
http://download.finance.yahoo.com/d/quotes.csv?s=yhoo&f=sd1ohgl1l1v
and then add it to the dataframe.
This code uses the pandas read_csv method to get the new quote from Yahoo, and it checks whether the new quote falls on the current date or a new date, in order to either update the last record in history or append a new record.
If you add a while True loop and a sleep around the new_quote section, the code can keep refreshing the quote during the day (see the sketch after the code below).
It also duplicates the last trade price to fill in both Close and Adjusted Close, since the intraday close and adjusted close are always the same value.
import pandas as pd
import pandas.io.data as web

def get_quote_today(symbol):
    url = "http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=d1t1ohgl1vl1"
    new_quote = pd.read_csv(url % symbol,
                            names=[u'Date', u'time', u'Open', u'High', u'Low',
                                   u'Close', u'Volume', u'Adj Close'])
    # generate a timestamp from the date and time fields
    stamp = pd.to_datetime(new_quote.Date + " " + new_quote.time)
    new_quote.index = stamp
    return new_quote.iloc[:, 2:]

if __name__ == "__main__":
    symbol = "TSLA"
    history = web.DataReader(symbol, "yahoo", start="2014/1/1")
    print history.tail()
    new_quote = get_quote_today(symbol)
    if new_quote.index > history.index[-1]:
        if new_quote.index[-1].date() == history.index[-1].date():
            # if both quotes are for the same date, update history's last record
            history.iloc[-1] = new_quote.iloc[-1]
        else:
            history = history.append(new_quote)
    history.tail()
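A minimal sketch of the refresh loop mentioned above, assuming get_quote_today, history, and symbol are defined as in the code:
import time

# poll for a fresh quote once a minute and fold it into history
while True:
    new_quote = get_quote_today(symbol)
    if new_quote.index[-1].date() == history.index[-1].date():
        # still the same trading day: overwrite today's provisional record
        history.iloc[-1] = new_quote.iloc[-1]
    else:
        # first quote of a new day: append a new record
        history = history.append(new_quote)
    time.sleep(60)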
So from trying this out and looking at the dataframe, it doesn't look possible. You tell it to go from a specific day until today, yet the dataframe stops at May 31st, 2013. This tells me that Yahoo probably has not made the data for the past couple of days available yet, or somehow pandas is just not picking it up. It is not just missing one day; it is missing three.
If I do the following:
>>> df = DataReader("yhoo", "yahoo", datetime.datetime(2013, 6, 1),datetime.datetime.today())
>>> len(df)
0
it shows me that there is simply no data to pick up for those days so far. If there is a way around this, I cannot figure it out; it just seems that the data is not available to you yet, hard as that is to believe.
The module from pandas doesn't work anymore, because Google and Yahoo no longer provide support. Instead, you can create a function that takes the data directly from Google Finance using the URL. Here is part of the code to do this:
import csv
import datetime
import re
import codecs
import requests
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
You can write a function to get data from Google Finance using the URL:
def get_google_finance_intraday(ticker, period=60, days=1, exchange='NASD'):
    """
    Retrieve intraday stock data from Google Finance.

    Parameters
    ----------
    ticker : str
        Company ticker symbol.
    period : int
        Interval between stock values in seconds.
        i = 60 corresponds to one-minute tick data
        i = 86400 corresponds to daily data
    days : int
        Number of days of data to retrieve.
    exchange : str
        Exchange from which the quotes should be fetched.

    Returns
    -------
    df : pandas.DataFrame
        DataFrame containing the opening price, high price, low price,
        closing price, and volume. The index contains the times associated
        with the retrieved price values.
    """
    # build url
    url = ('https://finance.google.com/finance/getprices'
           '?p={days}d&f=d,o,h,l,c,v&q={ticker}&i={period}&x={exchange}').format(
        ticker=ticker, period=period, days=days, exchange=exchange)
    page = requests.get(url)
    reader = csv.reader(codecs.iterdecode(page.content.splitlines(), "utf-8"))
    columns = ['Open', 'High', 'Low', 'Close', 'Volume']
    rows = []
    times = []
    for row in reader:
        # data rows start with either 'a<unix timestamp>' or a tick offset
        if re.match(r'^[a\d]', row[0]):
            if row[0].startswith('a'):
                start = datetime.datetime.fromtimestamp(int(row[0][1:]))
                times.append(start)
            else:
                times.append(start + datetime.timedelta(seconds=period * int(row[0])))
            rows.append(list(map(float, row[1:])))
    if len(rows):
        return pd.DataFrame(rows, index=pd.DatetimeIndex(times, name='Date'),
                            columns=columns)
    else:
        return pd.DataFrame(rows, index=pd.DatetimeIndex(times, name='Date'))
Now you can just call the function with the ticker you want, in my case AAPL, and the result is a pandas DataFrame containing the opening price, high price, low price, closing price, and volume.
ticker = 'AAPL'
period = 60
days = 1
exchange = 'NASD'
df = get_google_finance_intraday(ticker, period=period, days=days)
df
The simplest way to extract Indian stock price data into Python is to use the nsepy library.
In case you do not have the nsepy library do the following:
pip install nsepy
The following code allows you to extract the HDFC Bank stock price from 12 May 2015 to 18 May 2020.
from nsepy import get_history
from datetime import date

dfc = get_history(symbol="HDFCBANK", start=date(2015, 5, 12), end=date(2020, 5, 18))
This is so far the easiest code I have found.
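The call returns a pandas DataFrame indexed by date; a quick usage sketch (the OHLCV column names are assumed from nsepy's standard output):
from nsepy import get_history
from datetime import date

dfc = get_history(symbol="HDFCBANK", start=date(2015, 5, 12), end=date(2020, 5, 18))
print(dfc.head())           # first few daily rows, indexed by date
print(dfc['Close'].mean())  # e.g. the average close over the period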