Refining a Python finance tracking program - python

I am trying to create a basic program that tracks retirement finances. What I have so far (below) takes input for ONE entry and stores it; the next time I run it, the previous values get wiped out. Ideally I would like a program that appends indefinitely to a list, so if I open it up two weeks from now I can see the current data in dict format and add to it. I envision running the script, entering account names and balances, closing it, and doing the same again at a later point.
A few questions:
How do I achieve that? I think I need some loop concept to get there.
Is there a more elegant way to enter the Account Name and Balance, rather than hardcoding them in the parameters like I have below? I tried input(), but it only runs for the Account Name, not the Balance (again, maybe loop related).
I'd like to add some error checking, so if the user doesn't enter a valid account, say (HSA, 401k or Roth), they are prompted to re-enter. Where should that input/check occur?
Thanks!
from datetime import datetime

Account = {
    "name": [],
    "month": [],
    "day": [],
    "year": [],
    "balance": []
}
finance = [Account]

def finance_data(name, month, day, year, balance):
    Account['name'].append(name)
    Account['month'].append(month)
    Account['day'].append(day)
    Account['year'].append(year)
    Account['balance'].append(balance)
    print(finance)

finance_data('HSA',
             datetime.now().month,
             datetime.now().day,
             datetime.now().year,
             500)

When you run a script and put values in variables defined in the code, the values only last as long as the program runs. Each time you run the script, it starts over from the initial state defined in the code and thus does not save state from the last run.
What you need is persistent data that outlives the script. Normally we accomplish this by creating a database, using the script to write new data to the database, and then, on the next run, reading the old values back to remember what happened in the past. Since your use case is small, though, it probably doesn't need a full-blown database system. Instead, I would recommend writing the data to a text file and reading it back to get the old data. You could do that as follows:
# Read about file I/O in Python here:
# https://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files
# Note: "r+" opens the file for both reading and writing, but it must already
# exist -- create an empty dataFile.txt first.
dataStore = open("dataFile.txt", "r+")

def loadDataToAccount(dataStore):
    Account = {
        "name": [],
        "month": [],
        "day": [],
        "year": [],
        "balance": []
    }
    for line in dataStore.read().splitlines():
        (name, month, day, year, balance) = line.split("|")
        Account['name'].append(name)
        Account['month'].append(month)
        Account['day'].append(day)
        Account['year'].append(year)
        Account['balance'].append(balance)
    return Account

Account = loadDataToAccount(dataStore)
Here I am assuming that we organize the text file so that each line is one entry, with "|"-separated fields, such as:
bob|12|30|1994|500
rob|11|29|1993|499
Thus we can parse the text into the Account dictionary. Now, let's look at entering the data into the text file:
def addData(Account, dataStore):
    name = input("Enter the account name: ")      # use raw_input() on Python 2
    balance = input("Enter the account balance: ")
    # put name and balance validation here!
    month = datetime.now().month
    day = datetime.now().day
    year = datetime.now().year
    # add to Account
    Account['name'].append(name)
    Account['month'].append(month)
    Account['day'].append(day)
    Account['year'].append(year)
    Account['balance'].append(balance)
    # also add to our dataStore; month/day/year are ints, so convert them to
    # str before joining, or the concatenation will raise a TypeError
    dataStore.write(name + "|" + str(month) + "|" + str(day) + "|" + str(year) + "|" + balance + "\n")

addData(Account, dataStore)
Notice how I wrote to the dataStore in the same "|"-separated format I defined when reading it in. Without writing it to the text file, the data would not be saved and available the next time you run the script.
Also, I used input() to get the name and balance so that it is more dynamic. After collecting the input, you can add an if statement to make sure the name is valid, and then use some sort of while loop to keep asking for the name until the user enters a valid one.
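For your third question, that validation loop could look something like this (a Python 3 sketch; the injectable `prompt` parameter is my addition so the loop is easy to test in isolation):

```python
VALID_ACCOUNTS = {"HSA", "401k", "Roth"}  # the valid names from the question

def is_valid_account(name):
    return name in VALID_ACCOUNTS

def ask_account_name(prompt=input):
    # keep asking until the user enters one of the valid account names
    name = prompt("Enter the account name: ")
    while not is_valid_account(name):
        name = prompt("Invalid account. Please enter HSA, 401k or Roth: ")
    return name
```

The check lives right where the input is collected, so addData only ever sees a valid name.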
You would probably want to extract the code that adds the values to Account into a helper function, since we use the same code twice.
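That helper might look like this (a sketch; the name `addToAccount` is mine):

```python
def addToAccount(Account, name, month, day, year, balance):
    # one place that knows the Account dict's structure, shared by the
    # file-loading path and the user-entry path
    Account['name'].append(name)
    Account['month'].append(month)
    Account['day'].append(day)
    Account['year'].append(year)
    Account['balance'].append(balance)
```

Both loadDataToAccount and addData would then call addToAccount instead of repeating the five append lines.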
Good luck!

Related

How to correctly handle "ERROR:root:b'{"detail":"Error: Ticker \'SREF\' not found"}"?

So I'm making a program that calculates the dividend amount per share and the closing price of a stock that the user inputs. Right now, I'm in the middle of adding some input validation in case I, or someone else, accidentally make a typo. The code below does a good job of handling any exceptions that requests throws at me (you're going to have to put in your own API key if you want to test this):
import tiingo as t

# Ask for ticker and acquire share price
ticker = input('What is the ticker? ')

# Set Tiingo API key
config = {'session': True, 'api_key': 'my API key'}

# create a client instance
client = t.TiingoClient(config)

try:
    # Get latest prices, based on 3+ sources as JSON, sampled weekly
    ticker_price = client.get_ticker_price(ticker, frequency='weekly')
    # ticker_price is a list with a dictionary inside.
    # the following indexes the closing price from said dictionary and prints the output
    close_price = ticker_price[0]['close']
except:
    print('Stock was not found, please check for typos')
Problem is, when I run this, it prints ERROR:root:b'{"detail":"Error: Ticker 'SREF' not found"}'. How is this circumventing my try block, and how do I prevent it from popping up?

How to use AWS lambda in python to run multiple functions at once

I currently have code that gets back school data from a database and saves it to a csv file:
schoolID = '12345'

def getSchool(schoolID):
    School = SchoolsDB.find_one({"_id": ObjectId(schoolID)})
    return School

school = getSchool(schoolID)
school.to_csv(schoolID + ".csv")
It currently takes in a schoolID and runs one school at a time. I have tried putting it in a for loop so that it runs one school after another automatically, but I want to be able to run all schools at the same time.
I want to be able to use lambda to run all the schools at the same time, instead of one at a time. Does anyone know how to do this?
From a purely Python POV:
It looks like you are using some form of MongoDB. Rather than having the function take a single ID and executing once per school, why not pass it a list of IDs and find them all in one go:
def getSchools(list_of_school_ids):
    # $in matches every document whose _id is in the given list
    return SchoolsDB.find({"_id": {"$in": [ObjectId(i) for i in list_of_school_ids]}})

schools = getSchools(list_of_school_ids=["1234", "5678"])
Then just build one big CSV where each row is an entry from SchoolsDB - I'm coming at this completely blind, though.
If not, you could look at:
https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/
But my gut tells me it's overkill for your use case :)

Modifying code to default to option instead of asking user for selection?

I am not very experienced with Python; I'm just trying to make a modification to the code of a program I purchased.
The code runs in the command terminal and its function is to gather data. It gives you the option of gathering the data in either csv or json format. It prompts you once to ask which format, and a second time for confirmation:
Please enter output format (csv/json)
Do you want to extract data in {} format? (y/n)
I am trying to change this to just use csv by default, without being prompted to choose or confirm.
I believe the relevant code from the program is the following (this isn't all consecutive). How do I alter the first part to just be 'csv', and do I need to do anything beyond deleting the entire def getConfirmMessage block to wipe that from the code?
if spiderToExecute in range(1, 7):
    while True:
        outputFormat = click.prompt('Please enter output format (csv/json)', default='csv')
        outputFormat = ''.join(outputFormat).lower()
        if outputFormat in ['json', 'csv']:
            break
    settings.set('FEED_FORMAT', outputFormat)
    settings.set('FEED_URI', './data/{}.{}'.format(spiderConf['log_output_name_format'], outputFormat))
    settings.set('LOG_FILE', './log/{}.log'.format(spiderConf['log_output_name_format']))

def getConfirmMessage(spiderToExecute, outputFormat):
    confirmMessages = {
        1: 'Do you want to extract data from in {} format ?'.format(outputFormat),
        2: 'Do you want to extract data from in {} format ?'.format(outputFormat),
    }
    return confirmMessages[spiderToExecute]
edit: more code
if not click.confirm(getConfirmMessage(spiderToExecute, outputFormat), default=True):
    click.echo()
    click.secho(' Please relaunch the command to select new options! ', bg='red')
    click.echo()
    raise click.Abort()
Complete steps (maybe):
Remove the while loop and replace with outputFormat = "csv".
You should be able to completely remove the code block you posted (if not click.confirm...). If that doesn't work, you'll need to post the code for click.confirm.
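Putting step 1 concretely: the whole while loop shrinks to one assignment and the settings lines stay as they are. Sketched here as a function, with a plain dict standing in for your settings object so it can run outside the program (the spiderConf key is taken from your post):

```python
def configure_output(settings, spiderConf, spiderToExecute):
    # the prompt loop and the confirmation are gone: always use csv
    if spiderToExecute in range(1, 7):
        outputFormat = 'csv'
        settings['FEED_FORMAT'] = outputFormat
        settings['FEED_URI'] = './data/{}.{}'.format(spiderConf['log_output_name_format'], outputFormat)
        settings['LOG_FILE'] = './log/{}.log'.format(spiderConf['log_output_name_format'])
    return settings
```

In the real program you would keep the `settings.set(...)` calls; only the prompt/confirm code is removed.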

Cleansing Data with Updates - Mongodb + Python

I have imported the dataset into MongoDB but am not able to cleanse the data in Python. Please see the question and the script below. I need the answers for Scripts 1 & 2.
The task: import it into MongoDB, cleanse the data in Python, and update MongoDB with the cleaned data. Specifically, you'll be taking a people dataset where some of the birthday fields look like this:
{
    ...
    "birthday": ISODate("2011-03-17T11:21:36Z"),
    ...
}
And other birthday fields look like this:
{
    ...
    "birthday": "Thursday, March 17, 2011 at 7:21:36 AM",
    ...
}
MongoDB natively supports a Date datatype through BSON. This datatype is used in the first example, but a plain string is used in the second example. In this assessment, you'll complete the attached notebook to script a fix that makes all of the document's birthday field a Date.
Download the notebook and dataset to your notebook directory. Once you have the notebook up and running, and after you've updated your connection URI in the third cell, continue through the cells until you reach the fifth cell, where you'll import the dataset. This can take up to 10 minutes depending on the speed of your Internet connection and computing power of your computer.
After verifying that all of the documents have successfully been inserted into your cluster, you'll write a query in the 7th cell to find all of the documents that use a string for the birthday field.
To verify your understanding of the first part of this assessment, how many documents had a string value for the birthday field (the output of cell 8)?
Script 1
# Replace YYYY with a query on the people-raw collection that will return a
# cursor with only documents where the birthday field is a string
people_with_string_birthdays = YYYY

# This is the answer to verify you completed the lab:
people_with_string_birthdays.count()
Script 2
updates = []
# Again, we're updating several thousand documents, so this will take a little while
for person in people_with_string_birthdays:
    # Pymongo converts datetime objects into BSON Dates. The dateparser.parse function
    # returns a datetime object, so we can simply do the following to update the field
    # properly. Replace ZZZZ with the correct update operator
    updates.append(UpdateOne(
        {"_id": person["_id"]},
        {ZZZZ: {"birthday": dateparser.parse(person["birthday"])}}
    ))
    count += 1
    if count == batch_size:
        people_raw.bulk_write(updates)
        updates = []
        count = 0

if updates:
    people_raw.bulk_write(updates)
    count = 0

# If everything went well this should be zero
people_with_string_birthdays.count()
import json

with open("./people-raw.json") as dataset:
    array = {}
    for i in dataset:
        a = json.loads(i)
        if type(a["birthday"]) not in array:
            array[type(a["birthday"])] = 1
        else:
            array[type(a["birthday"])] += 1
    print(array)
Give the path to your people-raw.json file in the open() call if the JSON file is not in the same directory.
Ans: 10382
Script 1: YYYY = people_raw.find({"birthday": {"$type": "string"}})  # note: the field name is lowercase in the dataset
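For Script 2 (not covered by the answer above): the MongoDB update operator that replaces a field's value is $set, i.e. ZZZZ would become "$set". A self-contained sketch of what that update does to one document, using datetime.strptime in place of dateparser so it runs without extra packages:

```python
from datetime import datetime

def apply_set(document, set_spec):
    # minimal stand-in for MongoDB's $set semantics, for illustration only:
    # overwrite the named fields, leave the rest of the document untouched
    updated = dict(document)
    updated.update(set_spec)
    return updated

person = {"_id": 1, "birthday": "Thursday, March 17, 2011 at 7:21:36 AM"}
parsed = datetime.strptime(person["birthday"], "%A, %B %d, %Y at %I:%M:%S %p")
fixed = apply_set(person, {"birthday": parsed})
```

In the real script, pymongo then converts the datetime object into a BSON Date when the UpdateOne is written.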

Building a ranking list in Python: how to assign scores to contestants?

(I posted this in the wrong section of Stack Exchange before, sorry.)
I'm working on an assignment which is way above my head. I've tried for days on end to figure out how to do this, but I just can't get it right...
I have to make a ranking list in which I can enter a user, alter the user's score, register whether he/she paid, and display all users in order of highest score.
The first part I got to work with CSV; I've included only the basic part here to save space. The menu and import csv have been done. (I had to translate a lot from my native language, sorry if there is a mistake, I know it's a bad habit.)
import csv

more = True
while more:
    print("-" * 40)
    firstname = raw_input("What is the first name of the user?: ")
    with open("user.txt", "a") as scoreFile:
        scoreWrite = csv.writer(scoreFile)
        scoreWrite.writerow([firstname, "0", "no"])
    # no scoreFile.close() needed: the with block closes the file
    mr_dnr = raw_input("Need to enter more people? If so, enter 'yes' \n")
    more = mr_dnr in "yes \n"
This way I can enter the name. Now I need a second part (another option in the menu, of course) to:
let the user enter the name of the person
after that, enter the (new) score of that person.
So it needs to alter the second value of the matching entry in the csv file ("0") to something the user enters, without erasing the name already in the file.
Is this even possible? A kind user suggested using SQLite3, but this basic CSV stuff is already stretching far beyond my capabilities...
That user is right that SQLite3 would be a much better approach to this. If that is stretching your knowledge too far, I suggest having a directory called users and having one file per user. Use JSON (or pickle) to write the user information and overwrite the entire file each time you need to update it.
