Hopefully this is a quick and easy question that is not a repeat. I am looking for a built-in NumPy function (though it could also be part of another library) which, given an original loan amount, a monthly payment amount, and a number of payments, can calculate the interest rate. I see NumPy has the following function:
np.pmt(Interest_Rate/Payments_Year, Years*Payments_Year, Principal)
This is close to what I am looking for but would instead prefer a function which would provide the interest rate when given the three parameters I listed above. Thank you.
You want the rate function. It used to be available as numpy.rate, but NumPy's financial functions have since been split out into the separate numpy_financial package, so it now lives at numpy_financial.rate.
An example usage: suppose I'm making 10 monthly payments of $200 to pay off an initial loan of $1500. Then the monthly interest rate is approximately 5.6%:
>>> import numpy_financial as npf
>>> monthly_payment = 200.0
>>> number_of_payments = 10
>>> initial_loan_amount = 1500.0
>>> npf.rate(number_of_payments, -monthly_payment, initial_loan_amount, 0.0)
0.056044636451588969
Note the sign convention here: the payment is negative (it's money leaving my account), while the initial loan amount is positive.
You should also take a look at the when parameter: depending on whether interest accrues before or after each payment, you'll want to choose its value accordingly. The example above models the situation where the first round of interest is added before the first payment is made (when='end'). If instead each payment is made at the beginning of the month and interest accrues at the end of the month (when='begin'), the effective interest rate ends up higher, a touch over 7%:
>>> npf.rate(number_of_payments, -monthly_payment, initial_loan_amount, 0.0, when='begin')
0.070550580696092852
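As a quick sanity check that needs no library at all, you can amortize the loan by hand: accruing the reported monthly rate and then subtracting each payment should drive the balance to essentially zero after the tenth payment.

```python
rate = 0.056044636451588969   # monthly rate reported above (when='end')
balance = 1500.0              # initial loan amount
for _ in range(10):
    balance = balance * (1 + rate) - 200.0   # accrue interest, then pay

# balance is now essentially zero (up to floating-point noise)
print(balance)
```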
Trying to solve the following problem but not sure how to continue:
Suppose a logistics company would like to simulate demand for a given product.
Assume that there are Good and Bad weeks.
On a good week, the demand is normally distributed with mean 200 and standard deviation 50.
On a bad week, the demand is normally distributed with mean 100 and standard deviation 30.
As a practical constraint, you should round the decimal part of demand to the nearest integer and set it to zero if it is ever negative.
Additionally, we should assume that a week being good or bad is serially correlated across time.
Conditional on a given week being Good, the next week remains Good with probability 0.9. Similarly, conditional on a given week being Bad, the next week remains Bad with probability 0.9.
You are to simulate a time series of demand for 100 weeks, assuming the first week starts Good. Also, plot the demand over time.
This is what I have so far:
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
simulated_demand = [rng.normal(200, 50)]
for t in range(1, 100):
    if simulated_demand[t-1] == rng.normal():
        simulated_demand.append(rng.normal(150, 70))
    else:
        simulated_demand.append(rng.normal(50, 15))
simulated_demand = pd.DataFrame(simulated_demand, columns=['Demand Time Series'])
simulated_demand.plot(style='r--', figsize=(10,3))
How can I fix the if condition?
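One way to fix it, sketched here as an explicit two-state Markov chain (the `params` dict and state names are my own naming, not from the problem statement): rather than comparing the previous demand to a fresh normal draw, track whether the current week is Good or Bad and flip the state with probability 0.1 each week. Note the problem's stated parameters are N(200, 50) for Good weeks and N(100, 30) for Bad weeks, with demand rounded and floored at zero:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

params = {'Good': (200, 50), 'Bad': (100, 30)}   # (mean, std) per state
state = 'Good'                                   # week 1 starts Good
demand = []
for _ in range(100):
    mean, std = params[state]
    demand.append(max(round(rng.normal(mean, std)), 0))  # round, floor at 0
    if rng.random() > 0.9:                       # switch state w.p. 0.1
        state = 'Bad' if state == 'Good' else 'Good'

simulated_demand = pd.DataFrame(demand, columns=['Demand Time Series'])
simulated_demand.plot(style='r--', figsize=(10, 3))
```

The key change from the original attempt is that the Good/Bad state is tracked explicitly and persisted with probability 0.9, instead of being inferred by comparing the previous demand to a fresh random draw.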
I have a set of, say, 10,000 contracts (loan account numbers). The contracts run for specific durations, say 24, 48, or 84 months, and my pool consists of a mix of these contracts with different durations. Assume that at the start of this month I have 100 contracts amounting to 10,000 USD. After 2 months, a few accounts/contracts are closed prematurely (early payoff) and a few are extended. I need to simulate the data to maintain a constant value (an amount of 10,000 USD). That means I need to know how many new contracts I need to add, say, 2 months from now so that the value of my portfolio remains at 10,000 USD. Can someone help me with a technique to simulate this? Preferably in R, Python, or SAS.
Add a payoff date element to each contract object. Then, do:
from datetime import date, timedelta

horizon = date.today() + timedelta(days=60)  # roughly two months out
need = 0
for c in contracts:
    need += int(horizon > c.payoff_date or c.paid_off)
print(100 - (len(contracts) - need))  # new contracts needed to stay at 100
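To make that concrete, here is a self-contained sketch; the Contract class and the sample data are hypothetical, purely for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Contract:
    payoff_date: date       # scheduled (or early) payoff date
    paid_off: bool = False  # already prepaid?

today = date(2024, 1, 1)
contracts = [
    Contract(payoff_date=today + timedelta(days=30)),                  # matures in 1 month
    Contract(payoff_date=today + timedelta(days=365)),                 # still running
    Contract(payoff_date=today + timedelta(days=365), paid_off=True),  # prepaid early
]

horizon = today + timedelta(days=60)   # roughly two months out
leaving = sum(horizon > c.payoff_date or c.paid_off for c in contracts)
target_count = 3   # keep the pool at this many contracts
new_needed = target_count - (len(contracts) - leaving)
print(new_needed)   # → 2
```

This keeps the pool at a constant contract count; if contract values differ, you would sum remaining values against the 10,000 USD target instead of counting contracts.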
I have a financial dataset with monthly aggregates. I know the real world average for each measure.
I am trying to build some dummy transactional data using Python. I do not want the dummy transactional data to be entirely random. I want to model it around the real world averages I have.
Eg - If from the real data, the monthly total profit is $1000 and the total transactions are 5, then the average profit per transaction is $200.
I want to create dummy transactions that are modelled around this real world average of $200.
This is how I did it :
import pandas as pd
from random import gauss
bucket = []
for _ in range(5):
    value = [int(gauss(200, 50))]
    bucket += value
transactions = pd.DataFrame({'Amount': bucket})
Now, the challenge for me is that I have to randomize the identifiers too.
For eg, I know for a fact that there are three buyers in total. Let's call them A, B and C.
These three have done those 5 transactions, and I want to randomly assign them when I create the dummy transactional data. However, I also know that A is very likely to do many more transactions than B and C. To keep my dummy data close to real-life scenarios, I want to assign probabilities to the occurrence of these buyers in my dummy transactional data.
Let's say I want it like this:
A : 60% appearance
B : 20% appearance
C : 20% appearance
How can I achieve this?
Strictly speaking, what you are describing is a sampling weight rather than a probability of appearing: you want every transaction to have a 60% chance of being assigned to A. For that, take a dict as input holding each user's buying probability, expand it into a list in those proportions, and then randomly pick a buyer from the list. Something like below:
import random

# buy percentages of the users
buy_percentage = {'A': 0.6, 'B': 0.2, 'C': 0.2}
# number of purchases
base = 100

buy_list = list()
for buyer, percentage in buy_percentage.items():
    buy_user = [buyer for _ in range(0, int(percentage * base))]
    buy_list.extend(buy_user)

for _ in range(0, base):
    # randomly gets a buyer but makes sure that your ratio is maintained
    buyer = random.choice(buy_list)
    # your code to get the buying price goes below
UPDATE:
Alternatively, the answer given in the link below can be used. This solution is better in my opinion:
A weighted version of random.choice
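Since Python 3.6, the same weighted pick is also built into the standard library: random.choices accepts a weights argument directly, so no expanded list is needed:

```python
import random

buyers = ['A', 'B', 'C']
weights = [0.6, 0.2, 0.2]

# draw one weighted buyer per transaction (5 dummy transactions here)
assigned = random.choices(buyers, weights=weights, k=5)
print(assigned)   # e.g. ['A', 'A', 'C', 'A', 'B'] -- A appears most often
```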
I am new to Python and I am currently stuck on this learning problem.
I am trying to make a program which will output the lowest multiple of 10 required to pay off a credit card balance. Each payment is made once a month and has to be the same every month in order to satisfy the requirements of the problem, and monthly interest must also be taken into account.
def debt(payment):
    balance = 3329
    annualInterestRate = 0.2
    month = 1
    finalbalance = balance
    while month <= 12:
        # monthly interest rate
        rate = (annualInterestRate / 12.0)
        # monthly unpaid balance
        finalbalance = round(finalbalance - payment, 2)
        # updated balance each month
        finalbalance = round(finalbalance + (rate * finalbalance), 2)
        # moves month forward
        month = month + 1
    # shows final figures
    print('Lowest Payment: ' + str(payment))

debt(10)
The above works fine except that I am lacking a mechanism to supply ever-greater multiples of ten until the final balance drops to zero or below.
I posted a similar question here with different code, but deleted it as I felt it could not go anywhere, and I have since rewritten my code anyway.
You need to restructure your function. Instead of payment, use balance as the parameter: your function should output the payment, not take it in. Then, since you're paying monthly, the final answer (whatever it is) must be greater than balance / 12, because that is what it would take to pay off the principal alone, without interest.
Now for the worst case possible: the entire balance left unpaid all year, plus interest. That would be (annual rate × balance) + balance. Divide that by 12, and you get the maximum amount you should ever need to pay per month.
There, now that you have your min and max, you have start and end points for a loop. Just step the payment up by 10 each iteration until you find the minimum amount that pays off the balance, interest included.
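Putting that together, here is a minimal sketch of the search loop, reusing the figures from the question (3329 balance, 20% annual rate, 12 months) and stepping by multiples of ten as the problem requires:

```python
def final_balance(payment, balance=3329, annual_rate=0.2):
    """Balance left after 12 months of fixed monthly payments."""
    monthly_rate = annual_rate / 12.0
    for _ in range(12):
        balance = round(balance - payment, 2)                  # pay
        balance = round(balance + monthly_rate * balance, 2)   # accrue interest
    return balance

payment = 10
while final_balance(payment) > 0:   # not paid off yet: try next multiple of 10
    payment += 10
print('Lowest Payment: ' + str(payment))
```

Separating the 12-month simulation into a function keeps the search loop itself trivial; a bisection over the min/max bounds described above would find the answer in fewer steps, but for multiples of ten a linear scan is plenty fast.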
Have a situation where I am given a total ticket count, and cumulative ticket sale data as follows:
Total Tickets Available: 300
Day 1: 15 tickets sold to date
Day 2: 20 tickets sold to date
Day 3: 25 tickets sold to date
Day 4: 30 tickets sold to date
Day 5: 46 tickets sold to date
The number of tickets sold is nonlinear, and I'm asked if someone plans to buy a ticket on Day 23, what is the probability he will get a ticket?
I've been looking at quite a few libraries used for curve fitting, like numpy, PyLab, and sage, but I've been a bit overwhelmed since statistics is not in my background. How would I easily calculate a probability given this set of data? If it helps, I also have ticket sale data for other locations; their curves should be somewhat different.
The best answer to this question would require more information about the problem: are people more or less likely to buy a ticket as the date approaches (and how much more)? Are there advertising events that will transiently affect the rate of sales? And so on.
We don't have access to that information, though, so let's just assume, as a first approximation, that the rate of ticket sales is constant. Since sales occur basically at random, they might be best modeled as a Poisson process. Note that this does not account for the fact that many people will buy more than one ticket, but I don't think that will make much difference for the results; perhaps a real statistician could chime in here. Also: I'm going to discuss the constant-rate Poisson process here, but since you mentioned the rate is decidedly NOT constant, you could look into variable-rate (inhomogeneous) Poisson processes as a next step.
To model a Poisson process, all you need is the average rate of ticket sales. In your example data, sales-per-day are [15, 5, 5, 5, 16], so the average rate is about 9.2 tickets per day. We've already sold 46 tickets, so there are 254 remaining.
From here, it is simple to ask, "Given a rate of 9.2 tpd, what is the probability of selling fewer than 254 tickets in 23 days?" (ignoring the fact that you can't sell more than 300 tickets). The way to calculate this is with a cumulative distribution function (see here for the CDF of a Poisson distribution).
On average, we would expect to sell 23 * 9.2 = 211.6 tickets after 23 days, so in the language of probability distributions, the expectation value is 211.6. The CDF tells us, "given an expectation value λ, what is the probability of seeing a value <= x". You can do the math yourself or ask scipy to do it for you:
>>> import scipy.stats
>>> scipy.stats.poisson(9.2 * 23).cdf(254-1)
0.99747286634158705
So this tells us: IF ticket sales can be accurately represented as a Poisson process and IF the average rate of ticket sales really is 9.2 tpd, then the probability of at least one ticket being available after 23 more days is 99.7%.
Now let's say someone wants to bring a group of 50 friends and wants to know the probability of getting all 50 tickets if they buy them in 25 days (rephrase the question as "If we expect on average to sell 9.2 * 25 tickets, what is the probability of selling <= (254-50) tickets?"):
>>> scipy.stats.poisson(9.2 * 25).cdf(254-50)
0.044301801145630537
So the probability of having 50 tickets available after 25 days is about 4%.
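If the closed-form CDF feels opaque, both numbers can be cross-checked with a quick Monte Carlo simulation; this just redraws the same Poisson model many times, using the 9.2 tpd rate estimated above:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# 23 more days at 9.2 tickets/day: how often do sales stay under 254?
sold_23 = rng.poisson(9.2 * 23, size=trials)
print(np.mean(sold_23 <= 253))      # close to 0.997

# 25 more days: how often are at least 50 of the 254 tickets left?
sold_25 = rng.poisson(9.2 * 25, size=trials)
print(np.mean(sold_25 <= 254 - 50)) # close to 0.044
```

The simulated frequencies agree with the scipy CDF values to within sampling noise, which is a reassuring check that the CDF is being used correctly.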