I am currently building a cryptocurrency trading bot in Python based on time series analysis. While working on defining the buy and sell signals, I am confronted with the issue of finding the maximum affordable amount of coins to buy with a given stock of cash such that the cash will not go negative. For simplicity, we can assume that the minimum amount of coins to buy is 0.0001, so the smallest purchase costs 0.0001 times the current crypto price. How can I implement this in Python to find the maximum number of 0.0001-coin units I can buy with a given cash stock, such that the cash won't go negative but is used maximally?
It sounds like you're looking for ceiling and floor math functions.
https://docs.python.org/3/library/math.html
For example, if you have 10 dollars and each share costs $3, you can only buy 3 shares.
or in code:
import math
print(math.floor(10 / 3))
# prints 3
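Applied to the asker's 0.0001-coin increment, the same floor trick looks like this (the function name and the cash/price numbers are illustrative, not from the question):

```python
import math

def max_affordable(cash, price, step=0.0001):
    # Cost of one 0.0001-coin step, floored to a whole number of steps,
    # guarantees the remaining cash balance never goes negative.
    n_steps = math.floor(cash / (price * step))
    return n_steps * step

# e.g. $100 of cash at $30,000 per coin -> 33 steps of 0.0001 coins
amount = max_affordable(100.0, 30_000.0)
```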
I'm trying to use Python to detect if the price of a stock tends to go up and down easily in a short period.
The yellow line in the following pictures are the stock prices:
pic1
pic2
I consider the above 2 stock prices easy to go up and down.
The following stock prices are not.
pic3
pic4
pic5
I have all deal prices of the stocks.
I've tried normalizing the stock price and then calculating its standard deviation.
import pandas as pd
from datetime import date

today = date.today().strftime('%Y-%m-%d')
ticks = api.ticks(  # api is an object from the shioaji API
    contract=api.Contracts.Stocks[ticker_code],
    date=today,
)
# The type of ticks.close is "list"
pclose = pd.Series(ticks.close)
# normalized_std is used to decide if the stock price is easy to go up and down in a short period.
normalized_std = (pclose / pclose.mean()).std(ddof=0)
But the result is not good.
For example, pic1's normalized_std is 0.013441217877894908, pic2's normalized_std
is 0.011078299230201987. But pic3's normalized_std is 0.02169184908826346, and pic5's normalized_std is 0.0014295901932196125.
So it's hard to decide whether the price goes up and down too easily using the normalized_std variable.
Anyone knows if there's a method to detect this kind of stock price patterns?
Can machine learning be applied to solve this problem?
Just an idea, but it looks like you want a lot of absolute movement with little relative movement. For example: down $1.10, up $1.00, down $0.90, up $1.10.
Absolute movement would be the sum of the absolute value of the moves:
$1.10 + $1.00 + $0.90 + $1.10 = $4.10
Relative movement would be the sum of the moves:
-$1.10 + $1.00 - $0.90 + $1.10 = $0.10
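A minimal sketch of this comparison in Python, using a made-up price series whose successive moves reproduce the example above:

```python
# Made-up prices whose moves are: down $1.10, up $1.00, down $0.90, up $1.10
prices = [10.00, 8.90, 9.90, 9.00, 10.10]
moves = [b - a for a, b in zip(prices, prices[1:])]

absolute_movement = sum(abs(m) for m in moves)  # about 4.10: lots of churn
relative_movement = sum(moves)                  # about 0.10: little net change

# A large absolute-to-net ratio flags the choppy pattern the asker describes.
choppiness = absolute_movement / max(abs(relative_movement), 1e-9)
```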
A fancier method, if you have access to real-time quotes, is to calculate the price of a short-expiration straddle (the price of the at-the-money put plus the at-the-money call); it is a good indicator of expected movement.
So this is my program so far.
This is my outcome.
This is what it is supposed to look like.
This is the context behind the program.
Instructors in a community college are paid on a schedule that provides a salary based on their number of years of teaching experience. For each year of experience after the first year, up to 10 years, the instructor receives a 2% increase over the preceding value. Suppose the initial salary of an instructor is $50,000. In the second year, this instructor's salary will be $51,000 ($50,000 + $50,000 * 0.02 = $51,000). In the third year, the salary will be $52,020 ($51,000 + $51,000 * 0.02 = $52,020), and so on. In addition, the instructor is required to deposit 5% of the salary each year into a retirement fund account. For example, if the salary is $50,000 in a year, $2500 ($50,000 * 0.05) will be deposited into his/her retirement fund account.

Write a program to do the following. Ask the user to enter the first year's salary. Calculate and display the salary each year in the first 10 years. Also, calculate and display a running total of the instructor's retirement fund after each year.
Please keep in mind that I am an absolute beginner when it comes to python programming, and programming in general, so I understand that my code is probably not efficient or even the easiest way of doing things.
My problem is not accumulating the salary, but getting the retirement fund to carry over: each year the 5% deposit should be computed from the new salary and then added to the money already in the fund.
Can anyone please help me make the retirement fund add up as the context above requires?
In your code, you set
new_salary = salary
and then update the value of new_salary on each iteration, but never salary, which is what retirement_fund is calculated from on line 13. Change line 13 to use the updated salary instead.
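Since the asker's full code isn't shown here, this is only a sketch of the corrected loop (variable names are assumptions): update salary itself each year, and add each year's 5% deposit to a running total.

```python
salary = 50_000.0        # first year's salary (would come from user input)
retirement_fund = 0.0
salaries, funds = [], []

for year in range(1, 11):
    retirement_fund += salary * 0.05   # deposit 5% of this year's salary
    salaries.append(salary)
    funds.append(retirement_fund)
    print(f"Year {year}: salary ${salary:,.2f}, fund ${retirement_fund:,.2f}")
    salary *= 1.02                     # 2% raise for years 2 through 10
```

With $50,000 as input this prints $51,000.00 for year 2 and $52,020.00 for year 3, matching the assignment's worked example.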
Hopefully this is a quick and easy question that is not a repeat. I am looking for a built in numpy function (though it could also be a part of another library) which, given an original loan amount, monthly payment amount, and number of payments, can calculate the interest rate. I see numpy has the following function:
np.pmt(Interest_Rate/Payments_Year, Years*Payments_Year, Principal)
This is close to what I am looking for but would instead prefer a function which would provide the interest rate when given the three parameters I listed above. Thank you.
You want numpy.rate; in current NumPy the financial functions have moved to the separate numpy_financial package, where it is numpy_financial.rate.
An example usage: suppose I'm making 10 monthly payments of $200 to pay off an initial loan of $1500. Then the monthly interest rate is approximately 5.6%:
>>> import numpy_financial as npf
>>> monthly_payment = 200.0
>>> number_of_payments = 10
>>> initial_loan_amount = 1500.0
>>> npf.rate(number_of_payments, -monthly_payment, initial_loan_amount, 0.0)
0.056044636451588969
Note the sign convention here: the payment is negative (it's money leaving my account), while the initial loan amount is positive.
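As a quick sanity check of that ~5.6% figure, done without any library, simulating the loan month by month should bring the balance to roughly zero after the tenth payment:

```python
rate = 0.056044636451588969   # the monthly rate returned above
balance = 1500.0

for _ in range(10):
    balance = balance * (1 + rate) - 200.0   # interest accrues, then payment

# balance is now approximately zero: the loan is exactly paid off
```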
You should also take a look at the when parameter: depending on whether interest is accrued after each payment or before, you'll want to select the value of when accordingly. The above example models the situation where the first round of interest is added before the first payment is made (when='end'). If instead the payment is made at the beginning of each month, and interest accrued at the end of the month (when='begin'), the effective interest rate ends up higher, a touch over 7%.
>>> npf.rate(number_of_payments, -monthly_payment, initial_loan_amount, 0.0, when='begin')
0.070550580696092852
I am new to Python and I am currently stuck on this learning problem.
I am trying to make a program which will output the lowest payment that is a multiple of 10 needed to pay off a credit card balance. Each payment is made once a month and has to be the same for each month in order to satisfy the requirements of the problem, and monthly interest must also be taken into account.
def debt(payment):
    balance = 3329
    annualInterestRate = 0.2
    month = 1
    finalbalance = balance
    while month <= 12:
        # Monthly interest rate
        rate = annualInterestRate / 12.0
        # Monthly unpaid balance
        finalbalance = round(finalbalance - payment, 2)
        # Updated balance each month
        finalbalance = round(finalbalance + (rate * finalbalance), 2)
        # Moves month forward
        month = month + 1
    # Shows final figures
    print('Lowest Payment: ' + str(payment))

debt(10)
The above works fine except that I lack a mechanism to supply ever greater multiples of ten until the final balance becomes less than zero.
I posted a similar question here with different code, which I deleted as I felt it could not go anywhere and I had subsequently rewritten my code anyway.
You need to restructure your function. Instead of payment, use balance as the parameter: your function should output your payment, not take it in as a parameter. Then, since you're paying monthly, the final payment (whatever it is) must be greater than balance / 12, because that is what it takes to pay down the core debt alone, without interest.
So, now off we go to find the worst thing possible: the entire balance unpaid plus interest. That would be (annual rate x balance) + balance. Divide that by 12, and you get the max amount you should pay per month.
There, now that you have your min and max, you have a start and end point for a loop. Just increase the payment on each pass until you reach the minimum amount that also covers the interest.
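A minimal sketch of that search, reusing the asker's numbers and the asker's ordering (payment subtracted before interest accrues), with the loop stepping the payment by $10 as the problem requires:

```python
def year_end_balance(payment, balance=3329.0, annual_rate=0.2):
    monthly_rate = annual_rate / 12.0
    for _ in range(12):
        balance = round(balance - payment, 2)             # pay first
        balance = round(balance * (1 + monthly_rate), 2)  # then accrue interest
    return balance

# Bump the payment by $10 until the year-end balance is fully paid off
payment = 10
while year_end_balance(payment) > 0:
    payment += 10

print('Lowest Payment:', payment)
```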
Have a situation where I am given a total ticket count, and cumulative ticket sale data as follows:
Total Tickets Available: 300
Day 1: 15 tickets sold to date
Day 2: 20 tickets sold to date
Day 3: 25 tickets sold to date
Day 4: 30 tickets sold to date
Day 5: 46 tickets sold to date
The number of tickets sold is nonlinear, and I'm asked if someone plans to buy a ticket on Day 23, what is the probability he will get a ticket?
I've been looking at quite a few libraries used for curve fitting, like numpy, PyLab, and sage, but I've been a bit overwhelmed since statistics is not in my background. How would I easily calculate a probability given this set of data? If it helps, I also have ticket sale data at other locations, though the curve there should be somewhat different.
The best answer to this question would require more information about the problem: are people more or less likely to buy a ticket as the date approaches (and how much)? Are there advertising events that will transiently affect the rate of sales? And so on.
We don't have access to that information, though, so let's just assume, as a first approximation, that the rate of ticket sales is constant. Since sales occur basically at random, they might be best modeled as a Poisson process. Note that this does not account for the fact that many people will buy more than one ticket, but I don't think that will make much difference for the results; perhaps a real statistician could chime in here. Also: I'm going to discuss the constant-rate Poisson process here, but note that since you mentioned the rate is decidedly NOT constant, you could look into variable-rate Poisson processes as a next step.
To model a Poisson process, all you need is the average rate of ticket sales. In your example data, sales-per-day are [15, 5, 5, 5, 16], so the average rate is about 9.2 tickets per day. We've already sold 46 tickets, so there are 254 remaining.
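Those per-day numbers come from differencing the cumulative totals in the question:

```python
cumulative = [15, 20, 25, 30, 46]    # tickets sold to date, days 1 through 5
daily = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]
# daily is [15, 5, 5, 5, 16]

rate = sum(daily) / len(daily)       # 46 tickets over 5 days = 9.2 per day
remaining = 300 - cumulative[-1]     # 254 of the 300 tickets left
```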
From here, it is simple to ask, "Given a rate of 9.2 tpd, what is the probability of selling fewer than 254 tickets in 23 days?" (ignoring the fact that you can't sell more than 300 tickets). The way to calculate this is with a cumulative distribution function (see here for the CDF of a Poisson distribution).
On average, we would expect to sell 23 * 9.2 = 211.6 tickets after 23 days, so in the language of probability distributions, the expectation value is 211.6. The CDF tells us, "given an expectation value λ, what is the probability of seeing a value <= x". You can do the math yourself or ask scipy to do it for you:
>>> import scipy.stats
>>> scipy.stats.poisson(9.2 * 23).cdf(254-1)
0.99747286634158705
So this tells us: IF ticket sales can be accurately represented as a Poisson process and IF the average rate of ticket sales really is 9.2 tpd, then the probability of at least one ticket being available after 23 more days is 99.7%.
Now let's say someone wants to bring a group of 50 friends and wants to know the probability of getting all 50 tickets if they buy them in 25 days (rephrase the question as "If we expect on average to sell 9.2 * 25 tickets, what is the probability of selling <= (254-50) tickets?"):
>>> scipy.stats.poisson(9.2 * 25).cdf(254-50)
0.044301801145630537
So the probability of having 50 tickets available after 25 days is about 4%.