I'm trying to learn Python with a project where I'm reading data from a bike power meter. Right now I'm just calculating the average power from start to finish, by adding each power reading to a total sum variable and dividing it by the number of readings.
I'd like to calculate the average power over 20 minutes, and if possible keep recalculating the 20-minute average every 30 seconds after the first 20 minutes, comparing each value to the previous one and storing it if it's higher. That way, at the end I would have the highest 20-minute average power I held during, say, a one-hour workout, no matter where in that hour it happened.
Data from the power meter is event-based; as far as I can tell it doesn't arrive at regular intervals, but definitely once a second or faster.
This is the base of my code so far:
def average_power(power, count):
    global PM1_sumwatt
    global PM1_avgwatt
    PM1_sumwatt = PM1_sumwatt + power
    PM1_avgwatt = PM1_sumwatt / count
    PM1_avgLog = open(PM1_avgFile, 'w')
    PM1_avgLog.write("<div id=\"pwr\"> %d W</div>" % (PM1_avgwatt))
    PM1_avgLog.close()

def power_data(eventCount, pedalDiff, pedalPowerRatio, cadence, accumPower, instantPower):
    global PM1_avgcount
    if WCORRECT1: calibpower = instantPower + (CF1w)
    else: calibpower = instantPower * (CF1x)
    power_meter.update(int(calibpower))
    PM1_avgcount = PM1_avgcount + 1
    average_power(int(calibpower), PM1_avgcount)

power = BicyclePower(readnode, network, callbacks = {'onDevicePaired': device_found,
                                                     'onPowerData': power_data})

# Starting PM readings
power.open(ChannelID(PM1, 11, 0))
Not quite sure how to tackle this! Any help or pointer is much appreciated!
If you are reading data in real time, I assume you are reading it in a while loop:
sum = 0
number_of_readings = 0
while True:  # forever
    new_value = float(input())  # here I read from the console input
    sum += new_value
    number_of_readings += 1
    average = sum / number_of_readings
    print(average)
Here I type a number in the console and press enter to simulate your bike power meter.
>>> 1
1.0
>>> 3
2.0
>>> 4
2.6666666666666665
>>> 2
2.5
Now, if you want to make a moving average, you must store the readings that you want to average, because you will need to remove them later, once they are too old. A list is perfect for that:
Here is a solution averaging the last n readings:
n = 2
last_n_readings = []
while True:  # forever
    # add a new reading
    new_value = int(input())  # here I read from the console input
    last_n_readings.append(new_value)
    # discard an old one (except when we do not have enough readings yet)
    if len(last_n_readings) > n:
        last_n_readings.pop(0)
    # print the average of all the readings still in the list
    print(sum(last_n_readings) / len(last_n_readings))
Which gives:
>>> 1
1.0
>>> 3
2.0
>>> 4
3.5
>>> 2
3.0
Note that lists are not very efficient when removing elements near the start, so there are more efficient ways to do this (circular buffers), but I am trying to keep it simple ;)
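As an aside, Python's standard library already ships that circular buffer: `collections.deque` with `maxlen` discards the oldest element automatically. A minimal sketch (same n = 2 window, with the input loop replaced by a fixed list so it runs on its own):

```python
from collections import deque

n = 2
last_n_readings = deque(maxlen=n)  # appending beyond maxlen drops the oldest item

for new_value in [1, 3, 4, 2]:     # stand-in for the console input above
    last_n_readings.append(new_value)
    print(sum(last_n_readings) / len(last_n_readings))
```

This prints 1.0, 2.0, 3.5, 3.0, matching the list version, but the automatic discard is O(1), unlike `pop(0)` on a list.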
You could use this by estimating how many readings per second you get and choosing n so that you average over 20 minutes.
If you want to truly average all the readings that are less than 20 minutes old, you need to record not only the readings but also the times when you read them, so you can remove the old readings when they get more than 20 minutes old. If this is what you need, tell me and I will expand my answer with an example.
You can use a pandas DataFrame to store the power output for each instant.
Assuming you store one value every 30 seconds, you can keep them all in the DataFrame.
Then calculate a 40-data-point moving average using pandas' rolling function.
Take the maximum value you get after applying the rolling function; this is your final result.
Refer to the docs: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html
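A rough sketch of that suggestion (the numbers here are made up; with one sample every 30 seconds, a window of 40 samples spans 20 minutes):

```python
import pandas as pd

# fake power samples, one every 30 seconds
power = pd.Series([150, 180, 210, 240, 270, 300])

window = 2                        # use 40 for 20 minutes of 30-second samples
rolling_avg = power.rolling(window).mean()
best = rolling_avg.max()          # highest moving average over the whole ride
print(best)                       # 285.0 for this toy series
```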
I've been running some code for an hour or so using a rand.int function. The code models rolling a ten-faced die six times in a row; every roll has to come up with the same number, and the code tracks how many tries it takes for that to happen.
success = 0
times = 0
count = 0
total = 0
for h in range(0, 100):
    for i in range(0, 10):
        times = 0
        while success == 0:
            numbers = [0,0,0,0,0,0,0,0,0,0]
            for j in range(0, 6):
                x = int(random.randint(0, 9))
                numbers[x] = 1
            count = numbers.count(1)
            if count == 1:
                success = 1
            else:
                times += 1
        print(i)
        total += times
        success = 0
    randtst = open("RandomTesting.txt", "a")
    randtst.write(str(total / 10) + "\n")
    randtst.close()
Running this code, the results have been going into a file, the contents of which are here:
https://pastebin.com/7kRK1Z5f
And taking the average of these numbers using
newtotal = 0
totalamounts = 0
with open('RandomTesting.txt', 'rt') as rndtxt:
    for myline in rndtxt:
        newtotal += float(myline)
        totalamounts += 1
print(newtotal / totalamounts)
Which returns 742073.7449342106. This number is (I think) incorrect, as it is not near 10^6. I tried clearing the file's contents and running again, but to no avail; the number is nowhere near 10^6. Can anyone see a problem with this?
Note: I am not asking for fixes to the code or anything; I am asking whether something has gone wrong to produce the above number rather than 100,000.
There are several issues working against you here. Bottom line up front:
your code doesn't do what you described as your intent;
you currently have no yardstick for measuring whether your results agree with the theoretical answer; and
your expectations regarding the correct answer are incorrect.
I felt that your code was overly complex for the task you were describing, so I wrote my own version from scratch. I factored out the basic experiment of rolling six 10-sided dice and checking to see if the outcomes were all equal by creating a list of length 6 comprised of 10-sided die rolls. Borrowing shamelessly from BoarGules' comment, I threw the results into a set—which only stores unique elements—and counted the size of the set. The dice are all the same value if and only if the size of the set is 1. I kept repeating this while the number of distinct elements was greater than 1, maintaining a tally of how many trials that required, and returned the number of trials once identical die rolls were obtained.
That basic experiment is then run for any desired number of replications, with the results placed in a numpy array. The resulting data was processed by numpy and scipy to yield the average number of trials and a 95% confidence interval for the mean. The confidence interval uses the estimated variability of the results to construct a lower and an upper bound for the mean. The bounds produced this way should contain the true mean for 95% of estimates generated in this way if the underlying assumptions are met, and address the second point in my BLUF.
Here's the code:
import random
import scipy.stats as st
import numpy as np

NUM_DIGITS = 6
SAMPLE_SIZE = 1000

def expt():
    num_trials = 1
    while len(set([random.randrange(10) for _ in range(NUM_DIGITS)])) > 1:
        num_trials += 1
    return num_trials

data = np.array([expt() for _ in range(SAMPLE_SIZE)])
mu_hat = np.mean(data)
ci = st.t.interval(alpha=0.95, df=SAMPLE_SIZE-1, loc=mu_hat, scale=st.sem(data))
print(mu_hat, ci)
The probability of producing 6 identical results of a particular value from a 10-sided die is 10^-6, but there are 10 possible particular values, so the overall probability of producing all duplicates is 10 * 10^-6, or 10^-5. Consequently, the expected number of trials until you obtain a set of duplicates is 10^5. The code above took a little over 5 minutes to run on my computer, and produced 102493.559 (96461.16185897154, 108525.95614102845) as the output. Rounding to integers, this means that the average number of trials was 102493 and we're 95% confident that the true mean lies somewhere between 96461 and 108526. This particular range contains 10^5, i.e., it is consistent with the expected value. Rerunning the program will yield different numbers, but 95% of such runs should also contain the expected value, and the handful that don't should still be close.
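The arithmetic in that paragraph is easy to double-check: the number of trials to the first success follows a geometric distribution, whose mean is 1/p.

```python
from fractions import Fraction

# probability that six 10-sided dice all show the same face:
# 10 equally likely shared values, each with probability (1/10)**6
p = 10 * Fraction(1, 10)**6
assert p == Fraction(1, 100_000)

# mean of a geometric distribution is 1/p trials
assert 1 / p == 100_000
```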
Might I suggest: if you're working with whole integers, you should be getting a whole number back instead of a floating-point value (if I'm understanding what you're trying to do).
##randtst.write(str(total / 10)+"\n") Original
##randtst.write(str(total // 10)+"\n")
Using floor division instead of the division sign will round the number down to a whole number, which is more ideal for what you're trying to do.
If you ARE using floating-point numbers, perhaps use % instead. This will not only divide your number, but ONLY return the remainder.
% is modulo in Python
// is floor division in Python
Those operators will keep your numbers stable and easier to work with if your total is a floating-point number.
If this isn't the case, you will have to account for every digit behind the decimal point.
And if this IS the case, your result will never reach 10^6, because the line totalling your value is stuck in a loop.
I hope this helps you in any way, and if not, please let me know, as I'm also learning Python.
I was trying to solve the following Problem Statement:
In the city A, there is a disease x, spreading at the rate of 2. The
rate doubles after every 3 days.
If 2 people were initially infected with the disease, how many people
will be affected in total after 100 days?
(Hint: N(i+1) = N(i) + a*N(i)*t, where N(i) is the number of patients on day i and N(i+1) the number of patients the day after, a is the increase rate, and t is the number of days.)
This is the Python code I tried in order to solve it:
rate = 2
rate_of_rate = 2  # the rate doubles every gap days
gap = 3
initially_infected = 2
final_day = 100

infected = initially_infected
days_passed = 1
while days_passed != final_day:
    if days_passed % gap == 0:
        rate *= rate_of_rate
    infected = infected * rate
    days_passed += 1

print(infected)
The answer expected is: 658781418
And the answer I'm getting is:
7387586092700242099654546576830696772603866567292789055868426442323956818125567473217880665869221255368279336978185916233370357196371072076345487974033022845153783727077340269105240653596212209328236829977000561171160601353019714984950312214004440228069460097961675499715690703175560410535127557079386864191774441606293810308368351268196693882638167250873667663250863266951807800784887663781068841491777971210302562177144021123949168116897834247743963522769738506629596576834286879022276623596962844306405686165635072
Where am I going wrong?
P.S. I am also not able to understand where the formula incorporates rate_of_rate.
Your program is perfectly correct.
Even if the disease were
spreading at a constant rate of 2, and
only 1 person were initially infected, and
there were only 63 days instead of 100,
your task would be an analogue of the well-known “Wheat and chessboard problem”, with a result as large as 18,446,744,073,709,551,615.
From the Wikipedia “Wheat and chessboard problem”:
If a chessboard were to have wheat placed upon each square such that
one grain were placed on the first square, two on the second, four on
the third, and so on (doubling the number of grains on each subsequent
square), how many grains of wheat would be on the chessboard at the
finish?
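That chessboard total is quick to confirm: squares 1 through 64 hold 2^0 through 2^63 grains, and the sum of that geometric series is 2^64 - 1.

```python
grains = sum(2**i for i in range(64))  # 1 + 2 + 4 + ... + 2**63
print(grains)                          # 18446744073709551615
assert grains == 2**64 - 1
```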
I want to generate big strings (up to length 1000) of 0s and 1s, covering all possible combinations, without using itertools. I have code that works well with strings of length 20, but I run into problems with bigger numbers.
exponent = input("String length:")
n = int(exponent)  # convert the input to an integer

# Define a function to get the calc
def combinations(n):
    # If the length of the string is 0, just write "empty string".
    if n < 1:
        print("empty string")
    else:
        for i in range(2**n):
            # 2 raised to the power n gives the total combinations
            s = bin(i)[2:]  # bin converts the number to binary
            s = "0" * (n - len(s)) + s
            print(s)

print(combinations(n))
When I try with big numbers like 50 I get the following message:
for i in range(2**n):
OverflowError: range() result has too many items
Do you have any idea how to improve my code? How can I use less memory and also try bigger numbers?
Since range grabs too much memory, just hand-build your iteration loop:
i = 0
limit = 2**n - 1  # the largest n-bit value
while i <= limit:
    print(bin(i)[2:].zfill(n))
    i += 1
Note, however, that you're still limited to a universe with roughly 10^79 particles (about 2^263). Before you put in a large number, time a smaller case, and then compute how long your large one will take to print.
On my desktop monster, I can print all strings of length 20 in just over 45 seconds. Scaling this up, I should be able to handle your desired length of 1000 in ...
45 * 2**(1000-20) sec
= 2**5.5 * 2**980 sec
= 2**985.5 sec
Now, there are about 2^31.5 seconds in a century ...
= 2**(985.5 - 31.5) centuries
= 2**954 centuries
Or, looking at this another way, I can produce the output of all strings of length 46 in just about one century. Doing your "small" case of 50 would finish somewhere around the year 3600 A.D. on my screen.
Even assuming a faster rendering method, we're not solving your "large" problem. My current speed of printing those 20-char binaries is only 23k (2^14.5) per second. Let's posit a machine somewhat faster than my desktop monster, say a 1000 GHz machine that produces a new string every clock cycle. That's 2^40 strings / sec.
Oh, goodie! Now, with a perfect rendering speed, we can do the 50-char job in 2^10 seconds, or only 17 minutes instead of 16 centuries. That pulls in the full 1000-char job to
= 2**960 sec
= 2**(960 - 31.5) centuries
= 2**928.5 centuries
I'm not going to wait for the results.
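One footnote, on the assumption that the asker is running Python 2, where range builds a full list (hence the OverflowError): in Python 3, range is lazy, so the memory problem disappears even without a hand-built loop, and a generator makes that explicit. Time, as computed above, remains the real obstacle.

```python
def combinations(n):
    # Python 3's range is lazy: it never holds all 2**n numbers at once
    for i in range(2**n):
        yield bin(i)[2:].zfill(n)

print(list(combinations(3)))
# ['000', '001', '010', '011', '100', '101', '110', '111']
```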
I have been learning Python by myself, and recursion is troublesome. We are given a starting piece of weight 2.0. When the piece weight is <= 0.1, we return a count of 1 piece. Otherwise, break the piece into 2, 3, or 4 pieces (quantity chosen randomly) and recur. Return the total count of pieces once all are no larger than 0.1. So far my code looks like this.
import random as rand

def breaker_function(weight_of_piece):
    if weight_of_piece <= 0.1:
        return 1  # returns one piece
    else:
        return breaker_function(weight_of_piece / rand.randint(2, 4))
However, this code does not work. I ran it through the debugger, and when it reached the recursive step, the program broke the piece randomly into pieces that were not less than 0.1, and then the function stopped anyway. I am not getting an error.
I have also tried double recursion(?) such as:
return breaker_function(breaker_function(weight_of_piece/rand.randint(2,4)))
I have also tried to store the random pieces in a list, but that just complicated things, and got a similar result.
Test case: with a starting piece of size 1.0, I should get approximately 18 pieces.
Your function never returns anything but 1. You need to change the recursion step. Break into equal pieces, and then add the results of breaking each of those pieces:
else:
    quant = rand.randint(2, 4)
    count = 0
    for i in range(quant):
        count += breaker_function(weight_of_piece / quant)
    return count
Running this 10 times:
21
22
22
17
20
25
15
18
20
17
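For reference, here is the corrected step spliced back into the asker's function as one complete, runnable piece:

```python
import random as rand

def breaker_function(weight_of_piece):
    if weight_of_piece <= 0.1:
        return 1  # this piece is small enough: count it as one
    else:
        # break into 2, 3 or 4 equal pieces, then count each sub-piece recursively
        quant = rand.randint(2, 4)
        count = 0
        for _ in range(quant):
            count += breaker_function(weight_of_piece / quant)
        return count

print(breaker_function(2.0))  # piece count varies from run to run
```

Since weight is conserved and every final piece weighs at most 0.1, a starting piece of 2.0 always yields at least 20 pieces.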
As an exercise I'm writing a program to calculate the odds of rolling 5 dice with the same number. The idea is to get the result via simulation as opposed to simple math, though. My program is this:
# rollFive.py
from random import *

def main():
    n = input("Please enter the number of sims to run: ")
    hits = simNRolls(n)
    hits = float(hits)
    n = float(n)
    prob = hits/n
    print "The odds of rolling 5 of the same number are", prob

def simNRolls(n):
    hits = 0
    for i in range(n):
        hits = hits + diceRoll()
    return hits

def diceRoll():
    firstDie = randrange(1,7,1)
    for i in range(4):
        nextDie = randrange(1,7,1)
        if nextDie != firstDie:
            success = 0
            break
        else:
            success = 1
    return success
The problem is that running this program with a value for n of 1 000 000 gives me a probability usually between 0.0006 and 0.0008 while my math makes me believe I should be getting an answer closer to 0.0001286 (aka (1/6)^5).
Is there something wrong with my program? Or am I making some basic mistake with the math here? Or would I find my result revert closer to the right answer if I were able to run the program over larger iterations?
The probability of getting a particular number five times is (1/6)^5, but the probability of getting any five numbers the same is (1/6)^4.
There are two ways to see this.
First, the probability of getting all 1's, for example, is (1/6)^5, since each of the five dice independently shows a 1 with probability one in six. But since there are six possible numbers the dice could all share, there are six ways to succeed, which is 6*((1/6)^5), or (1/6)^4.
Looked at another way, it doesn't matter what the first roll gives, so we exclude it. Then we have to match that number with the four remaining rolls, the probability of which is (1/6)^4.
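Both arguments can be confirmed by brute force, since there are only 6^5 = 7776 equally likely outcomes to enumerate:

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=5))       # all 6**5 possible rolls
all_same = [r for r in outcomes if len(set(r)) == 1]  # (1,1,1,1,1) ... (6,6,6,6,6)

assert len(outcomes) == 7776
assert len(all_same) == 6
print(len(all_same) / len(outcomes))  # 0.000771604938..., i.e. (1/6)**4
```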
Your math is wrong. The probability of getting five dice with the same number is 6*(1/6)^5 = 0.0007716.
Very simply, there are 6 ** 5 possible outcomes from rolling 5 dice, and only 6 of those outcomes are successful, so the answer is 6.0 / 6 ** 5
I think your expected probability is wrong, as you've stated the problem. (1/6)^5 is the probability of rolling some specific number 5 times in a row; (1/6)^4 is the probability of rolling any number 5 times in a row (because the first roll is always "successful" -- that is, the first roll will always result in some number).
>>> (1.0/6.0)**4
0.00077160493827160479
Compare to running your program with 1 million iterations:
[me#host:~] python roll5.py
Please enter the number of sims to run: 1000000
The odds of rolling 5 of the same number are 0.000755
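For completeness, a Python 3 port of the simulation (the original is Python 2: `print` statement, `input` returning an evaluated number); with a fixed seed the run is reproducible and lands near the theoretical 0.0007716:

```python
import random

random.seed(42)  # fixed seed: reproducible result

def dice_roll():
    """Return 1 if five dice all show the same number, else 0."""
    rolls = [random.randrange(1, 7) for _ in range(5)]
    return 1 if len(set(rolls)) == 1 else 0

n = 1_000_000
hits = sum(dice_roll() for _ in range(n))
print(hits / n)  # close to (1/6)**4 = 0.0007716
```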