Code not calculating final count as per rate correctly - Python

I was trying to solve the following Problem Statement:
In city A, there is a disease x spreading at a rate of 2. The
rate doubles every 3 days.
If 2 people were initially infected with the disease, how many people
will be affected in total after 100 days?
(Hint: N(i+1) = N(i) + a*N(i)*t, where N(i) is the number of patients on day i, N(i+1) is the number of patients the day after that, a is the rate of increase, and t is the number of days.)
This is the Python code I wrote to solve it:
rate = 2
rate_of_rate = 2  # the rate doubles every `gap` days
gap = 3
initially_infected = 2
final_day = 100

infected = initially_infected
days_passed = 1
while days_passed != final_day:
    if days_passed % gap == 0:
        rate *= rate_of_rate
    infected = infected * rate
    days_passed += 1
print(infected)
The answer expected is: 658781418
And the answer I'm getting is:
7387586092700242099654546576830696772603866567292789055868426442323956818125567473217880665869221255368279336978185916233370357196371072076345487974033022845153783727077340269105240653596212209328236829977000561171160601353019714984950312214004440228069460097961675499715690703175560410535127557079386864191774441606293810308368351268196693882638167250873667663250863266951807800784887663781068841491777971210302562177144021123949168116897834247743963522769738506629596576834286879022276623596962844306405686165635072
Where am I going wrong?
P.S. I am also not able to understand where the formula incorporates rate_of_rate.

Your program is perfectly correct.
Even if the disease were
spreading at a constant rate of 2, and
only 1 person were initially infected, and
there were only 63 days instead of 100,
your task would be analogous to the well-known “Wheat and chessboard problem”, with a result as large as 18,446,744,073,709,551,615.
From the Wikipedia “Wheat and chessboard problem”:
If a chessboard were to have wheat placed upon each square such that
one grain were placed on the first square, two on the second, four on
the third, and so on (doubling the number of grains on each subsequent
square), how many grains of wheat would be on the chessboard at the
finish?
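To get a feel for the growth, here is a quick sanity check in Python; the result is exact because Python integers have arbitrary precision:

# One grain doubled on each subsequent square: the 64th square alone
# holds 2**63 grains, and the whole board holds their sum.
total_grains = sum(2**i for i in range(64))
print(total_grains)  # 18446744073709551615, i.e. 2**64 - 1

Your actual program multiplies by an ever-doubling rate for ~100 days, so its result is vastly larger still; the enormous printed number is not an error.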

What is the expected value of a coin-toss that doubles in value if heads and why is it different in practice?

Here's the thought experiment: say I have a coin that is worth $1. Every time I toss it, if it lands on heads, it doubles in value. If it lands on tails, it is forever stuck at its latest value. What is the expected final value of the coin?
Here is how I am thinking about it:
ExpectedValue = 1 * 0.5 + (1 * 2) * (0.5 * 0.5) + (1 * 2 * 2) * (0.5 * 0.5 * 0.5) + ...
              = 0.5 + 0.5 + 0.5 + ...
              = Infinity
Assuming my math is correct, the expected value should be infinite. However, when I try to simulate it in code, the expected value comes out very different. Here's the code:
import random

def test(iterations):
    total = 0
    max = 0
    for i in range(iterations):
        coin = False
        val = 1
        while coin == False:
            coin = random.choice([True, False])
            val *= 2
        total += val
        if val > max:
            max = val
    ave = total / iterations
    print(ave)

test(10000000)  # returns 38.736616
I assumed that a sample size of 10,000,000 would be statistically significant enough. However, the final expected value returned is 38.736616, which is nowhere near infinity. Either my math is wrong or my code is wrong. Which is it?
The average value of the process over infinitely many trials is infinite. However, you did not perform infinitely many trials; you only performed 10,000,000, which falls short of infinity by approximately infinity.
Suppose we have a fair coin. In four flips, the average number of heads that come up is two. So, I do 100 trials: 100 times, I flip the coin four times, and I count the heads. I got 2.11, not two. Why?
My 100 trials are only 100 samples from the population. They are not distributed the same way as the population. Your 10,000,000 trials are only 10,000,000 samples from an infinite population. None of your samples happened to include a streak of a hundred heads in a row, which would have made the value for that sample 2^99, and would have made the average for your 10,000,000 trials more than 2^99/10,000,000 ≈ 6.338×10^22, which is a huge number (but still not infinity).
If you increase the number of trials, you will tend to see increasing averages, because increasing the number of trials tends to move your samples toward the full population distribution. For the process you describe, you need to move infinitely far to get to the full distribution. So you need infinitely many trials.
(Also, there is a bug in your code: the value is doubled on every flip, including the final tails flip that ends the trial. This means the values for 0, 1, 2, 3… heads are taken as 2, 4, 8, 16,… The process you describe in the question would have them as 1, 2, 4, 8,…)
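A minimal corrected trial might look like this (a sketch; the names are illustrative, and True is taken to mean heads):

import random

def one_trial():
    val = 1
    while random.choice([True, False]):  # True = heads: double and flip again
        val *= 2
    return val  # tails ends the trial without doubling

Averaging many such trials still tends to grow without bound as the trial count increases, but any finite run produces a finite average.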
Another way of looking at this is to conduct just one trial. The average value of just one trial is infinite. However, half the time you start a trial, you stop after one coin flip, because you got a tail immediately. One quarter of the time, you stop after exactly two flips. One eighth of the time, you stop after three flips. Most of the time, you will get a small number as the answer. Every trial ends with getting a tail, and the value at that point is finite, so it is impossible to do a finite number of trials and ever end up with an infinite value. Yet there is no finite number that is greater than the expected value: if you do more and more trials, the average will tend to grow and grow, and, eventually, it will exceed any finite target.

Calculate 20 minute average & best 20 minute average

I'm trying to learn Python with a project where I read data from a bike power meter. Right now I'm just calculating the average power from start to finish, by adding each power reading to a running total and dividing by the number of readings.
I'd like to calculate the average power over 20 minutes and, if possible, keep recalculating that 20-minute average every 30 seconds after the first 20 minutes, comparing each value to the previous one and keeping whichever is higher. That way, at the end I'd have the highest 20-minute average power I held during a one-hour workout, no matter where in that hour it happened.
Data from the power meter is event based; as far as I can tell it doesn't arrive at regular intervals, but definitely once a second or faster.
This is the base of my code so far:
def average_power(power, count):
    global PM1_sumwatt
    global PM1_avgwatt
    PM1_sumwatt = PM1_sumwatt + power
    PM1_avgwatt = PM1_sumwatt / count
    PM1_avgLog = open(PM1_avgFile, 'w')
    PM1_avgLog.write("<div id=\"pwr\"> %d W</div>" % (PM1_avgwatt))
    PM1_avgLog.close()

def power_data(eventCount, pedalDiff, pedalPowerRatio, cadence, accumPower, instantPower):
    global PM1_avgcount
    if WCORRECT1:
        calibpower = instantPower + (CF1w)
    else:
        calibpower = instantPower * (CF1x)
    power_meter.update(int(calibpower))
    PM1_avgcount = PM1_avgcount + 1
    average_power(int(calibpower), PM1_avgcount)

power = BicyclePower(readnode, network, callbacks = {'onDevicePaired': device_found,
                                                     'onPowerData': power_data})

# Starting PM readings
power.open(ChannelID(PM1, 11, 0))
Not quite sure how to tackle this! Any help or pointer is much appreciated!
If you are reading data in real time, I assume you are reading it in a while loop:
total = 0  # 'sum' would shadow the built-in, so use another name
number_of_readings = 0
while True:  # forever
    new_value = int(input())  # here I read from the console input
    total += new_value
    number_of_readings += 1
    average = total / number_of_readings
    print(average)
Here I type a number in the console and press enter to simulate your bike power meter.
>>> 1
1.0
>>> 3
2.0
>>> 4
2.6666666666666665
>>> 2
2.5
Now, if you want a moving average, you must store the readings that you want to average, because you will need to remove them later, once they are too old. A list is perfect for that:
Here is a solution averaging the last n readings:
n = 2
last_n_readings = []
while True:  # forever
    # add a new reading
    new_value = int(input())  # here I read from the console input
    last_n_readings.append(new_value)
    # discard an old one (except when we do not have enough readings yet)
    if len(last_n_readings) > n:
        last_n_readings.pop(0)
    # print the average of all the readings still in the list
    print(sum(last_n_readings) / len(last_n_readings))
Which gives:
>>> 1
1.0
>>> 3
2.0
>>> 4
3.5
>>> 2
3.0
Note that lists are not very efficient when removing elements near the start, so there are more efficient ways to do this (circular buffers), but I'm trying to keep it simple ;) A deque sketch is shown just below.
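For what it's worth, here is the same idea as a sketch using collections.deque, which acts as a ring buffer and discards old readings automatically:

from collections import deque

n = 2
last_n_readings = deque(maxlen=n)  # oldest reading falls out automatically
while True:
    last_n_readings.append(int(input()))
    print(sum(last_n_readings) / len(last_n_readings))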
You could use this by estimating how many readings per second you get and choosing n so you average over 20 minutes.
If you want to truly average all the readings that are less than 20 minutes old, you need to record not only the readings but also the times when you read them, so you can remove the old readings when they get more than 20 minutes old. If this is what you need, tell me and I will expand my answer; a rough sketch of the idea follows.
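A minimal sketch of that time-based window (console input stands in for the real power readings):

import time

WINDOW = 20 * 60  # seconds
readings = []     # list of (timestamp, watts) pairs
best_avg = 0.0

while True:
    watts = int(input())  # stand-in for a real power reading
    now = time.monotonic()
    readings.append((now, watts))
    # drop everything older than 20 minutes
    while readings and now - readings[0][0] > WINDOW:
        readings.pop(0)
    avg = sum(w for _, w in readings) / len(readings)
    best_avg = max(best_avg, avg)  # track the best 20-minute average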
You can use a pandas DataFrame to store the power output for each instant.
Assuming you receive a value every 30 seconds, you can store them all in the data frame.
Then calculate a 40-data-point moving average (40 × 30 s = 20 minutes) using the pandas rolling function.
Take the max value you get after the rolling step; that is your final result.
refer this for doc : https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html
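A sketch of that approach, with made-up sample data standing in for the real readings:

import numpy as np
import pandas as pd

# hypothetical data: one watt reading every 30 seconds for an hour
watts = pd.Series(np.random.randint(150, 300, size=120))

# 40 points * 30 s = 20 minutes; the window must be full before it averages
rolling_avg = watts.rolling(window=40).mean()

# best 20-minute average power over the workout
print(rolling_avg.max())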

What is the reason this while loop does not generate output?

I am new to Python and trying to learn by doing small projects.
The problem
Strontium-90, a radioactive element that is part of the fallout
from nuclear explosions, has a half-life of 28 years. This means that a given quantity
of strontium-90 will emit radioactive particles and decay to one-half its size every
28 years. How many years are required for 100 grams of strontium-90 to decay to less
than 1 gram?
My input code:
full_life = int(100)
while full_life < int(1):
    full_life -= 0.5 * full_life
    year += 28
print("The decay time is:", year)
On running the above, no output is generated. Why is this the case? What am I missing?
The while loop logic is reversed; it should be:
while full_life > int(1):
Right away you have set full_life to 100, so the condition in the while loop comes out as false immediately (since 100 < 1 is false). Since it is false, you never enter the loop. I imagine what you're looking for is to switch the sign from < to >:
full_life = int(100)
while full_life < int(1):
    do stuff
to
full_life = int(100)
while full_life > int(1):
    do stuff
After you reverse your while loop logic like this:
while full_life > 1:
    do something
another problem will occur: you didn't define year.
You should do this first:
year = 0
full_life represents the amount of strontium you have. Initially, you have 100 grams, and are starting at year 0.
year = 0
full_life = 100
You want to decrease full_life until it is less than 1 gram, which means you execute the body of the loop as long as full_life is greater than or equal to 1.
while full_life >= 1:
    full_life /= 2
    year += 28
print("The decay time is:", year)
Each time the condition is true, you divide full_life in half and increment year by 28 years. Once the loop has completed, you can output the total number of years it took before the condition became false.
Note this is not entirely correct, as the mass doesn't stay constant for 28 years, then immediately drop by a factor of 2. This is most obvious if you start with 1 gram of strontium. It doesn't take 28 years for any of it to decay.
What you really need to do is pick a granularity (hour, day, month, whatever), and figure out from the half-life how much of the sample will decay in that amount of time. Then you can decrease full_life by smaller amounts while increasing time likewise to find out when full_life finally drops under 1.
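A sketch of that finer-grained approach, assuming a granularity of one day (and ignoring leap years):

# Apply the per-day decay factor implied by a 28-year half-life,
# rather than halving in 28-year jumps.
half_life_days = 28 * 365
daily_factor = 0.5 ** (1 / half_life_days)  # fraction remaining per day

mass = 100.0
days = 0
while mass >= 1:
    mass *= daily_factor
    days += 1
print("The decay time is about", days / 365, "years")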

How can I calculate dynamic default radius for map search based on population density of the area?

I am looking to do this for a community sublet advertising website, but theoretically the algorithm would be similar for any local search.
The less populated the searched area, the larger the default search radius should be.
On the other hand, areas with high population density should have a small default radius, to keep things locally relevant.
This might be more of a mathematical question than a programming one, but code is very welcome. So far I have calculated the number of sublets within 15 miles of each town or village and saved this in the database as an approximation of density. I intend to use this number to figure out how far to search when someone searches for the town or village.
To test any proposed solution, I pulled out some approximate numbers I would want the algorithm to come up with. If there are a lot of sublets within 15 miles of a point, say 30k, I would want the default search radius to be around 3 miles. If there are very few, say 1 or 2, the default radius should go up to 25 miles, or even more if there are no places around. A mid-range area with, say, ~1k sublets would have a default radius of 15 miles. These are just examples; the density will of course grow or shrink with the number of things in the database.
Population -> Default search radius
0 -> very high (~60 miles or more)
1 -> 25 miles
1k -> 15 miles
30k -> 3 miles
Am I going in the right direction? Python or PHP would be preferred for code centric answers.
Thank you
A reasonable approach would be to define regions so that they contain the same number of people, and then there will be approximately the same number of available apartments in each region.
To write this mathematically:
N = total number of people within a region
d = population density of the region (taken to be what you list as population)
A = Area of region
R = radius of the region
So N = d*A = d*pi*R*R, and we want N to be constant, so R = K*sqrt(1/d), where K is a constant chosen to match your numbers, here approximately 500. Then,
30K -> 2.9 miles
1K -> 16 miles
1 -> 500 miles
So it works for the first two, though not for the extreme case of a population of 1 (but it's not clear that 1 is truly an important case to consider, rather than a special case all of its own). Anyway, I think this approach makes some sense and at least gives you something to consider.
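As a sketch in Python, with illustrative floor and cap values bolted on so that empty or ultra-dense areas don't produce absurd radii:

import math

K = 500          # constant fitted to the sample numbers above
MIN_RADIUS = 3   # miles; illustrative floor for dense areas
MAX_RADIUS = 60  # miles; illustrative cap for empty areas

def default_radius(nearby_sublets):
    # default search radius in miles, from the sublet count within 15 miles
    if nearby_sublets == 0:
        return MAX_RADIUS
    r = K * math.sqrt(1 / nearby_sublets)
    return max(MIN_RADIUS, min(r, MAX_RADIUS))

print(default_radius(30000))  # ~2.9, clamped to 3 miles
print(default_radius(1000))   # ~16 miles
print(default_radius(1))      # 500, capped at 60 miles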

Probability exercise returning different result than expected

As an exercise I'm writing a program to calculate the odds of rolling 5 dice with the same number. The idea is to get the result via simulation as opposed to simple math, though. My program is this:
# rollFive.py
from random import *

def main():
    n = input("Please enter the number of sims to run: ")
    hits = simNRolls(n)
    hits = float(hits)
    n = float(n)
    prob = hits/n
    print "The odds of rolling 5 of the same number are", prob

def simNRolls(n):
    hits = 0
    for i in range(n):
        hits = hits + diceRoll()
    return hits

def diceRoll():
    firstDie = randrange(1, 7, 1)
    for i in range(4):
        nextDie = randrange(1, 7, 1)
        if nextDie != firstDie:
            success = 0
            break
        else:
            success = 1
    return success
The problem is that running this program with an n of 1,000,000 gives me a probability usually between 0.0006 and 0.0008, while my math makes me believe I should be getting an answer closer to 0.0001286 (i.e. (1/6)^5).
Is there something wrong with my program? Or am I making some basic mistake with the math? Or would my result get closer to the right answer if I could run the program over more iterations?
The probability of getting a particular number five times is (1/6)^5, but the probability of getting any five numbers the same is (1/6)^4.
There are two ways to see this.
First, the probability of getting all 1's, for example, is (1/6)^5, since each of the five dice has a one-in-six chance of showing a 1. But since there are six possible numbers that all the dice could match, there are six ways to succeed, which is 6*((1/6)^5), or (1/6)^4.
Looked at another way, it doesn't matter what the first roll gives, so we exclude it. Then we have to match that number with the four remaining rolls, the probability of which is (1/6)^4.
Your math is wrong. The probability of getting five dice with the same number is 6*(1/6)^5 = 0.0007716.
Very simply, there are 6 ** 5 possible outcomes from rolling 5 dice, and only 6 of those outcomes are successful, so the answer is 6.0 / 6 ** 5
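Both answers are easy to double-check, as a quick sketch (Python 3 syntax here):

import random

# closed form: 6 successful outcomes out of 6**5 equally likely rolls
print(6 / 6**5)  # 0.0007716049382716049

# quick simulation for comparison: all five dice are equal iff the set has size 1
trials = 1000000
hits = sum(len(set(random.randrange(1, 7) for _ in range(5))) == 1
           for _ in range(trials))
print(hits / trials)  # ~0.00077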
I think your expected probability is wrong, as you've stated the problem. (1/6)^5 is the probability of rolling some specific number 5 times in a row; (1/6)^4 is the probability of rolling any number 5 times in a row (because the first roll is always "successful" -- that is, the first roll will always result in some number).
>>> (1.0/6.0)**4
0.00077160493827160479
Compare to running your program with 1 million iterations:
[me#host:~] python roll5.py
Please enter the number of sims to run: 1000000
The odds of rolling 5 of the same number are 0.000755
