I want to make a program that finds how many numbers from 1 to n are divisible by 3 or 5. For example, for 10: 3, 6 and 9 are divisible by 3, and 5 and 10 are divisible by 5, so the total is 5, and so on. So I wrote this code:
import math
n=float(raw_input())
div3=(n-2)/3
div5=(n-4)/5
f1=math.ceil(div3)
f2=math.ceil(div5)
sumss=f1+f2
print int(sumss)
But for some numbers it gives the wrong answer. The input ranges from 1 to 10^18 and the time limit for the problem is 2 seconds, so I need to do it with math; a loop can't make it, it would take far too long. Does anyone have an efficient formula for this?
This is probably a Project Euler question. The issue is that some numbers are counted for both 3 and 5. For instance, take 22:
multiples of 3: 3, 6, 9, 12, 15, 18, 21
multiples of 5: 5, 10, 15, 20
15 occurs in both lists, so you double-counted it.
The good news is that 3 and 5 are relatively prime, so the only numbers that are shared are the ones divisible by 15. You simply need to undo the double counting:
n=int(raw_input())
div3=n//3
div5=n//5
div15=n//15
sumss=div3+div5-div15
print sumss
In case you allow double counting (15 should be counted twice), you can simply use:
n=int(raw_input())
div3=n//3
div5=n//5
sumss=div3+div5
print sumss
Note that these programs omit floating point arithmetic entirely: that makes them both faster and more precise, since floating point numbers have a limited mantissa and can fail to represent a large integer exactly (resulting in small errors), and integer arithmetic is generally faster anyway.
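To see why that matters at this scale, here is a small illustration (my own example, not part of the answer) of how float division drifts near 10^18 while integer division stays exact:

n = 10 ** 18 + 9
print(int(float(n) / 3))   # rounded: off by a few dozen on typical 64-bit doubles
print(n // 3)              # exact: 333333333333333336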
Project Euler #1
Now the problem statement of Project Euler is a bit different: it asks to sum up these numbers. In order to do that, you have to construct an expression to sum up the first k multiples of l:
sum_{i=1}^{k} l*i = l * k * (k+1) / 2
Using Wolfram Alpha (or the standard triangular-number identity), one gets the closed form on the right. So you can calculate these as:
def suml(k, l):
    return k * (k + 1) * l / 2

n = int(raw_input())
div3 = n // 3
div5 = n // 5
div15 = n // 15
sumss = suml(div3, 3) + suml(div5, 5) - suml(div15, 15)
print sumss
This program gives 119 for n=22 which - you can verify above - is correct if you count 15 only once.
I am not sure whether I got the question right, but here is some idea:
n=float(raw_input())
div3=int(n/3)
div5=int(n/5)
div15=int(n/15)
sumss=div3+div5-div15
print sumss
EDIT: Ah, found the Project Euler problem.
If we list all the natural numbers below 10 that are multiples of 3 or
5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
That is a different task than the question posted here. It asks for the multiples below the number, and for their sum rather than their count.
I am not sure whether it would be the right thing to post the solution here, so I am rather not doing that.
EDIT2: from Project Euler:
We hope that you enjoyed solving this problem. Please do not deprive
others of going through the same process by publishing your solution
outside Project Euler. If you want to share your insights then please
go to thread 1 in the discussion forum.
Related
I'm trying to gather some statistics on prime numbers, among which is the distribution of factors for the number (prime-1)/2. I know there are general formulas for the size of factors of uniformly selected numbers, but I haven't seen anything about the distribution of factors of one less than a prime.
I've written a program to iterate through primes starting at the first prime after 2^63, and then factor the (prime - 1)/2 using trial division by all primes up to 2^32. However, this is extremely slow because that is a lot of primes (and a lot of memory) to iterate through. I store the primes as a single byte each (by storing the increment from one prime to the next). I also use a deterministic variant of the Miller-Rabin primality test for numbers up to 2^64, so I can easily detect when the remaining value (after a successful division) is prime.
I've experimented with a variant of Pollard rho and with elliptic curve factorization, but it is hard to find the right balance between trial division and switching to these more complicated methods. Also I'm not sure I've implemented them correctly, because sometimes they seem to take a very long time to find a factor, and based on their asymptotic behavior, I'd expect them to be quite quick for such small numbers.
I have not found any information on factoring many numbers (vs just trying to factor one), but it seems like there should be some way to speed up the task by taking advantage of this.
Any suggestions, pointers to alternate approaches, or other guidance on this problem is greatly appreciated.
Edit:
The way I store the primes is by storing an 8-bit offset to the next prime, with the implicit first prime being 3. Thus, in my algorithms, I have a separate check for division by 2, then I start a loop:
import collections

factorCounts = collections.Counter()
# handle 2 separately, since the gap list starts at the implicit first prime 3
while N % 2 == 0:
    factorCounts[2] += 1
    N //= 2
pp = 3
for gg in smallPrimeGaps:
    if pp * pp > N:
        break
    if N % pp == 0:
        while N % pp == 0:
            factorCounts[pp] += 1
            N //= pp
    pp += gg
Also, I used a wheel sieve to calculate the primes for trial division, and I use an algorithm based on the remainder by several primes to get the next prime after the given starting point.
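For reference, here is one way a gap list like smallPrimeGaps could be produced with a plain sieve (my own sketch with a hypothetical prime_gaps helper, not the original code; the limit is just an example, and near 2^32 some gaps exceed 255, so a real byte-per-gap store would hold gap//2, which is safe because gaps between odd primes are even):

def prime_gaps(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    primes = [i for i in range(3, limit + 1) if sieve[i]]   # implicit first prime is 3
    return [b - a for a, b in zip(primes, primes[1:])]

smallPrimeGaps = prime_gaps(1000)   # starts [2, 2, 4, 2, 4, 2, 4, 6, ...]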
I use the following for testing if a given number is prime (porting code to c++ now):
bool IsPrime(uint64_t n)
{
    if(n < 341531)
        return MillerRabinMulti(n, {9345883071009581737ull});
    else if(n < 1050535501)
        return MillerRabinMulti(n, {336781006125ull, 9639812373923155ull});
    else if(n < 350269456337)
        return MillerRabinMulti(n, {4230279247111683200ull, 14694767155120705706ull, 1664113952636775035ull});
    else if(n < 55245642489451)
        return MillerRabinMulti(n, {2ull, 141889084524735ull, 1199124725622454117ull, 11096072698276303650ull});
    else if(n < 7999252175582851)
        return MillerRabinMulti(n, {2ull, 4130806001517ull, 149795463772692060ull, 186635894390467037ull, 3967304179347715805ull});
    else if(n < 585226005592931977)
        return MillerRabinMulti(n, {2ull, 123635709730000ull, 9233062284813009ull, 43835965440333360ull, 761179012939631437ull, 1263739024124850375ull});
    else
        return MillerRabinMulti(n, {2ull, 325ull, 9375ull, 28178ull, 450775ull, 9780504ull, 1795265022ull});
}
I don't have a definitive answer, but I do have some observations and some suggestions.
There are about 2*10^17 primes between 2^63 and 2^64, so any program you write is going to run for a while.
Let's talk about a primality test for numbers in the range 2^63 to 2^64. Any general-purpose test will do more work than you need, so you can speed things up by writing a special-purpose test. I suggest strong-pseudoprime tests (as in Miller-Rabin) to bases 2 and 3. If either of those tests shows the number is composite, you're done. Otherwise, look up the number (binary search) in a table of strong-pseudoprimes to bases 2 and 3 (ask Google to find those tables for you). Two strong pseudoprime tests followed by a table lookup will certainly be faster than the deterministic Miller-Rabin test you are currently performing, which probably uses six or seven bases.
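A minimal sketch of that suggestion (my own code, not the answerer's): two strong-pseudoprime tests followed by a binary search in a table of base-2,3 strong pseudoprimes, which is assumed to be loaded from one of the published lists.

from bisect import bisect_left

def is_strong_probable_prime(n, a):
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_prime_64(n, base23_pseudoprimes):
    # base23_pseudoprimes: sorted list of composites that fool bases 2 and 3
    if n % 2 == 0 or n % 3 == 0:
        return n in (2, 3)
    if not (is_strong_probable_prime(n, 2) and is_strong_probable_prime(n, 3)):
        return False
    i = bisect_left(base23_pseudoprimes, n)
    return not (i < len(base23_pseudoprimes) and base23_pseudoprimes[i] == n)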
For factoring, trial division to 1000 followed by Brent-Rho until the product of the known prime factors exceeds the cube root of the number being factored ought to be fairly fast, a few milliseconds. Then, if the remaining cofactor is composite, it will necessarily have only two factors, so SQUFOF would be a good algorithm to split them, faster than the other methods because all the arithmetic is done with numbers less than the square root of the number being factored, which in your case means the factorization could be done using 32-bit arithmetic instead of 64-bit arithmetic, so it ought to be fast.
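To make the rho step concrete, here is a bare-bones Pollard rho sketch (my own code, using Floyd cycle detection for brevity; Brent's variant, which the answer recommends, speeds up the cycle detection but the structure is the same). It assumes n is odd and composite:

import math, random

def pollard_rho(n):
    while True:
        c = random.randrange(1, n)
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n            # tortoise
            y = (y * y + c) % n
            y = (y * y + c) % n            # hare moves twice as fast
            d = math.gcd(abs(x - y), n)
        if d != n:                          # retry with a different c on failure
            return d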
Instead of factoring and primality tests, a better method uses a variant of the Sieve of Eratosthenes to factor large blocks of numbers. That will still be slow, as there are 203 million sieving primes less than 2^32, and you will need to deal with the bookkeeping of a segmented sieve, but considering that you factor lots of numbers at once, it's probably the best approach to your task.
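A rough sketch of that block-factoring idea (my own illustration; small_primes is assumed to be precomputed, and real code would work segment by segment):

def factor_block(lo, width, small_primes):
    remaining = list(range(lo, lo + width))
    factors = [[] for _ in range(width)]
    for p in small_primes:
        start = ((lo + p - 1) // p) * p            # first multiple of p in [lo, lo+width)
        for idx in range(start - lo, width, p):
            while remaining[idx] % p == 0:
                factors[idx].append(p)
                remaining[idx] //= p
    # anything left in remaining[idx] is 1 or a cofactor with no small prime factors
    return factors, remaining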
I have code for everything mentioned above at my blog.
This is how I store primes for later:
(I'm going to assume you want the factors of the number, and not just a primality test).
Copied from my website http://chemicaldevelopment.us/programming/2016/10/03/PGS.html
I’m going to assume you know the binary number system for this part. If not, just think of 1 as a “yes” and 0 as a “no”.
So, there are plenty of algorithms to generate the first few primes. I use the Sieve of Eratosthenes to compute a list.
But, if we stored the primes as an array, like [2, 3, 5, 7] this would take up too much space. How much space exactly?
Well, 32-bit integers, which can each hold values up to 2^32 - 1, take up 4 bytes each, because each byte is 8 bits and 32 / 8 = 4.
If we wanted to store each prime under 2,000,000,000, we would have to store over 98,000,000 of them. That takes up more space, and is slower at runtime, than a bitset, which is explained below.
This approach will take 98,000,000 integers of space (each is 32 bits, which is 4 bytes), and when we check at runtime, we will need to check every integer in the array until we find it, or we find a number that is greater than it.
For example, say I give you a small list of primes: [2, 3, 5, 7, 11, 13, 17, 19]. I ask you if 15 is prime. How do you tell me?
Well, you would go through the list and compare each to 15.
Is 2 = 15?
Is 3 = 15?
. . .
Is 17 = 15?
At this point, you can stop because you have passed where 15 would be, so you know it isn’t prime.
Now then, let’s say we use a list of bits to tell you if the number is prime. The list above would look like:
001101010001010001010
This starts at index 0 and goes up to 20.
A 1 means that index is prime. Counting from the left: index 0, index 1, index 2. The bit at index 2 is a 1, which indicates that 2 is prime.
In this case, if I asked you to check whether 15 is prime, you don't need to go through all the numbers in the list; all you need to do is jump straight to index 15 and check that single bit.
And for memory usage, the first approach uses 98000000 integers, whereas this one can store 32 numbers in a single integer (using the list of 1s and 0s), so we would need
2000000000/32=62500000 integers.
So it uses about 60% as much memory as the first approach, and is much faster to use.
We store the array of integers from the second approach in a file, then read it back when you run.
This uses 250MB of RAM to store primality data for the first 2,000,000,000 numbers.
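Here is a small sketch of that bitset idea (my own example, one bit per number packed into a bytearray; the helper name is made up):

def make_prime_bitset(limit):
    bits = bytearray((limit >> 3) + 1)
    # mark everything from 2 upward as "prime" to start with
    for n in range(2, limit + 1):
        bits[n >> 3] |= 1 << (n & 7)
    # cross off multiples, Sieve of Eratosthenes style
    for p in range(2, int(limit ** 0.5) + 1):
        if (bits[p >> 3] >> (p & 7)) & 1:
            for m in range(p * p, limit + 1, p):
                bits[m >> 3] &= ~(1 << (m & 7))
    return bits

bits = make_prime_bitset(100)
print((bits[15 >> 3] >> (15 & 7)) & 1)   # 0, so 15 is not prime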
You can further reduce this with wheel sieving (like what you did storing (prime-1)/2)
I'll go a little bit more into wheel sieve.
You got it right by storing (prime - 1)/2, and 2 being a special case.
You can extend this to p# (the product of the first p primes)
For example, storing (prime - 1)/2 is really just using the form (1#)*k + 1 = 2k + 1.
More generally, you can use the set of linear forms (n#)*k + L, where L is the set of primes less than n#, plus 1, excluding the first n primes themselves.
So for n = 2 you only need to store info for 6*k + 1 and 6*k + 5 (and you can go further with a bigger wheel), because L = {1, 2, 3, 5} \ {2, 3} = {1, 5}.
These methods should give you an understanding of some of the ideas behind it.
You will need some way to implement this bitset, such as a list of 32-bit integers, or a string.
Look at: https://pypi.python.org/pypi/bitarray for a possible abstraction
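As a tiny illustration of the 2,3-wheel indexing described above (my own sketch, hypothetical helper name): only residues 1 and 5 mod 6 can be prime past 3, so each block of 6 numbers needs just two bits.

def wheel_index(n):
    # map n (coprime to 6) to a compact bit index; None means n is
    # divisible by 2 or 3 and needs no bit at all
    q, r = divmod(n, 6)
    if r == 1:
        return 2 * q
    if r == 5:
        return 2 * q + 1
    return None

print(wheel_index(25), wheel_index(29), wheel_index(27))   # 8 9 None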
I am pretty new to Python, so I don't know many things.
So let's say I've got this piece of code:
import random

ar = []
for i in range(10000):
    n = random.randrange(1, 1000**20)
    ar.append(n)
    print(n)
The smallest number I get out of the 10000 always has 3-4 fewer digits than 1000^20.
In this particular example the smallest numbers are:
2428677832187681074801010996569419611537028033553060588
8134740394131686305349870676240657821649938786653774568
44697837467174999512466010709212980674765975179401904173
50027463313770628365294401742988068945162971582587075902
61592865033437278850861444506927272718324998780980391309
71066003554034577228599139472327583180914625767704664425
90125804190638177118373078542499530049455096499902511032
100371114393629537113796511778272151802547292271926376588
I have tried multiple ranges on multiple occasions and every time I get the same results. Maybe I am doing something wrong?
Okay, so let's use small numbers to illustrate that clearly.
Suppose you pick a random number from 1 to 100, inclusively.
How many 1-digit numbers are there? 9, from 1 to 9. For 2 digits it's from 10 to 99, 90 numbers. For 3 digits it's just 1, namely 100. Since it's a random pick, the probability of picking any one of those 100 (9 + 90 + 1) numbers is equal, and it should be intuitively clear it's 1/100. But since there are 90 2-digit numbers, the probability of picking a 2-digit number is 90/100, or 9/10. For 1-digit numbers it's 9/100, and for the only 3-digit one it's 1/100.
I hope that answers the question.
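The same effect explains the original 1000**20 range. A quick numeric check (my own example, not part of the answer): the top of that range has 61 digits, and only about one draw in a thousand loses three or more of them.

import random

top = 1000 ** 20                                  # 10**60, so numbers have at most 61 digits
draws = [random.randrange(1, top) for _ in range(10000)]
digit_counts = [len(str(n)) for n in draws]
print(min(digit_counts), max(digit_counts))       # min is typically 56 or 57, max is 61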
I'm trying to find out if numbers divide cleanly by seeing if they divide into a float or an int, for example:
10/2 = 5
10/3 = 3.333
The problem is, as I understand it, you can either use / and get ONLY float results or use // and get ONLY int results. I'm trying to figure out a way to see if some number n is prime.
The idea I had was to see if all numbers between 1 and n-1 divide into floats, as that would mean none of them divide cleanly.
This is an exercise gauging my ability for an introductory course. I realize there may be some library I could import, but I'm supposed to solve this problem using methods that are at my level, and importing libraries isn't one of them.
So I was wondering if there's a way to use a division which will return the true type of the answer, if such a question even makes sense.
To see if a number "divides cleanly", you want to use the % operator:
10 % 3 # 1
11 % 3 # 2
12 % 3 # 0
Clearly, if a divides b "cleanly", then b % a is 0.
(% is the modulus operator.)
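For instance, the primality check the questioner describes could be written with % like this (my own sketch, not tuned for speed):

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:        # d divides n cleanly, so n is not prime
            return False
    return True

print(is_prime(15), is_prime(17))   # False True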
Yes, it's Euler problem 5. I'm new to Python and I'm trying to solve a couple of problems to get used to the syntax. And yes, I know that there are other questions regarding the same problem, but I have to know why my code is not working:
import sys

def IsEvDivBy1to20(n):
    for i in range(1, 21):
        if n % i != 0:
            return 0
    return 1

SmallMultiple = 0
for i in range(sys.maxsize**10):
    if IsEvDivBy1to20(i) == 1:
        SmallMultiple = i
        break
print(SmallMultiple)
It returns 0.
range() by default starts at 0. The first time through your loop, then, i is 0, so the first call to your (horribly named) function tests 0, and 0 % i is 0 for every i.
Your code fails because the first value returned by range(sys.maxsize**10) is 0, and every number from 1 to 20 divides 0 without leaving any remainder. So 0 is accepted as the solution.
Also: Euler problems are not about brute forcing; they're also about finding an efficient solution.
For example, if a number is evenly divisible by the numbers 1 - 20 you can simply multiply 1 * 2 * ... * 20 = ... to find an upper bound. This number would clearly satisfy the conditions but it's likely not the smallest number.
You can then reason as follows: if the number can be divided by 6 then it can also be divided by 2 and 3. So I don't really need to include 6 in the 1 * 2 * ... * 20 multiplication. You can repeatedly apply this reasoning to find a much smaller upper bound and work your way towards the final answer.
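As an illustration of that reasoning (my own sketch, Python 3): removing every redundant factor from 1 * 2 * ... * 20 is exactly what the least common multiple does, so the answer can be computed as the lcm of 1..20.

from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

print(reduce(lcm, range(1, 21)))   # much smaller than 20!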
Assume a game in which one rolls 20 eight-sided dice, for a total of 8^20 possible outcomes. To calculate the probability of a particular event occurring, we divide the number of ways that event can occur by 8^20.
One can calculate the number of ways to get exactly 5 dice of the value 3: (20 choose 5) gives the number of ways to pick which dice show the 3s, and 7^15 gives the number of ways the other 15 rolls can avoid the value 3.
number of ways to get exactly five 3's = (20 choose 5) * 7^15.
The answer can also be viewed as the number of ways to rearrange the string 3,3,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 (which is 20 choose 5) times the number of values the zeros can take (assuming 7 legal values), 7^15. (Is this correct?)
Question 1: How can I calculate the number of ways to get exactly 5 dice of the same value (that is, over all die values)?
Note: if I just naively use my first answer above and multiply by 8, I get a lot of double counting, don't I?
I understand that I could solve for each of the cases (5 1's), (5 2's), (5 3's), ... (5 8's), sum them (more simply, 8 * (ways for 5 1's)), then subtract the overlaps: (5 1's) and (5 2's), (5 1's) and (5 3's), ... all the way up to (5 1's) and (5 2's) and ... and (5 8's). But this seems exceedingly messy. I would like a generalization of this that scales up to large numbers of samples and large numbers of classes.
How can I calculate the number of ways to get at least 5 dice of the same value?
So 111110000000000000000 or 11110100000000000002 or 11111100000001110000 or 11011211222222223333, but not 00001111222233334444 or 000511512252363347744.
I'm looking for answers which either explain the math or point to a library which supports this (esp python modules). Extra points for detail and examples.
I suggest that you spend a little bit of time writing up a Monte Carlo simulation and let it run while you work out the math by hand. Hopefully the Monte Carlo simulation will converge before you're finished with the math and you'll be able to check your solution.
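For reference, a minimal sketch of such a simulation (my own code): it estimates the probability that some value shows up exactly five times among 20 rolls of an 8-sided die.

import random
from collections import Counter

def trial():
    counts = Counter(random.randint(1, 8) for _ in range(20))
    return any(c == 5 for c in counts.values())

trials = 100000
print(sum(trial() for _ in range(trials)) / trials)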
A slightly faster option might involve creating a SO clone for math questions.
Double counting can be solved by use of the Inclusion/Exclusion Principle
I suspect it comes out to:
Choose(8,1)*P(one set of 5 Xs)
- Choose(8,2)*P(a set of 5 Xs and a set of 5 Ys)
+ Choose(8,3)*P(5 Xs, 5 Ys, 5 Zs)
- Choose(8,4)*P(5 Xs, 5 Ys, 5 Zs, 5 As)
P(set of 5 Xs) = 20 Choose 5 * 7^15 / 8^20
P(5 Xs, 5 Ys) = 20 Choose 5,5 * 6^10 / 8^20
And so on. This doesn't directly solve the problem of 'more than 5 of the same': if you simply summed the results of this applied to 5, 6, 7, ..., 20, you would over-count the cases where you have, say, ten 1's and five 8's.
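Here is a small sketch of that inclusion/exclusion sum written out in code (my own example; math.comb needs Python 3.8+). It computes the probability that some value appears exactly five times among 20 rolls of an 8-sided die:

from math import comb, factorial

def p_exactly_five(rolls=20, sides=8, count=5):
    total, sign = 0.0, 1
    for j in range(1, sides + 1):
        if count * j > rolls:
            break
        # choose which j values repeat, place their five copies, fill the rest
        ways = comb(sides, j)
        ways *= factorial(rolls) // (factorial(count) ** j * factorial(rolls - count * j))
        ways *= (sides - j) ** (rolls - count * j)
        total += sign * ways / sides ** rolls
        sign = -sign
    return total

print(p_exactly_five())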
You could probably apply inclusion-exclusion again to come up with that second answer; so, P(at least 5) = P(one set of 20) + ... + (P(one set of 15) - 7*P(a set of 5 from the other 5 dice)) + (P(one set of 14) - 7*P(a set of 5 from the other 6) - 7*P(a set of 6 from the other 6)) + ... Coming up with the source code for that is proving more difficult.
The exact probability distribution F_{s,i} of a sum of i s-sided dice can be calculated as the repeated convolution of the single-die probability distribution with itself:
F_{s,i} = F_{s,1} * F_{s,i-1},   where F_{s,1}(k) = 1/s for 1 <= k <= s and 0 otherwise.
http://en.wikipedia.org/wiki/Dice
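A small sketch of that repeated convolution (my own code, using numpy.convolve):

import numpy as np

def sum_distribution(s, i):
    single = np.full(s, 1.0 / s)       # one die: faces 1..s, each with probability 1/s
    dist = single
    for _ in range(i - 1):
        dist = np.convolve(dist, single)
    return dist                         # dist[j] = P(sum of i dice == i + j)

d = sum_distribution(8, 20)
print(d.sum())                          # ~1.0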
This problem is really hard if you have to generalize it (get the exact formula).
But anyways, let me explain the algorithm.
If you want to know "the number of ways to get exactly 5 dice of the same value", you have to rephrase your previous problem as "calculate the number of ways to get exactly 5 dice of the value 3 AND no other value repeated exactly 5 times".
For simplicity's sake, let's call function F(20,8,5) (5 dice, all values) the first answer, and F(20,8,5,3) (5 dice, value 3) the second.
We have that F(20,8,5) = F(20,8,5,3) * 8 + (events when more than one value is repeated 5 times)
So if we can get F(20,8,5,3) it should be pretty simple, shouldn't it?
Well...not so much...
First, let us define some variables:
X1, X2, X3, ..., Xi, where Xi = the number of times we roll the value i
Then:
F(20,8,5)/8^20 = P(X1=5 or X2=5 or ... or X8=5, with R=20 (rolls) and N=8 (number of die values))
, P(statement) being the standard way to write a probability.
we continue:
F(20,8,5,3)/8^20 = P(X3=5 and X1<>5 and ... and X8<>5, R=20, N=8)
F(20,8,5,3)/8^20 = 1 - P(X1=5 or X2=5 or X4=5 or X5=5 or X6=5 or X7=5 or X8=5, R=15, N=7)
F(20,8,5,3)/8^20 = 1 - F(15,7,5)/7^15
recursively:
F(15,7,5) = F(15,7,5,1) * 7
P(X1=5 or X2=5 or X4=5 or X5=5 or X6=5 or X7=5 or X8=5, R=15, N=7) = P(X1=5 and X2<>5 and X4<>5 and .. and X8<>5. R=15, N=7) * 7
F(15,7,5,1)/7^15 = 1 - F(10,6,5)/6^10
F(10,6,5) = F(10,6,5,2) * 6
F(10,6,5,2)/6^10 = 1 - F(5,5,5)/5^5
F(5,5,5) = F(5,5,5,4) * 5
Well then... F(5,5,5,4) is the number of ways to get 5 dice of value 4 in 5 rolls, such that no other value repeats 5 times. There is only 1 way, out of a total of 5^5. The probability is then 1/5^5.
F(5,5,5) is the number of ways to get 5 dice of any value (out of 5 values) in 5 rolls. It's obviously 5. The probability is then 5/5^5 = 1/5^4.
F(10,6,5,2) is the number of ways to get 5 dice of value 2 in 10 rolls, such that no other value repeats 5 times.
F(10,6,5,2) = (1-F(5,5,5)/5^5) * 6^10 = (1-1/5^4) * 6^10
Well... I think it may be incorrect at some part, but anyway, you get the idea. I hope I could make the algorithm understandable.
edit:
I did some checks, and I realized you have to add some cases when you get more than one value repeated exactly 5 times. Don't have time to solve that part though...
Here is what I am thinking...
If you just had 5 dice, you would only have eight ways to get what you want.
For each of those eight ways, all possible combinations of the other 15 dice work.
So - I think the answer is: (8 * 8^15) / 8^20
(The answer for at least 5 the same.)
I believe you can use the formula of x occurrences in n events as:
P = probability^n * (n!/((n - x)!x!))
So the final result is going to be the sum of results from 0 to n.
I don't really see any easy way to combine it into one step that would be less messy. With this way you have the formula spelled out in the code as well. You may have to write your own factorial method though.
float calculateProbability(int tosses, int atLeastNumber) {
    float atLeastProbability = 0;
    float eventProbability = (float) Math.pow(1.0 / 8.0, tosses);
    int nFactorial = factorial(tosses);   // caution: factorial(20) overflows int in practice
    for (int i = 1; i <= atLeastNumber; i++) {
        atLeastProbability += eventProbability * (nFactorial / (factorial(tosses - i) * factorial(i)));
    }
    return atLeastProbability;
}
Recursive solution:
Prob_same_value(n) = Prob_same_value(n-1) * (1 - Prob_noone_rolling_that_value(N-(n-1)))