Unlucky number 13 - python

I came across this problem, Unlucky number 13!, recently but could not think of an efficient solution to it.
Problem statement :
N is taken as input.
N can be very large: 0 <= N <= 1000000009.
Find the total number of strings of exactly N characters that don't contain "13" as a substring. Each character may be any digit from 0-9, repeated any number of times.
# Example:
# N = 2:
# output: 99 (all strings 00-99 except "13")
# N = 1:
# output: 10 (all strings 0-9; none contains "13")
My solution:
N = int(raw_input())
if N < 2:
    print 10
else:
    without_13 = 10
    for i in range(10, int('9' * N) + 1):
        string = str(i)
        if string.count("13") >= 1:
            continue
        without_13 += 1
    print without_13
Output
The output file should contain the answer to each query on a new line, modulo 1000000009.
Is there any other, more efficient way to solve this? My solution gets "time limit exceeded" on the coding site.

I think this can be solved via recursion:
ans(n) = ans([n/2])^2 - ans([n/2]-1)^2, if n is even
ans(n) = ans([n/2]+1)*ans([n/2]) - ans([n/2])*ans([n/2]-1), if n is odd
Base Cases:
ans(0) = 1
ans(1) = 10
Its implementation runs quite fast even for large inputs like 10^9, which is expected since its complexity is O(log n) instead of the O(n) of the other answers:
cache = {}
mod = 1000000009

def ans(n):
    if cache.has_key(n):
        return cache[n]
    if n == 0:
        cache[n] = 1
        return cache[n]
    if n == 1:
        cache[n] = 10
        return cache[n]
    temp1 = ans(n/2)
    temp2 = ans(n/2 - 1)
    if (n & 1) == 0:
        cache[n] = (temp1*temp1 - temp2*temp2) % mod
    else:
        temp3 = ans(n/2 + 1)
        cache[n] = (temp1 * (temp3 - temp2)) % mod
    return cache[n]

print ans(1000000000)
Explanation:
Let a string s have an even number of digits, n.
Let ans(n) be the answer for input n, i.e. the number of strings of length n without the substring "13" in them.
The answer for a string s of length n can then be written as the product of the answer for the first half of the string (ans(n/2)) and the answer for the second half (ans(n/2)), minus the number of cases where "13" straddles the middle, i.e. where the last digit of the first half is 1 and the first digit of the second half is 3.
This can be expressed mathematically as:
ans(n) = ans([n/2])^2 - ans([n/2]-1)^2
Similarly for the cases where the input number n is odd, we can derive the following equation:
ans(n) = ans([n/2]+1)*ans([n/2]) - ans([n/2])*ans([n/2]-1)

I get the feeling that this question is designed with the expectation that you would initially instinctively do it the way you have. However, I believe there's a slightly different approach that would be faster.
You can produce all the numbers that contain the number 13 yourself, without having to loop through all the numbers in between. For example:
2 digits:
13
3 digits, position 1:
113
213
313 etc.
3 digits, position 2:
131
132
133 etc.
Therefore, you don't have to check all the numbers from 0 up to N nines. You simply count all the numbers with 13 in them until the length is larger than N.
This may not be the fastest solution (in fact I'd be surprised if this couldn't be solved efficiently by using some mathematics trickery) but I believe it will be more efficient than the approach you have currently taken.
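For illustration, here is a brute-force sketch of that generate-and-count idea (the function name is mine). It stores the generated numbers in a set so that strings like "1313", which get produced once per position, are only counted once. It is still exponential, so this is for small N only:

from itertools import product

def count_with_13(n):
    # Place "13" at each position p and fill the other n-2 digits freely;
    # the set removes duplicates such as "1313".
    seen = set()
    for p in range(n - 1):
        for rest in product("0123456789", repeat=n - 2):
            seen.add("".join(rest[:p]) + "13" + "".join(rest[p:]))
    return len(seen)

print(count_with_13(2), count_with_13(3))  # 1 20, so 10^2 - 1 = 99 and 10^3 - 20 = 980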

This is a P&C (permutations and combinations) problem. I'm going to assume 0 is a valid string, and so are 00, 000 and so on, each treated as distinct from the others.
The total number of strings not containing 13, of length N, is unsurprisingly given by:
(Total Number of strings of length N) - (Total number of strings of length N that have 13 in them)
Now, the total number of strings of length N is easy: you have 10 digits and N slots to put them in, so 10^N.
The number of strings of length N with 13 in them is a little trickier.
You'd think you can do something like this:
=> (N-1)C1 * 10^(N-2)
=> (N-1) * 10^(N-2)
But you'd be wrong, or more accurately, you'd be over-counting certain strings. For example, you'd be over-counting the set of strings that have two or more 13s in them.
What you really need to do is apply the inclusion-exclusion principle to count the number of strings with 13 in them, so that they're all included once.
If you look at this problem as a set counting problem, you have quite a few sets:
S(0,N): Set of all strings of Length N.
S(1,N): Set of all strings of Length N, with at least one '13' in it.
S(2,N): Set of all strings of Length N, with at least two '13's in it.
...
S(N/2,N): Set of all strings of Length N, with at least floor(N/2) '13's in it.
You want the set of all strings with 13 in them, but counted at most once. You can use the inclusion-exclusion principle for computing that set.
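Because two occurrences of "13" can never overlap (an overlap would force one digit to be both 1 and 3), choosing k occurrence positions in a length-n string can be done in C(n-k, k) ways, and the inclusion-exclusion collapses to a closed sum. A minimal sketch of that count (my own function name; needs Python 3.8+ for math.comb):

from math import comb

def count_without_13(n):
    # Inclusion-exclusion over the number k of "13" occurrences:
    # C(n-k, k) ways to place k non-overlapping "13" blocks,
    # 10^(n-2k) ways to fill the remaining digits.
    return sum((-1)**k * comb(n - k, k) * 10**(n - 2*k)
               for k in range(n // 2 + 1))

print([count_without_13(n) for n in range(5)])  # [1, 10, 99, 980, 9701]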

Let f(n) be the number of sequences of length n that have no "13" in them, and g(n) be the number of sequences of length n that have "13" in them.
Then f(n) = 10^n - g(n) (in mathematical notation), because it's the number of possible sequences (10^n) minus the ones that contain "13".
Base cases:
f(0) = 1
g(0) = 0
f(1) = 10
g(1) = 0
When looking for the sequences with "13", a sequence can have a "13" at the beginning; that accounts for 10^(n-2) possible sequences. It could also have a "13" in the second position, again accounting for 10^(n-2) possible sequences. But if it has a "13" in the third position, and we again assumed 10^(n-2) possible sequences, we would count twice those that already had a "13" in the first position. So we have to subtract them. Instead, we count f(2)*10^(n-4), because f(2) is exactly the number of combinations of the first two positions that don't contain "13".
E.g. for g(5):
g(5) = 10^3 + 10^3 + f(2)*10^1 + f(3)*10^0
We can rewrite that to look the same everywhere:
g(5) = f(0)*10^3 + f(1)*10^2 + f(2)*10^1 + f(3)*10^0
Or simply the sum of f(i)*10^(n-(i+2)) with i ranging from 0 to n-2.
In Python:
from functools import lru_cache

@lru_cache(maxsize=1024)
def f(n):
    return 10**n - g(n)

@lru_cache(maxsize=1024)
def g(n):
    return sum(f(i)*10**(n-(i+2)) for i in range(n-1))  # range is exclusive
The lru_cache is optional, but often a good idea when working with recursion.
>>> [f(n) for n in range(10)]
[1, 10, 99, 980, 9701, 96030, 950599, 9409960, 93149001, 922080050]
The results are instant and it works for very large numbers.

In fact this question is more about math than about Python.
For N digits there are 10^N possible unique strings. To get the answer to the problem, we need to subtract the number of strings containing "13".
If the string starts with "13", we have 10^(N-2) possible unique strings. If "13" is at the second position (i.e. a string like x13...), we again have 10^(N-2) possibilities. But we can't continue this logic further, as it would lead us to double-counting strings which have 13 at different positions. For example, for N=4 there is the string "1313", which we would count twice. To avoid this, we should count only those strings which we haven't counted before. So for "13" at position p (counting from 0), we should find the number of unique strings which don't have "13" to the left of p, that is, for each p:
number_of_strings_for_13_at_p = number_of_strings_without_13(p) * 10^(N-p-2)
So we recursively define the number_of_strings_without_13 function.
Here is the idea in the code:
def number_of_strings_without_13(N):
    sum_numbers_with_13 = 0
    for p in range(N-1):
        if p < 2:
            sum_numbers_with_13 += 10**(N-2)
        else:
            sum_numbers_with_13 += number_of_strings_without_13(p) * 10**(N-p-2)
    return 10**N - sum_numbers_with_13
I should say that 10**N means 10 to the power of N. Everything else is described above. The function also has the surprisingly pleasant property of giving correct answers for N=1 and N=2.
To test that this works correctly, I've rewritten your code as a function and refactored it a little:
def number_of_strings_without_13_bruteforce(N):
    without_13 = 0
    for i in range(10**N):
        if str(i).count("13"):
            continue
        without_13 += 1
    return without_13

for N in range(1, 7):
    print(number_of_strings_without_13(N),
          number_of_strings_without_13_bruteforce(N))
They gave the same answers. With bigger N the brute force gets very slow, but for very large N the recursive function also slows down considerably. There is a well-known solution for that: since we use the values of number_of_strings_without_13 for parameters smaller than N multiple times, we should remember the answers and not recalculate them each time. It's quite simple to do, like this:
def number_of_strings_without_13(N, answers=dict()):
    if N in answers:
        return answers[N]
    sum_numbers_with_13 = 0
    for p in range(N-1):
        if p < 2:
            sum_numbers_with_13 += 10**(N-2)
        else:
            sum_numbers_with_13 += number_of_strings_without_13(p) * 10**(N-p-2)
    result = 10**N - sum_numbers_with_13
    answers[N] = result
    return result

Thanks to L3viathan's comment, it is now clear. The logic is beautiful.
Let's say a(n) is the number of strings of n digits without "13" in them. If we know all the good strings of length n-1, we can add one more digit to the left of each and compute a(n). As we can combine the previous digits with any of 10 new ones, we get 10*a(n-1) different strings. But we must subtract the number of strings which now start with "13", which we wrongly counted as OK in the previous step; there are a(n-2) such wrongly added strings. So a(n) = 10*a(n-1) - a(n-2). That's it. It's that simple.
What is even more interesting is that this sequence can be calculated without iteration with a closed formula, https://oeis.org/A004189. But practically that doesn't help much, as the formula requires floating-point calculations, which will lead to rounding and will not work for big n (it will give an answer with some error).
Nevertheless the original sequence is quite easy to calculate and it doesn't need to store all the previous values, just the last two. So here is the code
def number_of_strings(n):
    result = 0
    result1 = 99
    result2 = 10
    if n == 1:
        return result2
    if n == 2:
        return result1
    for i in range(3, n+1):
        result = 10*result1 - result2
        result2 = result1
        result1 = result
    return result
This one is several orders of magnitude faster than my previous suggestion, and memory consumption is minimal, as it keeps only the last two values.
P.S. If you run this with Python 2, you'd better change range to xrange.

This Python 3 solution meets the time and memory requirements of HackerEarth:
from functools import lru_cache

mod = 1000000009

@lru_cache(1024)
def ans(n):
    if n == 0:
        return 1
    if n == 1:
        return 10
    temp1 = ans(n//2)
    temp2 = ans(n//2-1)
    if (n & 1) == 0:
        return (temp1*temp1 - temp2*temp2) % mod
    else:
        temp3 = ans(n//2 + 1)
        return (temp1 * (temp3 - temp2)) % mod

for t in range(int(input())):
    n = int(input())
    print(ans(n))

I came across this problem on
https://www.hackerearth.com/problem/algorithm/the-unlucky-13-d7aea1ff/
I haven't been able to get the judge to accept my solution(s) in Python but (2) in ANSI C worked just fine.
Straightforward recursive counting with a(n) = 10*a(n-1) - a(n-2) is pretty slow when getting to large numbers, but there are several options (one of which is not mentioned here yet):
1.) using generating functions:
https://www.wolframalpha.com/input/?i=g%28n%2B1%29%3D10g%28n%29+-g%28n-1%29%2C+g%280%29%3D1%2C+g%281%29%3D10
the powers should be computed by repeated squaring, the modulo needs to be inserted cleverly, and the numbers must be rounded, but the Python solution was too slow for the judge anyway (it took 7 s on my laptop and the judge needs it computed in under 1.5 s)
2.) using matrices:
the idea is that we can get the vector [a(n), a(n-1)] by multiplying the vector [a(n-1), a(n-2)] by a specific matrix constructed from the equation a(n) = 10*a(n-1) - a(n-2):
| a(n)   |   | 10 -1 |   | a(n-1) |
| a(n-1) | = |  1  0 | * | a(n-2) |
and by induction:
| a(n)   |   | 10 -1 |^(n-1)   | a(1) |
| a(n-1) | = |  1  0 |       * | a(0) |
the 2x2 matrix multiplication should be done via exponentiation by squaring, applying the modulo throughout. It should be hardcoded rather than computed via for loops, as that is much faster.
Again this was slow for Python (8s on my laptop) but fast for ANSI C (0.3s)
3.) the solution proposed by Anmol Singh Jaggi above, which is the fastest in Python (3 s), but the memory consumption of the cache is big enough to break the memory limits of the judge. Removing the cache or limiting its size makes the computation very slow.
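For illustration, here is a minimal Python sketch of the matrix approach from (2). The function names are mine, and -1 is stored as mod - 1 so all entries stay non-negative:

mod = 1000000009

def mat_mult(A, B):
    # hardcoded 2x2 product, everything reduced modulo `mod`
    return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % mod,
             (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % mod],
            [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % mod,
             (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % mod]]

def mat_pow(M, e):
    # exponentiation by squaring, O(log e) multiplications
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        e >>= 1
    return R

def a(n):
    if n == 0:
        return 1
    P = mat_pow([[10, mod - 1], [1, 0]], n - 1)  # -1 stored as mod-1
    # [a(n), a(n-1)] = M^(n-1) * [a(1), a(0)] with a(1)=10, a(0)=1
    return (P[0][0] * 10 + P[0][1] * 1) % mod

print(a(1), a(2), a(3))  # 10 99 980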



HackerRank bitwiseAnd challenge - algorithm is too slow?

I am working on this code challenge on HackerRank: Day 29: Bitwise AND:
Task
Given set 𝑆={1,2,3,...,𝑁}. Find two integers, 𝐴 and 𝐵 (where 𝐴 < 𝐵), from set 𝑆 such that the value of 𝐴&𝐵 is the
maximum possible and also less than a given integer, 𝐾. In this case,
& represents the bitwise AND operator.
Function Description
Complete the bitwiseAnd function in the editor below.
bitwiseAnd has the following parameter(s):
int N: the maximum integer to consider
int K: the limit of the result, inclusive
Returns
int: the maximum value of 𝐴&𝐵 within the limit.
Input Format
The first line contains an integer, 𝑇, the number of test cases. Each
of the 𝑇 subsequent lines defines a test case as 2 space-separated
integers, 𝑁 and 𝐾, respectively.
Constraints
1 ≤ 𝑇 ≤ 10^3
2 ≤ 𝑁 ≤ 10^3
2 ≤ 𝐾 ≤ 𝑁
Sample Input
STDIN Function
----- --------
3 T = 3
5 2 N = 5, K = 2
8 5 N = 8, K = 5
2 2 N = 2, K = 2*
Sample Output
1
4
0
*At the time of writing the original question has an error here. Corrected in this copy.
I was not able to solve it using Python, as the time limit was exceeded every time, with a message telling me to optimise my code (some test cases had like 1000 requests).
I then tried writing the same code in C#, and it worked perfectly, executing like 10 times faster, even without any effort in optimizing the code.
Is it possible to further optimise the code below, for example with some magic that I don't know about?
Code:
def bitwiseAnd(N, K):
    result = 0
    for x in range(1, N):
        for y in range(x+1, N+1):
            if result < x&y < K:
                result = x&y
    return result

for x in range(int(input())):
    print(bitwiseAnd(*[int(x) for x in input().split(' ')]))
Your algorithm takes a brute-force approach, but this can be done more efficiently.
First, observe some properties of this problem:
𝐴 & 𝐵 will never be greater than 𝐴 or 𝐵
If we think we have a solution 𝐶, then both 𝐴 and 𝐵 should have the same 1-bits as 𝐶 has, including possibly a few more.
We want 𝐴 and 𝐵 to not be greater than needed, since they need to be not greater than 𝑁, so given the previous point, we should let 𝐴 be equal to 𝐶, and let 𝐵 just have one 1-bit more than 𝐴 (since it should be a different number).
The least possible value for 𝐵 is then to set a 1-bit at the least bit in 𝐴 that is still 0.
If this 𝐵 is still not greater than 𝑁, then we can conclude that 𝐶 is a solution.
With the above steps in mind, it makes sense to first try with the greatest 𝐶 possible, i.e. 𝐶=𝐾−1, and then reduce 𝐶 until the above routine finds a 𝐵 that is not greater than 𝑁.
Here is the code for that:
def bitwiseAnd(N, K):
    for A in range(K - 1, 0, -1):
        # Find the least bit that is zero in A:
        # use some "magic" to get the number with just one 1-bit in that position
        bit = (A + 1) & -(A + 1)
        B = A + bit
        if B <= N:
            # We know that A & B == A here, so just return A
            return A
    return 0
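Checking this against the sample cases from the problem statement:

print(bitwiseAnd(5, 2))  # 1
print(bitwiseAnd(8, 5))  # 4
print(bitwiseAnd(2, 2))  # 0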

How to convert negative bit-represented numbers to their actual negative int value in python3?

Hello, I have solved this LeetCode question: https://leetcode.com/problems/single-number-ii. The objective is to solve the problem in O(n) time and O(1) space. The code I wrote is the following:
from typing import List

class Solution:
    def singleNumber(self, nums: List[int]) -> int:
        counter = [0 for i in range(32)]
        result = 0
        for i in range(32):
            for num in nums:
                if ((num >> i) & 1):
                    counter[i] += 1
            result = result | ((counter[i] % 3) << i)
        return self.convert(result)
        #return result
    def convert(self, x):
        if x >= 2**31:
            x = (~x & 0xffffffff) + 1
            x = -x
        return x
Now the interesting part is the convert function. Since Python uses objects to store ints, as opposed to a 32-bit word or something, it does not know that the result is negative when the MSB of my counter is set to 1. I handle that by converting the result to its two's complement and returning the negative value.
Now someone else posted their solution with:
def convert(self, x):
    if x >= 2**31:
        x -= 2**32
    return x
And I can't figure out why that works. I need help understanding why this subtraction works.
The value of the highest bit of an unsigned n-bit number is 2^(n-1).
The value of the highest bit of a signed two's complement n-bit number is -2^(n-1).
The difference between those two values is 2^n.
So if an unsigned n-bit number has the highest bit set, to convert it to a two's complement signed number, subtract 2^n.
In a 32-bit number, if bit 31 is set, the number will be >= 2^31, so the formula would be:
if n >= 2**31:
    n -= 2**32
I hope that makes it clear.
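A tiny demonstration of that rule (the helper name is mine):

def to_signed32(x):
    # Reinterpret a value in [0, 2**32) as a signed 32-bit integer
    return x - 2**32 if x >= 2**31 else x

print(to_signed32(0xFFFFFFFF))  # -1
print(to_signed32(0x80000000))  # -2147483648
print(to_signed32(5))           # 5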
Python integers are infinitely large. They will not turn negative as you add more bits so two's complement may not work as expected. You could manage negatives differently.
def singleNumber(nums):
    result = 0
    sign = [1,-1][sum(int(n<0) for n in nums)%3]
    for i in range(32):
        counter = 0
        for num in nums:
            counter += (abs(num) >> i) & 1
        result = result | ((counter % 3) << i)
    return result * sign
This binary approach can be optimized and simplified like this:
def singleNumber(nums):
    result = 0
    for i in range(32):
        counter = sum(1 for n in nums if (n>>i)&1)
        if counter > 0: result |= (counter % 3) << i
    return result - 2*(result&(1<<31))
If you like one liners, you can implement it using reduce() from functools:
result = reduce(lambda r,i:r|sum(1&(n>>i) for n in nums)%3<<i,range(32),sum(n<0 for n in nums)%3*(-1<<32))
Note that this approach will always do 32 passes through the data and will be limited to numbers in the range -2^31...2^31. Increasing this range will systematically augment the number of passes through the list of numbers (even if the list only contains small values). Also, since you're not using counter[i] outside of the i loop, you don't need a list to store the counters.
You could leverage base 3 instead of base 2 using a very similar approach (which also responds in O(n) time and O(1) space):
def singleNumber(nums):
    result = sign = 0
    for num in nums:
        if num<0 : sign += 1
        base3 = 1
        num = abs(num)
        while num > 0 :
            num,rest = divmod(num,3)
            rest,base3 = rest*base3, 3*base3
            if rest == 0 : continue
            digit = result % base3
            result = result - digit + (digit+rest)%base3
    return result * (1-sign%3*2)
This one has the advantage that it will go through the list only once (thus supporting iterators as input). It does not limit the range of values and will perform the nested while loop as few times as possible (in accordance with the magnitude of each value)
The way it works is by adding digits independently in a base 3 representation and cycling the result (digit by digit) without applying a carry.
For example: [ 16, 16, 32, 16 ]
Base10   Base 3   Base 3 digits   result (cumulative)
------   ------   -------------   -------------------
16       121      0 | 1 | 2 | 1   121
16       121      0 | 1 | 2 | 1   212
32       1012     1 | 0 | 1 | 2   1221
16       121      0 | 1 | 2 | 1   1012
                                  -------------
sum of digits % 3                 1 | 0 | 1 | 2  ==> 32
The while num > 0 loop processes the digits. It will run at most log(V, 3) times, where V is the largest absolute value in the numbers list. As such it is similar to the for i in range(32) loop in the base-2 solution, except that it always uses the smallest possible range. For any given pattern of values, the number of iterations of that while loop is less than or equal to a constant, thus preserving the O(n) complexity of the main loop.
I made a few performance tests and, in practice, the base3 version is only faster than the base2 approach when values are small. The base3 approach always performs fewer iterations but, when values are large, it loses out in total execution time because of the overhead of modulo vs bitwise operations.
In order for the base2 solution to always be faster than the base 3 approach, it needs to optimize its iterations through the bits by reversing the loop nesting (bits inside numbers instead of numbers inside bits):
def singleNumber(nums):
    bits = [0]*len(bin(max(nums,key=abs)))
    sign = 0
    for num in nums:
        if num<0 : sign += 1
        num = abs(num)
        bit = 0
        while num > 0:
            if num&1 : bits[bit] += 1
            bit += 1
            num >>= 1
    result = sum(1<<bit for bit,count in enumerate(bits) if count%3)
    return result * [1,-1][sign%3]
Now it will outperform the base 3 approach every time. As a side benefit, it is no longer limited to a value range and will support iterators as input. Note that the size of the bits array can be treated as a constant, so this is also an O(1) space solution.
But, to be fair, if we apply the same optimization to the base 3 approach (i.e. using a list of base 3 'bits'), its performance comes back in front for all value sizes:
def singleNumber(nums):
    tribits = [0]*len(bin(max(nums,key=abs)))  # enough for base 2 -> enough for base 3
    sign = 0
    for num in nums:
        if num<0 : sign += 1
        num = abs(num)
        base3 = 0
        while num > 0:
            digit = num%3
            if digit: tribits[base3] += digit
            base3 += 1
            num //= 3
    result = sum(count%3 * 3**base3 for base3,count in enumerate(tribits) if count%3)
    return result * [1,-1][sign%3]
Counter from collections would give the expected result in O(n) time with a single line of code:
from collections import Counter
numbers = [1,0,1,0,1,0,99]
singleN = next(n for n,count in Counter(numbers).items() if count == 1)
Sets would also work in O(n):
distinct = set()
multiple = [n for n in numbers if n in distinct or distinct.add(n)]
singleN = min(distinct.difference(multiple))
These last two solutions do use a variable amount of extra memory that is proportional to the size of the list (i.e. not O(1) space). On the other hand, they run 30 times faster, they will support any data type in the list, and they also support iterators.
32-bit signed integers wrap around every 2**32, so a positive number with the sign bit set (i.e. >= 2**31) has the same binary representation as the negative number 2**32 less.
That is the very definition of two's complement code of a number A on n bits.
if number A is positive, use the binary code of A
if A is negative, use the binary code of 2^n+A (or equivalently 2^n-|A|). This is the number you have to add to |A| to get 2^n (i.e. the complement of |A| to 2^n, hence the name of the two's complement method).
So, if you have a negative number B coded in two's complement, what is actually in its code is 2^n+B. To get its value back, you have to subtract 2^n from the code.
There are many other definitions of two's complement (~A+1, ~(A-1), etc.), but this one is the most useful, as it explains why adding signed two's complement numbers is absolutely identical to adding positive numbers. The number is in the code (with 2^n added if negative), and the addition result will be correct, provided you ignore the 2^n that may be generated as a carry out (and there is no overflow). This arithmetic property is the main reason why two's complement is used in computers.
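To see that arithmetic property in action, here is a small sketch with 8-bit codes (the names are mine, for illustration):

BITS = 8
MASK = (1 << BITS) - 1          # 0xFF

def code(a):
    # two's complement code of a on BITS bits
    return a & MASK

def decode(c):
    # recover the signed value from the code
    return c - (1 << BITS) if c >= (1 << (BITS - 1)) else c

# Adding the codes and ignoring the carry out of the top bit
# gives the code of the correct signed sum:
a, b = -3, 5
s = (code(a) + code(b)) & MASK
print(decode(s))  # 2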

Capturing all data in non-whole train, test, and validate splits

just wondering if a better solution exists for this sort of problem.
We know that for a X/Y percentage split of an even number we can get an exact split of the data - for example for data size 10:
10 * .6 = 6
10 * .4 = 4
10
Splitting data this way is easy, and we can guarantee we have all of the data and nothing is lost. However where I am struggling is on less friendly numbers - take 11
11 * .6 = 6.6
11 * .4 = 4.4
11
However, we can't index into an array at i = 6.6, for example. So we have to decide how to do this. If we take JUST the integer portion, we lose 1 data point:
First set = 0..6
Second set = 6..10
This would be the same case if we floored the numbers.
However, if we take the ceiling of the numbers:
First set = 0..7
Second set = 7..12
And we've read past the end of our array.
This gets even worse when we throw in a 3rd or 4th split (30,30,20,20 for example).
Is there a standard splitting procedure for these kinds of problems? Is data loss accepted? It seems like data loss would be unacceptable for dependent data, such as time series.
Thanks!
EDIT: The values .6 and .4 are chosen by me. They could be any two numbers that sum to 1.
First of all, notice that your problem is not limited to odd-sized arrays as you claim, but any-sized arrays. How would you make the 56%-44% split of a 10 element array? Or a 60%-40% split of a 4 element array?
There is no standard procedure. In many cases, programmers do not care that much about an exact split and they either do it by flooring or rounding one quantity (the size of the first set), while taking the complementary (array length - rounded size) for the other (the size of the second).
This might be OK in most cases, when it's a one-off calculation and accuracy is not required. You have to ask yourself what your requirements are. For example: are you taking thousands of 10-sized arrays, splitting each of them 56%-44%, doing some calculations and returning a result? You have to ask yourself what accuracy you want. Do you care if your result ends up being the 60%-40% split or the 50%-50% split?
As another example imagine that you are doing a 4-way equal split of 25%-25%-25%-25%. If you have 10 elements and you apply the rounding technique you end up with 3,3,3,1 elements. Surely this will mess up your results.
If you do care about all these inaccuracies, then the first step is to consider whether you can adjust either the array size and/or the split ratio(s).
If these are set in stone then the only way to have an accurate split of any ratios of any sized array is to make it probabilistic. You have to split multiple arrays for this to work (meaning you have to apply the same split ratio to same-sized arrays multiple times). The more arrays the better (or you can use the same array multiple times).
So imagine that you have to make a 56%-44% split of a 10 sized array. This means that you need to split it in 5.6 elements and 4.4 elements on the average.
There are many ways you can achieve a 5.6 element average. The easiest one (and the one with the smallest variance in the sequence of tries) is to have 60% of the time a set with 6 elements and 40% of the time a set that has 5 elements.
0.6*6 + 0.4*5 = 5.6
In terms of code this is what you can do to decide on the size of the set each time:
import random

array_size = 10
first_split = 0.56
avg_split_size = array_size * first_split
floored_split_size = int(avg_split_size)
if avg_split_size > floored_split_size:
    if random.uniform(0, 1) > avg_split_size - floored_split_size:
        this_split_size = floored_split_size
    else:
        this_split_size = floored_split_size + 1
else:
    this_split_size = floored_split_size  # already a whole number
You could make the code more compact, I just made an outline here so you get the idea. I hope this helps.
Instead of using ceil() or floor(), use round(). For example:
>>> round(6.6)
7.0
The value returned will be of float type. For getting the integer value, type-cast it to int as:
>>> int(round(6.6))
7
This will be the value of your first split. For the second split, calculate it as len(data) - split1_val. This is applicable in the case of a 2-split problem.
In the case of 3 splits, take the round() value for two splits and take the value of the 3rd split as len(my_list) - val_split_1 - val_split_2.
In a generic way, for N splits: take the round() value of the first N-1 splits, and for the last value use len(data) minus the sum of the N-1 round() values.
where len() gives the length of the list.
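Putting that recipe into a small sketch (the function name is mine): round the first N-1 sizes and let the last one absorb the remainder, so nothing is lost.

def split_sizes(data_len, ratios):
    # Round the first N-1 split sizes; the last split absorbs the
    # remainder, so the sizes always add up to data_len exactly.
    sizes = [int(round(data_len * r)) for r in ratios[:-1]]
    sizes.append(data_len - sum(sizes))
    return sizes

print(split_sizes(11, [0.6, 0.4]))            # [7, 4]
print(split_sizes(10, [0.3, 0.3, 0.2, 0.2]))  # [3, 3, 2, 2]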
Let's first consider just splitting the set into two pieces.
Let n be the number of elements we are splitting, and p and q be the proportions, so that
p+q == 1
I assert that the parts after the decimal point will always sum to either 1 or 0, so we should use floor on one and ceil on the other, and we will always be right.
Here is a function that does that, along with a test. I left the print statements in but they are commented out.
import math

def simpleSplitN(n, p, q):
    "split n into proportions p and q and return indices"
    np = math.ceil(n*p)
    nq = math.floor(n*q)
    #print n, sum([np, nq]) #np and nq are the proportions
    return [0, np] #these are the indices we would use

#test for simpleSplitN
for i in range(1, 10):
    p = i/10.0
    q = 1-p
    simpleSplitN(37, p, q)
For the mathematically inclined, here is the proof that the decimal proportions will sum to 1
-----------------------
We can express p*n as n/(1/p), and so by the division algorithm we get an integer k and a remainder r:
n == k*(1/p) + r, with 0 <= r < (1/p)
Thus r/(1/p) == p*r < 1.
We can do exactly the same for q, getting
q*r < 1 (this is a different r)
It is important to note that p*r and q*r are exactly the parts after the decimal point when we compute p*n and q*n.
Now we can add them together (we've added subscripts to tell the two remainders apart):
0 <= p*(r_1) < 1
0 <= q*(r_2) < 1
=> 0 <= p*r_1 + q*r_2 == (p*n - k_1) + (q*n - k_2) == n - (k_1 + k_2) < 2
But by closure of the integers, n - (k_1 + k_2) is an integer, and so
0 <= n - (k_1 + k_2) < 2
means that p*r_1 + q*r_2 must be either 0 or 1. It will only be 0 in the case that n is divided evenly.
Otherwise we can now see that our fractional parts will always sum to 1.
-----------------------
We can do a very similar (but slightly more complicated) proof for splitting n into an arbitrary number (say N) of parts; instead of summing to 1, the fractional parts will sum to an integer less than N.
Here is the general function, it has uncommented print statements for verification purposes.
import math
import random

def splitN(n, c):
    """Compute indices that can be used to split
    a dataset of n items into a list of proportions c
    by first dividing them naively and then distributing
    the decimal parts of said division randomly
    """
    nc = [n*i for i in c]
    nr = [n*i - int(n*i) for i in c] #the decimal parts
    N = int(round(sum(nr))) #sum of all decimal parts
    print N, nc
    for i in range(0, len(nc)):
        nc[i] = math.floor(nc[i])
    for i in range(N): #randomly distribute leftovers
        nc[random.randint(1, len(nc)) - 1] += 1
    print n, sum(nc) #nc now contains the proportions
    out = [0] #compute a cumulative sum
    for i in range(0, len(nc) - 1):
        out.append(out[-1] + nc[i])
    print out
    return out

#test for splitN with various proportions
c = [.1,.2,.3,.4]
c = [.2,.2,.2,.2,.2]
c = [.3, .2, .2, .3]
for n in range(10, 40):
    print splitN(n, c)
If we have leftovers, we will never get an even split, so we distribute them randomly, like @Thanassis said. If you don't like the dependency on random, then you could just add them all at the beginning or at even intervals.
Both of my functions output indices but they compute proportions and thus could be slightly modified to output those instead per user preference.

sum of even fibonacci numbers up to 4million

The method I've used to try and solve this works but I don't think it's very efficient because as soon as I enter a number that is too large it doesn't work.
def fib_even(n):
    fib_even = []
    a, b = 0, 1
    for i in range(0, n):
        c = a+b
        if c%2 == 0:
            fib_even.append(c)
        a, b = b, a+b
    return fib_even

def sum_fib_even(n):
    fib_evens = fib_even(n)
    s = 0
    for i in fib_evens:
        s = s+i
    return s

n = 4000000
answer = sum_fib_even(n)
print answer
This for example doesn't work for 4000000 but will work for 400. Is there a more efficient way of doing this?
It is not necessary to compute all the Fibonacci numbers.
Note: I use in what follows the more standard initial values F[0]=0, F[1]=1 for the Fibonacci sequence. Project Euler #2 starts its sequence with F[2]=1,F[3]=2,F[4]=3,.... For this problem the result is the same for either choice.
Summation of all Fibonacci numbers (as a warm-up)
The recursion equation
F[n+1] = F[n] + F[n-1]
can also be read as
F[n-1] = F[n+1] - F[n]
or
F[n] = F[n+2] - F[n+1]
Summing this up for n from 1 to N (remember F[0]=0, F[1]=1) gives on the left the sum of the Fibonacci numbers, and on the right a telescoping sum where all of the inner terms cancel:
sum(n=1 to N) F[n] = (F[3]-F[2]) + (F[4]-F[3]) + (F[5]-F[4]) + ... + (F[N+2]-F[N+1])
                   = F[N+2] - F[2]
So for the sum using the N=4,000,000 of the question, one would just have to compute
F[4,000,002] - 1
with one of the super-fast methods for computing single Fibonacci numbers: either halving-and-squaring, equivalent to exponentiation of the iteration matrix, or the exponential formula based on the golden ratio (computed to the necessary precision).
Since about every 20 Fibonacci numbers you gain 4 additional digits, the final result will consist of about 800000 digits. Better use a data type that can contain all of them.
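One such halving-and-squaring method is fast doubling, based on the standard identities F[2k] = F[k]*(2*F[k+1] - F[k]) and F[2k+1] = F[k]^2 + F[k+1]^2. A minimal sketch (my own function name):

def fib_pair(n):
    # Return (F[n], F[n+1]) by halving-and-squaring (fast doubling)
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)   # a = F[k], b = F[k+1], with k = n // 2
    c = a * (2 * b - a)       # F[2k]
    d = a * a + b * b         # F[2k+1]
    return (c, d) if n % 2 == 0 else (d, c + d)

print(fib_pair(10)[0])  # 55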
Summation of the even Fibonacci numbers
Just inspecting the first 10 or 20 Fibonacci numbers reveals that all even members have an index of 3*k. Check by subtracting two successive recursions to get
F[n+3]=2*F[n+2]-F[n]
so F[n+3] always has the same parity as F[n]. Investing a little more computation, one finds a recursion for members three indices apart:
F[n+3] = 4*F[n] + F[n-3]
Setting
S = sum(k=1 to K) F[3*k]
and summing the recursion over n=3*k gives
F[3*K+3]+S-F[3] = 4*S + (-F[3*K]+S+F[0])
or
4*S = (F[3*K+3]+F[3*K]) - (F[3]+F[0]) = 2*F[3*K+2] - 2*F[2]
So the desired sum has the formula
S = (F[3*K+2]-1)/2
A quick calculation with the golden ratio formula reveals what N should be so that F[N] is just below the boundary, and thus what K = N div 3 should be:
N = Floor( log( sqrt(5)*Max )/log( 0.5*(1+sqrt(5)) ) )
Reduction of the Euler problem to a simple formula
In the original problem, one finds that N=33 and thus the sum is
S = (F[35]-1)/2;
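Putting the pieces together for the original Euler bound, using the fib_pair sketch above (the function name even_fib_sum is mine; the index formula is the one derived earlier, modulo floating-point edge cases):

import math

def even_fib_sum(max_value):
    # Index of the largest Fibonacci number <= max_value
    n = int(math.log(math.sqrt(5) * max_value) / math.log((1 + math.sqrt(5)) / 2))
    k = n // 3                 # the even members are exactly the F[3k]
    return (fib_pair(3 * k + 2)[0] - 1) // 2

print(even_fib_sum(4000000))  # 4613732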
Reduction of the problem in the question and consequences
Taking the misrepresented problem in the question at face value, N=4,000,000, so K=1,333,333 and the sum is
(F[4,000,001]-1)/2
which still has about 836,000 digits. And yes, biginteger types can handle such numbers; it just takes time to compute with them.
If printed in a format of 60 lines of 80 digits each, this number would fill about 174 sheets of paper, just to give an idea of what the output would look like.
In any case, it should not be necessary to store all the intermediate Fibonacci numbers; that storage is probably what causes the performance problem in your code.

How to speed up code to solve bit deletion puzzle

[This is related to Minimum set cover ]
I would like to solve the following puzzle by computer for small size of n. Consider all 2^n binary vectors of length n. For each one you delete exactly n/3 of the bits, leaving a binary vector length 2n/3 (assume n is an integer multiple of 3). The goal is to choose the bits you delete so as to minimize the number of different binary vectors of length 2n/3 that remain at the end.
For example, for n = 3 the optimal answer is 2 different vectors 11 and 00. For n = 6 it is 4, for n = 9 it is 6 and for n = 12 it is 10.
I had previously attempted to solve this problem as a minimum set cover problem of the following sort. All the lists contain only 1s and 0s.
I say that a list A covers a list B if you can make B from A by inserting exactly x symbols.
Consider all 2^n lists of 1s and 0s of length n and set x = n/3. I would like to compute a minimal set of lists of length 2n/3 that covers them all. David Eisenstat provided code that converted this minimal set cover problem into a Mixed Integer Programming Problem that could be fed into CPLEX (or http://scip.zib.de/ which is open source).
from collections import defaultdict
from itertools import product, combinations

def all_fill(source, num):
    output_len = (len(source) + num)
    for where in combinations(range(output_len), len(source)):
        poss = ([[0, 1]] * output_len)
        for (w, s) in zip(where, source):
            poss[w] = [s]
        for tup in product(*poss):
            (yield tup)

def variable_name(seq):
    return ('x' + ''.join((str(s) for s in seq)))

n = 12
shortn = ((2 * n) // 3)
x = (n // 3)
all_seqs = list(product([0, 1], repeat=shortn))
hit_sets = defaultdict(set)
for seq in all_seqs:
    for fill in all_fill(seq, x):
        hit_sets[fill].add(seq)
print('Minimize')
print(' + '.join((variable_name(seq) for seq in all_seqs)))
print('Subject To')
for (fill, seqs) in hit_sets.items():
    print(' + '.join((variable_name(seq) for seq in seqs)), '>=', 1)
print('Binary')
for seq in all_seqs:
    print(variable_name(seq))
print('End')
The problem is that if you set n=15 then the instance it outputs is too large for any solver I can find. Is there a more efficient way of solving this problem so I can solve n=15 or even n = 18?
This doesn't solve your problem (well, not quickly enough), but you're not getting many ideas and someone else may find something useful to build on here.
It's a short pure Python 3 program, using backtracking search with some greedy ordering heuristics. It solves the N = 3, 6, and 9 instances very quickly. It finds a cover of size 10 for N=12 quickly too, but will apparently take a much longer time to exhaust the search space (I'm out of time for this, and it's still running). For N=15, the initialization time is already slow.
Bitstrings are represented by plain N-bit integers here, so consume little storage. That's to ease recoding in a faster language. It does make heavy use of sets of integers, but no other "advanced" data structures.
Hope this helps someone! But it's clear that the combinatorial explosion of possibilities as N increases ensures that nothing will be "fast enough" without digging deeper into the mathematics of the problem.
def dump(cover):
    for s in sorted(cover):
        print(" {:0{width}b}".format(s, width=I))

def new_best(cover):
    global best_cover, best_size
    assert len(cover) < best_size
    best_size = len(cover)
    best_cover = cover.copy()
    print("N =", N, "new best cover, size", best_size)
    dump(best_cover)

def initialize(N, X, I):
    from itertools import combinations
    # Map a "wide" (length N) bitstring to the set of all
    # "narrow" (length I) bitstrings that generate it.
    w2n = [set() for _ in range(2**N)]
    # Map a narrow bitstring to all the wide bitstrings
    # it generates.
    n2w = [set() for _ in range(2**I)]
    for wide, wset in enumerate(w2n):
        for t in combinations(range(N), X):
            narrow = wide
            for i in reversed(t):  # largest i to smallest
                hi, lo = divmod(narrow, 1 << i)
                narrow = ((hi >> 1) << i) | lo
            wset.add(narrow)
            n2w[narrow].add(wide)
    return w2n, n2w

def solve(needed, cover):
    if len(cover) >= best_size:
        return
    if not needed:
        new_best(cover)
        return
    # Find something needed with minimal generating set.
    _, winner = min((len(w2n[g]), g) for g in needed)
    # And order its generators by how much reduction they make
    # to `needed`.
    for g in sorted(w2n[winner],
                    key=lambda g: len(needed & n2w[g]),
                    reverse=True):
        cover.add(g)
        solve(needed - n2w[g], cover)
        cover.remove(g)

N = 9  # CHANGE THIS TO WHAT YOU WANT
assert N % 3 == 0
X = N // 3  # number of bits to exclude
I = N - X   # number of bits to include

print("initializing")
w2n, n2w = initialize(N, X, I)
best_cover = None
best_size = 2**I + 1  # "infinity"
print("solving")
solve(set(range(2**N)), set())
Example output for N=9:
initializing
solving
N = 9 new best cover, size 6
000000
000111
001100
110011
111000
111111
Followup
For N=12 this eventually finished, confirming that the minimal covering set contains 10 elements (which it found very soon at the start). I didn't time it, but it took at least 5 hours.
Why's that? Because it's close to brain-dead ;-) A completely naive search would try all subsets of the 256 8-bit short strings. There are 2**256 such subsets, about 1.2e77 - it wouldn't finish in the expected lifetime of the universe ;-)
The ordering gimmicks here first detect that the "all 0" and "all 1" short strings must be in any covering set, so pick them. That leaves us looking at "only" the 254 remaining short strings. Then the greedy "pick an element that covers the most" strategy very quickly finds a covering set with 11 total elements, and shortly thereafter a covering with 10 elements. That happens to be optimal, but it takes a long time to exhaust all other possibilities.
At this point, any attempt at a covering set that reaches 10 elements is aborted (it can't possibly be smaller than 10 elements then!). If that were done wholly naively too, it would need to try adding (to the "all 0" and "all 1" strings) all 8-element subsets of the 254 remaining, and 254-choose-8 is about 3.8e14. Very much smaller than 1.2e77 - but still way too large to be practical. It's an interesting exercise to understand how the code manages to do so much better than that. Hint: it has a lot to do with the data in this problem.
Industrial-strength solvers are incomparably more sophisticated and complex. I was pleasantly surprised at how well this simple little program did on the smaller problem instances! It got lucky.
But for N=15 this simple approach is hopeless. It quickly finds a cover with 18 elements, but makes no more visible progress for at least hours. Internally, it's still working with needed sets containing hundreds (even thousands) of elements, which makes the body of solve() quite expensive. It still has 2**10 - 2 = 1022 short strings to consider, and 1022-choose-16 is about 6e34. I don't expect it would visibly help even if this code were sped up by a factor of a million.
It was fun to try, though :-)
And a small rewrite
This version runs at least 6 times faster on a full N=12 run, simply by cutting off futile searches one level earlier. Also speeds initialization, and cuts memory use by changing the 2**N w2n sets into lists (no set operations are used on those). It's still hopeless for N=15, though :-(
def dump(cover):
    for s in sorted(cover):
        print(" {:0{width}b}".format(s, width=I))

def new_best(cover):
    global best_cover, best_size
    assert len(cover) < best_size
    best_size = len(cover)
    best_cover = cover.copy()
    print("N =", N, "new best cover, size", best_size)
    dump(best_cover)

def initialize(N, X, I):
    from itertools import combinations
    # Map a "wide" (length N) bitstring to the set of all
    # "narrow" (length I) bitstrings that generate it.
    w2n = [set() for _ in range(2**N)]
    # Map a narrow bitstring to all the wide bitstrings
    # it generates.
    n2w = [set() for _ in range(2**I)]
    # mask[i] is a string of i 1-bits
    mask = [2**i - 1 for i in range(N)]
    for t in combinations(range(N), X):
        t = t[::-1]  # largest i to smallest
        for wide, wset in enumerate(w2n):
            narrow = wide
            for i in t:  # delete bit 2**i
                narrow = ((narrow >> (i+1)) << i) | (narrow & mask[i])
            wset.add(narrow)
            n2w[narrow].add(wide)
    # release some space
    for i, s in enumerate(w2n):
        w2n[i] = list(s)
    return w2n, n2w

def solve(needed, cover):
    if not needed:
        if len(cover) < best_size:
            new_best(cover)
        return
    if len(cover) >= best_size - 1:
        # can't possibly be extended to a cover < best_size
        return
    # Find something needed with minimal generating set.
    _, winner = min((len(w2n[g]), g) for g in needed)
    # And order its generators by how much reduction they make
    # to `needed`.
    for g in sorted(w2n[winner],
                    key=lambda g: len(needed & n2w[g]),
                    reverse=True):
        cover.add(g)
        solve(needed - n2w[g], cover)
        cover.remove(g)

N = 9  # CHANGE THIS TO WHAT YOU WANT
assert N % 3 == 0
X = N // 3  # number of bits to exclude
I = N - X   # number of bits to include

print("initializing")
w2n, n2w = initialize(N, X, I)
best_cover = None
best_size = 2**I + 1  # "infinity"
print("solving")
solve(set(range(2**N)), set())
print("best for N =", N, "has size", best_size)
dump(best_cover)
First consider the case of 6 bits, where you can throw away 2 bits. Any pattern with a 6-0, 5-1 or 4-2 balance of ones and zeros can therefore be converted to 0000 or 1111. In the case of a 3-3 zero-one balance, any pattern can be converted to one of four cases: 1000, 0001, 0111, or 1110. Therefore, one possible minimum set for 6 bits is:
0000
0001
0111
1110
1000
1111
Now consider 9 bits with 3 thrown away. You have the following set of 14 master patterns:
000000
100000
000001
010000
000010
110000
000011
001111
111100
101111
111101
011111
111110
111111
In other words, each pattern set has ones/zeros in the center, with every permutation of n/3-1 bits on each end. For example, if you have 24 bits then you will have 17 bits in the center and 7 bits on the ends. Since 2^7 = 128 you will have 4 x 128 - 2 = 510 possible patterns.
To find the correct deletions there are various algorithms. One method is to find the edit distance between the current bit set and each master pattern; the pattern with the minimum edit distance is the one to convert to. This method uses dynamic programming. Another method would be to do a tree search through the patterns, using a set of rules to find the matching pattern.
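For the edit-distance method, here is a standard dynamic-programming sketch (plain Levenshtein distance; the names and the small pattern list are mine for illustration, and adapting it to deletions-only is straightforward):

def edit_distance(s, t):
    # Classic Levenshtein DP: O(len(s)*len(t)) time, O(len(t)) space.
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # delete a
                           cur[j - 1] + 1,           # insert b
                           prev[j - 1] + (a != b)))  # substitute
        prev = cur
    return prev[-1]

# Pick the master pattern closest to a given vector
# (hypothetical pattern list, just for the demo):
masters = ["000000", "000111", "111000", "111111"]
print(min(masters, key=lambda m: edit_distance("110011", m)))  # "111111"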
