How to optimize this code and make it faster in Python

I need to determine whether every value of a certain range appears in the array, and at the same time the array contains no values outside that range. For example, with the range [1,5] the array [1,2,3,4,5] is correct, and the array [1,2,1,2,3,3,4,5] is also correct.
With the range [1,5], the array [0,2,2,3,3,4,5] is incorrect, since 0 is not in the range and 1 is missing from the array.
I did this, but it's slow with big values and terrible:
def func(segment, arr):
    segment_arr = []
    for i in range(segment[0], segment[1] + 1):
        segment_arr.append(i)
    arr_corr = True
    if min(arr) != min(segment_arr) or max(arr) != max(segment_arr):
        arr_corr = False
    else:
        for i in range(len(arr)):
            if arr[i] in segment_arr:
                for a in range(len(segment_arr)):
                    if segment_arr[a] in arr:
                        continue
                    else:
                        arr_corr = False
            else:
                arr_corr = False
    return arr_corr

To test if every member of a list/array/range x is in another list/array y:
all(e in y for e in x)
To test if only members of a list/array/range x are in another list/array y:
all(e in x for e in y)
The speed of these operations depends on the container type. all short-circuits, so it will be faster when the test fails. in is very fast on a set but can be slow on lists. Creating a set is itself slow and can eliminate the gains from doing in on a set. If you are working with numpy arrays, it will be faster to use numpy's intersect1d.
This should do what you're asking. If it is too slow, you will need to optimize the types. At that point, you will probably need to edit the question to give some clear examples of when it is too slow and what your constraints are:
def func(segment, arr):
    # segment is the [low, high] pair from the question, so expand it to the actual range
    rng = range(segment[0], segment[1] + 1)
    return all(e in rng for e in arr) and all(e in arr for e in rng)
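The numpy route mentioned above can be sketched with np.unique rather than intersect1d; this is a sketch under my own assumptions (the function name is mine, and arr is assumed to already be a numpy array), not part of the original answer:

import numpy as np

def func_np(segment, arr):
    # The sorted unique values of arr must be exactly the expected range.
    expected = np.arange(segment[0], segment[1] + 1)
    return np.array_equal(np.unique(arr), expected)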

def func(segment, arr):
    return set(range(segment[0], segment[1] + 1)) == set(arr)

Build a set from the values that fall inside the range.
If the resulting set has "full" length and no value was filtered out, everything is correct.
left, right = [1, 5]
values = [0, 2, 3, 4, 5]
in_range = {i for i in values if left <= i <= right}
values_are_correct = (
    len(in_range) == (right - left) + 1       # every value of the range is present
    and len(in_range) == len(set(values))     # and no value falls outside the range
)


Is the time complexity of this code O(n^2)?

The problem is to find two items in the array that add up to a target value.
It returns an array with the indices of the correct values.
I think the time complexity is n^2, because the while loop runs through the array once, which is n time, and in the worst case it has to repeat this n times, so n*n running time.
Even though the number of elements it has to iterate through decreases each time, we drop the constants when calculating time complexity.
Is this analysis correct?
Any recommendations for bringing it down to n?
def twoSum(nums, target):
    indx = []
    size = len(nums)
    if size < 2:
        return indx
    x = 0
    y = size - 1
    while x < y:
        if (nums[x] + nums[y]) == target:
            indx.append(x)   # indx[0] = x would raise IndexError on an empty list
            indx.append(y)
            break
        elif (y - 1) == x:
            x = x + 1
            y = size - 1
        else:
            y = y - 1
    return indx
You can do it in O(n); this is a Google interview question that I believe they have a YouTube video for, or at least they had a very similar problem:
def twoSum(nums, target):
    values = dict()
    for index, n in enumerate(nums):
        if target - n in values:
            return values[target - n], index
        else:
            values[n] = index

print(twoSum([4, 5, 2, 1, 3], 4))  # (3, 4)
- Edit -
Per the comments below, this solution technically still has a worst case of O(n^2) due to hash collisions. For most inputs you should get close to O(n), but if you are working with very large numbers (negative or positive) you will see more collisions, which pushes the running time toward n * log(n) or even n^2 (especially if the test set given to you deliberately targets hash collisions).

Python: How can I make my implementation of bubble sort more time efficient?

Here is my code - a bubble sort algorithm for sorting list elements in ascending order:
foo = [7, 0, 3, 4, -1]
cnt = 0
for i in foo:
    for i in range(len(foo) - 1):
        if foo[cnt] > foo[cnt + 1]:
            temp = foo[cnt]
            foo[cnt] = foo[cnt + 1]
            foo[cnt + 1] = temp
        cnt = cnt + 1
    cnt = 0
I've been revising my code, but it is still too inefficient for an online judge. Some help would be greatly appreciated!
Early Exit BubbleSort
The first loop has no bearing on what happens inside
The second loop does all the heavy lifting. You can get rid of cnt by using enumerate.
To swap elements, use the pythonic swap - a, b = b, a.
As per this comment, make use of an early exit. If there are no swaps to be made at any point in the inner loop, that means the list is sorted, and no further iteration is necessary. This is the intuition behind changed.
By definition, after the ith iteration of the outer loop, the last i elements will have been sorted, so you can further reduce the constant factor associated with the algorithm.
foo = [7, 0, 3, 4, -1]

for i in range(len(foo)):
    changed = False
    # The last i elements are already in place, hence the shrinking slice.
    # The slice is a snapshot, so compare via foo[j] rather than a stale copied value.
    for j, _ in enumerate(foo[:-i-1]):
        if foo[j] > foo[j + 1]:
            foo[j], foo[j + 1] = foo[j + 1], foo[j]
            changed = True
    if not changed:
        break

print(foo)
[-1, 0, 3, 4, 7]
Note that none of these optimisations changes the asymptotic (Big-O) complexity of bubble sort, which remains O(N ** 2); they only reduce the constant factors involved.
One easy optimization is to start the second loop from index i+1:
for i in range(0, len(foo)):
    for j in range(i + 1, len(foo)):
        if foo[i] > foo[j]:
            temp = foo[i]
            foo[i] = foo[j]
            foo[j] = temp
Since you have already sorted everything up to index i, there is no need to iterate over it again. This can save you about 50% of the comparisons: in this case it is 10 (n*(n-1)/2) versus the 20 (n*(n-1)) in your original nested loops.
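A quick sanity check of those counts (a throwaway snippet, not part of the original answer):

n = 5
print(sum(n - 1 - i for i in range(n)))  # 10 comparisons when the inner loop starts at i + 1
print(n * (n - 1))                       # 20 comparisons in the original nested loops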
You need to understand big-O notation in order to understand how efficient your algorithm is in terms of usage of computational resources, independent of computer architecture or clock rate. It basically helps you analyze the worst-case running time or memory usage of your algorithm as the size of the input increases.
In summary, the running time of your algorithm will fall into one of these categories (from fastest to slowest):
O(1): Constant time. Pronounced "Oh of 1". The fastest time.
O(lg n): Logarithmic time. Pronounced "Oh of log n". Faster than linear time. Traditionally, it is the fastest time bound for search.
O(n): Linear time. Pronounced "Oh of n" (n is the size of your input, e.g. the size of an array). Usually the bound when you need to examine every single bit of your input.
O(n lg n): The fastest time we can currently achieve when sorting a list of elements by comparisons.
O(n**2): Quadratic time. Pronounced "Oh of n squared". Often this is the bound when we have nested loops.
O(2**n): Really, REALLY big! A constant raised to the power of n grows faster than n raised to any fixed power.
In your case, you are using nested loops, which is O(n**2). The code I have written below uses a single while loop (note that the loop guard still allows up to n**2 passes over the list, so the worst case is not actually linear). I haven't really tried it on a very large array, but in your case it seems to work. Try it and let me know if it works as expected.
k = [7, 0, 3, 4, -1]
n = len(k)
i = 0
count = 0
swapped = False  # must be initialised before the loop, otherwise the first pass can raise NameError

while count < n**2:  # assuming we wouldn't go through the loop more than n squared times
    if i == n - 1:
        i = 0
        count += 1
        swapped = False
    elif k[i] > k[i+1]:
        temp = k[i]
        k[i] = k[i+1]
        k[i+1] = temp
        i += 1
        swapped = True
    elif swapped == False:
        i += 1
    elif swapped == True and i < n - 1:
        i += 1
Note: In the example list (k), we only need to loop through the list three times in order for it to be sorted in ascending order. So if you change the while loop condition to while count < 4:, it would still work.

How to init an array with each element holding a value different from its neighbours

I have a matrix or multidimensional array in Python, where each element is an integer ranging from 0 to 7. How would I randomly initialize this matrix or array so that each element holds a value different from the values of its 4 neighbours (left, right, top, bottom)? Can it be implemented in numpy?
You can write your own matrix initializer.
Go through array[i][j] for each i, j and pick a random number between 0 and 7.
If the number equals either the left element array[i][j-1] or the upper one array[i-1][j], regenerate it once again.
You have a 2/7 probability of encountering such a bad case, 4/49 of hitting it twice in a row, 8/343 for 3 in a row, etc.; the probability drops off very quickly.
The average-case complexity for n elements in a matrix is therefore O(n).
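A minimal sketch of that rejection approach (the function name and the use of numpy for storage are my own assumptions, not part of the answer):

import random
import numpy as np

def init_matrix(rows, cols, upper=8):
    m = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            value = random.randrange(upper)
            # Redraw while the value collides with the left or upper neighbour.
            while (j > 0 and value == m[i, j - 1]) or (i > 0 and value == m[i - 1, j]):
                value = random.randrange(upper)
            m[i, j] = value
    return m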
A simpler problem that might get you started is to do the same for a 1d array. A pure-python solution would look like:
import random

def sample_1d(n, upper):
    x = [random.randrange(upper)]
    for i in range(1, n):
        # Draw from a range one smaller, then skip over the previous value.
        xi = random.randrange(upper - 1)
        if xi >= x[-1]:
            xi += 1
        x.append(xi)
    return x
You can vectorize this as:
import numpy as np

def sample_1d_v(n, upper):
    x = np.empty(n, dtype=int)
    x[0] = 0
    x[1:] = np.cumsum(np.random.randint(1, upper, size=n-1)) % upper
    # Add a random starting offset, kept inside [0, upper).
    x = (x + np.random.randint(upper)) % upper
    return x
The trick here is noting that if adjacent values must be different, then the difference between consecutive values (mod upper) is uniformly distributed in [1, upper).
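A quick sanity check of the vectorized version (hypothetical usage, not part of the original answer):

x = sample_1d_v(20, 8)
assert np.all(x[1:] != x[:-1])       # no two neighbours share a value
assert x.min() >= 0 and x.max() < 8  # values stay in range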

Subset sum Problem

Recently I became interested in the subset-sum problem, which is finding a zero-sum subset of a superset. I found some solutions on SO; in addition, I came across a particular solution which uses a dynamic programming approach. I translated its qualitative description into Python. I'm trying to optimize this for larger lists, which eats up a lot of my memory. Can someone recommend optimizations or other techniques to solve this particular problem? Here's my attempt in Python:
import random
from time import time
from itertools import product

time0 = time()

# create a zero matrix of size a (row), b (col)
def create_zero_matrix(a, b):
    return [[0]*b for x in xrange(a)]

# generate a list of size num with random integers with an upper and lower bound
def random_ints(num, lower=-1000, upper=1000):
    return [random.randrange(lower, upper+1) for i in range(num)]

# split a list up into N and P, where N is the sum of the negative values and P the sum of the positive values.
# 0 does not count because of additive identity
def split_sum(A):
    N_list = []
    P_list = []
    for x in A:
        if x < 0:
            N_list.append(x)
        elif x > 0:
            P_list.append(x)
    return [sum(N_list), sum(P_list)]

# since the column indexes are in the range from 0 to P - N
# we would like to retrieve them based on the index in the range N to P
# n := row, m := col
def get_element(table, n, m, N):
    if n < 0:
        return 0
    try:
        return table[n][m - N]
    except:
        return 0

# same definition as above
def set_element(table, n, m, N, value):
    table[n][m - N] = value

# input array
#A = [1, -3, 2, 4]
A = random_ints(200)
[N, P] = split_sum(A)

# create a zero matrix of size m (row) by n (col)
#
# m := the number of elements in A
# n := P - N + 1 (by definition N <= s <= P)
#
# each element in the matrix will be a value of either 0 (false) or 1 (true)
m = len(A)
n = P - N + 1
table = create_zero_matrix(m, n)

# set first element in index (0, A[0]) to be true
# Definition: Q(1,s) := (x1 == s). Note that index starts at 0 instead of 1.
set_element(table, 0, A[0], N, 1)

# iterate through each table element
#for i in xrange(1, m): #row
#    for s in xrange(N, P + 1): #col
for i, s in product(xrange(1, m), xrange(N, P + 1)):
    if get_element(table, i - 1, s, N) or A[i] == s or get_element(table, i - 1, s - A[i], N):
        #set_element(table, i, s, N, 1)
        table[i][s - N] = 1

# find zero-sum subset solution
s = 0
solution = []
for i in reversed(xrange(0, m)):
    if get_element(table, i - 1, s, N) == 0 and get_element(table, i, s, N) == 1:
        s = s - A[i]
        solution.append(A[i])

print "Solution: ", solution

time1 = time()
print "Time execution: ", time1 - time0
I'm not quite sure if your solution is exact or a PTA (poly-time approximation).
But, as someone pointed out, this problem is indeed NP-Complete.
Meaning, every known (exact) algorithm has an exponential time behavior on the size of the input.
Meaning, if you can process 10,000,000,000 operations per second (one operation every 0.1 nanoseconds), then for a list of 59 elements it will take:
2^59 ops / 10,000,000,000 ops per second ≈ 5.8 x 10^7 seconds, and dividing by 3600 x 24 x 365 gives roughly 1.8 years.
You can find heuristics, which give you just a CHANCE of finding an exact solution in polynomial time.
On the other hand, if you restrict the problem (to another one) by using bounds on the values of the numbers in the set, then the problem's complexity reduces to polynomial time. But even then the memory consumed will be a polynomial of VERY high order.
The memory consumed will be much larger than the few gigabytes you have in RAM,
and even much larger than the few terabytes on your hard drive
(that's for small values of the bound on the elements of the set).
Maybe this is the case with your dynamic programming algorithm.
It seemed to me that you were using a bound of 1000 when building your initialization matrix.
You can try a smaller bound. That is, if your input consistently consists of small values.
Good luck!
Someone on Hacker News came up with the following solution to the problem, which I quite liked. It just happens to be in python :):
def subset_summing_to_zero(activities):
    subsets = {0: []}
    for (activity, cost) in activities.iteritems():
        old_subsets = subsets
        subsets = {}
        for (prev_sum, subset) in old_subsets.iteritems():
            subsets[prev_sum] = subset
            new_sum = prev_sum + cost
            new_subset = subset + [activity]
            if 0 == new_sum:
                new_subset.sort()
                return new_subset
            else:
                subsets[new_sum] = new_subset
    return []
I spent a few minutes with it and it worked very well.
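For example (a hypothetical input dict of my own; the code above uses Python 2's iteritems, so use .items() on Python 3):

activities = {'a': 3, 'b': -1, 'c': -2, 'd': 5}
print subset_summing_to_zero(activities)  # ['a', 'b', 'c'], since 3 + (-1) + (-2) == 0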
An interesting article on optimizing python code is available here. Basically the main result is that you should inline code in your hot loops: in your case, instead of calling get_element twice per iteration, put the body of that function directly inside the loop to avoid the function call overhead.
Hope that helps! Cheers
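A hypothetical inlined version of the question's inner loop, as a sketch only (it reuses the names table, A, m, n, N, P and product from the question's code; since i starts at 1, only the shifted column needs a bounds check):

for i, s in product(xrange(1, m), xrange(N, P + 1)):
    col = s - N
    shifted = col - A[i]            # column for the value s - A[i]
    prev_row = table[i - 1]
    if prev_row[col] or A[i] == s or (0 <= shifted < n and prev_row[shifted]):
        table[i][col] = 1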
First thing that catches the eye:
def split_sum(A):
    N_list = 0
    P_list = 0
    for x in A:
        if x < 0:
            N_list += x
        elif x > 0:
            P_list += x
    return [N_list, P_list]
Some advice:
Try to use a 1D list, and use bitarray to reduce the memory footprint to a minimum (http://pypi.python.org/pypi/bitarray), so you only have to change the get/set functions. This should shrink your memory footprint by a factor of at least 64 (an integer stored in a list is a pointer to a full integer object with type information, so the factor can be around 3 * 32).
Avoid using try/except; figure out the proper ranges at the beginning instead. You might find that you gain a lot of speed.
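A minimal sketch of the 1D bitarray idea (assuming the third-party bitarray package; the helper names and the row-major layout are my own choices, and m, P, N are the question's variables):

from bitarray import bitarray

rows, cols = m, P - N + 1      # same dimensions as the question's table
table = bitarray(rows * cols)  # one bit per cell instead of one Python int per cell
table.setall(False)

def get_bit(table, row, s, N):
    col = s - N
    if row < 0 or not (0 <= col < cols):
        return False
    return table[row * cols + col]

def set_bit(table, row, s, N):
    table[row * cols + (s - N)] = True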
The following code works for Python 3.3+. I have used the itertools module, which has some great functions for this.
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

nums = input("Enter the Elements").strip().split()
inputSum = int(input("Enter the Sum You want"))

for i, combo in enumerate(powerset(nums), 1):
    total = 0
    for num in combo:
        total += int(num)
    if total == inputSum:
        print(combo)
The input/output is as follows:
Enter the Elements 1 2 3 4
Enter the Sum You want 5
('1', '4')
('2', '3')
Just change the values in the set w and make the array x as long as w, then pass the sum for which you want subsets as the last argument to subsetsum, and you will be done (if you want to check it with your own values).
def subsetsum(cs, k, r, x, w, d):
    x[k] = 1
    if cs + w[k] == d:
        for i in range(0, k + 1):
            if x[i] == 1:
                print(w[i], end=" ")
        print()
    elif cs + w[k] + w[k+1] <= d:
        subsetsum(cs + w[k], k + 1, r - w[k], x, w, d)
    if (cs + r - w[k] >= d) and (cs + w[k] <= d):
        x[k] = 0
        subsetsum(cs, k + 1, r - w[k], x, w, d)

# driver for the above code
w = [2, 3, 4, 5, 0]
x = [0, 0, 0, 0, 0]
subsetsum(0, 0, sum(w), x, w, 7)

Find large number of consecutive values fulfilling condition in a numpy array

I have some audio data loaded in a numpy array and I wish to segment the data by finding silent parts, i.e. parts where the audio amplitude is below a certain threshold over a period in time.
An extremely simple way to do this is something like this:
values = ''.join(("1" if (abs(x) < SILENCE_THRESHOLD) else "0" for x in samples))
pattern = re.compile('1{%d,}'%int(MIN_SILENCE))
for match in pattern.finditer(values):
    # code goes here
The code above finds parts where there are at least MIN_SILENCE consecutive elements smaller than SILENCE_THRESHOLD.
Now, obviously, the above code is horribly inefficient and a terrible abuse of regular expressions. Is there some other method that is more efficient, but still results in equally simple and short code?
Here's a numpy-based solution.
I think (?) it should be faster than the other options. Hopefully it's fairly clear.
However, it does require twice as much memory as the various generator-based solutions. As long as you can hold a single temporary copy of your data in memory (for the diff), along with a boolean array of the same length as your data (1-bit-per-element), it should be pretty efficient...
import numpy as np

def main():
    # Generate some random data
    x = np.cumsum(np.random.random(1000) - 0.5)
    condition = np.abs(x) < 1

    # Print the start and stop indices of each region where the absolute
    # values of x are below 1, and the min and max of each of these regions
    for start, stop in contiguous_regions(condition):
        segment = x[start:stop]
        print start, stop
        print segment.min(), segment.max()

def contiguous_regions(condition):
    """Finds contiguous True regions of the boolean array "condition". Returns
    a 2D array where the first column is the start index of the region and the
    second column is the end index."""

    # Find the indices of changes in "condition"
    d = np.diff(condition)
    idx, = d.nonzero()

    # We need to start things after the change in "condition". Therefore,
    # we'll shift the index by 1 to the right.
    idx += 1

    if condition[0]:
        # If the start of condition is True prepend a 0
        idx = np.r_[0, idx]

    if condition[-1]:
        # If the end of condition is True, append the length of the array
        idx = np.r_[idx, condition.size] # Edit

    # Reshape the result into two columns
    idx.shape = (-1, 2)
    return idx

main()
There is a very convenient solution to this using scipy.ndimage. For an array:
a = np.array([1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0])
which can be the result of a condition applied to another array. Finding the contiguous regions is then as simple as:
regions = scipy.ndimage.find_objects(scipy.ndimage.label(a)[0])
Then, applying any function to those regions can be done e.g. like:
[np.sum(a[r]) for r in regions]
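Putting that together for the silence question (a sketch only; SILENCE_THRESHOLD and MIN_SILENCE are the placeholders from the question, and the sample data here is made up):

import numpy as np
from scipy import ndimage

samples = np.random.randn(10000)  # stand-in for the audio data
SILENCE_THRESHOLD = 0.5
MIN_SILENCE = 5

quiet = np.abs(samples) < SILENCE_THRESHOLD
labels, n_regions = ndimage.label(quiet)  # label each contiguous quiet run
regions = ndimage.find_objects(labels)    # one slice-tuple per run

# Keep only runs of at least MIN_SILENCE consecutive quiet samples.
silent_slices = [r[0] for r in regions if r[0].stop - r[0].start >= MIN_SILENCE]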
Slightly sloppy, but simple and fast-ish, if you don't mind using scipy:
from scipy.ndimage import gaussian_filter
sigma = 3
threshold = 1
above_threshold = gaussian_filter(data, sigma=sigma) > threshold
The idea is that quiet portions of the data will smooth down to low amplitude, and loud regions won't. Tune 'sigma' to affect how long a 'quiet' region must be; tune 'threshold' to affect how quiet it must be. This slows down for large sigma, at which point using FFT-based smoothing might be faster.
This has the added benefit that single 'hot pixels' won't disrupt your silence-finding, so you're a little less sensitive to certain types of noise.
I haven't tested this, but it should be close to what you are looking for. Slightly more lines of code, but it should be more efficient, readable, and it doesn't abuse regular expressions :-)
def find_silent(samples):
    num_silent = 0
    start = 0
    for index in range(0, len(samples)):
        if abs(samples[index]) < SILENCE_THRESHOLD:
            if num_silent == 0:
                start = index
            num_silent += 1
        else:
            if num_silent > MIN_SILENCE:
                yield samples[start:index]
            num_silent = 0
    if num_silent > MIN_SILENCE:
        yield samples[start:]

for match in find_silent(samples):
    # code goes here
This should return a list of (start,length) pairs:
def silent_segs(samples, threshold, min_dur):
    start = -1
    silent_segments = []
    for idx, x in enumerate(samples):
        if start < 0 and abs(x) < threshold:
            start = idx
        elif start >= 0 and abs(x) >= threshold:
            dur = idx - start
            if dur >= min_dur:
                silent_segments.append((start, dur))
            start = -1
    return silent_segments
And a simple test:
>>> s = [-1,0,0,0,-1,10,-10,1,2,1,0,0,0,-1,-10]
>>> silent_segs(s,2,2)
[(0, 5), (9, 5)]
Another way to do this quickly and concisely:
import pylab as pl

v = [0,0,1,1,0,0,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,1,0,0]
vd = pl.diff(v)
# vd[i]==1 for a 0->1 crossing; vd[i]==-1 for a 1->0 crossing
# need to add +1 to the indexes as pl.diff shifts to the left by 1
i1 = pl.array([i for i in xrange(len(vd)) if vd[i] == 1]) + 1
i2 = pl.array([i for i in xrange(len(vd)) if vd[i] == -1]) + 1

# corner cases for the first and the last element
if v[0] == 1:
    i1 = pl.hstack((0, i1))
if v[-1] == 1:
    i2 = pl.hstack((i2, len(v)))
Now i1 contains the beginning indices and i2 the end indices of the 1,...,1 areas.
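For convenience, the two index arrays can be paired up into (start, stop) runs (a one-line addition, not part of the original answer):

runs = zip(i1, i2)  # [(2, 4), (6, 11), (12, 13), (14, 16), (21, 22)] for the v above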
@joe-kington I've got about a 20%-25% speed improvement over the np.diff / np.nonzero solution by using argmax instead (see the code below; condition is boolean):
def contiguous_regions(condition):
    idx = []
    i = 0
    while i < len(condition):
        x1 = i + condition[i:].argmax()
        try:
            x2 = x1 + condition[x1:].argmin()
        except ValueError:
            x2 = x1 + 1
        if x1 == x2:
            if condition[x1]:
                x2 = len(condition)
            else:
                break
        idx.append([x1, x2])
        i = x2
    return idx
Of course, your mileage may vary depending on your data.
Besides, I'm not entirely sure, but I guess numpy may optimize argmin/argmax over boolean arrays to stop searching at the first True/False occurrence. That might explain it.
I know I'm late to the party, but another way to do this is with 1d convolutions:
np.convolve(sig > threshold, np.ones((cons_samples)), 'same') == cons_samples
Where cons_samples is the number of consecutive samples you require above the threshold.
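An end-to-end sketch of that trick applied to the silence question (the names and data here are my own, and the comparison is flipped to below the threshold since we are looking for quiet samples):

import numpy as np

sig = np.random.randn(1000)  # stand-in for the audio samples
threshold = 0.5
cons_samples = 10

quiet = np.abs(sig) < threshold
mask = np.convolve(quiet, np.ones(cons_samples), 'same') == cons_samples
# mask[i] is True where sample i sits at the centre of a window of cons_samples quiet samples
quiet_indices = np.flatnonzero(mask)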
