I'm new to Python and haven't found an answer to this on the site so far.
I'm using numpy.polyfit in a loop and getting the error below. I don't understand it, because when I run the code in debug everything works fine and the lengths of the arrays going into the function are the same:
Error Runtime exception: TypeError: expected x and y to have same length
My code is below:
import numpy as np
from collections import defaultdict

bb = [ 10, 11, 12, 22, 10, 11, 12, 11, 10, 11, 12, 22, 10, 11, 12, 11, 10, 11, 12, 22, 10, 11, 12, 11, 10, 11, 12, 22, 10, 11, 12, 11, 10 ]
i = 0
b = -3
bb_gradient = defaultdict(dict)
while ( b <= 0 ):
    print i
    print len(range(3))
    print len(bb[b-3:b])
    bb_gradient[i][0], _ = np.polyfit( range(3), weekly_bb_lower[b-3:b], 1 )
    i += 1
    b += 1
What am I doing wrong?
Thanks in anticipation.
I am assuming bb is weekly_bb_lower. Change while ( b <= 0 ) to while ( b < 0 ), because when b becomes 0 the slice weekly_bb_lower[-3:0] returns an empty list: a slice of the form list[-n:0] is always empty.
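For instance (a quick illustration with a short list):
>>> bb = [10, 11, 12, 22, 10]
>>> bb[-3:0]   # runs from index -3 up to, but not including, index 0
[]
>>> bb[-3:]    # the last three elements
[12, 22, 10]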
You can avoid referencing an empty list by moving the last three elements to the start of your list:
import numpy as np
from collections import defaultdict
bb = [ 10, 11, 12, 22, 10, 11, 12, 11, 10, 11, 12, 22, 10, 11, 12, 11, 10, 11, 12, 22, 10, 11, 12, 11, 10, 11, 12, 22, 10, 11, 12, 11, 10 ]
bb = bb[-3:] + bb[:-3] # moves the last three elements of the list to the start prior to looping
bb_gradient = defaultdict(dict)
for i in range(3):
    bb_gradient[i][0], _ = np.polyfit( range(3), bb[i:i+3], 1 )
Prashanth's explanation is correct.
I would like to find a better solution than the one I am proposing below. I am trying to find the indices associated with a line intersection when using the shapely library. Solutions from other libraries are welcome.
Right now I am iterating through the location coordinates and storing the index where an intersection is observed. I would like to do away with the loop and create a more streamlined function.
The code below results in a single intersection/crossing.
import numpy as np
from shapely.geometry import LineString

line_crossings = []
latitude = [10, 11, 12, 13, 14, 15, 16, 17, 18]
longitude = [7, 9, 11, 13, 17, 19, 23, 25, 29]
location = np.column_stack((latitude, longitude))

C = (14.5, 14.5)
D = (12.3, 12.5)
line2 = LineString([C, D])

for idx in range(0, len(location) - 1):
    A = (latitude[idx], longitude[idx])
    B = (latitude[idx+1], longitude[idx+1])
    line1 = LineString([A, B])
    int_pt = line2.intersection(line1)
    if int_pt.type == 'Point':
        print(int_pt)
        line_crossings.append(idx)
Update
It would seem the quickest way to get the coordinates of the crossings is as follows:
latitude = [10, 11, 12, 13, 14, 15, 16, 17, 16, 15, 14, 13, 12, 11, 10]
longitude = [7, 9, 11, 13, 17, 19, 23, 25, 29, 25, 23, 13, 13, 13, 11]
location = LineString([i for i in zip(latitude, longitude)])

C = (14.5, 14.5)
D = (12.3, 12.5)
gate = LineString([C, D])

[[i.x, i.y] for i in location.intersection(gate)]
But I need to be able to get the index in the location variable where the intersection occurs. Is it possible to get this using the list comprehension?
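One possible sketch for recovering the indices as well (not necessarily faster; it assumes the latitude, longitude, location and gate objects defined just above and reuses the per-segment test from the original loop, so it still iterates over the segments under the hood):
pts = location.intersection(gate)
pts = list(pts.geoms) if hasattr(pts, 'geoms') else [pts]   # MultiPoint vs single Point
crossing_idx = [idx
                for idx in range(len(latitude) - 1)
                for pt in pts
                if LineString([(latitude[idx], longitude[idx]),
                               (latitude[idx + 1], longitude[idx + 1])]).distance(pt) < 1e-9]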
I'm trying to make a NumPy array and put each index's number in the corresponding place in the array.
For example, if my array is an ndarray of shape (30,), i.e. of size 30, then:
index 0 = 1
index 1 = 2
.
.
.
index 29 = 30
Is there any function in NumPy that does this for me?
If not, I would appreciate help with the code.
Thanks.
Here you go:
>>> import numpy as np
>>> np.arange(start=1, stop=31)
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30])
>>>
I found the built-in function numpy.arange(your_desired_size), for example:
import numpy as np

a = np.array([30.3, 20.5, 14.2, 15.3, 81.2, 88.4])
v = np.size(a)
a = np.arange(v)   # array([0, 1, 2, 3, 4, 5]); use np.arange(1, v + 1) to start at 1
I hope y'all are doing fine!
So I want to make 5 groups of 6 people randomly chosen from a list and then append those 6 chosen names to the corresponding group.
Example: if a, b, c, d, e, f are the first six chosen names, append those names to group1; once group1 contains 6 names, append the next 6 names to group2; and so on until I have 5 groups of 6 people.
I hope you understand me and that you can help :)
My code:
import random

names = [30 names i dont wanna share]

group1 = list()
group2 = list()
group3 = list()
group4 = list()
group5 = list()

def choosegroup():
    def chooserandom():
        return random.choice(names)
    def creategroup():
        for i in range(1, 7):
            chosed = chooserandom()
            names.remove(chosed)
            #while(chosed in group1):
            #    print('Ups')
            #    print(chosed + ' already chosed')
            #    chosed = chooserandom()
            #print(chosed)
            group1.append(chosed)
        #print('Group 1:' + '\n' + str(group1) + '\n')
    createdgroup = creategroup()
    print(group1)

for i in range(1, 6):
    print(f'Group {i}')
    choosegroup()
    group1.clear()
random.shuffle(names)
groups = [ names[i:i+6] for i in range(0, len(names), 6) ]
Now groups[0], groups[1] etc. are your 6-person groups.
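For example, to print them in the same style as the original loop (assuming the groups list built just above):
for i, group in enumerate(groups, start=1):
    print(f'Group {i}: {group}')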
Once you have your list of names, to split them into random groups I would instead use numpy:
import numpy as np

groups = np.array(names)
np.random.shuffle(groups)
groups = np.reshape(groups, (5, 6))
As an example with numbers instead of names
>>> names = np.arange(30)
>>> names
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
>>> np.random.shuffle(names)
>>> names
array([ 8, 18, 23, 7, 25, 14, 11, 20, 13, 24, 15, 26, 19, 21, 12, 17, 0,
6, 3, 10, 29, 9, 16, 28, 22, 5, 1, 4, 27, 2])
>>> np.reshape(names, (5,6))
array([[ 8, 18, 23, 7, 25, 14],
[11, 20, 13, 24, 15, 26],
[19, 21, 12, 17, 0, 6],
[ 3, 10, 29, 9, 16, 28],
[22, 5, 1, 4, 27, 2]])
You can access them from globals as such:
globals()[f"group{i}"]
though storing and retrieving them from a dictionary is preferable.
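For example, a plain dict keyed by group name avoids touching globals() at all (the names here are purely illustrative):
groups = {f"group{i}": [] for i in range(1, 6)}
groups["group1"].append("some_name")   # hypothetical entry
print(groups["group1"])                # ['some_name']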
You can rewrite your code as follows:
import random
from collections import defaultdict

names = [30 names i dont wanna share]
groups = defaultdict(list)

def choosegroup(group_name):
    def chooserandom():
        return random.choice(names)
    def creategroup(group_name):
        for i in range(1, 7):
            chosed = chooserandom()
            names.remove(chosed)
            groups[group_name].append(chosed)
    createdgroup = creategroup(group_name)
    print(group_name, "\n", groups[group_name])

for i in range(1, 6):
    print(f'Group {i}')
    group_name = f"group{i}"
    choosegroup(group_name)
    groups[group_name].clear()  # mirrors the original group1.clear(); remove this line to keep the groups stored
In this homework question I am passed a list where index 1 is the new node and is also the root. I then have to check whether its children are smaller than it and swap it with the smaller child. I've written some code, but it's not working.
def perc_down(data):
    count = 0
    index = 1
    l, r = 2 * index, 2 * index + 1
    while index < len(data):
        if data[index] > data[l] and data[index] > data[r]:
            min_i = data.index(min(data[l], data[r]))
            data[index], data[min_i] = data[min_i], data[index]
            count += 1
            index = min_i
    return count

values = [0, 100, 7, 8, 9, 22, 45, 12, 16, 27, 36]
swaps = perc_down(values)
print('Binary heap =', values)  # should be [0, 7, 9, 8, 16, 22, 45, 12, 100, 27, 36]
print('Swaps =', swaps)         # should be 3
Assign l and r their values inside the while loop:
while index <= len(data) // 2:
    l, r = 2 * index, 2 * index + 1
    if r >= len(data):
        r = index
    if data[index] > data[l] or data[index] > data[r]:
        min_i = data.index(min(data[l], data[r]))
        data[index], data[min_i] = data[min_i], data[index]
        count += 1
        index = min_i
        print(data)  # Added this for easy debugging.
return count
And run the loop only up to half the length of the list, because in a binary min-heap every index past the halfway point is a leaf.
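As a quick sanity check on that bound (a small illustration, assuming the 1-based layout above where data[0] is a placeholder): any index past len(data) // 2 has a left-child index that already falls outside the list.
data_len = 11                 # length of the example list above
half = data_len // 2          # 5
print([2 * i for i in range(half + 1, data_len)])
# [12, 14, 16, 18, 20] -- every left-child index is past the end of the list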
Output:
[0, 7, 100, 8, 9, 22, 45, 12, 16, 27, 36]
[0, 7, 9, 8, 100, 22, 45, 12, 16, 27, 36]
[0, 7, 9, 8, 16, 22, 45, 12, 100, 27, 36]
Binary heap = [0, 7, 9, 8, 16, 22, 45, 12, 100, 27, 36]
Swaps = 3
I revised the algorithm to handle indices whose children do not exist.
For values = [0, 100, 7, 11, 9, 8, 45, 12, 16, 27, 36], the 100 arrives at index 5 after 2 swaps. That index has no right child, so when the right-child index exceeds the length of the list we simply set r back to the node's own index.
Heapified list: Binary heap = [0, 7, 8, 11, 9, 36, 45, 12, 16, 27, 100].
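For reference, a minimal sketch that reproduces this second example with the revised loop above (the only addition is a break once the node is already smaller than both children, which this particular input never hits):
def perc_down(data):
    count = 0
    index = 1
    while index <= len(data) // 2:
        l, r = 2 * index, 2 * index + 1
        if r >= len(data):
            r = index                              # no right child: compare the node with itself
        if data[index] > data[l] or data[index] > data[r]:
            min_i = data.index(min(data[l], data[r]))
            data[index], data[min_i] = data[min_i], data[index]
            count += 1
            index = min_i
        else:
            break                                  # node is smaller than both children
    return count

values = [0, 100, 7, 11, 9, 8, 45, 12, 16, 27, 36]
print('Swaps =', perc_down(values))   # Swaps = 3
print('Binary heap =', values)        # Binary heap = [0, 7, 8, 11, 9, 36, 45, 12, 16, 27, 100]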
I am trying to test some strategies for a game, each of which can be defined by 10 non-negative integers that add up to 100. There are 109 choose 9, or roughly 4 × 10^12, of these, so comparing them all is not practical. I would like to take a random sample of about 1,000,000 of them.
I have tried the methods from the answers to this question, and this one, but all still seem far too slow to work. The quickest method seems like it will take about 180 hours on my machine.
This is how I've tried to make the generator (adapted from a previous SE answer). For some reason, changing prob does not seem to impact the run time of turning it into a list.
import random
from itertools import permutations

def tuples_sum_sample(nbval, total, prob, order=True):
    """
    Generate all the tuples L of nbval positive or zero integers
    such that sum(L) == total.
    The tuples may be ordered (decreasing order) or not.
    """
    if nbval == 0 and total == 0: yield tuple(); raise StopIteration
    if nbval == 1: yield (total,); raise StopIteration
    if total == 0: yield (0,) * nbval; raise StopIteration
    for start in range(total, 0, -1):
        for qu in tuples_sum(nbval - 1, total - start):
            if qu[0] <= start:
                sol = (start,) + qu
                if order:
                    if random.random() < prob:
                        yield sol
                else:
                    l = set()
                    for p in permutations(sol, len(sol)):
                        if p not in l:
                            l.add(p)
                            if random.random() < prob:
                                yield p
Rejection sampling seems like it would take about 3 million years, so this is out as well.
randsample = []
while len(randsample) < 1000000:
    x = (random.randint(0, 100), random.randint(0, 100), random.randint(0, 100), random.randint(0, 100), random.randint(0, 100),
         random.randint(0, 100), random.randint(0, 100), random.randint(0, 100), random.randint(0, 100), random.randint(0, 100))
    if sum(x) == 100:
        randsample.append(x)
randsample
Can anyone think of another way to do this?
Thanks
A couple of frame-challenging questions:
Is there any reason you must generate the entire population, then sample that population?
Why do you need to check if your numbers sum to 100?
You can generate a set of numbers that sum to a value. Check out the first answer here:
Random numbers that add to 100: Matlab
Then generate the number of such sets you desire (1,000,000 in this case).
import numpy as np

def set_sum(number=10, total=100):
    initial = np.random.random(number - 1) * total
    sort_list = np.append(initial, [0, total]).astype(int)
    sort_list.sort()
    set_ = np.diff(sort_list)
    return set_

if __name__ == '__main__':
    import timeit
    a = set_sum()
    n = 1000000
    sample = [set_sum() for i in range(n)]
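A quick sanity check on this construction (assuming the set_sum function defined just above): because the endpoints 0 and total are included among the sorted cut points, the differences always contain number entries and sum back to total.
check = [set_sum() for _ in range(1000)]
assert all(len(s) == 10 and s.sum() == 100 for s in check)
print(check[0])   # e.g. array([ 3, 14,  0,  9, 22,  7, 18,  5, 11, 11])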
Numpy to the rescue!
Specifically, you need a multinomial distribution:
import numpy as np
desired_sum = 100
n = 10
np.random.multinomial(desired_sum, np.ones(n)/n, size=1000000)
It outputs a matrix with a million rows of 10 random integers in a few seconds. Each row sums up to 100.
Here's a smaller example:
np.random.multinomial(desired_sum, np.ones(n)/n, size=10)
which outputs:
array([[ 8, 7, 12, 11, 11, 9, 9, 10, 11, 12],
[ 7, 11, 8, 9, 9, 10, 11, 14, 11, 10],
[ 6, 10, 11, 13, 8, 10, 14, 12, 9, 7],
[ 6, 11, 6, 7, 8, 10, 8, 18, 13, 13],
[ 7, 7, 13, 11, 9, 12, 13, 8, 8, 12],
[10, 11, 13, 9, 6, 11, 7, 5, 14, 14],
[12, 5, 9, 9, 10, 8, 8, 16, 9, 14],
[14, 8, 14, 9, 11, 6, 10, 9, 11, 8],
[12, 10, 12, 9, 12, 10, 7, 10, 8, 10],
[10, 7, 10, 19, 8, 5, 11, 8, 8, 14]])
The sums appear to be correct:
sum(np.random.multinomial(desired_sum, np.ones(n)/n, size=10).T)
# array([100, 100, 100, 100, 100, 100, 100, 100, 100, 100])
Python only
You could also start with a list of 10 zeroes, iterate 100 times and increment a random cell each time:
import random
desired_sum = 100
n = 10
row = [0] * n
for _ in range(desired_sum):
    row[random.randrange(n)] += 1
row
# [16, 7, 9, 7, 10, 11, 4, 19, 4, 13]
sum(row)
# 100