I have a function,
f(x, y) = 4x^2*y + 3x + y
displayed as
four_x_squared_y_plus_three_x_plus_y = [(4, 2, 1), (3, 1, 0), (1, 0, 1)]
where the first item in the tuple is the coefficient, the second item is the exponent of x, and the third item is the exponent of y. I am trying to calculate the output at a certain value of x and y.
I have tried to split the list of terms up into what they represent and then feed in the values of x and y when I input them. However, I am getting an "unsupported operand type" error regarding ** and tuples, even though I tried to split them up into separate values within the terms.
Is this an effective method of splitting up tuples like this, or have I missed a trick?
def multivariable_output_at(list_of_terms, x_value, y_value):
    coefficient, exponent, intersect = list_of_terms
    calculation = int(coefficient*x_value^exponent*y_value) + int(coefficient*x_value) + int(y_value)
    return calculation
multivariable_output_at(four_x_squared_y_plus_three_x_plus_y, 1, 1) # 8 should be the output
please try this:
four_x_squared_y_plus_three_x_plus_y = [(4, 2, 1), (3, 1, 0), (1, 0, 1)]
def multivariable_output_at(list_of_terms, x_value, y_value):
    return sum(coeff * (x_value**x_exp) * (y_value**y_exp) for coeff, x_exp, y_exp in list_of_terms)
print(multivariable_output_at(four_x_squared_y_plus_three_x_plus_y, 1, 1))
NOTICE:
this is different from how your code originally treated variables, and is based on my intuition of what the list of terms means, given your example.
If you have more examples of input -> output, you should check my answer with all of them to make sure what I did is correct.
The first line inside your function unpacks the list of tuples into three distinct tuples:
coefficient, exponent, intersect = list_of_terms
# coefficient = (4, 2, 1)
# exponent = (3, 1, 0)
# intersect = (1, 0, 1)
Operators like ** and * are not supported between tuples, which is what produces the error you saw. Do you see the issue?
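To make the failure concrete, here is a minimal reproduction of the kind of error you are seeing (my own illustration, not taken from your code):
coefficient, exponent, intersect = [(4, 2, 1), (3, 1, 0), (1, 0, 1)]
# coefficient is now the whole first term (4, 2, 1), not the number 4,
# so arithmetic on it fails:
coefficient ** 2
# TypeError: unsupported operand type(s) for ** or pow(): 'tuple' and 'int'
Looping over the list and unpacking each term individually, as in the answer above, avoids this entirely.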
Related
init_tuple = [(0, 1), (1, 2), (2, 3)]
result = sum(n for _, n in init_tuple)
print(result)
The output for this code is 6. Could someone explain how it worked?
Your code extracts each tuple and sums all values in the second position (i.e. [1]).
If you rewrite it in loops, it may be easier to understand:
init_tuple = [(0, 1), (1, 2), (2, 3)]
result = 0
for (val1, val2) in init_tuple:
    result = result + val2
print(result)
The expression (n for _, n in init_tuple) is a generator expression. You can iterate on such an expression to get all the values it generates. In that case it reads as: generate the second component of each tuple of init_tuple.
(Note on _: the _ here stands for the first component of the tuple. It is common in Python to use this name when you don't care about the variable it refers to (i.e., if you don't plan to use it), as is the case here. Another way to write your generator would then be (tup[1] for tup in init_tuple).)
You can iterate over a generator expression using for loop. For example:
>>> for x in (n for _, n in init_tuple):
...     print(x)
1
2
3
And of course, since you can iterate on a generator expression, you can sum it as you have done in your code.
To get a better understanding, first look at this:
init_tuple = [(0, 1), (1, 2), (2, 3)]
total = 0
for x, y in init_tuple:
    total = total + y
print(total)
Now you can see that the above code calculates the sum of the second elements of the tuples; it's equivalent to your code, as both do the same job.
for x, y in init_tuple:
Here x holds the first value of the tuple and y holds the second. In the first iteration:
x = 0, y = 1
then in the second iteration:
x = 1, y = 2, and so on.
In your case you don't need the first element of the tuple, so you just use _ instead of a named variable.
I'm trying to make a simple iterator which cycles through a list and returns three consecutive numbers from the list in python, but I get really weird result - code works fine only when numbers in the list are in ascending order.
import itertools

c = [0, 1, 2, 3, 0, 5, 6]
counter = itertools.cycle(c)

def func(x):
    if x == len(c)-1:
        return c[x], c[0], c[1]
    elif x == len(c)-2:
        return c[x], c[len(c)-1], c[0]
    else:
        return c[x], c[x+1], c[x+2]

for i in range(len(c)+2):
    print(func(next(counter)))
Atom prints the following output; note the unexpected 5th tuple. Please help:
(0, 1, 2)
(1, 2, 3)
(2, 3, 0)
(3, 0, 5)
(0, 1, 2)
(5, 6, 0)
(6, 0, 1)
(0, 1, 2)
(1, 2, 3)
I believe you are confusing the values of c and the indices. It seems that in func you expect an index to be passed, but you are in fact passing a value from c. NOTE: counter is cycling over the values of c, not over indices.
Also, note that in Python you can use negative indices, so you can write c[-1] as shorthand for c[len(c) - 1].
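Based on that, one possible fix (a sketch of my own, not the asker's original code) is to cycle over indices instead of values and handle the wrap-around with modulo arithmetic:
import itertools

c = [0, 1, 2, 3, 0, 5, 6]
index_counter = itertools.cycle(range(len(c)))  # cycle over indices, not values

def func(i):
    # modulo makes the last windows wrap around to the start of the list
    return c[i], c[(i + 1) % len(c)], c[(i + 2) % len(c)]

for _ in range(len(c) + 2):
    print(func(next(index_counter)))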
Before you suggest that this question is similar to another, please read the P.P.S.
Similarly to this question, I am looking to find all the non-repeating combinations of a list of factors. But since I'm looking for a Python solution, and the premise is slightly different, I decided it's worth opening a new question.
The input to the combinations function would be of the form [2,5,3,1,5,1,11,2], a list where the odd-numbered entries are the primes and the even-numbered entries are the number of times each is present. This number would be (2^5)*3*5*(11^2), or 58080. My goal is to print all the combinations (in this case products) of the different factors.
My try is the following (b is the list where I have the primes, and div an empty list where I put the divisors; don't mind 1 being a divisor):
n = len(b)//2
a = 1
if a <= n:
    for i in range(n):
        for g in range(1, b[2*i+1]+1):
            div.append(b[2*i]**g)
a += 1
if a <= n:
    for i in range(n):
        for o in range(i+1, n):
            for g in range(1, b[2*i+1]+1):
                for h in range(1, b[2*o+1]+1):
                    div.append((b[2*i]**g)*(b[2*o]**h))
This adds to the list all the combinations of at most two different prime factors, but there must be a way to continue this to numbers of n different prime factors without manually adding more code. But most importantly, it must not generate repeated products. If there is an answer out there, please redirect me to it. Thanks in advance to all.
P.S. For example, take 60. 60 will be factored by a function (not shown here) into [2, 2, 3, 1, 5, 1]. My desired output is all the divisors of 60, in order or not, like this: [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60], i.e. all the combinations of the products of the factors (2, 2*2, 3, 5, 2*3, 2*5, 2*2*3, 2*2*5, 3*5, 2*3*5, 2*2*3*5, and 1, which is added to div before or after).
P.P.S. The difference from this (other) question comes down to two things.
First, and most important, the point of this question isn't finding divisors, but combinations. Divisors are just the context for it, but I would like to know for future problems how to build such iterations. Second, as I said in the comments, even if it were about divisors, finding the primes and only then combining them is more efficient (for large N) than testing every number up to the square root, and the referred post revolves around that (for an example why, see the comments).
Your friend is itertools:
from itertools import product
from functools import reduce

def grow(factor, power):
    # returns the list [1, factor, factor^2, ..., factor^power]
    array = []
    for pw in range(power+1):
        if pw != 0:
            k *= factor
        else:
            k = 1
        array.append(k)
    return array

x = [2, 2, 3, 1, 5, 1]
prime_factors = [x[i] for i in range(0, len(x), 2)]
powers = [x[i] for i in range(1, len(x), 2)]
divisor_tree = [grow(*n) for n in zip(prime_factors, powers)]
divisor_groups = product(*divisor_tree)
# divisor_groups is an iterator over [(1, 1, 1), (1, 1, 5), (1, 3, 1), (1, 3, 5), (2, 1, 1), (2, 1, 5), (2, 3, 1), (2, 3, 5), (4, 1, 1), (4, 1, 5), (4, 3, 1), (4, 3, 5)]
result = [reduce(lambda x, y: x*y, n) for n in divisor_groups]
print(result)
Output:
[1, 5, 3, 15, 2, 10, 6, 30, 4, 20, 12, 60]
Now let me explain what it does:
Extract prime_factors and their powers from your list.
zip(prime_factors, powers) pairs them with each other.
grow returns a list of consecutive powers, as commented.
divisor_groups is an iterable of all possible selections from these lists, one item taken from each list.
reduce(lambda x, y: x*y, n) maps a selected tuple of factors to the product of those factors, e.g. (2, 3, 5) -> 30.
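As a side note (my addition, not part of the original answer), on Python 3.8+ the reduce call can be replaced with math.prod:
from math import prod

result = [prod(n) for n in divisor_groups]  # same result as the reduce(lambda x, y: x*y, n) version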
Edit:
You might like to implement grow in this way, less efficient but more readable:
def grow(factor, power):
    return [factor**i for i in range(power+1)]
I'm trying to compare two lists in Python, checking if they are the same. The problem is that both lists can contain duplicate elements, and in order to be considered equal, they need to have the same amount of duplicate elements.
I've currently "solved" this by creating a copy of both lists, and removing an element from both lists if they are equal:
def equals(v1: Vertex, v2: Vertex) -> bool:
    # also checks if neighbourhoods are the same size
    if v1.label == v2.label:
        # copy the neighbourhoods to prevent data loss on removal of checked vertices
        v1_neighbours = v1.neighbours.copy()
        v2_neighbours = v2.neighbours.copy()
        # for every Vertex in v1.neighbours, check if there is a corresponding Vertex in v2.neighbours
        # if there is, remove that Vertex from both lists
        for n1 in v1_neighbours:
            for n2 in v2_neighbours:
                if n1.label == n2.label:
                    v1_neighbours.remove(n1)
                    v2_neighbours.remove(n2)
                    break
            else:
                return False
        if len(v1_neighbours) == 0 and len(v2_neighbours) == 0:
            return True
    return False
I doubt this solution works: doesn't List.remove(element) remove all occurrences of that element? Also, I don't think it's memory efficient, which is important, as the neighborhoods will be pretty big.
Could anyone tell me how I can compare v1_neighbours and v2_neighbours properly, checking for an equal amount of duplicates while not altering the lists, without copying the lists?
Count them and compare the Counter-dicts:
a= [ (x,y) for x in range(5) for y in range(5)]+[ (x,y) for x in range(3) for y in range(3)]
b= [ (x,y) for x in range(5) for y in range(5)]+[ (x,y) for x in range(3) for y in range(3)]
c= [ (x,y) for x in range(5) for y in range(5)]+[ (x,y) for x in range(4) for y in range(3)]
from collections import Counter
ca = Counter(a)
cb = Counter(b)
cc = Counter(c)
print(ca==cb) # True
print(ca==cc) # False
print(ca)
Output:
True
False
Counter({(0, 0): 2, (0, 1): 2, (0, 2): 2, (1, 0): 2, (1, 1): 2, (1, 2): 2,
(2, 0): 2, (2, 1): 2, (2, 2): 2, (0, 3): 1, (0, 4): 1, (1, 3): 1,
(1, 4): 1, (2, 3): 1, (2, 4): 1, (3, 0): 1, (3, 1): 1, (3, 2): 1,
(3, 3): 1, (3, 4): 1, (4, 0): 1, (4, 1): 1, (4, 2): 1, (4, 3): 1,
(4, 4): 1})
While collections.Counter would be the usual way to perform this kind of multiset comparison in Python, I think comparing neighbors is a fundamentally misguided approach to vertex equality testing. Vertex equality should use either the default identity-based equality, or label-based equality, depending on the details of your program.
You seem to be trying to implement a comparison where two vertices are equal if they have equal labels and equal collections of neighbors. However, if it's possible for two different vertices to have equal labels, then it should be possible for two distinct vertices to have the same label and the same neighbors, making this a broken equality comparison. If it's not possible for two vertices to have equal labels, then comparing neighbors is unnecessary.
Your neighbor comparison nested loop also assumes that vertices are equal if they have equal labels, further supporting a label-based comparison. If this assumption is wrong, then you have the problem of how to determine that neighbors are equal; if you try to compare neighbors with ==, you'll run into infinite recursion.
With the additional clarification that you're implementing a color refinement algorithm, we can confirm that comparing neighbors by label only is actually correct. However, equals seems like a misleading name for the function you're implementing, since you're not testing whether the given Vertex objects represent the same vertex.
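If label-based comparison is what you need (as in color refinement), a minimal sketch of that check, which copies and mutates nothing, could look like this (it assumes your Vertex exposes label and neighbours as in your snippet):
from collections import Counter

def same_neighbourhood(v1, v2) -> bool:
    # compare the multisets of neighbour labels, so duplicates are counted
    return (v1.label == v2.label
            and Counter(n.label for n in v1.neighbours)
            == Counter(n.label for n in v2.neighbours))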
I have some sorted/scored lists of parameters. I'd like to generate possible combinations of parameters (cartesian product). However, if the number of parameters is large, this quickly (very quickly!!) becomes a very large number. Basically, I'd like to do a cartesian product, but stop early.
import itertools
parameter_options = ['1234',
                     '123',
                     '1234']

for parameter_set in itertools.product(*parameter_options):
    print(''.join(parameter_set))
generates:
111
112
113
114
121
122
123
124
131
132
133
134
...
I'd like to generate (or something similar):
111
112
121
211
122
212
221
222
...
So that if I stop early, I'd at least get a couple of "good" sets of parameters, where a good set of parameters comes mostly early from the lists. This particular order would be fine, but I am interested in any technique that changes the "next permutation" choice order. I'd like the early results generated to have most items from the front of the list, but don't really care whether a solution generates 113 or 122 first, or whether 211 or 112 comes first.
My plan is to stop after some number of permutations are generated (maybe 10K or so? Depends on results). So if there are fewer than the cutoff, all should be generated, ultimately. And preferably each generated only once.
I think you can get your results in the order you want if you think of the output in terms of a graph traversal of the output space. You want a nearest-first traversal, while the itertools.product function is a depth-first traversal.
Try something like this:
import heapq

def nearest_first_product(*sequences):
    start = (0,) * len(sequences)
    queue = [(0, start)]
    seen = set([start])
    while queue:
        priority, indexes = heapq.heappop(queue)
        yield tuple(seq[index] for seq, index in zip(sequences, indexes))
        for i in range(len(sequences)):
            if indexes[i] < len(sequences[i]) - 1:
                lst = list(indexes)
                lst[i] += 1
                new_indexes = tuple(lst)
                if new_indexes not in seen:
                    new_priority = sum(index * index for index in new_indexes)
                    heapq.heappush(queue, (new_priority, new_indexes))
                    seen.add(new_indexes)
Example output:
for tup in nearest_first_product(range(1, 5), range(1, 4), range(1, 5)):
    print(tup)
(1, 1, 1)
(1, 1, 2)
(1, 2, 1)
(2, 1, 1)
(1, 2, 2)
(2, 1, 2)
(2, 2, 1)
(2, 2, 2)
(1, 1, 3)
(1, 3, 1)
(3, 1, 1)
(1, 2, 3)
(1, 3, 2)
(2, 1, 3)
(2, 3, 1)
(3, 1, 2)
(3, 2, 1)
(2, 2, 3)
(2, 3, 2)
(3, 2, 2)
(1, 3, 3)
(3, 1, 3)
(3, 3, 1)
(1, 1, 4)
(2, 3, 3)
(3, 2, 3)
(3, 3, 2)
(4, 1, 1)
(1, 2, 4)
(2, 1, 4)
(4, 1, 2)
(4, 2, 1)
(2, 2, 4)
(4, 2, 2)
(3, 3, 3)
(1, 3, 4)
(3, 1, 4)
(4, 1, 3)
(4, 3, 1)
(2, 3, 4)
(3, 2, 4)
(4, 2, 3)
(4, 3, 2)
(3, 3, 4)
(4, 3, 3)
(4, 1, 4)
(4, 2, 4)
(4, 3, 4)
You can get a bunch of slightly different orders by changing up the calculation of new_priority in the code. The current version uses squared Cartesian distance as the priorities, but you could use some other value if you wanted to (for instance, one that incorporates the values from the sequences, not only the indexes).
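For instance, one possible variation (my example, not part of the original answer) is to rank candidates by the values they actually select rather than by squared index distance; inside nearest_first_product you would swap the new_priority line for:
# drop-in replacement for the new_priority line inside nearest_first_product:
# rank by the selected values themselves instead of squared Cartesian distance
new_priority = sum(seq[index] for seq, index in zip(sequences, new_indexes))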
If you don't care too much about whether (1, 1, 3) comes before (1, 2, 2) (so long as they both come after (1, 1, 2), (1, 2, 1) and (2, 1, 1)), you could probably do a breadth-first traversal instead of nearest-first. This would be a bit simpler, as you could use a regular queue (like a collections.deque) rather than a priority queue.
The queues used by this sort of graph traversal mean that this code uses some amount of memory. However, the amount of memory is a lot less than if you had to produce the results all up front before putting them in order. The maximum memory used is proportional to the surface area of the result space, rather than its volume.
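For completeness, the breadth-first variation mentioned above could look roughly like this (a sketch of my own under the same assumptions, using a plain FIFO queue instead of a heap):
from collections import deque

def breadth_first_product(*sequences):
    # same traversal idea, but ties such as (1, 1, 3) vs (1, 2, 2) come out
    # in insertion order rather than by distance
    start = (0,) * len(sequences)
    queue = deque([start])
    seen = {start}
    while queue:
        indexes = queue.popleft()
        yield tuple(seq[index] for seq, index in zip(sequences, indexes))
        for i in range(len(sequences)):
            if indexes[i] < len(sequences[i]) - 1:
                new_indexes = indexes[:i] + (indexes[i] + 1,) + indexes[i+1:]
                if new_indexes not in seen:
                    queue.append(new_indexes)
                    seen.add(new_indexes)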
Your question is a bit ambiguous, but reading your comments and the other answers, it seems you want a cartesian product implementation that does a breadth-first search instead of a depth-first search.
Recently I had the same need, but also with the requirement that it not store intermediate results in memory. This is very important to me because I am working with a large number of parameters (thus an extremely big cartesian product), and any implementation that stores values or makes recursive calls is non-viable. As you state in your question, this seems to be your case also.
As I didn't find an answer that fulfils this requirement, I came to this solution:
from itertools import combinations

def product(*sequences):
    '''Breadth First Search Cartesian Product'''
    # sequences = tuple(tuple(seq) for seq in sequences)

    def partitions(n, k):
        for c in combinations(range(n+k-1), k-1):
            yield (b-a-1 for a, b in zip((-1,)+c, c+(n+k-1,)))

    max_position = [len(i)-1 for i in sequences]
    for i in range(sum(max_position)):
        for positions in partitions(i, len(sequences)):
            try:
                yield tuple(map(lambda seq, pos: seq[pos], sequences, positions))
            except IndexError:
                continue
    yield tuple(map(lambda seq, pos: seq[pos], sequences, max_position))
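For reference, a small run over two short sequences (my own example, not from the original answer) shows the breadth-first order:
for combo in product('12', '12'):
    print(combo)
# ('1', '1')
# ('1', '2')
# ('2', '1')
# ('2', '2')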
In terms of speed, this generator works fine in the beginning but gets slower towards the last results. So, although this implementation is a bit slower, it works as a generator that doesn't use memory and doesn't give repeated values.
As I mentioned in @Blckknght's answer, the parameters here must also be sequences (subscriptable, length-defined iterables). But you can bypass this limitation (sacrificing a bit of memory) by uncommenting the first line of the function. This may be useful if you are working with generators/iterators as parameters.
I hope I've helped you; let me know if this solves your problem.
This solution possibly isn't the best as it forces every combination into memory briefly, but it does work. It just might take a little while for large data sets.
import itertools
import random

count = 100  # the (maximum) amount of results
products = list(itertools.product(*parameter_options))
results = random.sample(products, min(count, len(products)))
for parameter_set in results:
    print("".join(parameter_set))
This will give you a list of products in a random order.