Knapsack with constraint same value - python

I am solving a multiple knapsacks problem in Python:
The problem is to pack a subset of the items into five bins, each of which has a maximum capacity of 100, so that the total packed value is a maximum.
data = {}
data['weights'] = [
48, 30, 42, 36, 36, 48, 42, 42, 36, 24, 30, 30, 42, 36, 36
]
data['values'] = [
10, 30, 25, 50, 35, 30, 15, 40, 30, 35, 45, 10, 20, 30, 25
]
assert len(data['weights']) == len(data['values'])
data['num_items'] = len(data['weights'])
data['all_items'] = range(data['num_items'])
data['bin_capacities'] = [100, 100, 100, 100, 100]
data['num_bins'] = len(data['bin_capacities'])
data['all_bins'] = range(data['num_bins'])
The data includes the following:
weights: A vector containing the weights of the items.
values: A vector containing the values of the items.
capacities: A vector containing the capacities of the bins.
The following code declares the MIP solver.
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver('SCIP')
if solver is None:
    print('SCIP solver unavailable.')
    return  # (this snippet sits inside a function in the full example)
The following code creates the variables for the problem.
# x[i, b] = 1 if item i is packed in bin b.
x = {}
for i in data['all_items']:
    for b in data['all_bins']:
        x[i, b] = solver.BoolVar(f'x_{i}_{b}')
The following code defines the constraints for the problem:
Each x[(i, j)] is a 0-1 variable, where i is an item and j is a bin. In the solution, x[(i, j)] will be 1 if item i is placed in bin j, and 0 otherwise.
# Each item is assigned to at most one bin.
for i in data['all_items']:
    solver.Add(sum(x[i, b] for b in data['all_bins']) <= 1)
# The amount packed in each bin cannot exceed its capacity.
for b in data['all_bins']:
    solver.Add(
        sum(x[i, b] * data['weights'][i]
            for i in data['all_items']) <= data['bin_capacities'][b])
# Maximize total value of packed items.
objective = solver.Objective()
for i in data['all_items']:
    for b in data['all_bins']:
        objective.SetCoefficient(x[i, b], data['values'][i])
objective.SetMaximization()
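The model can then be solved and inspected with the standard pywraplp calls. This last block is not shown in the excerpt above; it is just the usual OR-Tools pattern, included for completeness:
status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:
    print(f'Total packed value: {objective.Value()}')
    for b in data['all_bins']:
        for i in data['all_items']:
            if x[i, b].solution_value() > 0:
                print(f'Item {i} in bin {b}: weight {data["weights"][i]}, value {data["values"][i]}')
else:
    print('No optimal solution found.')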
I am trying to add another constraint: all items packed into the same bin should have the same weight. I am struggling to code it in Python. Can you help me?
Thanks

Just a sketch. Think about it... Maybe fix it (it's just an idea).
What you have: assignment matrix A, items <-> bins. Rows are the bins 0-4, columns are the items 0-14, and each column sums to <= 1 (each item is packed into at most one bin).
What you should add: Assignment-matrix B item-classes <-> bins
Item-class: set of all items of same weight
e.g.:
import numpy as np
weights = np.array([48, 30, 42, 36, 36, 48, 42, 42, 36, 24, 30, 30, 42, 36, 36])
unique_weights = set(weights)
partition = [np.where(weights == i)[0] for i in unique_weights]
# [array([ 3, 4, 8, 13, 14]), array([ 2, 6, 7, 12]), array([0, 5]), array([9]), array([ 1, 10, 11])]
Additional assignment matrix B: rows are the bins 0-4, columns are the item-classes 0-4, and each row sums to <= 1 (each bin is assigned at most one item-class).
Then: Link/Channel those
The sum of items of class C assigned to a bin must be 0 if class C is not assigned to that bin, and is unbounded (big-M) otherwise.
Something like:
for b in range(n_bins):
    for c in range(n_partitions):
        sum(A[b, all_indices_of_items_in_class(c)]) <= B[b, c] * len(all_indices_of_items_in_class(c))
Remarks
Obviously, this is an addition on top of what you already have.
It might make more sense not to model A as a big boolean matrix, but to introduce cardinality constraints (how many identical items are picked), since we already have variables expressing what we pick.
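For what it's worth, here is one way the sketch could be wired into the pywraplp model from the question. It is only a sketch: it assumes the data dict, solver and x variables defined above, and introduces names of my own (classes, y):
from collections import defaultdict

# Group item indices by weight; these are the "item-classes" from the sketch above.
classes = defaultdict(list)
for i in data['all_items']:
    classes[data['weights'][i]].append(i)
classes = list(classes.values())

# y[c, b] = 1 if bin b is allowed to hold items of weight-class c.
y = {}
for c in range(len(classes)):
    for b in data['all_bins']:
        y[c, b] = solver.BoolVar(f'y_{c}_{b}')

for b in data['all_bins']:
    # Each bin is assigned at most one weight class ...
    solver.Add(sum(y[c, b] for c in range(len(classes))) <= 1)
    # ... and items of class c may only enter bins assigned to class c.
    for c, items in enumerate(classes):
        solver.Add(sum(x[i, b] for i in items) <= y[c, b] * len(items))
Here len(items) plays the role of the big-M, and it is as tight as it can be, since a bin can never hold more than len(items) items of that class.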

Related

Random.choices cum_weights

Please, I need more clarity on this; I really do not understand it well. I am using this as an example:
import random
my_list = [9999, 45, 63, 19, 89, 5, 72]
cum_w = [1, 9, 10, 9, 2, 12, 7]
d_rand = random.choices(my_list, cum_weights=cum_w, k=7)
sum = 0
for idx, i in enumerate(cum_w):
    if idx == 0:
        for i in cum_w: sum += i
    print(f"cum_weight for {my_list[idx]}\t= {i/sum}\tRandom={random.choices(my_list, cum_weights=cum_w, k=7)}")
Below is the output
cum_weight for 9999 = 0.14 Random=[45, 45, 9999, 45, 45, 9999, 45]
cum_weight for 45 = 0.18 Random=[45, 45, 45, 45, 9999, 45, 45]
cum_weight for 63 = 0.2 Random=[45, 45, 45, 9999, 9999, 9999, 45]
cum_weight for 19 = 0.18 Random=[45, 45, 45, 45, 45, 45, 9999]
cum_weight for 89 = 0.04 Random=[9999, 45, 45, 45, 45, 9999, 45]
cum_weight for 5 = 0.24 Random=[45, 45, 45, 45, 45, 45, 45]
cum_weight for 72 = 0.14 Random=[45, 45, 9999, 45, 45, 45, 45]
The probability for weight 9 (cum_w[1] and cum_w[3]) is 0.18.
Why does 45 (weight 9) occur so often?
I've read the random.choices documentation and it doesn't really click for me.
How do the cum_weights work?
Please, I need a deeper understanding of this.
You asked "Why does 45(9) occur so often?" and "How do the cum_weights work?" Addressing the second question will explain the first. Note that what follows is an implementation of one approach used for this kind of problem. I'm not claiming that this is python's implementation, it is intended to illustrate the concepts involved.
Let's start by looking at how values can be generated if you use cumulative weights, i.e., a list where at each index the entry is the sum of all weights up to and including the current index.
import random
# Given cumulative weights, convert them to proportions, then generate U ~ Uniform(0,1)
# random values to use in a linear search to generate values in the correct proportions.
# This is based on the well-known probability result that P{a<=U<=b} = (b - a) for
# 0 <= a < b <= 1.
def gen_cumulative_weighted(values, c_weights):  # values and c_weights must be lists of the same length
    # Convert cumulative weights to probabilities/proportions by dividing by the last value.
    # This yields a list of non-decreasing values between 0 and 1. Note that the last entry
    # is always 1, so a Uniform(0, 1) random number will *always* be less than or equal to
    # some entry in the list.
    p = [c_weights[i] / c_weights[-1] for i in range(len(c_weights))]
    while True:
        index = 0  # starting from the beginning of the list
        # The following three lines find the first index having the property u <= p[index].
        u = random.random()
        while u > p[index]:
            index += 1
        yield values[index]  # yield the corresponding value.
As the comments point out, the weights are scaled by the last (and largest) value to scale them to a set of values in the range (0,1). These can be thought of as the right-most endpoints of non-overlapping subranges, each of which has a length equal to the corresponding scaled weight. (Sketch it out on paper if this is unclear, you should see it pretty quickly.) A generated Uniform(0,1) value will fall in one of those subranges, and the probability it does so is equal to the length of the subrange according to a well-known result from probability.
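To make the subranges concrete with the weights from the question: the cumulative weights of [1, 9, 10, 9, 2, 12, 7] are [1, 10, 20, 29, 31, 43, 50], and dividing by 50 gives the right endpoints of the subranges:
from itertools import accumulate
weights = [1, 9, 10, 9, 2, 12, 7]
cum = list(accumulate(weights))   # [1, 10, 20, 29, 31, 43, 50]
p = [c / cum[-1] for c in cum]    # [0.02, 0.2, 0.4, 0.58, 0.62, 0.86, 1.0]
# A draw of, say, u = 0.5 falls in the subrange (0.4, 0.58], i.e. index 3 (value 19),
# which is therefore selected with probability 0.58 - 0.4 = 0.18 (= 9/50).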
If we have the raw weights rather than the cumulative weights, all we have to do is convert them to cumulative and then pass the work off to the cumulative weighted version of the generator:
def gen_weighted(values, weights):  # values and weights must be lists of the same length
    cumulative_w = [sum(weights[:i+1]) for i in range(len(weights))]
    return gen_cumulative_weighted(values, cumulative_w)
Now we're ready to use the generators:
my_values = [9999, 45, 63, 19, 89, 5, 72]
my_weights = [1, 9, 10, 9, 2, 12, 7]
good_gen = gen_weighted(my_values, my_weights)
print('Passing raw weights to the weighted implementation:')
print([next(good_gen) for _ in range(20)])
which will produce results such as:
Passing raw weights to the weighted implementation:
[63, 5, 63, 63, 72, 19, 63, 5, 45, 63, 72, 19, 5, 89, 72, 63, 63, 19, 89, 45]
Okay, so what happens if we pass raw weights to the cumulative weighted version of the algorithm? Your raw weights of [1, 9, 10, 9, 2, 12, 7] get scaled by dividing by the last value, and become [1/7, 9/7, 10/7, 9/7, 2/7, 12/7, 1]. When we generate u ~ Uniform(0, 1) and use it to search linearly through the scaled weights, it will yield index zero => 9999 with probability 1/7, and index one => 45 with probability 6/7! This happens because u is always ≤ 1, and therefore always less than 9/7. As a result, the linear search will never get past any scaled weight ≥ 1, which for your inputs means it can only generate the first two values and does so with the wrong weighting.
print('Passing raw weights to the cumulative weighted implementation:')
bad_gen = gen_cumulative_weighted(my_values, my_weights)
print([next(bad_gen) for _ in range(20)])
produces results such as:
Passing raw weights to the cumulative weighted implementation:
[45, 45, 45, 45, 45, 45, 45, 9999, 45, 9999, 45, 45, 45, 45, 45, 9999, 45, 9999, 45, 45]
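The practical fix for the call in the question is either to pass the raw weights via the weights argument, or to build genuinely cumulative weights (e.g. with itertools.accumulate) before passing them as cum_weights:
import random
from itertools import accumulate

my_list = [9999, 45, 63, 19, 89, 5, 72]
w = [1, 9, 10, 9, 2, 12, 7]

sample1 = random.choices(my_list, weights=w, k=7)                        # raw weights
sample2 = random.choices(my_list, cum_weights=list(accumulate(w)), k=7)  # true cumulative weights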

Sample irregular list of numbers with a set delta

Is there a simpler way, using e.g. numpy, to get samples for a given X and delta than the below code?
>>> X = [1, 4, 5, 6, 11, 13, 15, 20, 21, 22, 25, 30]
>>> delta = 5
>>> samples = [X[0]]
>>> for x in X:
...     if x - samples[-1] >= delta:
...         samples.append(x)
>>> samples
[1, 6, 11, 20, 25, 30]
If you are aiming to "vectorize" the process for performance reasons (e.g. using numpy), you could compute, for each element, the number of elements that are less than that element plus delta. This yields an index for each item; the items that need to be skipped get the same index as the preceding item that is kept.
import numpy as np
X = np.array([1, 4, 5, 6, 11, 13, 15, 20, 21, 22, 25, 30])
delta = 5
i = np.sum(X<X[:,None]+delta,axis=1) # index of first to keep
i = np.insert(i[:-1],0,0) # always want the first, never the last
Y = X[np.unique(i)] # extract values as unique indexes
print(Y)
[ 1 6 11 20 25 30]
This assumes that the numbers are in ascending order
[EDIT]
As indicated in my comment, the above solution is flawed and will only work some of the time. Although vectorizing a Python function this way does not fully leverage numpy (and is slower than the plain Python loop), it is possible to implement the filter like this:
X = np.array([1, 4, 5, 6, 10,11,12, 13, 15, 20, 21, 22, 25, 30])
delta = 5
fdelta = np.frompyfunc(lambda a, b: a if a + delta > b else b, 2, 1)
Y = X[X == fdelta.accumulate(X, dtype=object)]
print(Y)
[ 1 6 11 20 25 30]
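As a quick check (mine, not part of the original answer), the same accumulate-based filter applied to the X from the question reproduces the loop's result:
X = np.array([1, 4, 5, 6, 11, 13, 15, 20, 21, 22, 25, 30])
Y = X[X == fdelta.accumulate(X, dtype=object)]
print(Y)
[ 1  6 11 20 25 30]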

Creating new arrays based on the difference of previous and following elements of two other arrays

I have some events with their start and end time steps. Array “start” represents the start time steps of 4 events, array “end” represents the end time steps for these events, and array “prop” contains one numerical property for each event (e.g. the 2nd event (1 index) started at time step 12 and finished at time step 14, and its property is 20). Array “diff” shows the difference between the events (from the end of the previous event to the start of the next one). The time difference between the end of the 1st event and the start of the 2nd event is 7 steps. Array “diff” is smaller than the other arrays (“start”, “end”, "prop") by 1 element.
import numpy as np
start=np.array([3,12,16,30])
end = np.array([5,14,18,32])
prop=np.array([10,20,10,30])
diff = np.zeros(len(start) - 1)
for i in range(1, len(start)):
    diff[i-1] = start[i] - end[i-1]
print('diff', diff)
diff [ 7. 2. 12.]
Events that are close in time need to be merged: if the difference between two neighboring events is smaller than 3 time steps, they are merged. For example, the 2nd and 3rd events differ by 2 time steps, so they merge into a new event whose start is time step 12 and whose end is time step 18. As for the "prop" array, the maximum prop[i] among the merged events is kept (prop[1] > prop[2]), so 20 is assigned to the merged event (merged_prop[1] = 20). I would like to get 3 new arrays with the characteristics of all events (merged and not merged), like these:
merged_start=np.array([3,12,30])
merged_end = np.array([5,18,32]) #2nd and 3rd event have been merged
merged_prop=np.array([10,20,30])
I have attached another, larger example as well to be clearer about what I want. The 2nd and 3rd events merge into one large event, and so do the 4th through 7th.
start_2=np.array([3,12,16,38,42,46,50,60])
end_2= np.array([5,14,32,40,44,48,54,70])
prop_2= np.array([10,8,20,10,35,10,10,10])
diff_2 = np.zeros(len(start_2) - 1)
for i in range(1, len(start_2)):
    diff_2[i-1] = start_2[i] - end_2[i-1]
print('diff_2', diff_2)
diff_2 [7. 2. 6. 2. 2. 2. 6.]
#Desirable outputs
merged_start_2=np.array([3,12,38,60])
merged_end_2 = np.array([5,32,54,70])
merged_prop_2= np.array([10,20,35,10])
Another Example
start_3 = np.array([ 3, 12, 18, 38, 42, 46, 50, 60])
end_3 = np.array([ 5, 14, 32, 40, 44, 48, 54, 70])
prop_3 = np.array([10, 8, 20, 10, 35, 10, 10, 10])
#Desirable outputs
merged_start_3=np.array([3,12,18,38,60])
merged_end_3 = np.array([5,14,32,54,70])
merged_prop_3= np.array([10,8,20,35,10])
How can I do it? I am able to extract the indices from arrays "diff" and "diff_2" whose values are lower than 3, but I do not know how to continue.
Here is a way you can do that:
import numpy as np
MERGE_THRESHOLD = 3
start = np.array([ 3, 12, 16, 38, 42, 46, 50, 60])
end = np.array([ 5, 14, 32, 40, 44, 48, 54, 70])
prop = np.array([10, 8, 20, 10, 35, 10, 10, 10])
# Gap between events
dists = start[1:] - end[:-1]
# True where the gap is large enough that the next event starts a new merged group
m = dists >= MERGE_THRESHOLD
# Find first and last indices of each merged group
first_indices = np.flatnonzero(np.r_[True, m])
last_indices = np.r_[first_indices[1:], len(start)] - 1
# Make results
merged_start = start[first_indices]
merged_end = end[last_indices]
merged_prop_max = np.maximum.reduceat(prop, first_indices)
merged_prop_sum = np.add.reduceat(prop, first_indices)
elems_per_merge = last_indices - first_indices + 1
merged_prop_avg = merged_prop_sum / elems_per_merge
print(merged_start)
# [ 3 12 38 60]
print(merged_end)
# [ 5 32 54 70]
print(merged_prop_max)
# [10 20 35 10]
print(merged_prop_sum)
# [10 28 65 10]
print(merged_prop_avg)
# [10. 14. 16.25 10. ]
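Running the same steps on the third example from the question reproduces its desired output:
start = np.array([3, 12, 18, 38, 42, 46, 50, 60])
end = np.array([5, 14, 32, 40, 44, 48, 54, 70])
prop = np.array([10, 8, 20, 10, 35, 10, 10, 10])
dists = start[1:] - end[:-1]
m = dists >= MERGE_THRESHOLD
first_indices = np.flatnonzero(np.r_[True, m])
last_indices = np.r_[first_indices[1:], len(start)] - 1
print(start[first_indices])                      # [ 3 12 18 38 60]
print(end[last_indices])                         # [ 5 14 32 54 70]
print(np.maximum.reduceat(prop, first_indices))  # [10  8 20 35 10]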

Count the number of times values appear within a range of values

How do I output a list which counts and displays the number of times different values fit into a range?
Based on the below example, the output would be x = [0, 3, 2, 1, 0] as there are 3 Pro scores (11, 24, 44), 2 Champion scores (101, 888), and 1 King score (1234).
- P1 = 11
- P2 = 24
- P3 = 44
- P4 = 101
- P5 = 1234
- P6 = 888
totalsales = [11, 24, 44, 101, 1234, 888]
Here is the ranking corresponding to the sales:
Sales            Ranking
0-10             Noob
11-100           Pro
101-1000         Champion
1001-10000       King
10001-200000     Lord
This is one way, assuming your values are integers and ranges do not overlap.
from collections import Counter
# Ranges go to end + 1
score_ranges = [
    range(0, 11),         # Noob
    range(11, 101),       # Pro
    range(101, 1001),     # Champion
    range(1001, 10001),   # King
    range(10001, 200001)  # Lord
]
total_sales = [11, 24, 44, 101, 1234, 888]
# This counter counts how many values fall into each score range (by index).
# It works by taking the index of the first range containing each value (or -1 if none found).
c = Counter(next((i for i, r in enumerate(score_ranges) if s in r), -1) for s in total_sales)
# This converts the above counter into a list, taking the count for each index.
result = [c[i] for i in range(len(score_ranges))]
print(result)
# [0, 3, 2, 1, 0]
As a general rule, homework should not be posted on Stack Overflow. As such, here is just a pointer on how to solve this; the implementation is up to you.
Iterate over the totalsales list and check whether each number is in range(start, stop). For each match, increment the count of the corresponding category in your result list (using a dict to store the result might be more apt). A minimal sketch of this idea follows below.
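A minimal sketch of that pointer (my own illustration; the category bounds are taken from the question's table):
totalsales = [11, 24, 44, 101, 1234, 888]
categories = [('Noob', 0, 10), ('Pro', 11, 100), ('Champion', 101, 1000),
              ('King', 1001, 10000), ('Lord', 10001, 200000)]
counts = {name: 0 for name, _, _ in categories}
for sale in totalsales:
    for name, lo, hi in categories:
        if lo <= sale <= hi:
            counts[name] += 1
            break
print(list(counts.values()))
[0, 3, 2, 1, 0]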
Here is a possible solution without using modules such as numpy or collections:
totalsales = [11, 24, 44, 101, 1234, 888]
bins = [10, 100, 1000, 10000, 20000]
output = [0]*len(bins)
for s in totalsales:
    slot = next(i for i, x in enumerate(bins) if s <= x)
    output[slot] += 1
output
>>> [0, 3, 2, 1, 0]
If your sales-to-ranking mapping always follows a logarithmic curve, the desired output can be calculated in linear time using math.log10 with collections.Counter. Use an offset of 0.5 and the abs function to handle sales of 0 and 1:
from collections import Counter
from math import log10
counts = Counter(int(abs(log10(abs(s - .5)))) for s in totalsales)
[counts.get(i, 0) for i in range(5)]
This returns:
[0, 3, 2, 1, 0]
Here, I have used the power of a dataframe to store the values, then bins with pd.cut to group the values into the right categories, and finally extracted the value counts into a list.
Let me know if it is okay.
import pandas as pd

df = pd.DataFrame([11, 24, 44, 101, 1234, 888], columns=['P'])  # Create dataframe
bins = [0, 10, 100, 1000, 10000, 200000]
labels = ['Noob','Pro', 'Champion', 'King', 'Lord']
df['range'] = pd.cut(df.P, bins, labels = labels)
df
outputs:
P range
0 11 Pro
1 24 Pro
2 44 Pro
3 101 Champion
4 1234 King
5 888 Champion
Finally, to get the value counts, use:
my = df['range'].value_counts().sort_index()  # counts the number of occurrences per category
output = list(map(int, my.tolist()))  # we want the output to be a list of integers
output
The result below:
[0, 3, 2, 1, 0]
You can use collections.Counter and a dict:
from collections import Counter
totalsales = [11, 24, 44, 101, 1234, 888]
ranking = {
    0: 'noob',
    10: 'pro',
    100: 'champion',
    1000: 'king',
    10000: 'lord'
}
c = Counter()
for sale in totalsales:
    for k in sorted(ranking.keys(), reverse=True):
        if sale > k:
            c[ranking[k]] += 1
            break
Or as a two-liner (credits to #jdehesa for the idea):
thresholds = sorted(ranking.keys(), reverse=True)
c = Counter(next((ranking[t] for t in thresholds if s > t)) for s in totalsales)

Finding where a value lands between two numbers in Python

I have a problem where I need to determine where a value lands between other values. This is an awfully long question... but it's a convoluted problem (at least to me).
The simplest presentation of the problem can be seen with the following data:
I have a value of 24.0. I need to determine where that value lands within six 'ranges'. The ranges are: 10, 20, 30, 40, 50, 60. I need to calculate where along the ranges, the value lands. I can see that it lands between 20 and 30. A simple if statement can find that for me.
My if statement for checking if the value is between 20 and 30 would be:
if value >=20 and value <=30:
Pretty simple stuff.
What I'm having trouble with is when I try to rank the output.
As an example, let's say that each range value is given an integer representation: 10=1, 20=2, 30=3, 40=4, 50=5, 60=6, 70=7. Additionally, let's say that if the value is less than the midpoint between two values, it is assigned the rank of the lower value. For example, my value of 24 is between 20 and 30, so it should be ranked as a "2".
This in and of itself is fairly straightforward with this example, but using real world data, I have ranges and values like the following:
Value = -13 with Ranges = 5,35,30,25,-25,-30,-35
Value = 50 with Ranges = 5,70,65,60,40,35,30
Value = 6 with Ranges = 1,40,35,30,5,3,0
Another wrinkle - the orders of the ranges matter. In the above, the first range number equates to a ranking of 1, the second to a ranking of 2, etc as I mentioned a few paragraphs above.
The negative numbers in the range values were causing trouble until I decided to use a percentile ranking, which gets rid of the negative values altogether. To do this, I am using an answer from Map each list value to its corresponding percentile like this:
y=[stats.percentileofscore(x, a, 'rank') for a in x]
where x is the ranges AND the value I'm checking. Running the value=6 values above through this results in y being:
x = [1, 40, 35, 30, 5, 3, 0, 6]
y=[stats.percentileofscore(x, a, 'rank') for a in x]
Looking at "y", we see it as:
[25.0, 100.0, 87.5, 75.0, 50.0, 37.5, 12.5, 62.5]
What I need to do now is compare that last value (62.5) with the other values to see what the final ranking will be (rankings of 1 through 7) according to the following ranking map:
1=25.0
2=100.0
3=87.5
4=75.0
5=50.0
6=37.5
7=12.5
If the value lies between two of the values, it should be assigned the lower rank. In this example, the 62.5 value would have a final ranking value of 4 because it sits between 75.0 (rank=4) and 50.0 (rank=5).
If I take 'y' and break it out and use those values in multiple if/else statements it works for some but not all (the -13 example does not work correctly).
My question is this:
How can I programmatically analyze any value/range set to find the final ranking without building an enormous if/elif structure? Here are a few sample sets. Rankings are in order of presentation below (first value in Ranges =1 , second = 2, etc etc)
Value = -13 with Ranges = 5, 35, 30, 25, -25, -30, -35 --> Rank = 4
Value = 50 with Ranges = 5, 70, 65, 60, 40, 35, 30 --> Rank = 4
Value = 6 with Ranges = 1, 40, 35, 30, 5, 3,0 --> Rank = 4
Value = 24 with Ranges = 10, 20, 30, 40, 50, 60, 70 --> Rank = 2
Value = 2.26 with Ranges = 0.1, 0.55, 0.65, 0.75, 1.75, 1.85, 1.95 --> Rank = 7
Value = 31 with Ranges = 10, 20, 30, 40, 60, 70, 80 --> Rank = 3
I may be missing something very easy within python to do this...but I've bumped my head on this wall for a few days with no progress.
Any help/pointers are appreciated.
def checker(term):
    # Sort key: non-negative values keep their natural order; negative values are mapped
    # to abs(term) + 1e10 so they sort after all the non-negative ones.
    return term if term >= 0 else abs(term) + 1e10
l1, v1 = [5, 35, 30, 25, -25, -30, -35], -13 # Desired: 4
l2, v2 = [5, 70, 65, 60, 40, 35, 30], 50 # Desired: 4
l3, v3 = [1, 40, 35, 30, 5, 3, 0], 6 # Desired: 4
l4, v4 = [10, 20, 30, 40, 50, 60, 70], 24 # Desired: 2
l5, v5 = [0.1, 0.55, 0.65, 0.75, 1.75, 1.85, 1.95], 2.26 # Desired: 7
l6, v6 = [10, 20, 30, 40, 60, 70, 80], 31 # Desired: 3
Result:
>>> print(*(sorted(l_+[val], key=checker).index(val) for
... l_, val in zip((l1,l2,l3,l4,l5,l6),(v1,v2,v3,v4,v5,v6))), sep='\n')
4
4
4
2
7
3
Taking the first example of -13.
y = [5, 35, 30, 25, -25, -30, -35]
value_to_check = -13
max_rank = len(y) # Default value in case no range found (as per 2.26 value example)
for ii in range(len(y)-1, 0, -1):
    if (y[ii] <= value_to_check <= y[ii-1]) or (y[ii] >= value_to_check >= y[ii-1]):
        max_rank = ii
        break
>>> max_rank
4
In function form:
def get_rank(y, value_to_check):
    max_rank = len(y)  # Default value in case no range found (as per 2.26 value example)
    for ii in range(len(y)-1, 0, -1):
        if (y[ii] <= value_to_check <= y[ii-1]) or (y[ii] >= value_to_check >= y[ii-1]):
            max_rank = ii
            break
    return max_rank
When you call:
>>> get_rank(y, value_to_check)
4
This correctly finds the answer for all your data:
def get_rank(l, n):
    mindiff = float('inf')
    minindex = -1
    for i in range(len(l) - 1):
        if l[i] <= n <= l[i + 1] or l[i + 1] <= n <= l[i]:
            diff = abs(l[i + 1] - l[i])
            if diff < mindiff:
                mindiff = diff
                minindex = i
    if minindex != -1:
        return minindex + 1
    if n > max(l):
        return len(l)
    return 1
>>> test()
[5, 35, 30, 25, -25, -30, -35] -13 Desired: 4 Actual: 4
[5, 70, 65, 60, 40, 35, 30] 50 Desired: 4 Actual: 4
[1, 40, 35, 30, 5, 3, 0] 6 Desired: 4 Actual: 4
[10, 20, 30, 40, 50, 60, 70] 24 Desired: 2 Actual: 2
[0.1, 0.55, 0.65, 0.75, 1.75, 1.85, 1.95] 2.26 Desired: 7 Actual: 7
[10, 20, 30, 40, 60, 70, 80] 31 Desired: 3 Actual: 3
For completeness, here is my test() function, but you only need get_rank for what you are doing:
>>> def test():
...     lists = [[[5, 35, 30, 25, -25, -30, -35], -13, 4], [[5, 70, 65, 60, 40, 35, 30], 50, 4], [[1, 40, 35, 30, 5, 3, 0], 6, 4], [[10, 20, 30, 40, 50, 60, 70], 24, 2], [[0.1, 0.55, 0.65, 0.75, 1.75, 1.85, 1.95], 2.26, 7], [[10, 20, 30, 40, 60, 70, 80], 31, 3]]
...     for l, n, desired in lists:
...         print(l, n, 'Desired:', desired, 'Actual:', get_rank(l, n))
