numpy.concatenate float64(101,1) and float64(101,) - python

I'm a MATLAB user who recently converted to Python. I am running a for loop that cuts a longer signal into individual trials, time-normalizes each trial to 100% of the trial (101 samples), and then collects the trials as columns of a single variable. My code is:
RHipFE = np.empty([101, 1])
newlength = 101
for i in range(0, len(R0X)-1, 2):
    iHipFE = redataf.RHipFE[R0X[i]:R0X[i+1]]
    x = np.arange(0, len(iHipFE), 1)
    new_x = np.linspace(x.min(), x.max(), newlength)
    iHipFEn = interpolate.interp1d(x, iHipFE)(new_x)
    RHipFE = np.concatenate((RHipFE, iHipFEn), axis=1)
When I run this, I get the error "ValueError: all the input arrays must have same number of dimensions", which I assume is because RHipFE is (101, 1) while iHipFEn is (101,). Is the best solution to make iHipFEn (101, 1)? If so, how does one do this in the above for loop?

Generally it's faster to collect arrays in a list, and use some form of concatenate once. List append is faster than concatenate:
In [51]: alist = []
In [52]: for i in range(3):
...: alist.append(np.arange(i,i+5))
...:
In [53]: alist
Out[53]: [array([0, 1, 2, 3, 4]), array([1, 2, 3, 4, 5]), array([2, 3, 4, 5, 6])]
Various ways of joining:
In [54]: np.vstack(alist)
Out[54]:
array([[0, 1, 2, 3, 4],
       [1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6]])
In [55]: np.column_stack(alist)
Out[55]:
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6]])
In [56]: np.stack(alist, axis=1)
Out[56]:
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6]])
In [57]: np.array(alist)
Out[57]:
array([[0, 1, 2, 3, 4],
       [1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6]])
Internally, vstack, column_stack, and stack expand the dimensions of the components and concatenate on the appropriate axis:
In [58]: np.concatenate([l[:,None] for l in alist], axis=1)
Out[58]:
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6]])
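Applied to the original question, the same idea sidesteps the dimension mismatch entirely: collect each interpolated trial in a list and stack once after the loop. A minimal sketch, reusing redataf, R0X and newlength from the question (assumed to be defined as above, with scipy's interpolate imported):

trials = []
for i in range(0, len(R0X)-1, 2):
    iHipFE = redataf.RHipFE[R0X[i]:R0X[i+1]]
    x = np.arange(len(iHipFE))
    new_x = np.linspace(x.min(), x.max(), newlength)
    # each interpolated trial is a (101,) 1d array
    trials.append(interpolate.interp1d(x, iHipFE)(new_x))
RHipFE = np.stack(trials, axis=1)   # (101, n_trials), trials as columns

If you do want to keep the concatenate-in-a-loop structure, iHipFEn[:, None] (or equivalently iHipFEn.reshape(-1, 1)) turns the (101,) array into the (101, 1) shape that concatenate requires.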


Trying to convert pandas df to np array, dtaidistance computes list instead

I am attempting to compute the distance matrix for an ndarray that I have converted from pandas. I tried to convert the pandas df currently in this format:
move_df =
                 movement
0            [4, 3, 6, 2]
1         [5, 2, 3, 6, 2]
2      [4, 7, 2, 3, 6, 1]
3            [4, 4, 4, 3]
...                   ...
33410     [2, 6, 3, 1, 8]
[33410 rows x 1 columns]
to a numpy ndarray by using the following:
1) m = move_df.to_numpy()
2) m = pd.DataFrame(move_df.tolist()).values
3) m = [move_df.tolist() for i in move_df.columns]
Each of these conversions resulted in a numpy array in this format:
[[list([4, 3, 6, 2])]
 [list([5, 2, 3, 6, 2])]
 [list([4, 7, 2, 3, 6, 1])]
 [list([4, 4, 4, 3])]
 ...
 [list([2, 6, 3, 1, 8])]]
So when I try to compute the dtaidistance distance matrix, I get the following error:
d_m = dtw.distance_matrix(m)
TypeError: unsupported operand type(s) for -: 'list' and 'list'
But when I create a list of lists by copying and pasting several of the numpy arrays created with any of the methods mentioned above, the code works. But this is not feasible in the long run since the arrays are over 30k rows. Is there something I am doing wrong in the conversion from pandas df to numpy array? I used
print(type(m))
and it outputs that it is a numpy array and I already know that I cannot subtract a list from a list, hence the error.
EDIT:
For move_df.head(10).to_dict()
{'movement': {0: [4, 3, 6, 2],
1: [5, 2, 3, 6, 2],
2: [4, 7, 2, 3, 6, 1],
3: [4, 4, 4, 3],
4: [3, 6, 2, 3, 3],
5: [6, 2, 1],
6: [1, 1, 1, 1],
7: [7, 2, 3, 1, 1],
8: [7, 2, 3, 2, 1],
9: [6, 2, 3, 1]}}
(one of the dtaidistance authors here)
The dtaidistance package expects one of three formats:
A 2D numpy array (where all sequences have the same length by definition)
A Python list of 1D numpy.array or array.array.
A Python list of Python lists
In your case you could do:
series = move_df['movement'].to_list()
dtw.distance_matrix(series)
which works then on a list of lists.
To use the fast C implementation, an array is required (either NumPy or std lib array). If you want to keep the different lengths, you can do:
series = move_df['movement'].apply(lambda a: np.array(a, dtype=np.double)).to_list()
dtw.distance_matrix_fast(series)
Note that it might make sense to do the apply operation in place on your move_df data structure, so that you only have to do it once and don't have to keep track of two nearly identical data structures. After you do this, the to_list call is sufficient. Thus:
move_df['movement'] = move_df['movement'].apply(lambda a: np.array(a, dtype=np.double))
series = move_df['movement'].to_list()
dtw.distance_matrix_fast(series)
If you want to use a 2D numpy matrix, you would need to truncate or pad all series to be the same length as is explained in other answers (for dtw padding is more common to not lose information).
ps. This assumes you want to do univariate DTW, the ndim subpackage for multivariate time series expects a different datastructure.
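For instance, with the move_df.head(10) data shown in the edit, the whole recipe runs end to end like this (a minimal sketch; it assumes the dtaidistance package is installed):

import numpy as np
import pandas as pd
from dtaidistance import dtw

move_df = pd.DataFrame({'movement': [[4, 3, 6, 2], [5, 2, 3, 6, 2],
                                     [4, 7, 2, 3, 6, 1], [4, 4, 4, 3],
                                     [3, 6, 2, 3, 3], [6, 2, 1],
                                     [1, 1, 1, 1], [7, 2, 3, 1, 1],
                                     [7, 2, 3, 2, 1], [6, 2, 3, 1]]})

# convert once, in place, to 1D double arrays as recommended above
move_df['movement'] = move_df['movement'].apply(lambda a: np.array(a, dtype=np.double))
series = move_df['movement'].to_list()
d_m = dtw.distance_matrix_fast(series)   # 10x10 symmetric matrix of DTW distances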
Assuming you want to form an array with the lists of length 4:
m = df['movement'].str.len().eq(4)
a = np.array(df.loc[m, 'movement'].to_list())
output:
array([[4, 3, 6, 2],
       [4, 4, 4, 3],
       [1, 1, 1, 1],
       [6, 2, 3, 1]])
used input:
df = pd.DataFrame({'movement': [[4, 3, 6, 2],
                                [5, 2, 3, 6, 2],
                                [4, 7, 2, 3, 6, 1],
                                [4, 4, 4, 3],
                                [3, 6, 2, 3, 3],
                                [6, 2, 1],
                                [1, 1, 1, 1],
                                [7, 2, 3, 1, 1],
                                [7, 2, 3, 2, 1],
                                [6, 2, 3, 1]]})
A dataframe created with:
In [112]: df = pd.DataFrame({'movement': {0: [4, 3, 6, 2],
...: 1: [5, 2, 3, 6, 2],
...: 2: [4, 7, 2, 3, 6, 1],
...: 3: [4, 4, 4, 3],
...: 4: [3, 6, 2, 3, 3],
...: 5: [6, 2, 1],
...: 6: [1, 1, 1, 1],
...: 7: [7, 2, 3, 1, 1],
...: 8: [7, 2, 3, 2, 1],
...: 9: [6, 2, 3, 1]}})
has an object dtype column that contains lists. The array derived from that column is object dtype:
In [121]: arr = df['movement'].to_numpy()
In [122]: arr
Out[122]:
array([list([4, 3, 6, 2]), list([5, 2, 3, 6, 2]),
       list([4, 7, 2, 3, 6, 1]), list([4, 4, 4, 3]),
       list([3, 6, 2, 3, 3]), list([6, 2, 1]), list([1, 1, 1, 1]),
       list([7, 2, 3, 1, 1]), list([7, 2, 3, 2, 1]), list([6, 2, 3, 1])],
      dtype=object)
By selecting the column I get a 1d array, not the 2d one you get; otherwise it's the same.
This cannot be converted into a 2d numeric dtype array. For most purposes we can think of this as a list of lists.
In [123]: arr.tolist()
Out[123]:
[[4, 3, 6, 2],
 [5, 2, 3, 6, 2],
 [4, 7, 2, 3, 6, 1],
 [4, 4, 4, 3],
 [3, 6, 2, 3, 3],
 [6, 2, 1],
 [1, 1, 1, 1],
 [7, 2, 3, 1, 1],
 [7, 2, 3, 2, 1],
 [6, 2, 3, 1]]
If the lists were all the same length, or if we pick a subset, it is possible to construct a 2d array:
In [125]: arr[[0,3,6,9]]
Out[125]:
array([list([4, 3, 6, 2]), list([4, 4, 4, 3]), list([1, 1, 1, 1]),
       list([6, 2, 3, 1])], dtype=object)
In [126]: np.stack(arr[[0,3,6,9]])
Out[126]:
array([[4, 3, 6, 2],
       [4, 4, 4, 3],
       [1, 1, 1, 1],
       [6, 2, 3, 1]])
Padding and slicing could also be used to force the lists to matching lengths - but that could mean losing information.
But without knowing what dtw.distance_matrix expects (looks like it wants a 2d numeric array), or what these lists represent, I can't go further.
The fundamental point is that your dataframe contains lists that vary in length.
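If a true 2d numeric array is required anyway, one option (a sketch; the zero fill value is an arbitrary choice here) is to pad every list to the longest length before building the array:

lists = arr.tolist()
maxlen = max(len(row) for row in lists)
padded = np.array([row + [0] * (maxlen - len(row)) for row in lists])
# padded.shape is (10, 6); the padding entries may distort downstream results

Whether padding, and with what value, is acceptable depends on what the computation does with the extra entries.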

convert numpy open mesh to coordinates

I'd like to turn an open mesh returned by the numpy ix_ routine into a list of coordinates.
E.g., for:
In[1]: m = np.ix_([0, 2, 4], [1, 3])
In[2]: m
Out[2]:
(array([[0],
        [2],
        [4]]), array([[1, 3]]))
What I would like is:
([0, 1], [0, 3], [2, 1], [2, 3], [4, 1], [4, 3])
I'm pretty sure I could hack it together with some iterating, unpacking and zipping, but I'm sure there must be a smart numpy way of achieving this...
Approach #1 Use np.meshgrid and then stack -
r,c = np.meshgrid(*m)
out = np.column_stack((r.ravel('F'), c.ravel('F') ))
Approach #2 Alternatively, with np.array() and then transposing, reshaping -
np.array(np.meshgrid(*m)).T.reshape(-1,len(m))
For a generic case with a generic number of arrays used within np.ix_, here are the modifications needed -
p = np.r_[2:0:-1,3:len(m)+1,0]
out = np.array(np.meshgrid(*m)).transpose(p).reshape(-1,len(m))
Sample runs -
Two arrays case :
In [376]: m = np.ix_([0, 2, 4], [1, 3])
In [377]: p = np.r_[2:0:-1,3:len(m)+1,0]
In [378]: np.array(np.meshgrid(*m)).transpose(p).reshape(-1,len(m))
Out[378]:
array([[0, 1],
       [0, 3],
       [2, 1],
       [2, 3],
       [4, 1],
       [4, 3]])
Three arrays case :
In [379]: m = np.ix_([0, 2, 4], [1, 3],[6,5,9])
In [380]: p = np.r_[2:0:-1,3:len(m)+1,0]
In [381]: np.array(np.meshgrid(*m)).transpose(p).reshape(-1,len(m))
Out[381]:
array([[0, 1, 6],
       [0, 1, 5],
       [0, 1, 9],
       [0, 3, 6],
       [0, 3, 5],
       [0, 3, 9],
       [2, 1, 6],
       [2, 1, 5],
       [2, 1, 9],
       [2, 3, 6],
       [2, 3, 5],
       [2, 3, 9],
       [4, 1, 6],
       [4, 1, 5],
       [4, 1, 9],
       [4, 3, 6],
       [4, 3, 5],
       [4, 3, 9]])
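Another possible approach, sketched here as an alternative to the answers above: broadcast the open mesh to full arrays with np.broadcast_arrays, stack along a new last axis, and reshape. This yields the coordinates directly in the requested order:

m = np.ix_([0, 2, 4], [1, 3])
out = np.stack(np.broadcast_arrays(*m), axis=-1).reshape(-1, len(m))
# array([[0, 1],
#        [0, 3],
#        [2, 1],
#        [2, 3],
#        [4, 1],
#        [4, 3]])

The same two lines work unchanged for any number of arrays passed to np.ix_.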

Combinatorics: list with alternative elements

I have a Python list whose elements are 1D numpy arrays, each with one or more entries. Consider the entries of each array as alternatives for the respective list element.
An example:
[array([1]),array([2]),array([2,3]),array([3]),array([4]),array([3,4,5])]
I want two things:
1) All combinations regarding the alternatives:
array([[1,2,2,3,4,3],
       [1,2,3,3,4,3],
       [1,2,2,3,4,4],
       [1,2,3,3,4,4],
       [1,2,2,3,4,5],
       [1,2,3,3,4,5]])
2) The combination that has the fewest repetitions:
array([1,2,2,3,4,5])
or
array([1,2,3,3,4,5]).
The second should not be so hard to get once one has the first, but I am not sure.
I would also like to use this for more complex setups like
datasets_complete = [("iris1", iris1), ("iris2", iris2)]
percentages = [0.05, 0.1, 0.2, 0.5]
imputers = [SimpleFill(), KNN(k=3), SoftImpute(), MICE()]
gridWidths = [0.1, 0.2]
seq = [datasets_complete, percentages, imputers, gridWidths]
testgrid = all_combinations(seq)
where iris1 and iris2 are pandas DataFrames.
Here's a NumPy based approach -
def all_combs(a):  # Part-1
    num_combs = np.prod(list(map(len,a)))
    return np.array(np.meshgrid(*a)).reshape(-1,num_combs).T

def get_minrep_combs(a):  # Part-2
    out = all_combs(a)
    counts = (np.diff(np.sort(out,axis=1),axis=1)==0).sum(1)
    return out[counts == counts.min()]
Sample run -
In [161]: a = [np.array([1]),np.array([2]),np.array([2,3]),np.array([3]),\
...: np.array([4]),np.array([3,4,5])]
In [162]: all_combs(a) # Part-1 results
Out[162]:
array([[1, 2, 2, 3, 4, 3],
       [1, 2, 2, 3, 4, 4],
       [1, 2, 2, 3, 4, 5],
       [1, 2, 3, 3, 4, 3],
       [1, 2, 3, 3, 4, 4],
       [1, 2, 3, 3, 4, 5]])
In [163]: get_minrep_combs(a) # Part-2 results
Out[163]:
array([[1, 2, 2, 3, 4, 5],
       [1, 2, 3, 3, 4, 5]])
Just to give you a sense of all_combs, here are a couple of more "normal" sample runs -
In [166]: a = [np.array([2,3]), np.array([5,6,7])]
In [167]: all_combs(a)
Out[167]:
array([[2, 5],
       [3, 5],
       [2, 6],
       [3, 6],
       [2, 7],
       [3, 7]])
In [164]: a = [np.array([2,3,4]), np.array([5,6,7,9])]
In [165]: all_combs(a)
Out[165]:
array([[2, 5],
       [3, 5],
       [4, 5],
       [2, 6],
       [3, 6],
       [4, 6],
       [2, 7],
       [3, 7],
       [4, 7],
       [2, 9],
       [3, 9],
       [4, 9]])
For performance
We can avoid the transpose in part-1, perform the part-2 operations along the columns (axis=0), and use slicing to avoid np.diff, giving one optimized version, like so -
def get_minrep_combs_optimized(a):  # Part-1,2
    num_combs = np.prod(list(map(len,a)))
    out = np.array(np.meshgrid(*a)).reshape(-1,num_combs)
    sorted_out = np.sort(out,axis=0)
    counts = (sorted_out[1:] == sorted_out[:-1]).sum(0)
    return out[:,counts == counts.min()].T
Sample run -
In [188]: a = [np.array([1]),np.array([2]),np.array([2,3]),np.array([3]),\
...: np.array([4]),np.array([3,4,5])]
In [189]: get_minrep_combs_optimized(a)
Out[189]:
array([[1, 2, 2, 3, 4, 5],
       [1, 2, 3, 3, 4, 5]])
Runtime test
Here's one way to create sample input data with up to 3 elements per array, and with some matching values across the arrays -
In [42]: lens = np.random.randint(1,4,(20))
In [43]: a = [np.random.randint(1,10,L) for L in lens]
In [44]: lens
Out[44]: array([1, 1, 2, 2, 2, 2, 1, 2, 3, 2, 1, 2, 2, 3, 1, 1, 3, 2, 2, 3])
In [45]: a
Out[45]:
[array([8]),
 array([8]),
 array([7, 9]),
 array([5, 5]),
 array([6, 4]),
 array([3, 1]),
 array([8]),
 array([1, 9]),
 array([9, 5, 7]),
 array([1, 1]),
 array([3]),
 array([1, 5]),
 array([5, 5]),
 array([7, 9, 2]),
 array([5]),
 array([1]),
 array([3, 2, 9]),
 array([3, 7]),
 array([5, 3]),
 array([2, 7, 3])]
Timings -
In [46]: %timeit leastReps(combinations(a)) ##Daniel Forsman's soln
1 loops, best of 3: 330 ms per loop
In [47]: %timeit get_minrep_combs_optimized(a)
10 loops, best of 3: 28.7 ms per loop
Let's have more matches -
In [50]: a = [np.random.randint(1,4,L) for L in lens]
In [51]: %timeit leastReps(combinations(a)) ##Daniel Forsman's soln
1 loops, best of 3: 328 ms per loop
In [52]: %timeit get_minrep_combs_optimized(a)
10 loops, best of 3: 29.5 ms per loop
The performance difference doesn't change much.
You could use the itertools.product function like this:
import itertools
from numpy import array
test = [array([1]),array([2]),array([2,3]),array([3]),array([4]),array([3,4,5])]
combinations = [list(tup) for tup in itertools.product(*test)]
print(combinations)
This returns:
[[1, 2, 2, 3, 4, 3],
 [1, 2, 2, 3, 4, 4],
 [1, 2, 2, 3, 4, 5],
 [1, 2, 3, 3, 4, 3],
 [1, 2, 3, 3, 4, 4],
 [1, 2, 3, 3, 4, 5]]
Part number 2 does not have a unique solution, as there can be ties...
def combinations(arrList):
    mesh = np.meshgrid(*arrList)
    mesh = [arr.ravel() for arr in mesh]
    return np.array(mesh).T

def leastReps(combs):
    uniques = np.array([np.unique(arr).size for arr in list(combs)])
    mostUni = (uniques == np.max(uniques))
    return combs[mostUni]
The only differences between mine and Divakar's are that I don't need to calculate the number of products in advance, and that I select for most uniques while he selects for fewest repetitions, which should be equivalent.
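For completeness, part 2 can also be done without NumPy by scoring each product by its number of repeated values. A sketch building on the itertools answer above (ties are kept, matching the NumPy answers):

import itertools

test = [[1], [2], [2, 3], [3], [4], [3, 4, 5]]
combos = [list(tup) for tup in itertools.product(*test)]

def reps(c):
    # number of repeated values in one combination
    return len(c) - len(set(c))

best = min(reps(c) for c in combos)
result = [c for c in combos if reps(c) == best]
# [[1, 2, 2, 3, 4, 5], [1, 2, 3, 3, 4, 5]]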

numpy delete list element from list of lists

I have an array of numpy arrays:
a = [[1, 2, 3, 4], [1, 2, 3, 5], [2, 5, 4, 3], [5, 2, 3, 1]]
I need to find and remove a particular list from a:
rem = [1,2,3,5]
numpy.delete(a,rem) does not return the correct results. I need to be able to return:
[[1, 2, 3, 4], [2, 5, 4, 3], [5, 2, 3, 1]]
is this possible with numpy?
A list comprehension can achieve this.
rem = [1,2,3,5]
a = [[1, 2, 3, 4], [1, 2, 3, 5], [2, 5, 4, 3], [5, 2, 3, 1]]
a = [x for x in a if x != rem]
outputs
[[1, 2, 3, 4], [2, 5, 4, 3], [5, 2, 3, 1]]
Numpy arrays do not support random deletion by element. Similar to strings in Python, you need to generate a new array to delete a single or multiple sub elements.
Given:
>>> a
array([[1, 2, 3, 4],
       [1, 2, 3, 5],
       [2, 5, 4, 3],
       [5, 2, 3, 1]])
>>> rem
array([1, 2, 3, 5])
You can get each matching sub array and create a new array from that:
>>> a = np.array([sa for sa in a if not np.all(sa == rem)])
>>> a
array([[1, 2, 3, 4],
       [2, 5, 4, 3],
       [5, 2, 3, 1]])
To use np.delete, you would use an index and not a match, so:
>>> a
array([[1, 2, 3, 4],
       [1, 2, 3, 5],
       [2, 5, 4, 3],
       [5, 2, 3, 1]])
>>> np.delete(a, 1, 0) # delete element 1, axis 0
array([[1, 2, 3, 4],
       [2, 5, 4, 3],
       [5, 2, 3, 1]])
But you can't loop over the array and delete elements...
You can, however, pass multiple indices to np.delete; you just need to find the matching sub-arrays first:
>>> a
array([[1, 2, 3, 5],
       [1, 2, 3, 5],
       [2, 5, 4, 3],
       [5, 2, 3, 1]])
>>> np.delete(a, [i for i, sa in enumerate(a) if np.all(sa==rem)], 0)
array([[2, 5, 4, 3],
       [5, 2, 3, 1]])
And given that same a, you can have an all numpy solution by using np.where:
>>> np.delete(a, np.where((a == rem).all(axis=1)), 0)
array([[2, 5, 4, 3],
       [5, 2, 3, 1]])
Did you try list remove?
In [84]: a = [[1, 2, 3, 4], [1, 2, 3, 5], [2, 5, 4, 3], [5, 2, 3, 1]]
In [85]: a
Out[85]: [[1, 2, 3, 4], [1, 2, 3, 5], [2, 5, 4, 3], [5, 2, 3, 1]]
In [86]: rem = [1,2,3,5]
In [87]: a.remove(rem)
In [88]: a
Out[88]: [[1, 2, 3, 4], [2, 5, 4, 3], [5, 2, 3, 1]]
remove matches on value.
np.delete works with an index, not value. Also it returns a copy; it does not act in place. And the result is an array, not a nested list (np.delete converts the input to an array before operating on it).
In [92]: a = [[1, 2, 3, 4], [1, 2, 3, 5], [2, 5, 4, 3], [5, 2, 3, 1]]
In [93]: a1=np.delete(a,1, axis=0)
In [94]: a1
Out[94]:
array([[1, 2, 3, 4],
       [2, 5, 4, 3],
       [5, 2, 3, 1]])
This is more like list pop:
In [96]: a = [[1, 2, 3, 4], [1, 2, 3, 5], [2, 5, 4, 3], [5, 2, 3, 1]]
In [97]: a.pop(1)
Out[97]: [1, 2, 3, 5]
In [98]: a
Out[98]: [[1, 2, 3, 4], [2, 5, 4, 3], [5, 2, 3, 1]]
To delete by value you first need to find the index of the desired row. With integer arrays that's not too hard. With floats it is trickier.
=========
But you don't need to use delete to do this in numpy; boolean indexing works:
In [119]: a = [[1, 2, 3, 4], [1, 2, 3, 5], [2, 5, 4, 3], [5, 2, 3, 1]]
In [120]: A = np.array(a) # got to work with array, not list
In [121]: rem=np.array([1,2,3,5])
Simple comparison; rem is broadcast to match the rows:
In [122]: A==rem
Out[122]:
array([[ True,  True,  True, False],
       [ True,  True,  True,  True],
       [False, False, False, False],
       [False,  True,  True, False]], dtype=bool)
find the row where all elements match - this is the one we want to remove
In [123]: (A==rem).all(axis=1)
Out[123]: array([False, True, False, False], dtype=bool)
Just negate it, and use it to index A:
In [124]: A[~(A==rem).all(axis=1),:]
Out[124]:
array([[1, 2, 3, 4],
       [2, 5, 4, 3],
       [5, 2, 3, 1]])
(the original A is not changed).
np.where can be used to convert the boolean (or its inverse) to indices. Sometimes that's handy, but usually it isn't required.
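A quick illustration, with the A and rem above:
mask = (A == rem).all(axis=1)
np.where(mask)[0]   # array([1]) - the index of the matching row
A[~mask]            # same rows as np.delete(A, np.where(mask), 0)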

How to replace each array element by 4 copies in Python?

How do I use numpy / Python array routines to do this?
E.g. if I have the array [[1, 2, 3, 4]], the output should be
[[1, 1, 2, 2],
 [1, 1, 2, 2],
 [3, 3, 4, 4],
 [3, 3, 4, 4]]
Thus, the output is an array with double the row and column dimensions, and each element from the original array appears four times, as a 2x2 block. What I have so far is this:
def operation(mat, step=2):
    result = np.array(mat, copy=True)
    result[::2, ::2] = mat
    return result
This gives me the array
[[ 98.+0.j   0.+0.j  40.+0.j   0.+0.j]
 [  0.+0.j   0.+0.j   0.+0.j   0.+0.j]
 [ 29.+0.j   0.+0.j  54.+0.j   0.+0.j]
 [  0.+0.j   0.+0.j   0.+0.j   0.+0.j]]
for the input
[[98 40]
 [29 54]]
The array will always be of even dimensions.
Use np.repeat():
In [9]: A = np.array([[1, 2, 3, 4]])
In [10]: np.repeat(np.repeat(A, 2).reshape(2, 4), 2, 0)
Out[10]:
array([[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]])
Explanation:
First off, you can repeat the array items:
In [30]: np.repeat(A, 3)
Out[30]: array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4])
then all you need is to reshape the result (based on your expected result this can differ):
In [32]: np.repeat(A, 3).reshape(2, 3*2)
Out[32]:
array([[1, 1, 1, 2, 2, 2],
       [3, 3, 3, 4, 4, 4]])
And now you should repeat the result along the first axis:
In [34]: np.repeat(np.repeat(A, 3).reshape(2, 3*2), 3, 0)
Out[34]:
array([[1, 1, 1, 2, 2, 2],
       [1, 1, 1, 2, 2, 2],
       [1, 1, 1, 2, 2, 2],
       [3, 3, 3, 4, 4, 4],
       [3, 3, 3, 4, 4, 4],
       [3, 3, 3, 4, 4, 4]])
Another approach could be with np.kron -
np.kron(a.reshape(-1,2),np.ones((2,2),dtype=int))
Basically, we reshape the input array into a 2D array, keeping the second axis at length 2. Then np.kron replicates each element along both rows and columns, twice in each direction, using that array: np.ones((2,2), dtype=int).
Sample run -
In [45]: a
Out[45]: array([7, 5, 4, 2, 8, 6])
In [46]: np.kron(a.reshape(-1,2),np.ones((2,2),dtype=int))
Out[46]:
array([[7, 7, 5, 5],
       [7, 7, 5, 5],
       [4, 4, 2, 2],
       [4, 4, 2, 2],
       [8, 8, 6, 6],
       [8, 8, 6, 6]])
If you would like to have 4 rows, use a.reshape(2,-1) instead.
The better solution is to use numpy, but you could also use iteration:
a = [[1, 2, 3, 4]]
v = iter(a[0])
b = []
for i in v:
    n = next(v)
    [b.append([i for k in range(2)] + [n for k in range(2)]) for j in range(2)]
print(b)
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
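If the input is already the 2D array you ultimately want to expand (like the [[98, 40], [29, 54]] example in the question), repeating along each axis in turn does the whole job in one step. A sketch:

import numpy as np

a = np.array([[98, 40],
              [29, 54]])
np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)
# array([[98, 98, 40, 40],
#        [98, 98, 40, 40],
#        [29, 29, 54, 54],
#        [29, 29, 54, 54]])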
