Split a list into evenly distributed chunks - python

I have a huge list of about 12,000 elements, separated by a ) after every 20 elements.
An example of the first three:
(((76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 221, 232, 233, 242, 244, 248, 251), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 221, 229, 232, 233, 242, 244, 248),
How do I get another list with 600 sublists, one for each of those 20-element groupings?

We have a recursively nested iterable, and it could be huge.
In [1]: (((1, 2, 3, 4), 5, 6, 7, 8), 9, 10, 11, 12)
Out[1]: (((1, 2, 3, 4), 5, 6, 7, 8), 9, 10, 11, 12)
Since the data structure is recursive, a recursive function would solve this elegantly, but would also consume an arbitrary amount of stack, which is rude.
Such a function would however be a classical example of tail-recursive function, i.e. one whose tail call is easy to eliminate manually. In fact, if we let the Python generator mechanism take care of the intermediary results for us, the resulting function is almost as elegant as the recursive one, while requiring very little RAM (stack or otherwise). That generator function could also be used for other purposes than creating the target list, and we do not have to restrict it to tuples.
def unpack_recursive_iterable(data):
    while True:
        head, *tail = data if data else (None,)
        try:
            # Can we go deeper?
            len(head)
        except TypeError:
            # Deepest level
            yield list(data)
            return
        yield tail
        data = head
Now, assuming we want a list of lists, in the order of the original iterable, we can create an additional adapter function:
def list_from_recursive_iterable(data):
    unpacked = list(unpack_recursive_iterable(data))
    unpacked.reverse()
    return unpacked
Validation tests (the solution works for any kind of iterable, and sub-parts may be empty):
In [4]: list_from_recursive_iterable((((1, 2, 3, 4), 5, 6, 7, 8), 9, 10, 11, 12))
Out[4]: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
In [5]: list_from_recursive_iterable(())
Out[5]: []
In [6]: list_from_recursive_iterable(((((1,), 2), 3), 4))
Out[6]: [[1], [2], [3], [4]]
In [7]: list_from_recursive_iterable((((),),))
Out[7]: [[], [], []]
In [8]: list_from_recursive_iterable(((1,),))
Out[8]: [[1], []]
In [9]: list_from_recursive_iterable({1})
Out[9]: [[1]]
In [10]: list_from_recursive_iterable({1:2})
Out[10]: [[1]]
In [11]: list_from_recursive_iterable({1,2})
Out[11]: [[1, 2]]
In [12]: list_from_recursive_iterable([[1],2])
Out[12]: [[1], [2]]
Note that this solution only fulfills the OP's requirement of "evenly distributed chunks" if the input itself is evenly distributed, i.e. if every group of scalars in the input data is equally sized. That requirement is met by the OP's data.
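A quick illustration of that caveat (my own example, using the functions above, not part of the original answer): an unevenly nested input simply produces unevenly sized chunks.
>>> list_from_recursive_iterable(((1, 2), 3))
[[1, 2], [3]]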

Maybe I'm reading the question wrong, but you could probably do some string manipulation:
a = ((((76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248,
        251, 261, 315, 329), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217,
       232, 233, 242, 244, 248, 251, 261), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211,
      214, 217, 221, 232, 233, 242, 244, 248, 251), 76, 151, 152, 158, 185, 193, 193, 200,
     208, 211, 214, 217, 221, 229, 232, 233, 242, 244, 248)
# Convert to string
astr = f"{a}"
# Output list
lines = []
# Iterate over split lines
for line in astr.split("),"):
    # Check to see if a left parenthesis starts the line
    if line.startswith("("):
        # Subindex the line by the number of left parentheses
        line = line[line.count("("):]
    # Remove trailing parenthesis
    if line.endswith(")"):
        line = line[:-1]
    # Append the line to the lines list, stripping any whitespace away.
    lines.append(line.strip())
Output:
['76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329',
'76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261',
'76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 221, 232, 233, 242, 244, 248, 251',
'76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 221, 229, 232, 233, 242, 244, 248']
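If actual lists of integers are wanted rather than comma-separated strings (an extra step, not part of the original answer), each entry can be split and converted:
numbers = [[int(token) for token in line.split(",")] for line in lines]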

You can iteratively 'unpack' the tuple into two parts, the second part being the 20 (or N) values of the current group.
data = (((76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329), 76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329), 76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329)
result = []
if data and not isinstance(data[0], tuple):
    result.append(data)
else:
    part_1, *part_2 = data
    result.append(part_2)
    while part_1 and isinstance(part_1[0], tuple):
        part_1, *part_2 = part_1
        result.append(part_2)
    result.append(list(part_1))
print(result)
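A quick check with a smaller input (my own illustration, not from the original answer) shows that the groups come out outermost-first:
data = (((1, 2), 3, 4), 5, 6)   # feeding this into the code above prints [[5, 6], [3, 4], [1, 2]]
result.reverse()                # reverses in place, if the innermost group should come first instead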


I need to save elements of a list in separate lists

I have this list:
[29, 64, 65, 66, 128, 129, 130, 166, 167, 168, 184, 185, 186, 215, 216, 217, 237, 238, 239, 349, 350, 351, 443, 483, 484, 485, 495, 496, 497, 526, 527, 528, 542, 543, 544, 564, 565, 566]
and I want to separate it so that whenever the difference between an element and the next one is different from 1, the code starts saving the following elements in another list, like
list1=[29]
list2=[64, 65, 66]
list3=[128, 129, 130]
list4=[166, 167, 168]
and so on until the end.
Using numpy you could do this in a single line:
import numpy as np
lst = [29, 64, 65, 66, 128, 129, 130, 166, 167, 168, 184, 185, 186, 215, 216, 217, 237, 238, 239, 349, 350, 351, 443, 483, 484, 485, 495, 496, 497, 526, 527, 528, 542, 543, 544, 564, 565, 566]
[x.tolist() for x in np.split(lst, np.where(np.diff(lst) > 1)[0]+1)]
Output:
[[29],
[64, 65, 66],
[128, 129, 130],
[166, 167, 168],
[184, 185, 186],
[215, 216, 217],
[237, 238, 239],
[349, 350, 351],
[443],
[483, 484, 485],
[495, 496, 497],
[526, 527, 528],
[542, 543, 544],
[564, 565, 566]]
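To see what the one-liner is doing (my own breakdown, on a made-up smaller list): np.diff finds the gaps between neighbours, np.where picks the positions where a gap is larger than 1, and np.split cuts the list at those positions.
small = [1, 2, 5, 6, 9]
np.diff(small)                       # array([1, 3, 1, 3])
np.where(np.diff(small) > 1)[0] + 1  # array([2, 4])
np.split(small, [2, 4])              # [array([1, 2]), array([5, 6]), array([9])]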
Edit 1: To store each list in separate variables (not recommended)
sub_lists = [x.tolist() for x in np.split(lst, np.where(np.diff(lst) > 1)[0]+1)]
for i in range(1, len(sub_lists)+1):
    globals()['list_%s' % i] = sub_lists[i-1]
Output:
print(list_1)
>> [29]
print(list_2)
>> [64, 65, 66]
Note:
It is not recommended to use individual variables, for multiple reasons, especially in scenarios where the number of variables can explode depending on the data.
I believe this should be what you are looking for:
sort_me = [29, 64, 65, 66, 128, 129, 130, 166, 167, 168, 184, 185, 186, 215, 216, 217, 237, 238, 239, 349, 350, 351, 443, 483, 484, 485, 495, 496, 497, 526, 527, 528, 542, 543, 544, 564, 565, 566]
sorted_lists = list()
last_num = sort_me[0]  # first element of the list, used as the last number that has been evaluated
current_list = [sort_me[0]]  # first element of the list, prepared for the for loop
# iterate over all numbers in the list
for i in range(1, len(sort_me)):
    num = sort_me[i]  # get the next number
    if num == last_num + 1:  # check if it meets the criteria to be put into the current list
        current_list.append(num)  # add number to the list
    else:
        sorted_lists.append(current_list)  # add list to list of lists (say that 10 times fast)
        current_list = [num]  # start new list
    last_num = num  # save last num checked as this iteration's number
if len(current_list) > 0:
    sorted_lists.append(current_list)
# print them to show that the lists are correct (change this bit to do what you want with them)
for ls in sorted_lists:
    print(ls)
EDIT: added the line that was missing the last list; it should be working fine now.

Grabbing values from masked image areas OpenCV Python [duplicate]

I take a template and sample 8 points from it (now 36). Each little dot is a mask; I take the average value from that dotted area and add it to a list.
It ends up looking something like this:
[203, 176, 160, 174, 185, 185, 152, 136, 131, 131, 131, 131, 131, 137, 144, 133, 131, 130, 130, 130, 131, 130, 139, 160, 168, 150, 141, 160, 186, 201, 209, 214, 216, 216, 216, 217]
[194, 207, 216, 217, 217, 217, 217, 217, 217, 216, 214, 170, 148, 159, 171, 175, 165, 136, 131, 131, 130, 131, 149, 170, 151, 132, 131, 131, 131, 131, 134, 169, 172, 141, 141, 172]
[131, 131, 131, 141, 171, 164, 133, 141, 178, 197, 213, 216, 216, 216, 217, 217, 217, 217, 217, 216, 175, 153, 163, 174, 183, 171, 142, 132, 130, 130, 131, 161, 170, 149, 131, 131]
[131, 131, 131, 131, 153, 151, 136, 130, 131, 131, 131, 130, 130, 150, 164, 149, 134, 145, 172, 195, 205, 215, 216, 217, 217, 217, 216, 216, 195, 161, 168, 179, 192, 171, 152, 131]
It's easier to visualize them with a plot. The RED line refers directly to the first sad teenager, who is standing upright (also known as the zero-degree position). I need to find some way of comparing a line or list to the RED line/list. That way I can figure out what rotation the other teenagers are at, given that the RED line is the zero.
The values are not the same but very close to each other, and the lines are obviously just shifted, meaning that the images are rotated. Are there any functions I can call that would do this kind of thing, or some way I could process these to give me the rotation? Thank you.
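One common way to estimate such a shift (my own sketch, not part of the original post; it assumes each list is roughly a circular shift of the reference) is to roll one profile against the other and take the offset with the smallest difference, then map that offset to an angle:
import numpy as np

def estimate_shift(reference, sample):
    # circular offset (in samples) that best aligns sample with reference
    reference = np.asarray(reference, dtype=float)
    sample = np.asarray(sample, dtype=float)
    errors = [np.sum((np.roll(sample, k) - reference) ** 2) for k in range(len(sample))]
    return int(np.argmin(errors))

# hypothetical usage: with 36 sample points, one step corresponds to 360/36 = 10 degrees
# angle = estimate_shift(red_line, other_line) * 360 / 36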

How to manually reproject from a specific projection to lat/lon

I have an array of Euro-Cordex data with a rotated pole projection, read from a NetCDF file:
grid_mapping_name: rotated_latitude_longitude
grid_north_pole_latitude: 39.25
grid_north_pole_longitude: -162.0
float64 rlon(rlon)
standard_name: grid_longitude
long_name: longitude in rotated pole grid
units: degrees
axis: X
unlimited dimensions:
current shape = (424,)
filling on, default _FillValue of 9.969209968386869e+36 used),
('rlat', <class 'netCDF4._netCDF4.Variable'>
float64 rlat(rlat)
standard_name: grid_latitude
long_name: latitude in rotated pole grid
units: degrees
axis: Y
unlimited dimensions:
current shape = (412,)
The dimensions are rlon (424) and rlat (412). I used some code to convert these rotated lat/lons into normal lat/lons. Now I have two matrices of shape (424, 412): the first holds the longitude coordinates and the second the latitude coordinates.
Now, I want to convert the initial image (424, 412) to an image with the extents that I want: min lon 25, max lon 45, min lat 35, max lat 43.
lats = np.empty((len(rlat), len(rlon)))
lons = np.empty((len(rlat), len(rlon)))
for j in range(len(rlon)):
    for i in range(len(rlat)):
        lons[i, j] = unrot_lon(rlat[i], rlon[j], 39.25, -162.0)
        lats[i, j] = unrot_lat(rlat[i], rlon[j], 39.25, -162.0)
a = lons <= 45
aa = lons >= 25
aaa = a * aa
b = lats <= 43
bb = lats >= 35
bbb = b * bb
c = bbb * aaa
The last matrix (c) is a boolean matrix that marks the pixels I am interested in, according to the extents I defined:
Now, I want to do two things, and I am failing at both:
First, I would like to plot this image with the boundaries on a basemap. For that I located the llcrnrlon, llcrnrlat, urcrnrlon and urcrnrlat based on the boolean matrix and by using some imagination:
llcrlon = 25.02#ok
llcrlat = np.nanmin(lats[c])# ok
urcrlon = np.nanmax(lons[c])#ok
urcrlat = np.nanmax(lats[np.where(lons==urcrlon)])#ok
Then I used the following codes to plot the image on a basemap:
lonss = np.linspace(np.min(lons[c]), np.max(lons[c]), (424-306+1))
latss = np.linspace(np.min(lats[c]), np.max(lats[c]), (170-73+1))
pl.figure(dpi = 250)
map = Basemap(projection='rotpole',llcrnrlon=llcrlon,llcrnrlat=llcrlat,urcrnrlon=urcrlon,urcrnrlat=urcrlat,resolution='i', o_lat_p = 39.25, o_lon_p =-162., lon_0=35, lat_0=45)
map.drawcoastlines()
map.drawstates()
parallels = np.arange(35,43,2.) #
meridians = np.arange(25,45,2.) #
map.drawparallels(parallels,labels=[1,0,0,0],fontsize=10)
map.drawmeridians(meridians,labels=[0,0,0,1],fontsize=10)
lons, lats = np.meshgrid(lonss, latss)
x, y = map(lons, lats)
mapp = map.pcolormesh(x,y,WTD[73:170, 306:])
So, the map is not well-fit to the basemap projection. I would like to find out what is wrong.
Second, I would like to reproject this map to normal lat/lon. For that, I use the following code to define a new grid:
targ_lons = np.linspace(25, 45, 170)
targ_lats = np.linspace(43, 35, 70)
T_Map = np.empty((len(targ_lats), len(targ_lons)))
T_Map[:] = np.nan
Then I compute the differences between the lon/lat matrices I produced in the beginning and my newly defined grid, and use the indices where the difference is below a specific threshold to fill in the new gridded image.
for i in range(len(targ_lons)):
    for j in range(len(targ_lats)):
        lon_extr = np.where(abs(lons - targ_lons[i]) < 0.01)
        lat_extr = np.where(abs(lats - targ_lats[j]) < 0.01)
So here, if we have i=0 and j=0,
then:
lon_extr = (array([ 7, 16, 25, 34, 35, 43, 44, 53, 63, 72, 73, 82, 83, 92, 93, 102, 103, 112, 113, 122, 123, 133, 143, 153, 154, 164,
174, 175, 185, 195, 196, 206, 217, 227, 238, 248, 259, 269, 280,
290, 300, 321, 331, 341, 360, 370, 389], dtype=int64),
array([320, 319, 318, 317, 317, 316, 316, 315, 314, 313, 313, 312, 312,
311, 311, 310, 310, 309, 309, 308, 308, 307, 306, 305, 305, 304,
303, 303, 302, 301, 301, 300, 299, 298, 297, 296, 295, 294, 293,
292, 291, 289, 288, 287, 285, 284, 282], dtype=int64))
and
lat_extr=(array([143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143,
143, 143, 143, 143, 144, 144, 144, 144, 144, 144, 145, 145, 145,
145, 146, 146, 146, 146, 147, 147, 147, 148, 148, 149, 149, 150,
150, 151, 151, 152, 152, 153, 153, 154, 154, 155, 156, 156, 157,
157, 158, 158, 159, 159, 160, 160, 161, 162, 162, 163, 164, 164,
165, 167, 168, 168, 169, 169, 170, 170, 171, 174, 175, 177, 178,
180, 181, 183, 186, 190, 191, 192, 204, 205, 210, 214], dtype=int64),
array([251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263,
264, 265, 266, 267, 227, 228, 229, 289, 290, 291, 214, 215, 303,
304, 204, 205, 313, 314, 196, 321, 322, 189, 329, 182, 336, 176,
342, 170, 348, 165, 353, 160, 358, 155, 363, 150, 146, 372, 142,
376, 138, 380, 134, 384, 130, 388, 126, 123, 395, 119, 116, 402,
405, 106, 103, 415, 100, 418, 97, 421, 94, 86, 83, 78, 75,
70, 68, 63, 56, 47, 45, 43, 19, 17, 8, 1], dtype=int64))
Now, I need to be able to pull out the common coordinates and fill in the T_Map. I'm confused at this point. Is there a function or an easy way to pull out the common lat/lon from these two arrays?
The problem was solved. I used the longitude and latitude matrices to find the nearest pixels (closer than the resolution, which is 0.11 degrees in this case) and filled in the newly defined grid. Hope this helps others who have a similar problem:
# (45-25)*111/12.5
# (43-35)*110/12.5
targ_lons = np.linspace(25, 45, 170)
targ_lats = np.linspace(43, 35, 70)
T_Map = np.empty((len(targ_lats), len(targ_lons)))
T_Map[:] = np.nan
for i in range(len(targ_lons)):
    for j in range(len(targ_lats)):
        lon_extr = np.where(abs(lons - targ_lons[i]) < 0.1)
        lat_extr = np.where(abs(lats[lon_extr] - targ_lats[j]) < 0.1)
        if len(lat_extr[0]) > 0:
            point_to_extract = np.where(lats == lats[lon_extr][lat_extr][0])
            T_Map[j, i] = WTD[point_to_extract]
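As an alternative to the nested np.where loops (my own suggestion, not part of the original solution; it assumes the lons, lats and WTD arrays defined above), a KD-tree over the unrotated coordinates does the same nearest-neighbour lookup in one pass:
import numpy as np
from scipy.spatial import cKDTree

# build a tree over all source pixel coordinates (lon, lat pairs)
tree = cKDTree(np.column_stack([lons.ravel(), lats.ravel()]))
# target grid, same as in the answer above
grid_lon, grid_lat = np.meshgrid(np.linspace(25, 45, 170), np.linspace(43, 35, 70))
# nearest source pixel for every target cell
dist, idx = tree.query(np.column_stack([grid_lon.ravel(), grid_lat.ravel()]))
T_Map = WTD.ravel()[idx].astype(float)
T_Map[dist > 0.11] = np.nan  # leave cells with no source pixel within the 0.11 degree resolution empty
T_Map = T_Map.reshape(grid_lat.shape)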

Remove content of a list out of other list

I have some code that creates a list with numbers from 1 to 407. What I want to do is take the numbers in the "ultimate" and "super_rare" lists out of the "common" list. How can I do that? This is the general code I have.
import random

def common(x):
    list = []
    for i in range(1, x+1):
        list.append(i)
    return (list)
cid = common(407)
ultimate = [404, 200, 212, 15, 329, 214, 406, 259, 126, 160, 343, 180, 169, 297, 226, 305, 250, 373, 142, 357, 181, 113, 149, 399, 287, 341, 37, 284, 41, 328, 400, 217, 253, 204, 290, 18, 174, 36, 310, 303, 6, 108, 47, 298, 130]
super_rare = [183, 349, 134, 69, 103, 342, 83, 380, 93, 56, 86, 95, 147, 161, 403, 197, 215, 312, 375, 359, 263, 221, 340, 102, 153, 234, 54, 7, 238, 193, 90, 367, 197, 397, 33, 366, 334, 222, 394, 371, 313, 83, 276, 35, 351, 83, 347, 170, 57, 201, 137, 188, 179, 170, 65, 107, 234, 48, 2, 85, 74, 221, 23, 171, 101, 377, 63, 248, 102, 272, 129, 276, 86, 88, 51, 197, 248, 202, 244, 153, 138, 101, 330, 68, 368, 292, 340, 315, 185, 219, 381, 89, 274, 175, 385, 19, 257, 313, 191, 211]
def new_list(cid, ultimate):
    new_list = []
    for i in range(len(cid)):
        new_list.append(cid[i])
    for i in range(len(ultimate)):
        new_list.remove(ultimate[i])
    return (new_list)
#print (new_list(cid, ultimate))
cid_mod0 = new_list(cid, ultimate)
cid_mod1 = new_list(cid_mod0, super_rare)
print (cid_mod0)
Most of the prints and whatnot are just attempts to see if it's working.
I recommend using sets for this. You can check if an item is in a set in constant time. For example:
import random

def common(x):
    return list(range(1, x + 1))

cid = common(407)

ultimate = { 404, 200, ... }
super_rare = { 183, 349, ... }

def list_difference(l, s):
    return [elem for elem in l if elem not in s]

cid_mod0 = list_difference(cid, ultimate)
cid_mod1 = list_difference(cid_mod0, super_rare)
If you don't care about the order of your resulting list you can use a set for that as well for a bit more convenience:
import random

def common(x):
    return list(range(1, x + 1))

cid = set(common(407))

ultimate = { 404, 200, ... }
super_rare = { 183, 349, ... }

cid_mod0 = cid - ultimate
cid_mod1 = cid_mod0 - super_rare
Use this loop to remove the elements out of common that are in the super_rare and ultimate lists:
for x in reversed(range(len(cid))):
    # iterate in reverse so deleting an element does not shift the ones still to be checked
    if cid[x] in ultimate or cid[x] in super_rare:
        del cid[x]

print(cid)
The loop assumes you have a list named cid that is already established.
If you want to keep the original order of cid, you could try to use OrderedDict to convert cid into an ordered dict object and then remove the keys you want gone; the code would be something like:
from random import choices, seed
from collections import OrderedDict
seed(123)
ultimate = [404, 200, 212, 15, 329, 214, 406, 259, 126, 160, 343, 180, 169, 297, 226, 305, 250, 373, 142, 357, 181, 113, 149, 399, 287, 341, 37, 284, 41, 328, 400, 217, 253, 204, 290, 18, 174, 36, 310, 303, 6, 108, 47, 298, 130]
super_rare = [183, 349, 134, 69, 103, 342, 83, 380, 93, 56, 86, 95, 147, 161, 403, 197, 215, 312, 375, 359, 263, 221, 340, 102, 153, 234, 54, 7, 238, 193, 90, 367, 197, 397, 33, 366, 334, 222, 394, 371, 313, 83, 276, 35, 351, 83, 347, 170, 57, 201, 137, 188, 179, 170, 65, 107, 234, 48, 2, 85, 74, 221, 23, 171, 101, 377, 63, 248, 102, 272, 129, 276, 86, 88, 51, 197, 248, 202, 244, 153, 138, 101, 330, 68, 368, 292, 340, 315, 185, 219, 381, 89, 274, 175, 385, 19, 257, 313, 191, 211]
cid = OrderedDict.fromkeys(choices(range(407), k=407))
# use a real loop: map() is lazy in Python 3, so it would never actually pop anything,
# and pop(key, None) avoids a KeyError for numbers that did not end up in cid
for key in set(ultimate + super_rare):
    cid.pop(key, None)
result = cid.keys()
If you don't need the original order, you could try to convert cid to a plain dict; removing a key from a hashmap is super fast. The code would be something like:
cid = dict.fromkeys(range(407))
for key in set(ultimate + super_rare):
    cid.pop(key, None)
result = cid.keys()
Apart from the dictionary method, you can also try to convert everything into a set variable like the following:
result = set(range(407)) - set(ultimate) - set(super_rare)
Hope it helps.
You can create your target list, containing all the numbers from the target range except those in the ultimate and super_rare lists, with a list comprehension:
my_filtered_list = [i for i in range(1, 408) if i not in ultimate and i not in super_rare]
print(my_filtered_list)
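Since "i not in ultimate" scans the whole list for every candidate, converting the exclusion lists to sets first keeps the same comprehension but makes each membership test constant-time (a small optimisation, not part of the original answer):
excluded = set(ultimate) | set(super_rare)
my_filtered_list = [i for i in range(1, 408) if i not in excluded]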
Make a union set of both sets of numbers you want to exclude.
>>> su = set(ultimate) | set(super_rare)
Then filter the input list based on whether the value is not present in the set.
>>> list(filter(lambda i: i not in su, cid))
[1, 3, 4, 5, 8, 9, 10, 11, 12, 13, 14, 16, 17, 20, 21, 22, 24, 25, 26, 27, 28,
29, 30, 31, 32, 34, 38, 39, 40, 42, 43, 44, 45, 46, 49, 50, 52, 53, 55, 58, 59,
60, 61, 62, 64, 66, 67, 70, 71, 72, 73, 75, 76, 77, 78, 79, 80, 81, 82, 84, 87,
91, 92, 94, 96, 97, 98, 99, 100, 104, 105, 106, 109, 110, 111, 112, 114, 115,
116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 127, 128, 131, 132, 133, 135,
136, 139, 140, 141, 143, 144, 145, 146, 148, 150, 151, 152, 154, 155, 156, 157,
158, 159, 162, 163, 164, 165, 166, 167, 168, 172, 173, 176, 177, 178, 182, 184,
186, 187, 189, 190, 192, 194, 195, 196, 198, 199, 203, 205, 206, 207, 208, 209,
210, 213, 216, 218, 220, 223, 224, 225, 227, 228, 229, 230, 231, 232, 233, 235,
236, 237, 239, 240, 241, 242, 243, 245, 246, 247, 249, 251, 252, 254, 255, 256,
258, 260, 261, 262, 264, 265, 266, 267, 268, 269, 270, 271, 273, 275, 277, 278,
279, 280, 281, 282, 283, 285, 286, 288, 289, 291, 293, 294, 295, 296, 299, 300,
301, 302, 304, 306, 307, 308, 309, 311, 314, 316, 317, 318, 319, 320, 321, 322,
323, 324, 325, 326, 327, 331, 332, 333, 335, 336, 337, 338, 339, 344, 345, 346,
348, 350, 352, 353, 354, 355, 356, 358, 360, 361, 362, 363, 364, 365, 369, 370,
372, 374, 376, 378, 379, 382, 383, 384, 386, 387, 388, 389, 390, 391, 392, 393,
395, 396, 398, 401, 402, 405, 407]
If you don't want to use filter, just use a list comprehension.
>>> [v for v in cid if v not in su]
You could also do the whole thing with sets like
>>> list(set(cid) - (set(ultimate) | set(super_rare)))
Others have suggested this already (I take no credit). I'm not sure how strongly the order is guaranteed to come back right; it seems to be OK on my py2 and py3, but doing the last step as a list comprehension gives you the order absolutely guaranteed (not as an implementation detail) and won't need converting back to a list as a final step.
If you want to see the changes in the original list, you can just assign back to the original variable.
cid = [v for v in cid if v not in su]
This is assigning a different list to the same variable though, so other holders of references to that list won't see the changes. You can call id(cid) before and after the assignment to see that it's a different list.
If you wanted to assign back to the exact same list instance you can use
cid[:] = [v for v in cid if v not in su]
and the id will remain the same.
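A minimal illustration of the difference (my own example, not from the original answer):
cid = [1, 2, 3, 4]
alias = cid
cid = [v for v in cid if v != 2]     # rebinds cid; alias still sees [1, 2, 3, 4]
cid = [1, 2, 3, 4]
alias = cid
cid[:] = [v for v in cid if v != 2]  # mutates in place; alias now sees [1, 3, 4]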

OpenCV format knnMatch Descriptors

I am using OpenCV 2.4.9 Python knnMatch where the query descriptors come directly from detectAndCompute and are formatted correctly, but the train descriptors will come from a list I made in a different program.
When I get the descriptors from my other program, they look like:
[array([ 14, 21, 234, 147, 215, 115, 190, 215, 94, 231, 31, 34, 200,
124, 127, 104, 255, 123, 179, 147, 180, 240, 61, 226, 111, 95,
159, 131, 151, 127, 253, 231], dtype=uint8), array([162, 150, 101, 219, 117, 151, 173, 113, 93, 29, 81, 23, 232,
13, 60, 133, 221, 2, 147, 165, 242, 188, 120, 221, 39, 26,
154, 194, 87, 140, 245, 252], dtype=uint8)]
That would be 2 descriptors.
How can I format these so I do not get the "OpenCV Error: Unsupported format or combination of formats" error when matching these descriptors with those coming straight out of detectAndCompute? I have tried using np.asarray(list, np.float32) to no avail. If I do:
[[d for d in des] for des in list] with list as the train descriptors then the two lists will LOOK the same but I get the same error!
list = [[d for d in des] for des in list]
list = np.asarray(list, np.uint8)
for d in list:
    for x in d:
        x = x.astype(np.uint8)
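What often resolves this kind of format error (my own suggestion, not part of the original post; it assumes binary descriptors such as ORB's, and train_descriptor_list is a placeholder name for the loaded list) is stacking the Python list of 1-D arrays into a single contiguous 2-D uint8 array, which is the layout knnMatch expects for the train descriptors:
import numpy as np

# stack N descriptors of length 32 into one (N, 32) uint8 array
train_descriptors = np.vstack(train_descriptor_list).astype(np.uint8)
# this array can then be passed as the train side of knnMatch,
# e.g. matcher.knnMatch(query_des, train_descriptors, k=2)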
