Grabbing values from masked image areas OpenCV Python [duplicate]

I take a template and sample 8 points from it (now 36). Each little dot is a mask; I take the average value from that dotted area and append it to a list.
It ends up looking something like this:
[203, 176, 160, 174, 185, 185, 152, 136, 131, 131, 131, 131, 131, 137, 144, 133, 131, 130, 130, 130, 131, 130, 139, 160, 168, 150, 141, 160, 186, 201, 209, 214, 216, 216, 216, 217]
[194, 207, 216, 217, 217, 217, 217, 217, 217, 216, 214, 170, 148, 159, 171, 175, 165, 136, 131, 131, 130, 131, 149, 170, 151, 132, 131, 131, 131, 131, 134, 169, 172, 141, 141, 172]
[131, 131, 131, 141, 171, 164, 133, 141, 178, 197, 213, 216, 216, 216, 217, 217, 217, 217, 217, 216, 175, 153, 163, 174, 183, 171, 142, 132, 130, 130, 131, 161, 170, 149, 131, 131]
[131, 131, 131, 131, 153, 151, 136, 130, 131, 131, 131, 130, 130, 150, 164, 149, 134, 145, 172, 195, 205, 215, 216, 217, 217, 217, 216, 216, 195, 161, 168, 179, 192, 171, 152, 131]
It's easier to visualize them with a plot. The RED line corresponds to the 1st sad teenager, who is standing upright (also known as the zero-degree position). I need some way of comparing another line/list to the RED line/list, so I can figure out what rotation the other teenagers are at, given that the RED line is zero.
The values are not the same, but they are very close, and the lines are obviously just shifted copies of each other, meaning the images are rotated. Are there any functions I can call, or some way I can process these lists, that would give me the rotation? Thank you
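One standard trick for exactly this situation (a sketch, not code from the question) is circular cross-correlation: score every cyclic shift of the reference profile against the other profile and keep the best-matching shift; that shift times 360/N is the rotation. For 36 samples, pure Python is enough; `rotation_from_profiles` is a hypothetical helper name:

```python
def rotation_from_profiles(reference, sample):
    """Estimate rotation by finding the cyclic shift of `reference`
    that best lines up with `sample` (equal-length lists of N values)."""
    n = len(reference)
    best_shift, best_score = 0, float("-inf")
    for shift in range(n):
        # Dot product of the shifted reference against the sample
        score = sum(reference[(i + shift) % n] * sample[i] for i in range(n))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * 360.0 / n  # each sample step is 360/N degrees

# Synthetic check: rolling the 36-point profile by 3 steps should
# read back as a 30 degree rotation.
ref = [203, 176, 160, 174, 185, 185, 152, 136, 131, 131, 131, 131,
       131, 137, 144, 133, 131, 130, 130, 130, 131, 130, 139, 160,
       168, 150, 141, 160, 186, 201, 209, 214, 216, 216, 216, 217]
rolled = ref[3:] + ref[:3]
print(rotation_from_profiles(ref, rolled))  # 30.0
```

With real data you would typically subtract each list's mean first so the constant background does not dominate the score, or use numpy.correlate / an FFT-based circular correlation for longer profiles.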

Python requests in an API, pagination only saves the last iteration

I have the code below, which uses requests to call an API:
import requests

def Sults():
    headers = {
        "Authorization": "xxxxxxxxxxxxxxxxxxxxxxxx",
        "Content-Type": "application/json;charset=UTF-8"
    }
    global id
    id = []
    for count1 in range(3):
        url = "https://api/api/v1/implantacao/projeto?&start={}&dtInicio=2022-01-01T18:02:55Z".format(count1)
        response = requests.get(url, headers=headers)
        data = response.json()
        url2 = "https://api/api/v1/implantacao/projeto?&dtInicio=2022-01-01T18:02:55Z&concluido=false"
        response2 = requests.get(url2, headers=headers)
        data2 = response2.json()
        url3 = "https://api/api/v1/implantacao/projeto?&dtInicio=2022-01-01T18:02:55Z&concluido=true"
        response3 = requests.get(url3, headers=headers)
        data3 = response3.json()
    # print(data)
    ids = unidades(data)
    print(ids)
    ids2 = unidades2(data2)
    print(ids2)
    ids3 = unidades3(data3)
    print(ids3)

def unidades(data):
    for i in data['data']:
        id.append(i['id'])  # append to the id list
    return id

def unidades2(data2):
    id_exclude = []
    for j in data2['data']:
        id_exclude.append(j['id'])
    return id_exclude

def unidades3(data3):
    id_conclude = []
    for k in data3['data']:
        id_conclude.append(k['id'])
    return id_conclude

if __name__ == '__main__':
    Sults()
In the line url = "https://api/api/v1/implantacao/projeto?&start={}&dtInicio=2022-01-01T18:02:55Z".format(count1), count1 loops over 0, 1, 2 for the API's pagination. The problem is that when I try to save the ids from every iteration into the list id = [], the code only saves the last iteration, in this case the third page of the 0, 1, 2 loop.
For example: "https://api/api/v1/implantacao/projeto?&start=0&dtInicio=2022-01-01T18:02:55Z"
Output : [122, 123, 124, 125, 126, 127, 129, 132, 133, 134, 135, 137, 138, 140, 144, 145, 146, 147, 149, 150, 151, 153, 154, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233]
url = "https://api/api/v1/implantacao/projeto?&start=1&dtInicio=2022-01-01T18:02:55Z"
Output for the second page: [234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271]
The output i want is to join both outputs into the id list (in this case page 0 and page 1) : [122, 123, 124, 125, 126, 127, 129, 132, 133, 134, 135, 137, 138, 140, 144, 145, 146, 147, 149, 150, 151, 153, 154, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271]
I believe it is just an indentation issue when you are assigning the data to the id variables.
Your code waits for the loop to finish and only then puts the data into ids, ids2 and ids3, so it only captures the last data retrieved in the data, data2 and data3 variables.
You should just need to move the id section in by one indentation level, as below:
def Sults():
    headers = {
        "Authorization": "xxxxxxxxxxxxxxxxxxxxxxxx",
        "Content-Type": "application/json;charset=UTF-8"
    }
    global id
    id = []
    for count1 in range(3):
        url = "https://api/api/v1/implantacao/projeto?&start={}&dtInicio=2022-01-01T18:02:55Z".format(count1)
        response = requests.get(url, headers=headers)
        data = response.json()
        url2 = "https://api/api/v1/implantacao/projeto?&dtInicio=2022-01-01T18:02:55Z&concluido=false"
        response2 = requests.get(url2, headers=headers)
        data2 = response2.json()
        url3 = "https://api/api/v1/implantacao/projeto?&dtInicio=2022-01-01T18:02:55Z&concluido=true"
        response3 = requests.get(url3, headers=headers)
        data3 = response3.json()
        # print(data)
        ids = unidades(data)
        print(ids)
        ids2 = unidades2(data2)
        print(ids2)
        ids3 = unidades3(data3)
        print(ids3)
This will add the response from each request to your id variables.
A side note: why are you leaving the data2 and data3 requests inside the loop? You will be calling them three times unnecessarily, since their URLs never change.
I would change it to the below; however, there may be some reason why you do this, so ignore it if need be:
def Sults():
    headers = {
        "Authorization": "xxxxxxxxxxxxxxxxxxxxxxxx",
        "Content-Type": "application/json;charset=UTF-8"
    }
    global id
    id = []
    for count1 in range(3):
        url = "https://api/api/v1/implantacao/projeto?&start={}&dtInicio=2022-01-01T18:02:55Z".format(count1)
        response = requests.get(url, headers=headers)
        data = response.json()
        ids = unidades(data)
        print(ids)
    url2 = "https://api/api/v1/implantacao/projeto?&dtInicio=2022-01-01T18:02:55Z&concluido=false"
    response2 = requests.get(url2, headers=headers)
    data2 = response2.json()
    url3 = "https://api/api/v1/implantacao/projeto?&dtInicio=2022-01-01T18:02:55Z&concluido=true"
    response3 = requests.get(url3, headers=headers)
    data3 = response3.json()
    ids2 = unidades2(data2)
    print(ids2)
    ids3 = unidades3(data3)
    print(ids3)
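For what it's worth, the same accumulation can be written without globals. This is only a sketch: collect_ids and fetch_page are hypothetical names, and the fake pages below stand in for the real API responses:

```python
def collect_ids(fetch_page, pages):
    # Accumulate the ids from every page rather than overwriting them.
    all_ids = []
    for page in range(pages):
        data = fetch_page(page)  # stands in for requests.get(...).json()
        all_ids.extend(item['id'] for item in data['data'])
    return all_ids

# Fake three pages of API output to demonstrate the accumulation.
fake_pages = {
    0: {'data': [{'id': 122}, {'id': 123}]},
    1: {'data': [{'id': 234}, {'id': 235}]},
    2: {'data': [{'id': 300}]},
}
print(collect_ids(fake_pages.get, 3))  # [122, 123, 234, 235, 300]
```

Returning the list instead of mutating a global also makes the function easy to test and reuse.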

How to check if dataset that represents signal matches specific frequency?

The signal is a 10 Hz square pulse train, like 0, 100, 0, 100.
It is received as arrays of integer values sampled at a 30 Hz sample rate. How can I find the presence of the useful signal in a data set?
import random

def find_signal(s, fs, fr):
    # TODO: find a way to detect a signal matching the frequency
    return random.random()
signals = [[125, 116, 116, 116, 114, 114, 114, 114, 114, 114, 122, 122, 122, 126, 126, 126, 122, 122, 122, 120, 120, 120, 116, 116, 116, 116, 116, 116, 119, 119],
[84, 84, 84, 87, 87, 87, 94, 94, 95, 95, 95, 95, 100, 100, 100, 106,
106, 106, 107, 107, 107, 107, 107, 107, 106, 106, 106, 111, 111, 111],
[184, 184, 184, 191, 191, 191, 192, 192, 197, 197, 197, 197, 198, 198, 198, 199, 199, 199, 198, 198, 199, 199, 199, 199, 199, 199, 199, 197, 197, 197]]
sample_rate = 30
search_frequency = 10
maxMatch = 0
maxIndex = -1
for i in range(len(signals)):
    match = find_signal(signals[i], sample_rate, search_frequency)
    if match > maxMatch:
        maxMatch = match
        maxIndex = i

print(f"the signal is in {maxIndex} probability is {maxMatch}")
Also, it would be awesome to filter the data at the useful frequency and measure its amplitude.
PS: I can use a sine wave instead of a square wave if that is easier to work with.
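One stdlib-only way to fill in find_signal (not from the question; a sketch under the stated 30 Hz sample rate / 10 Hz signal assumptions) is to measure the discrete Fourier transform magnitude at the bin nearest the search frequency:

```python
import cmath

def find_signal(s, fs, fr):
    """Return the normalized DFT magnitude of s at frequency fr (Hz),
    given sample rate fs (Hz); larger means a stronger component at fr."""
    n = len(s)
    mean = sum(s) / n
    centered = [v - mean for v in s]  # remove the DC offset
    k = round(fr * n / fs)            # DFT bin closest to fr
    coeff = sum(v * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, v in enumerate(centered))
    return abs(coeff) / n

sample_rate, search_frequency = 30, 10
square = [0, 100, 0] * 10  # 10 Hz pulses sampled at 30 Hz
flat = [100] * 30          # no oscillation at all
print(find_signal(square, sample_rate, search_frequency))  # large (~33)
print(find_signal(flat, sample_rate, search_frequency))    # 0.0
```

For a sine lying exactly on a bin frequency, the amplitude is roughly twice the returned value; with noisy real data you would also compare the bin against its neighbours to reject broadband noise.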

Split a list into evenly distributed chunks

I have a huge list of about 12000 elements, separated by ')' every 20 elements.
An example of the first three:
(((76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 221, 232, 233, 242, 244, 248, 251), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 221, 229, 232, 233, 242, 244, 248),
How do I get another list with 600 sublists, one for each of the 20-element groupings?
We have a recursively nested iterable, and it could be huge.
In [1]: (((1, 2, 3, 4), 5, 6, 7, 8), 9, 10, 11, 12)
Out[1]: (((1, 2, 3, 4), 5, 6, 7, 8), 9, 10, 11, 12)
Since the data structure is recursive, a recursive function would solve this elegantly, but would also consume an arbitrary amount of stack, which is rude.
Such a function would however be a classical example of tail-recursive function, i.e. one whose tail call is easy to eliminate manually. In fact, if we let the Python generator mechanism take care of the intermediary results for us, the resulting function is almost as elegant as the recursive one, while requiring very little RAM (stack or otherwise). That generator function could also be used for other purposes than creating the target list, and we do not have to restrict it to tuples.
def unpack_recursive_iterable(data):
    while True:
        head, *tail = data if data else (None,)
        try:
            # Can we go deeper?
            len(head)
        except TypeError:
            # Deepest level
            yield list(data)
            return
        yield tail
        data = head
Now, assuming we want a list of lists, in the order of the original iterable, we can create an additional adapter function:
def list_from_recursive_iterable(data):
    unpacked = list(unpack_recursive_iterable(data))
    unpacked.reverse()
    return unpacked
Validation tests (the solution works for any kind of iterables, and sub-parts may be empty):
In [4]: list_from_recursive_iterable((((1, 2, 3, 4), 5, 6, 7, 8), 9, 10, 11, 12))
Out[4]: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
In [5]: list_from_recursive_iterable(())
Out[5]: []
In [6]: list_from_recursive_iterable(((((1,), 2), 3), 4))
Out[6]: [[1], [2], [3], [4]]
In [7]: list_from_recursive_iterable((((),),))
Out[7]: [[], [], []]
In [8]: list_from_recursive_iterable(((1,),))
Out[8]: [[1], []]
In [9]: list_from_recursive_iterable({1})
Out[9]: [[1]]
In [10]: list_from_recursive_iterable({1:2})
Out[10]: [[1]]
In [11]: list_from_recursive_iterable({1,2})
Out[11]: [[1, 2]]
In [12]: list_from_recursive_iterable([[1],2])
Out[12]: [[1], [2]]
It should be noted that this solution does in fact only fulfill the OP's requirement of "evenly distributed chunks" if the input itself is "evenly distributed", i.e. if every group of scalars found in the input data is equally-sized. But that requirement is fulfilled in the OP's input data.
Maybe I'm reading the question wrong, but you could probably do some string manipulation:
a = ((((76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248,
        251, 261, 315, 329), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217,
       232, 233, 242, 244, 248, 251, 261), 76, 151, 152, 158, 185, 193, 193, 200, 208, 211,
      214, 217, 221, 232, 233, 242, 244, 248, 251), 76, 151, 152, 158, 185, 193, 193, 200,
     208, 211, 214, 217, 221, 229, 232, 233, 242, 244, 248)

# Convert to string
astr = f"{a}"
# Output list
lines = []
# Iterate over the split chunks
for line in astr.split("),"):
    # Check to see if a left parenthesis starts the chunk
    if line.startswith("("):
        # Drop the leading parentheses
        line = line[line.count("("):]
    # Remove the trailing parenthesis
    if line.endswith(")"):
        line = line[:-1]
    # Append the chunk to the lines list, stripping any whitespace away.
    lines.append(line.strip())
Output:
['76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329',
'76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261',
'76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 221, 232, 233, 242, 244, 248, 251',
'76, 151, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 221, 229, 232, 233, 242, 244, 248']
You can iteratively 'unpack' the tuple into 2 parts, the second part being the 20 (or N) variables of the subset.
data = (((76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329), 76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329), 76, 152, 158, 185, 193, 193, 200, 208, 211, 214, 217, 232, 233, 242, 244, 248, 251, 261, 315, 329)

result = []
if data and not isinstance(data[0], tuple):
    result.append(data)
else:
    part_1, *part_2 = data
    result.append(part_2)
    while part_1 and isinstance(part_1[0], tuple):
        part_1, *part_2 = part_1
        result.append(part_2)
    result.append(list(part_1))

print(result)

How to manually reproject from a specific projection to lat/lon

I have an array of Euro-CORDEX data with a rotated-pole projection, read from a NetCDF file:
grid_mapping_name: rotated_latitude_longitude
grid_north_pole_latitude: 39.25
grid_north_pole_longitude: -162.0
float64 rlon(rlon)
standard_name: grid_longitude
long_name: longitude in rotated pole grid
units: degrees
axis: X
unlimited dimensions:
current shape = (424,)
filling on, default _FillValue of 9.969209968386869e+36 used),
('rlat', <class 'netCDF4._netCDF4.Variable'>
float64 rlat(rlat)
standard_name: grid_latitude
long_name: latitude in rotated pole grid
units: degrees
axis: Y
unlimited dimensions:
current shape = (412,)
The dimensions are rlon (424) and rlat (412). I used some code to convert these rotated lat/lons into regular lat/lons. Now I have two matrices with shape (412, 424): the first holds the longitude coordinates, and the second holds the latitude coordinates.
Now, I want to convert the initial image to an image with the extents that I want: min lon 25, max lon 45, min lat 35, max lat 43.
lats = np.empty((len(rlat), len(rlon)))
lons = np.empty((len(rlat), len(rlon)))
for j in range(len(rlon)):
    for i in range(len(rlat)):
        lons[i, j] = unrot_lon(rlat[i], rlon[j], 39.25, -162.0)
        lats[i, j] = unrot_lat(rlat[i], rlon[j], 39.25, -162.0)

a = lons <= 45
aa = lons >= 25
aaa = a * aa
b = lats <= 43
bb = lats >= 35
bbb = b * bb
c = bbb * aaa
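The unrot_lon / unrot_lat helpers are not shown in the question. For reference, a minimal scalar sketch of the standard CF rotated_latitude_longitude inverse transform (my own formulation, not the question's code; unrotate is a hypothetical name) looks like this:

```python
import math

def unrotate(rlat, rlon, np_lat, np_lon):
    """Convert rotated-pole coords (degrees) to regular lat/lon (degrees).
    np_lat / np_lon are the grid north pole's geographic coordinates."""
    rlat, rlon, np_lat_r = map(math.radians, (rlat, rlon, np_lat))
    sin_t, cos_t = math.sin(np_lat_r), math.cos(np_lat_r)
    # Rotate the Cartesian unit vector back to the geographic frame
    x = math.cos(rlat) * math.cos(rlon) * sin_t - math.sin(rlat) * cos_t
    y = math.cos(rlat) * math.sin(rlon)
    z = math.cos(rlat) * math.cos(rlon) * cos_t + math.sin(rlat) * sin_t
    lat = math.degrees(math.asin(z))
    lon = np_lon + 180.0 + math.degrees(math.atan2(y, x))
    lon = (lon + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)
    return lat, lon

# Grid centre (rlat=0, rlon=0) under the Euro-CORDEX pole (39.25, -162.0)
# should land near 50.75N, 18E, i.e. central Europe.
print(unrotate(0.0, 0.0, 39.25, -162.0))
```

In practice a library such as pyproj or cartopy's RotatedPole CRS does the same transform (and its forward counterpart) without hand-rolled trigonometry.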
The last matrix (c) is a boolean matrix showing the pixels I am interested in, according to the extents I defined.
Now, I want to do two things, and I am failing at both:
First, I would like to plot this image with the boundaries on a basemap. For that I located llcrnrlon, llcrnrlat, urcrnrlon and urcrnrlat based on the boolean matrix and some imagination:
llcrlon = 25.02#ok
llcrlat = np.nanmin(lats[c])# ok
urcrlon = np.nanmax(lons[c])#ok
urcrlat = np.nanmax(lats[np.where(lons==urcrlon)])#ok
Then I used the following code to plot the image on a basemap:
lonss = np.linspace(np.min(lons[c]), np.max(lons[c]), (424-306+1))
latss = np.linspace(np.min(lats[c]), np.max(lats[c]), (170-73+1))
pl.figure(dpi = 250)
map = Basemap(projection='rotpole',llcrnrlon=llcrlon,llcrnrlat=llcrlat,urcrnrlon=urcrlon,urcrnrlat=urcrlat,resolution='i', o_lat_p = 39.25, o_lon_p =-162., lon_0=35, lat_0=45)
map.drawcoastlines()
map.drawstates()
parallels = np.arange(35,43,2.) #
meridians = np.arange(25,45,2.) #
map.drawparallels(parallels,labels=[1,0,0,0],fontsize=10)
map.drawmeridians(meridians,labels=[0,0,0,1],fontsize=10)
lons, lats = np.meshgrid(lonss, latss)
x, y = map(lons, lats)
mapp = map.pcolormesh(x,y,WTD[73:170, 306:])
So, the map is not well-fit to the basemap projection. I would like to find out what is wrong.
Second, I would like to reproject this map to normal lat/lon. For that, I use the following codes to define a new grid:
targ_lons = np.linspace(25, 45, 170)
targ_lats = np.linspace(43, 35, 70)
T_Map = np.empty((len(targ_lats), len(targ_lons)))
T_Map[:] = np.nan
Then, I am trying to figure out the differences between the lon/lat matrices I produced in the beginning and my newly defined grids. Then, using the indices which represent the minimum/less than a specific threshold, fill in the new gridded image.
for i in range(len(targ_lons)):
    for j in range(len(targ_lats)):
        lon_extr = np.where(abs(lons - targ_lons[i]) < 0.01)
        lat_extr = np.where(abs(lats - targ_lats[j]) < 0.01)
So here, if we have i=0 and j=0, then:
lon_extr = (array([ 7, 16, 25, 34, 35, 43, 44, 53, 63, 72, 73, 82, 83, 92, 93, 102, 103, 112, 113, 122, 123, 133, 143, 153, 154, 164,
174, 175, 185, 195, 196, 206, 217, 227, 238, 248, 259, 269, 280,
290, 300, 321, 331, 341, 360, 370, 389], dtype=int64),
array([320, 319, 318, 317, 317, 316, 316, 315, 314, 313, 313, 312, 312,
311, 311, 310, 310, 309, 309, 308, 308, 307, 306, 305, 305, 304,
303, 303, 302, 301, 301, 300, 299, 298, 297, 296, 295, 294, 293,
292, 291, 289, 288, 287, 285, 284, 282], dtype=int64))
and
lat_extr=(array([143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143, 143,
143, 143, 143, 143, 144, 144, 144, 144, 144, 144, 145, 145, 145,
145, 146, 146, 146, 146, 147, 147, 147, 148, 148, 149, 149, 150,
150, 151, 151, 152, 152, 153, 153, 154, 154, 155, 156, 156, 157,
157, 158, 158, 159, 159, 160, 160, 161, 162, 162, 163, 164, 164,
165, 167, 168, 168, 169, 169, 170, 170, 171, 174, 175, 177, 178,
180, 181, 183, 186, 190, 191, 192, 204, 205, 210, 214], dtype=int64),
array([251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263,
264, 265, 266, 267, 227, 228, 229, 289, 290, 291, 214, 215, 303,
304, 204, 205, 313, 314, 196, 321, 322, 189, 329, 182, 336, 176,
342, 170, 348, 165, 353, 160, 358, 155, 363, 150, 146, 372, 142,
376, 138, 380, 134, 384, 130, 388, 126, 123, 395, 119, 116, 402,
405, 106, 103, 415, 100, 418, 97, 421, 94, 86, 83, 78, 75,
70, 68, 63, 56, 47, 45, 43, 19, 17, 8, 1], dtype=int64))
Now, I need to be able to pull out the common coordinates and fill in T_Map. I'm confused at this point. Is there a function or an easy way to pull out the common lat/lon indices from these two arrays?
The problem was solved. I used the longitude and latitude matrices to find the nearest pixels (closer than the resolution, which is 0.11 degrees in this case) and fill up the newly defined grid. I hope this helps others who have a similar problem:
# (45-25)*111/12.5
# (43-35)*110/12.5
targ_lons = np.linspace(25, 45, 170)
targ_lats = np.linspace(43, 35, 70)
T_Map = np.empty((len(targ_lats), len(targ_lons)))
T_Map[:] = np.nan
for i in range(len(targ_lons)):
    for j in range(len(targ_lats)):
        lon_extr = np.where(abs(lons - targ_lons[i]) < 0.1)
        lat_extr = np.where(abs(lats[lon_extr] - targ_lats[j]) < 0.1)
        if len(lat_extr[0]) > 0:
            point_to_extract = np.where(lats == lats[lon_extr][lat_extr][0])
            T_Map[j, i] = WTD[point_to_extract]

Remove the contents of one list from another list

I have some code that creates a list with numbers from 1 to 407. What I want to do is take the numbers of the "ultimate" and "super_rare" lists out of the "common" list. How can I do that? This is the general code I have.
import random
def common(x):
    list = []
    for i in range(1, x + 1):
        list.append(i)
    return list
cid = common(407)
ultimate = [404, 200, 212, 15, 329, 214, 406, 259, 126, 160, 343, 180, 169, 297, 226, 305, 250, 373, 142, 357, 181, 113, 149, 399, 287, 341, 37, 284, 41, 328, 400, 217, 253, 204, 290, 18, 174, 36, 310, 303, 6, 108, 47, 298, 130]
super_rare = [183, 349, 134, 69, 103, 342, 83, 380, 93, 56, 86, 95, 147, 161, 403, 197, 215, 312, 375, 359, 263, 221, 340, 102, 153, 234, 54, 7, 238, 193, 90, 367, 197, 397, 33, 366, 334, 222, 394, 371, 313, 83, 276, 35, 351, 83, 347, 170, 57, 201, 137, 188, 179, 170, 65, 107, 234, 48, 2, 85, 74, 221, 23, 171, 101, 377, 63, 248, 102, 272, 129, 276, 86, 88, 51, 197, 248, 202, 244, 153, 138, 101, 330, 68, 368, 292, 340, 315, 185, 219, 381, 89, 274, 175, 385, 19, 257, 313, 191, 211]
def new_list(cid, ultimate):
    new_list = []
    for i in range(len(cid)):
        new_list.append(cid[i])
    for i in range(len(ultimate)):
        new_list.remove(ultimate[i])
    return new_list
#print (new_list(cid, ultimate))
cid_mod0 = new_list(cid, ultimate)
cid_mod1 = new_list(cid_mod0, super_rare)
print (cid_mod0)
Most of the prints and whatnot are just tries to see if it's working.
I recommend using sets for this. You can check if an item is in a set in constant time. For example:
import random
def common(x):
    return list(range(1, x + 1))
cid = common(407)
ultimate = { 404, 200, ... }
super_rare = { 183, 349, ... }
def list_difference(l, s):
    return [elem for elem in l if elem not in s]
cid_mod0 = list_difference(cid, ultimate)
cid_mod1 = list_difference(cid_mod0, super_rare)
If you don't care about the order of your resulting list you can use a set for that as well for a bit more convenience:
import random
def common(x):
    return list(range(1, x + 1))
cid = set(common(407))
ultimate = { 404, 200, ... }
super_rare = { 183, 349, ... }
cid_mod0 = cid - ultimate
cid_mod1 = cid_mod0 - super_rare
Use this loop to remove the elements from common that are in the super_rare and ultimate lists. Iterate over a copy (cid[:]), because deleting from a list while iterating over it skips elements:
for cnum in cid[:]:
    if cnum in ultimate or cnum in super_rare:
        cid.remove(cnum)
print(cid)
The loop assumes you have a list named cid that is already established.
If you want to keep the original order of cid, you could use OrderedDict to convert cid into an ordered dict object and then remove the keys that you want gone. The code would be something like:
from random import choices, seed
from collections import OrderedDict
seed(123)
ultimate = [404, 200, 212, 15, 329, 214, 406, 259, 126, 160, 343, 180, 169, 297, 226, 305, 250, 373, 142, 357, 181, 113, 149, 399, 287, 341, 37, 284, 41, 328, 400, 217, 253, 204, 290, 18, 174, 36, 310, 303, 6, 108, 47, 298, 130]
super_rare = [183, 349, 134, 69, 103, 342, 83, 380, 93, 56, 86, 95, 147, 161, 403, 197, 215, 312, 375, 359, 263, 221, 340, 102, 153, 234, 54, 7, 238, 193, 90, 367, 197, 397, 33, 366, 334, 222, 394, 371, 313, 83, 276, 35, 351, 83, 347, 170, 57, 201, 137, 188, 179, 170, 65, 107, 234, 48, 2, 85, 74, 221, 23, 171, 101, 377, 63, 248, 102, 272, 129, 276, 86, 88, 51, 197, 248, 202, 244, 153, 138, 101, 330, 68, 368, 292, 340, 315, 185, 219, 381, 89, 274, 175, 385, 19, 257, 313, 191, 211]
cid = OrderedDict.fromkeys(choices(range(407), k=407))
for key in set(ultimate + super_rare):
    cid.pop(key, None)  # a lazy map(cid.pop, ...) would never actually run
result = cid.keys()
If you don't need the original order, you could convert cid to a plain dict; removing a key from a hashmap is very fast. The code would be something like:
cid = dict.fromkeys(range(407))
for key in set(ultimate + super_rare):
    cid.pop(key, None)  # a lazy map(cid.pop, ...) would never actually run
result = cid.keys()
Apart from the dictionary method, you can also try to convert everything into a set variable like the following:
result = set(range(407)) - set(ultimate) - set(super_rare)
Hope it helps.
You can create your target list that includes all the numbers from the target range, but without those numbers from ultimate and super_rare list, by list comprehension:
my_filtered_list = [i for i in range(1, 408) if i not in ultimate and i not in super_rare]
print(my_filtered_list)
Make a union set of both sets of numbers you want to exclude.
>>> su = set(ultimate) | set(super_rare)
Then filter the input list based on whether the value is not present in the set.
>>> list(filter(lambda i: i not in su, cid))
[1, 3, 4, 5, 8, 9, 10, 11, 12, 13, 14, 16, 17, 20, 21, 22, 24, 25, 26, 27, 28,
29, 30, 31, 32, 34, 38, 39, 40, 42, 43, 44, 45, 46, 49, 50, 52, 53, 55, 58, 59,
60, 61, 62, 64, 66, 67, 70, 71, 72, 73, 75, 76, 77, 78, 79, 80, 81, 82, 84, 87,
91, 92, 94, 96, 97, 98, 99, 100, 104, 105, 106, 109, 110, 111, 112, 114, 115,
116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 127, 128, 131, 132, 133, 135,
136, 139, 140, 141, 143, 144, 145, 146, 148, 150, 151, 152, 154, 155, 156, 157,
158, 159, 162, 163, 164, 165, 166, 167, 168, 172, 173, 176, 177, 178, 182, 184,
186, 187, 189, 190, 192, 194, 195, 196, 198, 199, 203, 205, 206, 207, 208, 209,
210, 213, 216, 218, 220, 223, 224, 225, 227, 228, 229, 230, 231, 232, 233, 235,
236, 237, 239, 240, 241, 242, 243, 245, 246, 247, 249, 251, 252, 254, 255, 256,
258, 260, 261, 262, 264, 265, 266, 267, 268, 269, 270, 271, 273, 275, 277, 278,
279, 280, 281, 282, 283, 285, 286, 288, 289, 291, 293, 294, 295, 296, 299, 300,
301, 302, 304, 306, 307, 308, 309, 311, 314, 316, 317, 318, 319, 320, 321, 322,
323, 324, 325, 326, 327, 331, 332, 333, 335, 336, 337, 338, 339, 344, 345, 346,
348, 350, 352, 353, 354, 355, 356, 358, 360, 361, 362, 363, 364, 365, 369, 370,
372, 374, 376, 378, 379, 382, 383, 384, 386, 387, 388, 389, 390, 391, 392, 393,
395, 396, 398, 401, 402, 405, 407]
If you don't want to use filter, just use a list comprehension.
>>> [v for v in cid if v not in su]
You could also do the whole thing with sets like
>>> list(set(cid) - (set(ultimate) | set(super_rare)))
Others have suggested this already (I take no credit). I'm not sure how strongly the order is guaranteed to come back right; it seems to be OK on my py2 and py3, but doing the last step as a list gives you the order absolutely guaranteed (not as an implementation detail) and won't need converting back to a list as a final step.
If you want to see the changes in the original list, you can just assign back to the original variable.
cid = [v for v in cid if v not in su]
This is assigning a different list to the same variable, though, so other holders of references to that list won't see the changes. You can call id(cid) before and after the assignment to see that it's a different list.
If you wanted to assign back to the exact same list instance you can use
cid[:] = [v for v in cid if v not in su]
and the id will remain the same.
