I am trying to write a script to remove cells from a part in ABAQUS if the cell volume is smaller than a given value.
Is there a simple command to delete a cell?
This is what I have tried:
# Keep cells bigger than a certain minimum value 'paramVol': paramVol = volCell/part_volume_r
cellsVolume = []
pfacesInter_clean = []
allCells = pInterName.cells
mask_r = pInter.cells.getMask();
cellobj_sequence_r = pInter.cells.getSequenceFromMask(mask=mask_r);
part_volume_r = pInterName.getVolume(cells=cellobj_sequence_r);
volume_sliver = 0
# get faces
for i in range(0, len(allCells)):
    volCell = allCells[i].getSize()
    cellsVolume.append(volCell)
    paramVol = volCell / part_volume_r
    print 'paramVol= ' + str(paramVol)
    if paramVol < 0.01:
        print 'sliver Volume'
        #session.viewports['Viewport: 1'].setColor(initialColor='#FF0000') #-->RED
        faces = allCells[i].getFaces()
        highlight(allCells[i].getFaces())
        #pfacesInter_clean = [x for i, x in enumerate(pfacesInter) if i not in faces]
        volume_sliver += volCell
    else:
        print 'Not a sliver Volume'
Thanks!
How about this, assuming pInter is a Part object:
pInter.RemoveFaces(faceList=[pInter.faces[j] for j in pInter.cells[i].getFaces()])
Update: once the common face of two cells is deleted, both cells cease to exist. Therefore, we need to do a little workaround:
faces_preserved = # List of faces that belong to cells with 'big' volume.
for cell in pInter.cells:
    pInter.RemoveFaces(faceList=[face for face in pInter.faces
                                 if face not in faces_preserved])
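For reference, here is one hedged, untested way faces_preserved could be built, reusing the question's 0.01 threshold. It assumes pInter and part_volume_r are defined as in the question, and that getFaces() returns indices into pInter.faces, as the one-liner above uses it:

# Untested sketch: collect the face indices of every cell whose relative volume
# is at or above the question's threshold, then remove all other faces at once.
preserved_face_ids = set()
for cell in pInter.cells:
    if cell.getSize() / part_volume_r >= 0.01:            # a 'big' cell
        preserved_face_ids.update(cell.getFaces())        # indices into pInter.faces

faces_to_remove = [pInter.faces[j] for j in range(len(pInter.faces))
                   if j not in preserved_face_ids]
pInter.RemoveFaces(faceList=faces_to_remove)

Calling RemoveFaces once, after all cells have been classified, avoids the issue described above where deleting a shared face makes both neighbouring cells disappear mid-loop.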
I am currently using Python version 3.9.7.
I have a lot of serial port data continuously coming in. Each incoming line is parsed into a dictionary, whose first part is an integer and the rest a string, and appended to a list. I then split each string into sub-parts and save everything to an Excel spreadsheet.
I need to prevent duplicates being appended to my list. Below is my code attempting to do this; however, when I view the Excel log being created, I often see as many as 50k rows of the same data repeated.
I was able to prevent duplicate Excel rows when reading from a text file with this approach, but I can't seem to find a solution for the continuously incoming data.
The output I would like is unique values only for each element appended to my list, and appearing in my Excel file.
Code below:
import serial
import xlsxwriter

#ser-prt-connection
ser = serial.Serial(port='COM2',baudrate=9600)

#separate X (int) and Y (string)
regex = '(?:.*\)?X=(?P<X>-?\d+)\sY=(?P<Y>.*)'

extracted_vals = [] #list to be appended
less_vals = [] #want list with no duplicates
row = 0

workbook = xlsxwriter.Workbook('Serial_port_data.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write(row, 0, 'X')
worksheet.write(row, 1, 'Ya')
worksheet.write(row, 2, 'Yb')
worksheet.write(row, 3, 'Yc')

while True:
    for line in ser.readlines():
        signal = str(line)
        for part in parts:
            m = re.match(regex,parts)
            if m is None:
                continue
            X,Y = m.groupdict(),values()
            # each element appending to list (incl. duplicates)
            item = dict(X=int(X),Y=str(Y.lower())
            extracted_vals.append(item)
        for i in range(0, len(extracted_vals):
            i+=0
            X_val = extracted_vals[i].setdefault('X')
            Key = extracted_vals[i].keys()
            data = extracted_vals[i].setdefault('Y')
            for val in extracted_vals:
                if val not in less_vals:
                    less_vals.append(val)
            for j in range(0,len(less_vals)):
                j+=0
                X_val = less_vals[j].setdefault('X')
                less_data = less_vals[j].setdefault('Y')
                #separate Y part into substrings
                Ya = less_data[0:3]
                Yb = less_data[3:6]
                Yc = less_data[6:]
                less = X_val,Ya,Yb,Yc
                #to check for no duplicates in output, compared with raw data
                print(signal) #raw incoming data continuously
                print(less) #want list, no duplicates
                # write to excel file, column for X value
                # 3 more columns for Ya, Yb, Yc each
                if True:
                    row = row+1
                    worksheet.write(row,0,X_val)
                    worksheet.write(row,1,Ya)
                    worksheet.write(row,2,Yb)
                    worksheet.write(row,3,Yc)
                #Chosen no. rows to write to file
                if row == 10:
                    workbook.close()
                    serial.close()
Example of what a line of raw data 'signal' looks like:
X=-10Y=yyAyyBthisisYc
Example of what the list 'less' looks like for one line of raw data:
(-10, 'yyA','yyB', 'thisisYc')
#(repeats in similar fashion for subsequent lines)
#each part has its own row in excel file
#-10 is X value
My main issue is that sometimes the data being printed is unique, but the Excel file has many duplicates.
My other issue is that sometimes the data is printed as alternating duplicates, like 1,2,1,2,1,2,1,2, and the same is being saved to the Excel file.
I have only been programming for a few weeks now, so any advice at all is welcome.
Your program has a lot of problems. Below I have tried to show you how you can improve it. Bear in mind that I did not test it.
import serial
import re
import xlsxwriter

#ser-prt-connection
ser = serial.Serial(port='COM2',baudrate=9600)

#separate X (int) and Y (string)
regex = '(?:.*\)?X=(?P<X>-?\d+)\sY=(?P<Y>.*)'

# First problem:
# maybe you don't need these lists. Perhaps, you could use a set and a list. Let's do it:
# extracted_vals = [] #list to be appended
extracted_vals = set()
less_vals = [] #want list with no duplicates

row = 0 # this will be used at the end of the program.
MAXROWS = 10 # Given that you want to limit the number of rows, I'd write that limit where it can be seen and changed easily.
#
workbook = xlsxwriter.Workbook('Serial_port_data.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write(row, 0, 'X')
worksheet.write(row, 1, 'Ya')
worksheet.write(row, 2, 'Yb')
worksheet.write(row, 3, 'Yc')

while True: # Second problem: this loop lacks a way to terminate!
    # When you write 'while True:', you also MUST write 'break' or 'return' somewhere inside the loop! I'll take care of it.
    for line in ser.readlines():
        signal = str(line)
        #
        # Third problem: where is this 'parts' defined? The following line will throw an error! Maybe you meant "for part in signal"?
        for part in signal: # was 'parts'
            m = re.match(regex, signal) # was 'parts'
            if m is None:
                continue
            X, Y = m.groupdict().values()
            #
            # each element appending to list (incl. duplicates)
            item = (int(X), Y.lower()) # you forgot the closing parens, this is the Fourth problem. But this also counts as Fifth:
            # you cannot create a dictionary as "dict(X=int(X),Y=str(Y.lower())"! It's a syntax error.
            #
            # Now that extracted_vals is a set, you can write:
            extracted_vals.add(item)
            # instead of 'extracted_vals.append(item)'
            # At this point, you have no duplicates: sets automatically ignore them!

    less_vals = list(extracted_vals) # And now you have a non-duplicated list.

    # Sixth problem: all the following lines were included in the "for line in ser.readlines():" loop, you forgot to de-dent them.
    # I've done it for you. Written the way you wrote them, for each line read from serial you also repeated all of the following!
    # That might be the main reason for creating many duplicate rows.
    for i in range(0, len(less_vals)): # I added the last closing parens, this was the Fourth problem again
        # Seventh problem: if you need to process all items in a sequence, you should use
        # "for item in sequence:" not "for i in range(len(sequence)):" Moreover, the
        # range() function does not need a starting value, if you need it to start from 0.
        i += 0 # Eighth problem: this line does nothing. Why did you write it?
        X_val = less_vals[i].setdefault('X') # this will put an item {'X': None} if missing in the dictionary less_vals[i]
        # and assigns the value of less_vals[i]["X"] (possibly None) to X_val.
        # Are you sure that you need an {'X': None} item?
        # You could have written: X_val = less_vals[i]["X"] or, much better:
        # for elem in less_vals:
        #     X_val, data = elem["X"], elem["Y"]
        Key = less_vals[i].keys() # this assigns to 'Key' a dictionary view object, that you will never use. Ninth problem.
        data = less_vals[i].setdefault('Y') # this will put an item {'Y': None} if missing in the dictionary less_vals[i]
        # and assigns the value of less_vals[i]["Y"] (possibly None) to data.
        # Are you sure that you need a {'Y': None} item?

        # this loop is completely useless.
        # for val in less_vals: # Sixth problem, again: this loop was included in the "for i in range(0, len(less_vals)):" loop. I de-dented it.
        #     if val not in less_vals:
        #         less_vals.append(val)

        # I noticed that I could put all following lines in the previous loop, without re-reading all less_vals.
        #
        # for j in range(0,len(less_vals)): # Sixth problem, again: this loop too was included in the "for i in range(0, len(less_vals)):" loop. I de-dented it.
        #     j += 0 # Eighth problem, again: this line does nothing. Why did you write it?
        #
        # so, now we are continuing the loop on less_vals:
        # X_val = less_vals[j].setdefault('X') we already have it
        # less_data = less_vals[j].setdefault('Y') instead of 'less_data', we use 'data' that we already have

        #separate Y part into substrings
        # Remember that data can be None? Tenth problem, you should have dealt with this. I'll take care of it:
        if data is None:
            Ya = Yb = Yc = ""
        else:
            Ya = data[0:3]
            Yb = data[3:6]
            Yc = data[6:]
        less = X_val, Ya, Yb, Yc
        #to check for no duplicates in output, compared with raw data
        print(signal) # it may contain only the last line read from serial, if the raw data contained more than 1 line
        print(less) # I don't think this check is useful, but if you like it...
        # write to excel file, column for X value
        # 3 more columns for Ya, Yb, Yc each
        # if True: Eleventh problem: this line does nothing
        row = row + 1
        worksheet.write(row, 0, X_val)
        worksheet.write(row, 1, Ya)
        worksheet.write(row, 2, Yb)
        worksheet.write(row, 3, Yc)
        #
        if row >= MAXROWS:
            break # this exits the loop over less_vals
    if row >= MAXROWS:
        workbook.close()
        break # this exits the while True: loop, solving the Second problem.
ser.close() # note: the port is closed via the 'ser' object, not the 'serial' module
This is a refined version of your program; I tried to follow my own advice. I have no serial connection (and no time to mock one), so I didn't test it properly.
# A bit refined, albeit untested.
import re
import serial
import xlsxwriter

ser = serial.Serial(port='COM2', baudrate=9600)

#separate X (int) and Y (string)
regex = '(?:.*\)?X=(?P<X>-?\d+)\sY=(?P<Y>.*)'

MAXROWS = 50
row = 0 # this will be used at the end of the program.
#
workbook = xlsxwriter.Workbook('Serial_port_data.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write(row, 0, 'X')
worksheet.write(row, 1, 'Ya')
worksheet.write(row, 2, 'Yb')
worksheet.write(row, 3, 'Yc')

while True:
    extracted_vals = set()
    for line in ser.readlines():
        m = re.match(regex, str(line)) # readlines() yields bytes, so match against its string form
        if m is None:
            continue
        values = m.groupdict()
        extracted_vals.add((int(values['X']), values['Y'].lower())) # each element in extracted_vals will be a tuple (integer, string)
    #
    for this in extracted_vals:
        X_val = this[0]
        data = this[1]
        Ya = data[0:3]
        Yb = data[3:6]
        Yc = data[6:]
        row = row + 1
        worksheet.write(row, 0, X_val)
        worksheet.write(row, 1, Ya)
        worksheet.write(row, 2, Yb)
        worksheet.write(row, 3, Yc)
        #
        if row >= MAXROWS:
            break
    if row >= MAXROWS:
        workbook.close()
        break
ser.close()
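To illustrate why the set removes the duplicates, here is a small self-contained demonstration of the same idea. The pattern below is a simplified stand-in (not the question's exact regex), and the sample lines are invented to resemble the question's example:

import re

# Simplified stand-in pattern; the question's own regex is kept unchanged above.
pattern = r'X=(?P<X>-?\d+)\s*Y=(?P<Y>.*)'
raw_lines = [
    "X=-10 Y=yyAyyBthisisYc",
    "X=-10 Y=yyAyyBthisisYc",   # exact duplicate
    "X=7 Y=abcdefanotherYc",
]

seen = set()
for raw in raw_lines:
    m = re.match(pattern, raw)
    if m is None:
        continue
    item = (int(m.group('X')), m.group('Y').lower())
    if item in seen:            # tuples are hashable, so this membership test is cheap
        continue
    seen.add(item)
    print(item)
# prints each unique tuple once:
# (-10, 'yyayybthisisyc')
# (7, 'abcdefanotheryc')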
I have never done any video-based programming before, and although this SuperUser post provides a way to do it on the command line, I would prefer a programmatic approach, preferably with Python.
I have a bunch of sub-videos. Suppose one of them is called 1234_trimmed.mp4, a short segment cut from the original, much longer video 1234.mp4. How can I figure out the start and end timestamps of 1234_trimmed.mp4 inside 1234.mp4?
FYI, the videos are all originally from YouTube ("1234" corresponds to the YouTube video ID), in case there's a shortcut that way.
I figured it out myself with cv2. My strategy was to grab the first and last frames of the subvideo and iterate over each frame of the original video, comparing the current frame's dhash against those two frames (minimum Hamming distance rather than an equality check, to tolerate resizing and other transformations). I'm sure there are optimization opportunities, but I needed this yesterday.
import cv2

original_video_fpath = '5 POPULAR iNSTAGRAM BEAUTY TRENDS (DiY Feather Eyebrows, Colored Mascara, Drippy Lips, Etc)-vsNVU7y6dUE.mp4'
subvideo_fpath = 'vsNVU7y6dUE_trimmed-out.mp4'


def dhash(image, hashSize=8):
    # resize the input image, adding a single column (width) so we
    # can compute the horizontal gradient
    resized = cv2.resize(image, (hashSize + 1, hashSize))
    # compute the (relative) horizontal gradient between adjacent
    # column pixels
    diff = resized[:, 1:] > resized[:, :-1]
    # convert the difference image to a hash
    return sum([2 ** i for (i, v) in enumerate(diff.flatten()) if v])


def hamming(a, b):
    return bin(a ^ b).count('1')


def get_video_frame_by_index(video_cap, frame_index):
    # get total number of frames
    totalFrames = video_cap.get(cv2.CAP_PROP_FRAME_COUNT)
    if frame_index < 0:
        frame_index = int(totalFrames) + frame_index
    # check for valid frame number
    if frame_index >= 0 and frame_index <= totalFrames:
        # set frame position
        video_cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
        _, frame = video_cap.read()
        return frame


def main():
    cap_original_video = cv2.VideoCapture(original_video_fpath)
    cap_subvideo = cv2.VideoCapture(subvideo_fpath)

    first_frame_subvideo = get_video_frame_by_index(cap_subvideo, 0)
    last_frame_subvideo = get_video_frame_by_index(cap_subvideo, -1)

    first_frame_subvideo_gray = cv2.cvtColor(first_frame_subvideo, cv2.COLOR_BGR2GRAY)
    last_frame_subvideo_gray = cv2.cvtColor(last_frame_subvideo, cv2.COLOR_BGR2GRAY)

    hash_first_frame_subvideo = dhash(first_frame_subvideo)
    hash_last_frame_subvideo = dhash(last_frame_subvideo)

    min_hamming_dist_with_first_frame = float('inf')
    closest_frame_index_first = None
    closest_frame_timestamp_first = None

    min_hamming_dist_with_last_frame = float('inf')
    closest_frame_index_last = None
    closest_frame_timestamp_last = None

    frame_index = 0
    while cap_original_video.isOpened():
        frame_exists, curr_frame = cap_original_video.read()
        if frame_exists:
            timestamp = cap_original_video.get(cv2.CAP_PROP_POS_MSEC) // 1000
            hash_curr_frame = dhash(curr_frame)
            hamming_dist_with_first_frame = hamming(hash_curr_frame, hash_first_frame_subvideo)
            hamming_dist_with_last_frame = hamming(hash_curr_frame, hash_last_frame_subvideo)
            if hamming_dist_with_first_frame < min_hamming_dist_with_first_frame:
                min_hamming_dist_with_first_frame = hamming_dist_with_first_frame
                closest_frame_index_first = frame_index
                closest_frame_timestamp_first = timestamp
            if hamming_dist_with_last_frame < min_hamming_dist_with_last_frame:
                min_hamming_dist_with_last_frame = hamming_dist_with_last_frame
                closest_frame_index_last = frame_index
                closest_frame_timestamp_last = timestamp
            frame_index += 1
        else:
            print('processed {} frames'.format(frame_index + 1))
            break

    cap_original_video.release()
    print('timestamp_start={}, timestamp_end={}'.format(closest_frame_timestamp_first, closest_frame_timestamp_last))


if __name__ == '__main__':
    main()
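One of the optimization opportunities mentioned above could be to avoid decoding every frame on the first pass. A rough, untested sketch using OpenCV's grab()/retrieve() split, reusing dhash and the file path from the listing above; STRIDE is a made-up tuning parameter:

# Untested sketch: grab() advances the stream without decoding, retrieve()
# decodes only the sampled frames, so roughly 1 in STRIDE frames is hashed.
STRIDE = 10

cap = cv2.VideoCapture(original_video_fpath)
frame_index = 0
while cap.isOpened():
    if not cap.grab():                 # cheap: no decode
        break
    if frame_index % STRIDE == 0:
        ok, frame = cap.retrieve()     # decode only every STRIDE-th frame
        if ok:
            h = dhash(frame)
            # ... compare h with hash_first_frame_subvideo / hash_last_frame_subvideo
            #     exactly as in main(), then rescan the best region frame by frame ...
    frame_index += 1
cap.release()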
MP4 utilizes relative timestamps. When the file was trimmed, the old timestamps were lost, and the new file now begins at timestamp zero.
So the only way to identify where this file overlaps with another file is to use computer vision or perceptual hashing. Both options are too complex to describe in a single Stack Overflow answer.
If they were simply -codec copy'd, the timestamps should be as they were in the original file. If they weren't, ffmpeg is not the tool for the job; in that case, you should look into other utilities that can find an exactly matching video and audio frame in both files and take the timestamps from there.
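If the clip really was stream-copied, one way to inspect the container timestamps might be ffprobe (this assumes ffmpeg/ffprobe is installed and on PATH; the file name is the question's example):

import subprocess

# Hedged sketch: print the container start_time and duration of a clip.
def container_times(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=start_time,duration",
         "-of", "default=noprint_wrappers=1", path],
        capture_output=True, text=True, check=True)
    return out.stdout

print(container_times("1234_trimmed.mp4"))
# A clip that kept its original timestamps would show a non-zero start_time;
# a re-encoded clip typically starts at (or near) zero.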
My program searches for the upper and lower values in a .txt file relative to a given input value.
import numpy as np
from collections import OrderedDict

def find_closer():
    file = 'C:/.../CariCBABaru.txt'
    data = np.loadtxt(file)
    x, y = data[:,0], data[:,1]
    print(y)
    for k in range(len(spasi_baru)):
        a = y  # [0, 20.28000631, 49.43579604, 78.59158576, 107.7473755, 136.9031652, 166.0589549, 176.5645474, 195.2147447]
        b = spasi_baru[k]
        # diff_list = []
        diff_dict = OrderedDict()
        if b in a:
            b = input("Number already exists, please enter another number ")
        else:
            for x in a:
                diff = x - b
                if diff < 0:
                    # diff_list.append(diff*(-1))
                    diff_dict[x] = diff*(-1)
                else:
                    # diff_list.append(diff)
                    diff_dict[x] = diff
        #print("diff_dict", diff_dict)
        # print(diff_dict[9])
        sort_dict_keys = sorted(diff_dict.keys())
        #print(sort_dict_keys)
        closer_less = 0
        closer_more = 0
        #cl = []
        #cm = []
        for closer in sort_dict_keys:
            if closer < b:
                closer_less = closer
            else:
                closer_more = closer
                break
        #cl.append(closer_less == len(spasi_baru) - 1)
        #cm.append(closer_more == len(spasi_baru) - 1)
        print(spasi_baru[k], ": lower value=", closer_less, "and upper value =", closer_more)
        data = open('C:/.../Batas.txt','w')
        text = "Spasi baru:{spasi_baru}, File: {closer_less}, line:{closer_more}".format(spasi_baru=spasi_baru[k], closer_less=closer_less, closer_more=closer_more)
        data.write(text)
        data.close()
        print(spasi_baru[k], ": lower value=", closer_less, "and upper value =", closer_more)

find_closer()
The results are shown in the linked image.
Then I want to write these results to a file (txt or csv, no problem) as rows and columns in sequence. But the problem I have is that the file contains just one row, the last value printed in the terminal, like below:
Spasi baru:400, File: 399.3052727, line: 415.037138
Any suggestions to help fix my problem, please? I have been stuck for several hours trying different approaches. I'm using Python 3.7.
The best solution is to open the file in append mode ('a' or 'a+') when you want to keep adding to the same text file.
Instead of doing this:
data = open('C:/.../Batas.txt','w')
Do this:
data = open('C:/.../Batas.txt','a')
or
data = open('C:/.../Batas.txt','a+')
The reason is that you are overwriting the same file over and over inside the loop, so only the last iteration is kept. Look for ways to save files without overwriting them; a short example follows the list of modes below.
‘r’ – Read mode which is used when the file is only being read
‘w’ – Write mode which is used to edit and write new information to the file (any existing files with the same name will be erased when this mode is activated)
‘a’ – Appending mode, which is used to add new data to the end of the file; that is new information is automatically amended to the end
‘r+’ – Special read and write mode, which is used to handle both actions when working with a file
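As a minimal sketch of the append-mode approach (the path is the question's own placeholder, and 'results' stands in for the values computed inside the question's loop):

# Minimal sketch: open the file once, in append mode, and write one line per
# result; a context manager closes the file automatically.
results = [(400, 399.3052727, 415.037138)]   # example row taken from the question's output

with open('C:/.../Batas.txt', 'a') as data:   # 'a' appends instead of overwriting
    for spasi, lower, upper in results:
        data.write("Spasi baru:{}, File: {}, line:{}\n".format(spasi, lower, upper))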
I am using Blender 2.8. I want to import an object into Blender that is made up of a few pieces that aren't connected, split the object up, and export only the largest of the pieces.
So let's say there are 3 pieces in one object, one big and two small. I'm able to turn this object into three objects, each containing one of the pieces. I would like to delete the two smaller objects and keep only the largest one. I'm thinking maybe to somehow find the surface area of the three different objects and keep only the largest while deleting all the others? I'm pretty new to Blender.
bpy.ops.import_mesh.stl(filepath='path/of/file.stl')
bpy.ops.mesh.separate(type='LOOSE')
amount_of_pieces = len(context.selected_objects)
if amount_of_pieces > 1:
    highest_surface_area = 0
    #the rest is pseudocode
    for object in scene:
        if object.area > highest_surface_area:
            highest_surface_area = object.area
        else:
            bpy.ops.object.delete()
bpy.ops.export_mesh.stl(filepath='path/of/new/file.stl')
The steps would be:
import file
break into multiple objects
for safety, get a list of mesh objects
list the surface area of each object
get the max from the list of areas
delete the not biggest objects
export the largest
cleanup
We don't need to use bmesh to get the surface area, the normal mesh data includes polygon.area.
Using list comprehension, we can get most steps into one line each.
import bpy
# import and separate
file = (r'path/of/file.stl')
bpy.ops.import_mesh.stl(filepath= file)
bpy.ops.mesh.separate(type='LOOSE')
# list of mesh objects
mesh_objs = [o for o in bpy.context.scene.objects
             if o.type == 'MESH']
# dict with surface area of each object
obj_areas = {o: sum([f.area for f in o.data.polygons])
             for o in mesh_objs}
# which is biggest
big_obj = max(obj_areas, key=obj_areas.get)
# select and delete not biggest
[o.select_set(o is not big_obj) for o in mesh_objs]
bpy.ops.object.delete(use_global=False, confirm=False)
#export
bpy.ops.export_mesh.stl(filepath= 'path/of/new/file.stl')
# cleanup
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete(use_global=False, confirm=False)
I was able to write code that works for this; however, it is very long and chaotic. I would appreciate any advice on cleaning it up.
import bpy
import os
import bmesh

context = bpy.context
file = (r'path\to\file.stl')
bpy.ops.import_mesh.stl(filepath=file)
fileName = os.path.basename(file)[:-4].capitalize()
bpy.ops.mesh.separate(type='LOOSE')
bpy.ops.object.select_all(action='SELECT')
piece = len(context.selected_objects)
bpy.ops.object.select_all(action='DESELECT')
high = 0

if piece > 1:
    bpy.data.objects[fileName].select_set(True)
    obj = bpy.context.active_object
    bm = bmesh.new()
    bm.from_mesh(obj.data)
    area = sum(f.calc_area() for f in bm.faces)
    high = area
    bm.free()
    bpy.ops.object.select_all(action='DESELECT')
    for x in range(1, piece):
        name = fileName + '.00' + str(x)
        object = bpy.data.objects[name]
        context.view_layer.objects.active = object
        bpy.data.objects[name].select_set(True)
        obj = bpy.context.active_object
        bm = bmesh.new()
        bm.from_mesh(obj.data)
        newArea = sum(f.calc_area() for f in bm.faces)
        bm.free()
        if newArea > high:
            high = newArea
            bpy.ops.object.select_all(action='DESELECT')
        else:
            bpy.ops.object.delete()
        bpy.ops.object.select_all(action='DESELECT')
    if area != high:
        bpy.data.objects[fileName].select_set(True)
        bpy.ops.object.delete()

bpy.ops.export_mesh.stl(filepath='path/to/export/file.stl')
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete(use_global=False, confirm=False)
I am extracting 150 different cell values from 350,000 ASCII raster files (20 kB each). My current code is fine for processing the 150 cell values from hundreds of the ASCII files, but it is very slow when running on the full data set.
I am still learning Python, so are there any obvious inefficiencies in, or suggestions to improve, the code below?
I have tried closing the 'dat' file in the second function; no improvement.
dat = None
First: I have a function which returns the row and column locations from a cartesian grid.
def world2Pixel(gt, x, y):
    ulX = gt[0]
    ulY = gt[3]
    xDist = gt[1]
    yDist = gt[5]
    rtnX = gt[2]
    rtnY = gt[4]
    pixel = int((x - ulX) / xDist)
    line = int((ulY - y) / xDist)
    return (pixel, line)
Second: a function to which I pass lists of 150 'id', 'x' and 'y' values in a for loop. The first function is called within it to extract the cell value, which is appended to a new list. I also have a list of files 'asc_list' and corresponding times in 'date_list'. Please ignore count / enumerate as I use it later, unless it is impeding efficiency.
def asc2series(id, x, y):
    #count = 1
    ls_id = []
    ls_p = []
    ls_d = []
    for n, (asc, date) in enumerate(zip(asc, date_list)):
        dat = gdal.Open(asc_list)
        gt = dat.GetGeoTransform()
        pixel, line = world2Pixel(gt, east, nort)
        band = dat.GetRasterBand(1)
        #dat = None
        value = band.ReadAsArray(pixel, line, 1, 1)[0, 0]
        ls_id.append(id)
        ls_p.append(value)
        ls_d.append(date)
Many thanks
In world2Pixel you are setting rtnX and rtnY, which you don't use.
You probably meant gdal.Open(asc) -- not asc_list.
You could move gt = dat.GetGeoTransform() out of the loop. (Rereading made me realize you can't really.)
You could cache calls to world2Pixel.
You're opening the dat file for each pixel -- you should probably turn the logic around so that each file is opened only once and all the pixels mapped to that file are looked up together (see the sketch after this list).
Benchmark, check the links in this podcast to see how: http://talkpython.fm/episodes/show/28/making-python-fast-profiling-python-code
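Here is an untested sketch of that restructuring, reusing the question's world2Pixel function; 'points' is assumed to be the list of 150 (id, x, y) tuples, and asc_list/date_list are the question's own sequences:

from osgeo import gdal

# Untested sketch: open each raster once, read the band into memory once
# (the files are only ~20 kB), then look up all 150 points in the array.
def asc2series(points, asc_list, date_list):
    ls_id, ls_p, ls_d = [], [], []
    for asc, date in zip(asc_list, date_list):
        dat = gdal.Open(asc)
        gt = dat.GetGeoTransform()
        arr = dat.GetRasterBand(1).ReadAsArray()   # whole band, read once per file
        for pid, x, y in points:
            pixel, line = world2Pixel(gt, x, y)
            ls_id.append(pid)
            ls_p.append(arr[line, pixel])          # in-memory lookup, no extra I/O
            ls_d.append(date)
        dat = None                                  # release the dataset
    return ls_id, ls_p, ls_d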