I split an STL file into regions using ParaView, and traced the method with ParaView's Python trace.
Now I run the traced code as a Python script. It runs without complaint, but it does not save the split mesh as needed. The code follows the trace obtained from ParaView. Below is the snippet of the code where SaveData is used to save the file. How do I save the STL file?
import sys  # needed to append the ParaView library paths
import numpy as np
ParaViewBuildPath = "/home/ParaView-5.7.0/"
sys.path.append(ParaViewBuildPath + "lib/")
sys.path.append(ParaViewBuildPath + "lib/python3.7/site-packages")
sys.path.append(ParaViewBuildPath + "lib/python3.7/site-packages/vtkmodules")
from paraview.simple import *
import vtk
# find source
mesh176_rightstl = FindSource('mesh176_right.stl')
generateSurfaceNormals1 = GenerateSurfaceNormals(Input=mesh176_rightstl)
# Properties modified on generateSurfaceNormals1
generateSurfaceNormals1.FeatureAngle = 15.0
# create a new 'Connectivity'
connectivity1 = Connectivity(Input=generateSurfaceNormals1)
# create a new 'Threshold'
threshold1 = Threshold(Input=connectivity1)
#threshold1.Scalars = ['POINTS', 'RegionId']
# Properties modified on threshold1
threshold1.ThresholdRange = [10.0, 982.0]
# create a new 'Extract Surface'
extractSurface1 = ExtractSurface(Input=threshold1)
# save data
SaveData('surf176.stl', proxy=extractSurface1, FileType='Ascii')
The error I am facing comes from the line "generateSurfaceNormals1":
[paraview ]vtkDemandDrivenPipeline:713 ERR| vtkPVCompositeDataPipeline (0x556f782fe7c0): Input port 0 of algorithm vtkPPolyDataNormals(0x556f7a16b2a0) has 0 connections but is not optional.
How can I overcome this error?
Any leads will be appreciated.
Regards,
Sunag R A.
The error message means that mesh176_rightstl is None, i.e. FindSource did not find anything. Is the source name correct? Is the data correctly loaded?
Because an error is raised, the script stops and SaveData is never called. Its syntax is correct, though.
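To check what FindSource can actually see, paraview.simple can list the sources registered in the session. A minimal sketch (what it prints depends on your session):
from paraview.simple import *
# FindSource matches the exact registration name printed here,
# e.g. 'mesh176_right.stl'
for (name, proxy_id), proxy in GetSources().items():
    print(name)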
Minimal code to test stl writer:
s = Sphere()
SaveData('sphere.stl', proxy=s, FileType='Ascii')
It correctly produces the STL file with ParaView 5.9.
Edit
You should uncomment the line
#threshold1.Scalars = ['POINTS', 'RegionId']
Because the pipeline is not executed until you request its output (e.g. with SaveData), no default array can be found at the time the Threshold is created, so the scalars have to be set explicitly.
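For reference, a minimal sketch of the corrected part of the trace, using the same names and range as in the question:
# Select the RegionId array explicitly: the pipeline has not run yet,
# so Threshold cannot pick a default array on its own.
threshold1 = Threshold(Input=connectivity1)
threshold1.Scalars = ['POINTS', 'RegionId']
threshold1.ThresholdRange = [10.0, 982.0]
extractSurface1 = ExtractSurface(Input=threshold1)
SaveData('surf176.stl', proxy=extractSurface1, FileType='Ascii')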
I'm following this tutorial for depth estimation: https://learnopencv.com/depth-perception-using-stereo-camera-python-c/
Using python3 in a virtual env on my MacBook Pro. I'm running this block of code:
import numpy as np
import cv2
# Check for left and right camera IDs
# These values can change depending on the system
CamL_id = 2 # Camera ID for left camera
CamR_id = 1 # Camera ID for right camera
CamL = cv2.VideoCapture(CamL_id)
CamR = cv2.VideoCapture(CamR_id)
# Reading the mapping values for stereo image rectification
cv_file = cv2.FileStorage("data/stereo_rectify_maps.xml", cv2.FILE_STORAGE_READ)
Left_Stereo_Map_x = cv_file.getNode("Left_Stereo_Map_x").mat()
Left_Stereo_Map_y = cv_file.getNode("Left_Stereo_Map_y").mat()
Right_Stereo_Map_x = cv_file.getNode("Right_Stereo_Map_x").mat()
Right_Stereo_Map_y = cv_file.getNode("Right_Stereo_Map_y").mat()
cv_file.release()
And I keep getting the following error:
[ERROR:0#1.008] global /Users/runner/work/opencv-python/opencv-python/opencv/modules/core/src/persistence.cpp (505) open Can't open file: 'data/stereo_rectify_maps.xml' in read mode
I've tried different approaches like cv2.FileStorage.open(filename, flags) but I get similar errors. I've also tried opening the file in write mode and got a similar error. Any help would be great!
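For what it's worth, that error usually means OpenCV cannot find the file at the relative path, i.e. the current working directory is not what you expect. A quick diagnostic sketch (the data/ layout is an assumption based on the snippet above):
import os
import cv2

# Relative paths are resolved against the current working directory,
# not against the script's location; check where Python is looking.
print(os.getcwd())
print(os.path.exists("data/stereo_rectify_maps.xml"))

# Building the path from the script's own directory is more robust
# (assumes the data/ folder sits next to this script).
script_dir = os.path.dirname(os.path.abspath(__file__))
xml_path = os.path.join(script_dir, "data", "stereo_rectify_maps.xml")
cv_file = cv2.FileStorage(xml_path, cv2.FILE_STORAGE_READ)
print(cv_file.isOpened())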
As you have already seen in TensorFlow object detection, a pipeline.config file is provided for each particular model. There we need to open these config files manually and change parameters by hard-coding them. My question is: how can I read this pipeline.config file from Python and change its parameters at runtime? Please help me with that.
There's an example in the tutorial notebook.
from object_detection.utils import config_util, save_pipeline_config
pipeline_config = 'configs/tf2/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config'
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
configs['model'].ssd.num_classes = 10 # change number of classes
Then, you can save:
save_pipeline_config(configs, 'path/to/save/dir/')
See the source code.
The answer of @Nicolas Gervais seems to be a bit outdated.
This seems to be the fully working version right now:
from object_detection.utils import config_util
pipeline_config = 'configs/tf2/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config'
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
configs['model'].ssd.num_classes = 10 # change number of classes
Afterwards you can save your pipeline.config in the following way:
# Convert dictionary to pipeline_pb2.TrainEvalPipelineConfig to be able to save it
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, 'path/to/save/dir/')
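Note that, if I read the object_detection source correctly, save_pipeline_config expects a directory and writes a file named pipeline.config inside it, rather than writing to the exact path you pass.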
I have an existing vtk file (of an FE mesh, a regular hexahedron mesh) and I would like to add a data set to it that I have in Python. Specifically, I would like to add this NumPy data set to each node and then visualize it in ParaView.
Any tips on how I can get started on this?
VTK (and by extension ParaView) has great NumPy integration facilities. For a wonderful overview of these, please see the blog post series starting with Improved VTK – numpy integration.
The important parts are:
You need to wrap your VTK data object in an adapter class
You add your NumPy array to the wrapped data set
Sketching this out, you can write:
import vtk
from vtk.numpy_interface import dataset_adapter as dsa
dataSet = ...
numpyArray = ...
adaptedDataSet = dsa.WrapDataObject(dataSet)
adaptedDataSet.PointData.append(numpyArray, 'arrayname')
If your data were instead associated with cells rather than points, you would change that last line to
adaptedDataSet.CellData.append(numpyArray, 'arrayname')
You'll have to be sure that the order of the data in the NumPy array matches the order of points in the hexahedron mesh.
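A cheap guard against a silent mismatch, reusing dataSet and numpyArray from the sketch above:
# One value per mesh point, in the same order the points are stored.
assert numpyArray.shape[0] == dataSet.GetNumberOfPoints()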
Now, how do you do this in ParaView? You can add a Programmable Filter. The Python environment in which the script set on the Programmable Filter is executed already does this wrapping for you, so you can simplify the script above to:
# Shallow copy the input data to the output
output.VTKObject.ShallowCopy(inputs[0].VTKObject)
# Define the numpy array
numpyArray = ...
# Add the numpy array as a point data set
output.PointData.append(numpyArray, 'arrayName')
In the script above, output is a wrapped copy of the dataset produced by the Programmable Filter, saving you from having to do the wrapping manually. You do need to shallow copy the input object to the output as the script shows.
Thanks for your assistance. Here is how I solved my problem:
import vtk
from vtk.numpy_interface import dataset_adapter as dsa
# Read in base vtk
fileName = "Original.vtk"
reader = vtk.vtkUnstructuredGridReader()
reader.SetFileName(fileName)
reader.Update()
mesh = reader.GetOutput()
# Add data set and write VTK file
meshNew = dsa.WrapDataObject(mesh)
meshNew.PointData.append(NewDataSet, "new data")  # NewDataSet is the NumPy array to attach
writer = vtk.vtkUnstructuredGridWriter()
writer.SetFileName("New.vtk")
writer.SetInputData(meshNew.VTKObject)
writer.Write()
I have a huge grid in *.pvd format. I would like to ensure that some cell size specifications have been respected when building said grid. To do so, I need a cell data array with (dx, dy, dz) for each cell.
I first tried to check this in Paraview with very little success. Then I resolved to export the mesh in various format (vtk, vtu, ex2) and import things into python using the vtk module, as in the code below. Unfortunately, the size of the mesh forbids it and I get various error messages stating "Unable to allocate n cells of size x".
import vtk
reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName("my_mesh.vtu")
reader.Update()
Finally, in Paraview there is a python-shell that allows me to open the grid file in either pvd or vtk format:
>>> from paraview.simple import *
>>> my_vtk = OpenDataFile("my_mesh.vtk")
>>> print dir(my_vtk)
Despite browsing the methods and attributes of this reader object, I remain clueless about where to fetch any geometry information about the grid. I also browsed the simple module documentation and I can't really wrap my head around it.
So how can one retrieve information regarding the geometry of cells from a paraview.servermanager.LegacyVTKReader object?
Any clue about how to achieve this through the ParaView GUI, or any kludge to load the VTK object into Python's vtk module despite the memory issue, is also very welcome. Sorry for such a hazy question, but I don't really know where to get started...
You can use GetClientSideObject() (see here) to get a VTK object in the Paraview Python shell. After that you can use all the regular VTK Python functions. For example, you can write the following in the Paraview Python shell
>>> from paraview.simple import *
>>> currentSelection = GetActiveSource()
>>> readerObj = currentSelection.GetClientSideObject()
>>> unstructgrid = readerObj.GetOutput()
>>> firstCell = unstructgrid.GetCell(0)
>>> cellPoints = firstCell.GetPoints()
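Continuing that snippet, the axis-aligned extents of a cell can be read straight from its bounds (a sketch; this matches the hexahedral cells described in the question):
>>> xmin, xmax, ymin, ymax, zmin, zmax = firstCell.GetBounds()
>>> dx, dy, dz = xmax - xmin, ymax - ymin, zmax - zmin
>>> print(dx, dy, dz)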
Alternatively, you can use Programmable Filter in ParaView. This allows access to full VTK python module and even NumPy or other modules. You can enter following script in the script window of the programmable filter:
import vtk as v
import numpy as np

inp = self.GetUnstructuredGridInput()
cells = inp.GetCells()
cells.InitTraversal()
cellPtIds = v.vtkIdList()

lenArr = v.vtkDoubleArray()
lenArr.SetNumberOfComponents(3)
lenArr.SetName('CellSize')

while cells.GetNextCell(cellPtIds):
    pts = []
    for i in range(cellPtIds.GetNumberOfIds()):
        ptCoords = inp.GetPoint(cellPtIds.GetId(i))
        pts.append(ptCoords)
    pts = np.array(pts)
    dx = np.max(pts[:, 0]) - np.min(pts[:, 0])
    dy = np.max(pts[:, 1]) - np.min(pts[:, 1])
    dz = np.max(pts[:, 2]) - np.min(pts[:, 2])
    lenArr.InsertNextTuple3(dx, dy, dz)

out = self.GetUnstructuredGridOutput()
out.ShallowCopy(inp)
out.GetCellData().AddArray(lenArr)
In ParaView, when you select the ProgrammableFilter1 item in your pipeline, a new cell data array will be available from the drop-down. You can modify the script above to save the data to a file and analyze it externally.
This information is visible in the Information Tab.
I'm trying to create an LMDB file that contains all of my database images (in order to train a CNN).
This is my 'test code', that I took from here:
import numpy as np
import lmdb
import caffe
import cv2
import glob

N = 18

# Let's pretend this is interesting data
X = np.zeros((N, 1, 32, 32), dtype=np.uint8)
y = np.zeros(N, dtype=np.int64)

# We need to prepare the database for the size. We'll set it 10 times
# greater than what we theoretically need. There is little drawback to
# setting this too big. If you still run into problems after raising
# this, you might want to try saving fewer entries in a single
# transaction.
map_size = X.nbytes * 10

train_data = [img for img in glob.glob("/home/roishik/Desktop/Thesis/Code/cafe_cnn/third/code/train_images/*png")]

for i, img_path in enumerate(train_data):
    img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    X[i] = img
    y[i] = i % 2

env = lmdb.open('train', map_size=map_size)
print(X)
print(y)

with env.begin(write=True) as txn:
    # txn is a Transaction object
    for i in range(N):
        datum = caffe.proto.caffe_pb2.Datum()
        datum.channels = X.shape[1]
        datum.height = X.shape[2]
        datum.width = X.shape[3]
        datum.data = X[i].tobytes()  # or .tostring() if numpy < 1.9
        print('a ' + str(X[i]))
        datum.label = int(y[i])
        print('b ' + str(datum.label))
        str_id = '{:08}'.format(i)
        txn.put(str_id.encode('ascii'), datum.SerializeToString())
As you can see, I specified random binary labels (0 or 1, for even or odd, respectively). Before creating a much larger LMDB file I want to make sure that I'm doing it the right way.
After creating this file I wanted to look into it and check that it's OK, but I couldn't: the file didn't open properly using Python, Access 2016, or an .mdb reader (Linux Ubuntu software). My problems are:
I don't understand what this code is doing. What is str_id? What is X[i].tobytes? What does the last line do?
After I ran the code, I got 2 files: 'data.mdb' and 'lock.mdb'. What are those two? Maybe those 2 files are the reason why I can't open the database?
Thanks a lot, really appreciate your help!
str_id is the internal key of the data set (e.g. one image) used inside the LMDB. Here it's derived from the sequence number i.
tobytes ... here, let me search that for you. This overall process, through the end of the loop, converts the data set (datum) to the LMDB format, and then copies that binary representation straight to the file. tobytes and SerializeToString are the critical methods that transfer the bit pattern as-is.
data.mdb is the relatively huge data file, containing all of these bit sequences in a readily recoverable form. In other words, it's not blocking your DB access, because it is the database.
lock.mdb is the record-level lock file: each datum gets appropriately locked (fully or read-only) during any read or write.
That should explain the open questions. lock.mdb will not block opening the database; it operates only during access operations. Check your file permissions. Check your user identity as well: did the LMDB creation run as root, perhaps, and not give you read permissions? Have you tried opening it read-only with a simple-minded editor, such as vi or wordpad?
I hope this gets you moving toward a solution.
You can use the mdb_dump tool to inspect the contents of the database.
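If you would rather verify from Python than install the LMDB CLI tools, a minimal read-back sketch (assuming the 'train' directory created by the code above):
import lmdb
import numpy as np
import caffe

# Open the LMDB created above in read-only mode and walk its entries.
env = lmdb.open('train', readonly=True)
with env.begin() as txn:
    for key, value in txn.cursor():
        datum = caffe.proto.caffe_pb2.Datum()
        datum.ParseFromString(value)
        img = np.frombuffer(datum.data, dtype=np.uint8)
        img = img.reshape(datum.channels, datum.height, datum.width)
        print(key, datum.label, img.shape)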