I have a .vtu file representing a mesh, which I read through vtkXMLUnstructuredGridReader. I then create a numpy array (nbOfPoints x 3) in which I store the mesh vertex coordinates, which I'll call meshArray.
I also have a column array (nbOfPoints x 1), which I'll call brightnessArray, representing a certain property I want to assign to the vertices of the mesh, so that each vertex corresponds to a scalar value: for example, the element meshArray[0] corresponds to brightnessArray[0], and so on.
How can I do this?
Is it then possible to interpolate the values at the vertices of the mesh to obtain a smooth variation of the property I have set, in order to visualize it in ParaView?
Thank you.
Simon
Here is what you need to do:
Write a Python Programmable Source to read your numpy data as a vtkUnstructuredGrid.
Here are a few examples of programmable sources:
https://www.paraview.org/Wiki/ParaView/Simple_ParaView_3_Python_Filters
https://www.paraview.org/Wiki/Python_Programmable_Filter
Read your .vtu dataset
Use a "Ressample with Dataset" filter on your python programmable source output and select your dataset as "source"
And you're done.
The hardest part is writing the programmable source script.
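For illustration, here is a minimal sketch of such a script (with the source's Output DataSet Type set to vtkUnstructuredGrid); the .npy file names are hypothetical placeholders for wherever the two arrays actually come from:

import numpy as np
from vtk.numpy_interface import dataset_adapter as dsa

# Hypothetical files holding meshArray (N x 3) and brightnessArray (N x 1).
meshArray = np.load('meshArray.npy')
brightnessArray = np.load('brightnessArray.npy').ravel()

output = dsa.WrapDataObject(self.GetOutput())
output.Points = meshArray  # one point per mesh vertex
output.PointData.append(brightnessArray, 'brightness')  # one scalar per vertex

The 'brightness' point array then shows up in ParaView and can be carried onto the mesh by the resampling step described above.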
I'm working on a computer science project, a CNC plotter. Basically, all of the methods I see for generating G-code use Inkscape. I have already written software to convert normal images to black-and-white, edges-only images, and I have pulled the coordinates from the image. Is there any way X,Y coordinates can be used to generate G-code, or would I have to use Inkscape?
G-code is just a set of instructions to which you can pass arguments.
The machine executes the G-code commands one by one and interprets them to move its motors or perform regulation, depending on its firmware.
So if you want to create G-code in Python, just create a text file and append commands to it.
You first need the list of G-code instructions your machine supports.
For example in Marlin:
G1 X90.6 Y13.8 ; move to 90.6mm on the X axis and 13.8mm on the Y axis
To create this file in Python:
positions = [  # Get your data or format it like this:
    [90.6, 13.8],  # point 1: [x, y]
    [10.6, 3.98],
]

with open("myGCode.gcode", "w") as f:
    for x, y in positions:
        f.write(f"G1 X{x} Y{y} ;\n")
Contents of the created file:
G1 X90.6 Y13.8 ;
G1 X10.6 Y3.98 ;
It really depends on the machine and its controller, but most of the time a linear interpolation command like G1 (or G01) only needs to be specified once, e.g.
G01 X1.0 Y2.0;
Linear interpolation is then already enabled, so subsequent lines can just be
X1.0 Y3.0;
...
up to the point where you want to go back to rapid movement (G0/G00) or switch to circular interpolation (G02/G03). Even then, bare coordinates are usually enough after switching to the specific interpolation mode once.
That said, I assume this is for simple milling; more recent mills (I was trained on Haas machines) have some fancy pocketing functions where you specify only a few key points of a contour and the rest can more or less be deduced mathematically.
It would be interesting to see your program for getting a contour out of a photo.
But specifying the type of interpolation before each set of coordinates is also fine; it just makes the code slightly harder to read.
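As a small sketch of that modal style (building on the Python snippet above; Marlin-style syntax assumed):

positions = [[90.6, 13.8], [10.6, 3.98], [5.0, 7.5]]

with open("myGCode.gcode", "w") as f:
    for i, (x, y) in enumerate(positions):
        # Emit G01 only once; it stays active (modal) for the following moves.
        prefix = "G01 " if i == 0 else ""
        f.write(f"{prefix}X{x} Y{y};\n")

This writes G01 on the first line only, and bare coordinates afterwards.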
I have an xarray object containing MODIS data imported from HDF4 format. The structure of the Dataset is something like the example below, where the spectral bands are stored as data variables; each band is a different variable.
import numpy as np
import xarray as xr

# Create an example Dataset with 5 variables (let's pretend each variable is a band)
ds = xr.Dataset(
    {var: (("band", "x", "y"), np.random.rand(1, 10, 10)) for var in "abcde"}
)
If the bands were stored in a single array, it would be easy to explore the data and plot each band using .plot and the built-in facet-grid tools. In this case, however, I have to plot each layer individually or use a loop. Is there a quick way to automate grabbing some number of those variables or bands (say b, c, and e as an example) and plotting them?
In some cases I may need to plot an RGB image, so I'd do something like this:
# This is one way of combining several bands into a single array object, however
# it's very manual: I need to specify each band in the concat statement. But it
# does achieve the output that I want.
new_object = xr.concat([ds.b,
                        ds.c,
                        ds.e],
                       dim="band")

# Now I can plot the data
new_object.plot.imshow()
My goal is to automate the process of selecting x bands (or x number of data variables) for both plotting/visualization and analysis. I don't want to hard-code each band into a concat() call as I did above. In the example above I wanted to plot an RGB image; in other cases I'd want to visually explore each band before additional processing, or just extract a few bands for other types of calculations and analysis.
Thank you for any direction!!
I think xarray.Dataset.to_array() may be what you are looking for. For example, for the RGB image, I think something like the following would work:
ds.squeeze("band").to_array(dim="band").sel(band=["b", "c", "e"]).plot.imshow()
You could also facet over the "band" dimension in that case:
ds.squeeze("band").to_array(dim="band").plot(col="band")
I'm using xarray.open_mfdataset() to open and combine 8 netcdf files (output from model simulations with different settings) without loading them into memory. This works great if I specify concat_dim='run_number', which adds run_number as a dimension without coordinates and just fills it with values from 0 to 7.
The problem is that now I don't know which run_number belongs to which simulation. The original netCDF files all have attributes that help me distinguish them, e.g. identifyer=1, identifyer=2, etc., but these are not recognized by xarray, even if I specify concat_dim='identifyer' (perhaps because there are many attributes?).
Is there any way in which I can tell xarray that it has to use this attribute as concat_dim? Or alternatively, in which order does xarray read the input files, so that I can infer which value of the new dimension corresponds to which simulation?
Xarray will use the values of existing scalar coordinates to label result coordinates, but it doesn't look at attributes. Only looking at metadata found in coordinates is a general theme in xarray: we leave attrs to user code only. So this should work if you assign a scalar 'identifyer' coordinate to each dataset, e.g., using the preprocess argument to open_mfdataset:
def add_id(ds):
    ds.coords['identifyer'] = ds.attrs['identifyer']
    return ds

xarray.open_mfdataset(path, preprocess=add_id)
Alternatively, you can either pass an explicit list of filenames to open_mfdataset or rely on the fact that open_mfdataset sorts the glob of filenames before combining them: the datasets will always be combined in lexicographic order of their names.
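A minimal sketch of that file-ordering route (the glob pattern is hypothetical, and newer xarray versions also want combine='nested' alongside concat_dim):

import glob
import xarray as xr

# Sorting the list explicitly makes the run_number -> simulation mapping unambiguous.
files = sorted(glob.glob('run_*.nc'))
ds = xr.open_mfdataset(files, concat_dim='run_number', combine='nested')
print(files)  # index i along run_number corresponds to files[i]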
I wanted to extract the stress on the top surface of my model at each node, but it can't be done the normal way. When I use this script:
odb = visualization.openOdb('My.odb')
frame = odb.steps['AStep'].frames[-1]
dispNode = odb.rootAssembly.nodeSets['UPPER']
STRESS = frame.fieldOutputs['S'].getSubset(region=dispNode).values
COORD = frame.fieldOutputs['COORD'].getSubset(region=dispNode).values
print(STRESS)
print(COORD[1].data)
STRESS returns an empty array.
How can I edit my script to get the stress values and their corresponding coordinates?
Your code can't work if you only calculated your stress values at the integration points: there are simply no values at the nodes, so requesting values at the nodes gives an empty array.
This is how it should work:
Extrapolate your integration point results to the nodes
Average your ElementNodal values. This is how that works: https://stackoverflow.com/a/43175485/4045774
Extract your node coordinates (deformed or undeformed)
Get the node labels from your node set
With the node labels from your node set, find the corresponding unique nodal values: https://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html
If you need a small code example, feel free to ask.
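As a rough, untested sketch of those steps (step and set names are taken from the question; proper averaging of the repeated ElementNodal values is left to the linked answer):

import numpy as np
from odbAccess import openOdb
from abaqusConstants import ELEMENT_NODAL

odb = openOdb('My.odb')
frame = odb.steps['AStep'].frames[-1]

# 1) Extrapolate integration-point stresses to the nodes (ElementNodal position).
stress = frame.fieldOutputs['S'].getSubset(position=ELEMENT_NODAL)
labels = np.array([v.nodeLabel for v in stress.values])
mises = np.array([v.mises for v in stress.values])  # still contains repeated labels

# 2) Node labels and undeformed coordinates of the node set.
upper = odb.rootAssembly.nodeSets['UPPER']
setLabels = np.array([n.label for inst in upper.nodes for n in inst])
coords = np.array([n.coordinates for inst in upper.nodes for n in inst])

# 3) Keep only the ElementNodal values whose node labels are in the set.
mask = np.in1d(labels, setLabels)
print(labels[mask], mises[mask])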
I want to read an SDF file (containing many molecules) and return the weighted adjacency matrix of each molecule. Atoms should be treated as vertices and bonds as edges. If vertices i and j are connected by a single, double, or triple bond, the corresponding entries in the adjacency matrix should be 1, 2, and 3 respectively. I further need to obtain, for each vertex, a distance vector listing the number of vertices at each distance.
Are there any Python packages available to do this?
I would recommend Pybel for reading and manipulating SDF files in Python. To get the bonding information, you will probably need to also use the more full-featured but less pythonic openbabel module, which can be used in concert with Pybel (as pybel.ob).
To start with, you would write something like this:
import pybel

for mol in pybel.readfile('sdf', 'many_molecules.sdf'):
    for atom in mol:
        coords = atom.coords
        for neighbor in pybel.ob.OBAtomAtomIter(atom.OBAtom):
            neighbor_coords = pybel.Atom(neighbor).coords
See http://code.google.com/p/cinfony/. However, for your exact problem you will need to consult the documentation.
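As a sketch of the adjacency-matrix part (Open Babel atom indices are 1-based, hence the -1 offsets; single/double/triple bonds map to bond orders 1/2/3):

import numpy as np
import pybel

for mol in pybel.readfile('sdf', 'many_molecules.sdf'):
    n = len(mol.atoms)
    adj = np.zeros((n, n), dtype=int)
    for bond in pybel.ob.OBMolBondIter(mol.OBMol):
        i = bond.GetBeginAtomIdx() - 1
        j = bond.GetEndAtomIdx() - 1
        adj[i, j] = adj[j, i] = bond.GetBondOrder()
    print(adj)

The per-vertex distance counts can then be derived from adj with a breadth-first search (or, for example, scipy.sparse.csgraph.shortest_path) by counting, for each vertex, how many vertices sit at each graph distance.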