STRESS array is empty in Abaqus Python scripting

I want to extract the stress at each node on the top surface of my model, but it does not work as expected. When I use this script:
odb = visualization.openOdb('My.odb')
frame = odb.steps['AStep'].frames[-1]
dispNode = odb.rootAssembly.nodeSets['UPPER']
STRESS = frame.fieldOutputs['S'].getSubset(region=dispNode).values
COORD = frame.fieldOutputs['COORD'].getSubset(region=dispNode).values
print(STRESS)
print(COORD[1].data)
STRESS comes back as an empty array.
How can I edit my script to get the stresses and their corresponding coordinates?

Your code can't work if your stress values were only calculated at the integration points. There are simply no values at the nodes, so requesting values at the nodes returns an empty array.
This is how it should work:
1. Extrapolate your integration point results to the nodes.
2. Average your element-nodal values. This is how that works: https://stackoverflow.com/a/43175485/4045774
3. Extract your node coordinates (deformed or undeformed).
4. Get the node labels from your point set.
5. With the node labels from your point set, find the corresponding unique nodal values: https://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html
Here is a small example code.
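A minimal sketch, run with "abaqus python": the names 'My.odb', 'AStep' and 'UPPER' are taken from the question, COORD is assumed to have been requested as a nodal field output, and the per-node averaging follows the linked answer.

import numpy as np
from odbAccess import openOdb
from abaqusConstants import ELEMENT_NODAL

odb = openOdb('My.odb')
frame = odb.steps['AStep'].frames[-1]
nodeSet = odb.rootAssembly.nodeSets['UPPER']

# 1) Extrapolate the integration point results to the nodes. A node shared
#    by several elements shows up once per element, so the values must be
#    averaged per node label afterwards.
stress = frame.fieldOutputs['S'].getSubset(region=nodeSet,
                                           position=ELEMENT_NODAL)
labels = np.array([v.nodeLabel for v in stress.values])
mises = np.array([v.mises for v in stress.values])  # or v.data for components

# 2) Average the element-nodal contributions per node (np.unique sorts the labels).
uniqueLabels = np.unique(labels)
avgStress = np.array([mises[labels == lbl].mean() for lbl in uniqueLabels])

# 3)-5) Nodal coordinates, matched to the averaged values via the node labels.
coord = frame.fieldOutputs['COORD'].getSubset(region=nodeSet)
coordLabels = np.array([v.nodeLabel for v in coord.values])
coords = np.array([v.data for v in coord.values])
order = np.argsort(coordLabels)  # sort so the rows line up with uniqueLabels
mask = np.in1d(coordLabels[order], uniqueLabels)

print(avgStress)
print(coords[order][mask])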

Related

Calculating the adjacency matrix and finding nearest neighbors for different node type/classes

Before I describe my problem I'll summarise what I think I'm looking for. I think I need a method for nearest-neighbor searches that is restricted by node type in Python (in my case a node represents an atom, and the node type represents which element the atom is), so that only the nearest neighbors of a given type are returned. Maybe I am wording my problem incorrectly; I haven't been able to find any existing methods for this.
I am writing some ring statistics code to find different types of rings in molecular dynamics simulation data. The input data structure is a big array of atom IDs, atom types, and XYZ positions.
At the moment I only consider single-element systems (for example graphene, where only carbon atoms are present), so each node is treated as the same type when finding its nearest neighbors and calculating the adjacency matrix.
For this, I am using scipy.spatial's KDTree to find all atoms within the bond length, r, of any given atom. If an atom is within radius r of a given atom, I consider it connected, and I populate and update an adjacency dictionary accordingly.
from scipy.spatial import KDTree

def create_adjacency_dict(data, r, leaf_size=5, box_size=None):
    tree = KDTree(data, leafsize=leaf_size, boxsize=box_size)
    # Find the neighbours within radius r of each point.
    all_nn_indices = tree.query_ball_point(data, r, workers=5)
    adj_dict = {}
    for count, item in enumerate(all_nn_indices):
        adj_dict[count] = item  # Populate the adjacency dictionary
    for node, nodes in adj_dict.items():
        if node in nodes:
            nodes.remove(node)  # Remove self-connections
    adj_dict = {k: set(v) for k, v in adj_dict.items()}
    return adj_dict
I would like to expand the code to deal with multi-species systems, for example AB2 or AB2C4 (where A, B, and C represent different atomic species). However, I am struggling to figure out a nice way to do this.
A
/ \
B B
The obvious method would be a brute-force Euclidean approach. My idea is to input the bond types for a molecule: for AB2 (shown above), you would input something like AB to indicate the type of bond to consider, together with the respective bond length. Then loop over each atom, find the distance to every other atom and, for this AB2 example, consider an atom of type A connected to an atom of type B whenever they are within the bond length, populating the adjacency matrix accordingly. However, I'd like to be able to use the code on large datasets of 50,000+ atoms, so this method seems wasteful.
I suppose I could still use my current method, but just search for, say, the 10 nearest neighbors of a given atom and then do a Euclidean check for each atom pair, following the same approach as above. It still seems like a better method should already exist, though.
Do better methods already exist for this type of problem, i.e. finding nearest neighbors restricted by node type? Or maybe someone knows a more correct wording for my problem, which I think is part of my issue here.
"Then search the data."
This sounds like that old cartoon where someone points to a humorously complex diagram with a tiny label in the middle that says "Here a miracle happens".
Seriously, I am guessing that this searching is what you need to optimize (you do not exactly say).
In turn, this suggests that you are doing a linear search through every atom and calculating the distance to each. Could it be so!?
There is a standard answer for this problem, called an octree.
https://en.wikipedia.org/wiki/Octree
A Netflix TV miniseries, 'The Billion Dollar Code', dramatizes the advantages of this approach: https://www.netflix.com/title/81074012
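For what it's worth, the same spatial-index idea also works with the k-d tree the question already uses: build one tree per atom type and query only the type pairs that can bond. A minimal sketch; the function name typed_adjacency and the bond_lengths format are made up for illustration.

import numpy as np
from scipy.spatial import KDTree

def typed_adjacency(positions, types, bond_lengths):
    """positions: (N, 3) array of coordinates; types: length-N sequence of
    type labels; bond_lengths: dict mapping a (typeA, typeB) tuple to the
    bond length for that pair."""
    positions = np.asarray(positions)
    types = np.asarray(types)
    indices = {t: np.flatnonzero(types == t) for t in np.unique(types)}
    trees = {t: KDTree(positions[idx]) for t, idx in indices.items()}

    adj = {i: set() for i in range(len(positions))}
    for (ta, tb), r in bond_lengths.items():
        # For each type-ta atom, all type-tb atoms within distance r.
        neighbours = trees[ta].query_ball_tree(trees[tb], r)
        for a_local, b_locals in enumerate(neighbours):
            a = indices[ta][a_local]
            for b_local in b_locals:
                b = indices[tb][b_local]
                if a != b:  # skip self-matches for same-type pairs
                    adj[a].add(b)
                    adj[b].add(a)
    return adj

# e.g. for an AB2 system with a hypothetical 1.5 Angstrom A-B bond:
# adj = typed_adjacency(xyz, atom_types, {("A", "B"): 1.5})

Each per-type tree is small, and only the allowed type pairs are ever compared, so the cost stays close to the single-species KDTree approach instead of the brute-force N^2 search.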

Is there a chain constraint work around for CVXPY?

I keep encountering the same issue while trying to solve an integer programming problem with cvxpy, particularly with the constraints.
Some background on my problem and use case. I am trying to write a program that optimizes cut locations for 3D objects. The goal is to have as few interfaces as possible, but there is a constraint that each section can only have a certain maximum length.

To visualize, you could picture a tree. If you cut at the bottom, you only have to make one large cut, but if the tree is longer than the maximum allowed length (if you needed to move it on a trailer of a certain length, for example) you would need to make one or several more cuts along the tree. As you go further up, it is likely that in addition to the main stem of the tree, you would need to cut some smaller side branches along the same horizontal plane.

I have written a program that outputs the number of interfaces (or cuts) needed at many evenly spaced horizontal planes along the height of an object. Now I am trying to pass that data to a new piece of code that performs an integer programming optimization to determine the best location(s) to cut the tree, treating each of the horizontal cutting planes as either active or inactive.
Below is my code:
#Create ideal configuration to solve for
config = cp.Variable(layer_split_number, boolean=True)
#Create objective
objective = sum(config*layer_islands_data[0])
problem = cp.Problem(cp.Minimize(objective),[layer_height_constraint(layer_split_number,layer_height,config) <= ChunkingParameters.max_reach_z])
#solve
problem.solve(solver = cp.GLPK_MI)
The layer height constraint function:
def layer_height_constraint(layer_split_number, layer_height, config):
    # create an array of the absolute height (relative to the ground) of each layer
    layer_height_array = (np.array(range(1, layer_split_number + 1)) + 1) * layer_height
    # set inactive cuts to 0
    active_heights = layer_height_array * config
    # filter out all 0s
    active_heights_trim = active_heights[active_heights != 0]
    # insert the top and bottom values
    active_heights_trim = np.append(active_heights_trim, [(layer_split_number + 1) * layer_height])
    active_heights_trim = np.insert(active_heights_trim, 0, 0)
    # take the difference between active cuts to find the section lengths
    active_heights_diff = np.diff(active_heights_trim)
    # find the maximum of those differences
    max_height = max(active_heights_diff)
    return max_height
With this setup, I get the following error:
Cannot evaluate the truth value of a constraint or chain constraints, e.g., 1 >= x >= 0.
I know that the two problem spots are the use of Python's 'max' function in the last step, and the middle step where I filter out the 0s in the array (because this introduces another equality of sorts). However, I can't really think of another way to solve this or to set up the constraints differently. Is it possible to have cvxpy just accept a value into the constraint? My function is set up to output a single maximum distance value for a given configuration, so to me it would make sense if I could just feed it the configuration being tried (an array of 0s and 1s representing inactive and active cuts respectively) and have the function return a result that can then be compared to the maximum allowed distance. However, I'm pretty sure IP solvers are a bit more complex than just running a bunch of iterations, but I don't really know.
Any help or ideas would be greatly appreciated. I have tried an exhaustive search of the solution space, but with 10 or even 50+ cuts an exhaustive search is super inefficient: I would need to try 2^n combinations for n potential cuts.
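One possible way around the error, sketched here with made-up numbers (n, layer_height, max_len, and cost are stand-ins for the question's data): the constraint must be a DCP expression in config, not a Python function evaluated on config's value. The "no section longer than max_len" condition can instead be encoded linearly: every window of cut-plane positions spanning more than the maximum length must contain at least one active cut.

import numpy as np
import cvxpy as cp

# Stand-in data for layer_split_number, layer_height, max_reach_z and the
# per-plane interface counts from the question.
n = 10
layer_height = 0.5
max_len = 1.6
rng = np.random.default_rng(0)
cost = rng.random(n)

heights = np.arange(1, n + 1) * layer_height            # candidate planes
positions = np.concatenate(([0.0], heights,             # bottom and top count
                            [(n + 1) * layer_height]))  # as always "cut"

config = cp.Variable(n, boolean=True)
constraints = []
for i in range(len(positions)):
    for k in range(i + 1, len(positions)):
        if positions[k] - positions[i] > max_len:
            # At least one active cut strictly between these two positions.
            inside = np.flatnonzero((heights > positions[i]) &
                                    (heights < positions[k]))
            if inside.size == 0:
                raise ValueError("infeasible: a gap is forced to exceed max_len")
            constraints.append(cp.sum(config[inside]) >= 1)
            break  # wider windows are implied by this minimal one

problem = cp.Problem(cp.Minimize(cost @ config), constraints)
problem.solve(solver=cp.GLPK_MI)  # any mixed-integer-capable solver works
print(config.value)

This keeps the whole model linear, so the solver never needs to evaluate the truth value of an expression, and no max or filtering appears inside the constraints.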

Extracting random nodes in a graph using an attribute

I have a networkX graph where every node has an attribute.
I need to extract nodes based on a numerical attribute taking values in [0, inf) in order to create edges.
I tried using np.random.choice(G.nodes(), p=p) with p = attribute / (sum of the attributes in the graph).
The problem is that every time I extract a node to create an edge, its attribute changes (say the attribute increases by 1), so I need to update all the probabilities, because the sum also increases by 1.
For example, I could have a graph where G.nodes(data=True) gives {1: {"att": 10}, 2: {"att": 5}, 3: {"att": 2}}, so p = [10/17, 5/17, 2/17].
If I extract node 1 at the first extraction, my graph becomes {1: {"att": 11}, 2: {"att": 5}, 3: {"att": 2}} and p = [11/18, 5/18, 2/18].
Now, because I have more than a thousand graphs, and for each of them a loop that creates 50,000 edges, it is not computationally feasible to recompute all the probabilities every time I create an edge.
Is there a way to use just the node's attribute, or to avoid recalculating the probabilities every time?
Using numpy arrays I have done this:
import networkx as nx
import numpy as np

G = nx.Graph()
G.add_nodes_from([1, 2, 3])
G.nodes[1]["att"] = 10
G.nodes[2]["att"] = 5
G.nodes[3]["att"] = 2

att = {}
for i in G.nodes():
    att[i] = G.nodes[i]["att"]
weights = np.fromiter(att.values(), dtype="float")
extracted = np.random.choice(list(G.nodes()), p=weights / weights.sum())
When a node is extracted (for example node 1), G.nodes[1]["att"] += 1, and nothing else should need to be updated.
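Since the update is always att += 1 for the extracted node, this is exactly preferential attachment, and the classic urn trick avoids recomputing p entirely: keep a list in which every node appears att times, draw uniformly from it, and append the drawn node after each draw. A sketch along those lines, reusing the toy graph from the question:

import random
import networkx as nx

G = nx.Graph()
G.add_nodes_from([1, 2, 3])
G.nodes[1]["att"] = 10
G.nodes[2]["att"] = 5
G.nodes[3]["att"] = 2

# Each node occurs att times, so a uniform draw is an att-weighted draw.
urn = [n for n in G.nodes() for _ in range(G.nodes[n]["att"])]

for _ in range(50000):
    extracted = random.choice(urn)  # O(1), always up to date
    G.nodes[extracted]["att"] += 1
    urn.append(extracted)           # one extra copy keeps the urn consistent

Each draw and each update is O(1); the only cost is the memory for the urn list, which grows by one entry per created edge.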

Add custom property to vtkXMLUnstructuredGrid using python

I have a .vtu file representing a mesh which I read through vtkXMLUnstructuredGridReader. Then I create a numpy array (nbOfPoints x 3) in which I store the mesh vertex coordinates, which I'll call meshArray.
I also have a column array (nbOfPoints x 1), which I'll call brightnessArray, representing a property I want to assign to the vertices of meshArray, so that each vertex has a scalar value: meshArray[0] corresponds to brightnessArray[0], and so on.
How can I do this?
Is it then possible to interpolate the values at the vertices of the mesh, to obtain a smooth variation of the property I set, in order to visualize it in ParaView?
Thank you.
Simon
Here is what you need to do:
1. Write a Python programmable source to read your numpy data as a vtkUnstructuredGrid. Here are a few examples of programmable sources:
https://www.paraview.org/Wiki/ParaView/Simple_ParaView_3_Python_Filters
https://www.paraview.org/Wiki/Python_Programmable_Filter
2. Read your .vtu dataset.
3. Use a "Resample With Dataset" filter on your Python programmable source output and select your dataset as the "source".
And you're done.
The hardest part is writing the programmable source script.
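Alternatively, if the point ordering of meshArray matches the .vtu file (it came from there), a plain VTK script can attach the scalars directly, without a programmable source; ParaView then interpolates point data across cells when rendering. A sketch, with "mesh.vtu" and the array name "brightness" as placeholders:

import numpy as np
import vtk
from vtk.util.numpy_support import numpy_to_vtk

reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName("mesh.vtu")  # placeholder file name
reader.Update()
grid = reader.GetOutput()

# Stand-in for brightnessArray; must be ordered like the grid's points.
brightnessArray = np.random.rand(grid.GetNumberOfPoints())

scalars = numpy_to_vtk(brightnessArray, deep=True)
scalars.SetName("brightness")
grid.GetPointData().AddArray(scalars)

writer = vtk.vtkXMLUnstructuredGridWriter()
writer.SetFileName("mesh_with_brightness.vtu")
writer.SetInputData(grid)
writer.Write()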

Implementing Disjoint Set Data Structure in Python

I'm working on a small project involving clusters, and I think the code given here https://www.ics.uci.edu/~eppstein/PADS/UnionFind.py might be a good starting point for my work. However, I have come across a few difficulties applying it to my work:
If I make a set containing all my clusters, cluster = set([0, 1, 2, 3, 4, ..., 99]) (there are 100 points, with the numbers labelling them), and I would like to group the numbers into clusters, do I simply write cluster = UnionFind()? What then is the data type of cluster?
How can I perform the usual set operations on cluster? For instance, I would like to read all the points (which may have been grouped together) in cluster, but typing print cluster results in <__main__.UnionFind instance at 0x00000000082F6408>. I would also like to keep adding new elements to cluster; how do I do that? Do I need to write specific methods for UnionFind()?
How do I know all the members of a group when one of its members is given? For instance, if 0, 1, 3, 4 are grouped together and I call 3, I want it to print 0, 1, 3, 4. How do I do this?
Thanks.
Here's a small sample of code showing how to use the provided UnionFind class.
Initialization
The only way to create a set with the provided class is to look it up (find it), because the class creates a singleton set for a point the first time it fails to find it. You might want to add an explicit initialization method instead.
union_find = UnionFind()
clusters = set([0, 1, 2, 3, 4])
for i in clusters:
    union_find[i]
Union
# Merge clusters 0 and 1
union_find.union(0, 1)
# Add point 2 to the same set
union_find.union(0, 2)
Find
# Get the name (root) of the set containing points 0 and 1
print union_find[0]
print union_find[1]
Getting all Clusters
# print every point and the name of its set
for cluster in union_find:
    print cluster, union_find[cluster]
Note: there is no direct way to get all the points belonging to a given cluster. You can loop over all the points and pick the ones whose set name matches. You might want to modify the given class to support that operation more efficiently.
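For question 3, a small helper (not part of the PADS class) that inverts the structure in one pass, mapping each set name to its members:

from collections import defaultdict

def groups(union_find, points):
    # Map each set name (root) to the list of points it contains.
    members = defaultdict(list)
    for p in points:
        members[union_find[p]].append(p)
    return members

# After the unions above, all the points in 3's group:
# groups(union_find, clusters)[union_find[3]]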
