How to suppress "Setting display..." in Abaqus scripting - python

I have many Abaqus plugins that perform actions in Abaqus CAE through the scripting interface. After many of these actions are executed, Abaqus performs some display refresh procedures that take time. If the models are small and the script does not do too much, that is fine. However, one of my scripts, for example, takes a part and replicates it in the assembly at coordinates specified in a user's CSV file. Sometimes there can be up to 2000 such replications. In that case it takes forever to complete the procedure, and most of the time you only see "Setting display..."
Furthermore, this "Setting display..." message overwrites the script's own progress (milestone) message, so it is difficult to see how far the script has advanced.
Is there any way to suspend this display updating behaviour until the script finishes? Maybe there is a hack where you could redefine the Abaqus update function until the script is done, because according to the manual the only thing you can do is prevent updating of the color scheme, and that does not help at all.
Any tips will be appreciated, thanks!
EDIT: To clarify, I used the following methods on a viewport object, to no avail:
disableRefresh()
disableColorCodeUpdates()
What disableColorCodeUpdates() does is quite clear, and the benefit is apparent when color coding is used in the model viewport. However, I see no difference between using and not using disableRefresh().
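For reference, this is roughly how I applied them (a minimal sketch; the viewport name and the matching enable* counterparts are my assumptions):
vp = session.viewports['Viewport: 1']
vp.disableColorCodeUpdates()
vp.disableRefresh()
try:
    pass  # ... replication loop goes here ...
finally:
    vp.enableRefresh()            # assumed counterparts to the disable* calls
    vp.enableColorCodeUpdates()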
EDIT: Sorry for the long wait, only now have I had a chance to get back to Abaqus. Here is a simple example script that takes a part and places it in the assembly:
from abaqus import *            # provides session, mdb and milestone()
from abaqusConstants import *   # provides symbolic constants such as ON
import random

modelName = 'Model-1'
partName = 'Part-1'
noInst = 1000

# Build a list of random placement coordinates
lists = []
for i in range(noInst):
    lists.append([random.random()*10, random.random()*10, random.random()*10])
totalParts = len(lists)

session.Viewport(name='Viewport: 2', origin=(6.63750028610229, 20.7638893127441),
                 width=335.746887207031, height=149.77685546875)
session.viewports['Viewport: 2'].makeCurrent()
session.viewports['Viewport: 2'].maximize()
session.viewports['Viewport: 1'].restore()

for n, l in enumerate(lists):
    milestone('Replicating parts', 'parts', n+1, totalParts)
    a = mdb.models[modelName].rootAssembly
    p = mdb.models[modelName].parts[partName]
    a.Instance(name='Random_' + '-' + str(n), part=p, dependent=ON)
    a.translate(instanceList=('Random_' + '-' + str(n), ),
                vector=(float(l[0]), float(l[1]), float(l[2])))
If I create the viewport object without specifying a displayed object, the viewport defaults to the object currently displayed in the initial viewport. I noticed that if I switch the module from Assembly to any other, e.g. Part, I gain some speed, but that relies on the part being sufficiently empty. If the model has parts that are large and complex, it is still rather slow on "Setting display...", and in any case my milestone message is overwritten by "Setting display...".

I think I've seen this before in a similar situation, where I was creating elements one-by-one in a GUI CAE session. I could not figure out how to disable the screen refresh, and it was maddeningly slow. There were two workarounds:
1. Use an alternative command, if one exists, that creates many items at once. In my example above, instead of creating each new element one at a time using the Element method, I was able to generate an entire Part and its mesh at once with makePartFromNodesAndElements, which was significantly faster. In your case, it might be possible to do something similar.
2. If you do not need an active GUI, run the script from the system shell: abaqus cae noGui=script.py. You can even pass arguments to the script from the command line interface (see the sketch below).
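For example, a rough sketch of reading the replication coordinates from a CSV file passed on the command line, e.g. abaqus cae noGui=replicate.py -- coords.csv (the script name, the use of the last argument, and the CSV layout are my assumptions):
import sys
import csv

csv_path = sys.argv[-1]              # extra command-line arguments end up in sys.argv
coords = []
with open(csv_path, 'rb') as f:      # Abaqus CAE runs Python 2, hence 'rb'
    for row in csv.reader(f):
        coords.append((float(row[0]), float(row[1]), float(row[2])))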
Another (untested) possibility I just thought of: you may be able to create and switch to a new viewport without specifying the displayed object. Then try your code and see if that speeds things up.
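Something along these lines (untested; the viewport name 'Scratch' and the geometry values are placeholders, and modelName is taken from your script above):
vp = session.Viewport(name='Scratch', origin=(0.0, 0.0), width=200.0, height=150.0)
vp.makeCurrent()
vp.setValues(displayedObject=None)     # display nothing while the instances are created

# ... run the replication loop here ...

vp.setValues(displayedObject=mdb.models[modelName].rootAssembly)   # show the result again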

Related

Enterprise Architect Python Scripting - Add an element to a Diagram

I am trying to write a python script so that I can add an element to a diagram in Sparx Enterprise Architect.
TestDiagEl=TestDiag.DiagramObjects.AddNew("l=10;r=110;t=-20;b=-80","")
TestDiagEl.ElementID=TestEl.ElementID
TestDiagEl.Update
TestDiag.Update
eaRep.ReloadDiagram(TestDiag.DiagramID)
It doesn't seem to work. What am I doing wrong?
EDIT
It seems that @Geert was right; additionally, I didn't add () after Update.
When writing code you should know what each line of code does, and why you are writing it.
In this case
TestDiagEl=TestDiag.DiagramObjects.AddNew("l=10;r=110;t=-20;b=-80","")
Creates a new diagramObject in memory
TestDiagEl.ElementID=TestEl.ElementID
Sets the elementID of the diagramObject to the elementID of my element
TestDiagEl.Update
Save the diagramObject to the database
TestDiag.Update
Save the current diagram in memory to the database
eaRep.ReloadDiagram(TestDiag.DiagramID)
Get the diagramDetails from the database and show them in the GUI
One problem is the TestDiag.Update. Since your diagram in memory doesn't know about the new DiagramObject yet, you are effectively undoing the addition of the new DiagramObject. Remove that line, and all should be OK.
Another problem is the parameter you pass to the DiagramObjects.AddNew method. The top and bottom values should be positive, as you can see in the provided documentation. So:
TestDiagEl=TestDiag.DiagramObjects.AddNew("l=10;r=110;t=20;b=80","")
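Putting the fixes together, something like this should work (names follow the question's example; note that in Python the Update call needs parentheses, as mentioned in the edit above):
TestDiagEl = TestDiag.DiagramObjects.AddNew("l=10;r=110;t=20;b=80", "")  # positive t/b values
TestDiagEl.ElementID = TestEl.ElementID
TestDiagEl.Update()                      # save the new diagram object to the database
eaRep.ReloadDiagram(TestDiag.DiagramID)  # refresh the diagram in the GUI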

In python, what is the optimal way to implement "debug mode" functions which only run conditionally (e.g. if app is in dev mode), but skips otherwise?

I am currently developing (Python 2.7) a device which does fairly complex imaging work with a stereo camera rig on a Raspberry Pi. The functionality isn't particularly relevant to this post, however the fact that we are limited to a Pi's processing power while doing costly operations is very relevant.
The device is governed by one central class, which I'll call BaseClass, where most of the functionality lives; however, it is instantiated through one of two subclasses that extend it: DeviceClass, which is instantiated when the device itself is booting up, and DesktopClass, which is instantiated on a computer and contains functions and overrides as needed to emulate the exact same functionality we get on the Pi - essentially, it is used to streamline the development process.
Oftentimes the code does image manipulation and remapping, and I want to save the image in between steps to confirm that everything looks the way it should. I want this functionality to be toggleable based on a parameter set at instantiation, without having to constantly change code, and only in DesktopClass. There are many ways to get this functionality; however, most of them involve conditionals or leveraging polymorphism, which either waste clock cycles when run on the device (bad, given the limited processing power) or lead to repetitive code in separate classes with only a couple of lines changed, which I want to avoid just as much in order to keep the codebase clean.
So the question is, is there a way to create this functionality in such a way that I can have a single call which outputs the debug code (e.g. displaying an image) when the development mode is enabled, but simply skips without wasting any extra clock cycles when not in development mode, as might be achieved with macros or something else in a compiled language?
NOTE: This is also not the best example because compared to an image manipulation, a function call which returns immediately or a conditional is practically negligible. However there are other cases where I will want to call similar debug functions on a much more granular level, where it could add a nonnegligible overhead.
For example, in the following code I have a function which does several successive image operations using functions that are the same on desktop and device, so it is defined in BaseClass, and defining it in the subclasses would be wasteful. In desktop mode, however, I want to save the images between each step if self.dev is True.
class BaseClass:
    def __init__(self):
        self.dev = False

    def imaging(self, img_list, map_a, map_b):
        imgsout = []
        for img in img_list:
            # Do some image remapping
            step1 = cv2.remap(img, map_a[0], map_a[1])
            # DEBUG SAVE IMAGE
            step2 = cv2.remap(step1, map_b[0], map_b[1])
            # DEBUG SAVE IMAGE
            imgsout.append(step2)
        return imgsout

class DesktopClass(BaseClass):
    def __init__(self, dev=False):
        BaseClass.__init__(self)
        self.dev = dev

class DeviceClass(BaseClass):
    def __init__(self):
        BaseClass.__init__(self)

# Body code
# Bunch of images to work on, doesn't matter what this is
imgs = [img1, img2, img3]

# This code runs on the desktop development setting
desktop = DesktopClass(dev=True)
output1 = desktop.imaging(imgs, map_a, map_b)
# When stepping through this code, it should save the intermediary images at several steps

desktop.dev = False
output2 = desktop.imaging(imgs, map_a, map_b)
# This code should run without saving any images but give the same output

# This code runs on the Raspberry Pi
device = DeviceClass()
output3 = device.imaging(imgs, map_a, map_b)
# This code should run without saving any images but give the same output
Some potential solutions and their shortcomings:
Define a class function BaseClass.debug_saveim() which checks that self.dev is True before executing any debug code.
Pros: Single line for each debug call in the base class - doesn't clutter the code significantly; the functionality is obvious at first glance and doesn't hurt readability.
Cons: On the device, it enters a new function just to fail a conditional and exit. Relatively wasteful when trying to do things in real time.
Define a class function BaseClass.saveim() which saves the image. Every time it is used in the imaging() function (or elsewhere), wrap it in a conditional so that it only runs in dev mode.
Pros: Avoids entering a new stack frame on the device; only the conditional is evaluated, which is more efficient.
Cons: Clutters the code. Requires two lines for a single function call which doesn't even execute on the intended final product. Hurts readability and honestly just looks bad. Still wastes a tiny bit of time on a conditional.
Define DesktopClass.imaging() separately from BaseClass.imaging(), where the base one has no debug calls and the desktop one overrides it.
Pros: The device never wastes any cycles because the debug calls don't even exist in that context.
Cons: Redundant code, and all the bad things that come with it.
Define identically named debug functions, such as BaseClass.debug_save_im() and DesktopClass.debug_save_im(), which pass without doing anything and save the image, respectively. This function is called in BaseClass.imaging() (see the sketch after this list).
Pros: One very readable line per debug output; doesn't clutter the code. Leverages polymorphism without any redundant code. Elegant.
Cons: Still has to enter a new function in the device context, even if it just passes. Have to write two definitions per debug function.
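A minimal sketch of that last option, under my own assumptions (cv2 for remap and imwrite, hypothetical step filenames, and the dev flag from the example above):
import cv2

class BaseClass(object):
    def debug_save_im(self, name, img):
        pass                              # device/base version is deliberately a no-op

    def imaging(self, img_list, map_a, map_b):
        imgsout = []
        for img in img_list:
            step1 = cv2.remap(img, map_a[0], map_a[1], cv2.INTER_LINEAR)
            self.debug_save_im('step1.png', step1)    # does nothing on the device
            step2 = cv2.remap(step1, map_b[0], map_b[1], cv2.INTER_LINEAR)
            self.debug_save_im('step2.png', step2)
            imgsout.append(step2)
        return imgsout

class DesktopClass(BaseClass):
    def __init__(self, dev=False):
        BaseClass.__init__(self)
        self.dev = dev

    def debug_save_im(self, name, img):
        if self.dev:                      # only save when development mode is enabled
            cv2.imwrite(name, img)
The only remaining cost on the device is the no-op method call, which is exactly the con listed for this option.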
Is there a standard, or commonly accepted practice to get this functionality as efficiently as possible? Is there a tool or library which does this as effectively as possible?
Thank you!
P.S. I know Python is very suboptimal for any sort of realtime operation, but we have our reasons for using it, and truth be told, having conditionals and whatnot probably won't hurt our operation at all. Still, it feels dirty on principle, and I would very much like to know if there is a clean and elegant solution that also optimizes performance.

Performing Pywinauto scripts from C# based application is slow

Updated on May 7th, history information is under line "***********************"
@Jakub Sowa I cannot add a comment below yours. I tried top_window and children(), but it didn't seem to work for me. For example:
tp = cg.appConnect().top_window()
tp.children(title="finance", control_type="Button").draw_outline()
An error occurs: AttributeError: 'list' object has no attribute 'draw_outline'
So would you give me a specific example to demonstrate how it works in your case?
I've been doing pywinauto automation for a couple of months, but it runs slowly for some code. For example, I click a confirmation button with the following code:
self.dlg = cg.appConnect().window(title="Hygine_Platform", control_type="Window")
self.regdlg = self.dlg.child_window(title="registry", auto_id="FormRegBalance", control_type="Window")
self.okbtn = self.regdlg.child_window(title="confirm", auto_id="btnOk", control_type="Button")

def clickConfSettle(self):
    self.okbtn.click_input()
If I call clickConfSettle(), it takes at least five seconds to complete. Does anybody have the same problem, and is there any solution for this? I've checked the structure of the controls; it is quite simple, only 3 levels deep.
I've been using the library for only a week or two, but I've found that if you get the window specification of the top window and its children as soon as possible, like so: self.app.top_window().children(), where app is pywinauto.Application(backend='uia', allow_magic_lookup=False).connect(handle=self.handle), then your application should work much faster.
It takes the dict lookup away from you; however, you can still access items by index or iterate over them (which in my case was much faster than using the child_window method). I'm not sure about the clicking part, though, as it may require some more time, but for the most part all of the lookups in my program went down to under 1 s after that change.
I'd like to answer with what I've tried so far. If I use var.children(attributes), an index is still required, so it works as var.children(attributes)[index], which runs faster than before.
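For reference, a minimal sketch of that pattern (the window titles and handle are placeholders); since children() returns a list, indexing into it before calling methods avoids the AttributeError shown above:
from pywinauto import Application

app = Application(backend='uia', allow_magic_lookup=False).connect(handle=window_handle)
top = app.top_window()

# children() returns a list of child elements, so pick one by index before calling methods
buttons = top.children(title="finance", control_type="Button")
if buttons:
    buttons[0].draw_outline()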

PyCharm: Storing variables in memory to be able to run code from a "checkpoint"

I've been searching everywhere for an answer to this but to no avail. I want to be able to run my code and have the variables stored in memory so that I can perhaps set a "checkpoint" which I can run from in the future. The reason is that I have a fairly expensive function that takes some time to compute (as well as user input) and it would be nice if I didn't have to wait for it to finish every time I run after I change something downstream.
I'm sure a feature like this exists in PyCharm but I have no idea what it's called and the documentation isn't very clear to me at my level of experience. It would save me a lot of time if someone could point me in the right direction.
Turns out this is (more or less) possible by using the PyCharm console. I guess I should have realized this earlier because it seems so simple now (though I've never used a console in my life so I guess I should learn).
Anyway, the console lets you run blocks of your code presuming the required variables, functions, libraries, etc... have been specified beforehand. You can actually highlight a block of your code in the PyCharm editor, right click and select "Run in console" to execute it.
This feature is not implemented in PyCharm (see the PyCharm forum) but seems to be implemented in Spyder.

Save Python variables with Maya scene

How can I save Python variables in a Maya scene?
For example I create a variable x in Maya's Script Editor and I save the scene. When I reopen Maya and reopen the saved scene, x variable doesn't exist anymore.
Is it possible?
Yes: use a script node, use an optionVar, or store the variable in an attribute.
In general
Scene persistence is supposed to be built with nodes. In fact, everything in Maya is supposed to be a node or built out of a node tree. The bonus is that if you make your computation a node, Maya will automatically handle all of this for you. This is how render engines, for example, store their data: they register a special node and read the data from that node.
Needing to ask this question, "How can I save Python variables in a Maya scene?", is an indication that you should indeed have built a nodal solution. There are a few exceptions to this, and those are related to general user GUI preferences, which should be saved as optionVars.
Maya does not actually enforce that you do things sanely. You are free to do whatever you want, sane or not. So it is possible, though a bit fishy, to use a scene-save scriptJob to store a snapshot of your environment in a script node that auto-runs on load. This is undesirable in the same way that using global variables is undesirable in code in general. It is also slightly unreliable, as a user can disable auto-running of on-load scriptNodes, for good reason.
About nodal solutions
Maya, as a rule, does not work the way most people intuitively expect at first glance. Just as the text you write in code is not what your computer actually executes (rather, the compiled code is what gets executed), the compiler has a bit of leeway in what it can do and usually throws some stuff away. So in code it is not really sane to think in terms of what the text looks like, but rather of what structures it builds.
In Maya, the structures you should be building are nodes. There are two extra use cases outside this, and those are:
Exporters (and importers, but due to their nature they are node bound because they target Maya)
Graphical user interfaces that introspect Maya's current state or bring external info to the user
These two only specialize in reading nodes. Maya's object-auxiliary-function nature can quite efficiently hide this fact from the user, but that is essentially what you're doing. Anything outside this scope does not benefit from using Maya. So whenever you use Maya, you want to capitalize on nodes. Think of it as using a second language on top of your own programming language; this one is the actual language of Maya.
Let us start with a naive approach to randomizing point positions for a mesh (note that I avoid the term object, as Maya has no such concept):
import maya.cmds as cmds
import random

def randomize_points(scale):
    sel = cmds.polyListComponentConversion(ff=1, fe=1, fuv=1, fvf=1, tv=1)
    sel = cmds.ls(sel, fl=1)
    for item in sel:
        cmds.move((random.random() - 0.5) * scale,
                  (random.random() - 0.5) * scale,
                  (random.random() - 0.5) * scale,
                  item, r=1)
It works, but it has three deficiencies:
1. You cannot know what it looks like until you run it. (You can undo, though, so test and undo. Maybe you'd need a seed variable for the random too?)
2. You need to build a GUI for this so the user can work with it.
3. There is no way to specify the profile of the randomness. (This one is here so it is easy for you to run the code; I could use one of many Python noise implementations. I will fix this nonetheless.)
The code is straightforward in the sense that it does what a user would do in the GUI. But what really happens? Where do the changes go? Simply put, the answer is twofold: they go either to the tweak array (most likely) or to the object's position array. But there is a better way: I can use Maya primitives to drive a similar behaviour. Enter nodes.
First you need to do a bit of node shopping: what nodes could actually provide noise functionality? The thing is, there do not seem to be many contenders here. There is, despite this, one node that is basically "roll your own deformation", and that is the particle node. So this is how you'd attack the problem with a particle node, some noise and a few attributes:
... to be continued ...
Here is a simple example of saving a Python variable inside a Maya file.
Let's save a variable x in the Maya file, with the value of x being 100,
and we don't want to see this error:
# Error: NameError: name 'x' is not defined #
To save the variable in the Maya file, execute this code before you save the file:
import maya.cmds as cmds

# The beforeScript is stored with the scene and runs automatically when the file is opened
var_node = cmds.scriptNode(scriptType=1, name='CustomVariable', beforeScript='python("x = 100")')
# Execute the script node now so x is also defined in the current session
cmds.scriptNode(var_node, executeBefore=True)
Open the file again and execute:
print x
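For completeness, a rough sketch of the other two options mentioned in the first answer (the node name 'persistentData' and the attribute name are placeholders; note that optionVars persist in the user's preferences rather than in the scene file):
import maya.cmds as cmds

# Option A: an optionVar lives in the user's preferences, not in the scene file
cmds.optionVar(intValue=('myToolX', 100))
print cmds.optionVar(q='myToolX')

# Option B: a custom attribute on a node is saved with the scene itself
if not cmds.objExists('persistentData'):
    cmds.createNode('transform', name='persistentData')
    cmds.addAttr('persistentData', longName='x', attributeType='long')
cmds.setAttr('persistentData.x', 100)
print cmds.getAttr('persistentData.x')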
