rebar_set_command=[]
for i in rebar_instances:
    rebar_set_command.append('m.rootAssembly.instances[\''+i+'\'].faces.getByBoundingBox(0,0,0,X,Y,Z')
a='+'.join(rebar_set_command)
m.rootAssembly.Set(faces=a, name='A')
However, this does not work; I think it is because the value of a passed to faces=a is a string, so it contains quotation marks.
How can I pass this string to the Abaqus command without the opening and closing quotes? Thanks!
The problem is 'm.rootAssembly.instances[\''+i+'\'].faces.getByBoundingBox(0,0,0,X,Y,Z' gives you a string. It's not a Python expression. You may want to check out some Python tutorials to get an introduction to expressions, variables, and basic types, such as strings.
In your case you need to run what's in the string as an actual expression to get the returned face sequence, and build an array of faces from that. To make it more complicated, Abaqus returns their own internal type, Sequence, when you call getByBoundingBox. This type is a pain to work with because you can't create an empty one (at least not that I'm aware of). So dynamically building a list of faces for a set requires some extra attention. In the below code I get the face sequence for each rebar instance, and then add each individual face to my own list of faces. Finally, to create the set, Abaqus is picky about the type on the faces argument. We need to create a part.FaceArray object for Abaqus to be satisfied.
import part
faces = []
for i in rebar_instances:
    face_sequence = m.rootAssembly.instances[i].faces.getByBoundingBox(0,0,0,X,Y,Z)
    faces += [f for f in face_sequence]
m.rootAssembly.Set(faces=part.FaceArray(faces), name='A')
I have a question about string usage in lists in Python for Maya. I am writing a script meant to take a selected object and instance it 100 times with random translate, scale, and orient attributes. The script itself works and does what it's meant to; however, I haven't been able to figure out how to instance the objects with the original object name and then add a suffix of the form "_instance#", where # numbers the copies of the original mesh 1, 2, 3, etc. This is where I'm at so far:
#Capture selected objects, sort into list
thing = MC.ls(sl=True)
print thing
#Create instances of objects
instanceObj = MC.instance(thing, name='thing' + '_instance#')
This returns a result that looks like "thing_instance1, thing_instance2".
Following this, I figured the single quotes around the string were causing it to just be named "thing", so I attempted to write it as follows:
MC.instance(thing, name=thing + '_instance1')
I guess because instance uses a list, it's not accepting the second usage of the string as valid and returns a concatenation error. I've tried rewriting this a few times, and the closest I get is with
instanceObj = MC.instance(thing)
which results in a list of (pCube1,2,3,4), but is lacking the suffix.
I'm not sure where to go from here to end up with a result where the instanced objects are named with the convention "pCube1_instance1, pCube1_instance2" etc.
Any assistance would be appreciated.
It is not clear whether you want to use only one source object or several. In any case,
MC.ls(sl=True)
returns a list of strings, and concatenating a list and a string does not work. So use thing[0] or simply
MC.ls(sl=True)[0]
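For example, a minimal sketch of the instancing loop, assuming a single selected source object, maya.cmds imported as MC, and placeholder random translations (the scale and orient attributes could be randomized the same way):
import maya.cmds as MC
import random

# Take the first selected object's name (ls returns a list of strings)
thing = MC.ls(sl=True)[0]

# Create 100 instances named <original>_instance1, _instance2, ...
for n in range(1, 101):
    instanceObj = MC.instance(thing, name='{0}_instance{1}'.format(thing, n))
    # Placeholder random translation applied to the new instance
    MC.xform(instanceObj, translation=[random.uniform(-10, 10) for _ in range(3)])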
If you get error messages, please always include the message in your question; it helps a lot to see which error appears.
I am trying to transfer controls from one rig to another.
I think I have most of it figured out but I'm getting a bit stuck.
I have a function to which I am feeding the duplicated control that I want to attach, as well as the list of controls from the original rig in which I need to find the one to move the control to.
My issue is that I keep getting this error:
Error: ValueError: file line 132: More than one object matches name: Index_2_L_ctrl
I searched through everything and I'm pretty sure that there is only one object with each name, but I can't figure out how to list any additional items that share the same name, or better yet, how to get rid of them.
Here is my function; let me know if anything is unclear and I will try to clarify:
def spltString(wtlf, arr):
    ndp = wtlf
    print ndp
    dlb = difflib.get_close_matches(ndp, arr)
    fil = dlb[0]
    cmds.pointConstraint(ndp, dlb[0])
Try passing in the long names of the controls you want, rather than the short names. That will disambiguate the different copies of Index_2_L_ctrl.
You can find all of the copies of the control like this:
controls = cmds.ls('Index_2_L_ctrl', long = True)
The results will be object names with the complete hierarchy prepended, like
|skeleton|pelvis|spine1|spine2|chest|r_arm|r_forearm
or whatever. cmds.ls() with the long=True flag will convert short names to long ones for you.
It's a good habit to use long names most of the time precisely because of the problems you're having.
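As an illustration, a small sketch along these lines: list every match for the short name, and only constrain when it is unambiguous (the source control name here is hypothetical):
import maya.cmds as cmds

# List every node matching the short name, returned with full DAG paths
matches = cmds.ls('Index_2_L_ctrl', long=True)

if len(matches) > 1:
    # More than one node shares this name; print the paths to track down the duplicates
    for path in matches:
        print path
else:
    # Unambiguous: constrain using the long name
    cmds.pointConstraint('dup_Index_2_L_ctrl', matches[0])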
There's some weird mysterious behavior here.
EDIT This has gotten really long and tangled, and I've edited it like 10 times. The TL/DR is that in the course of processing some text, I've managed to write a function that:
works on individual strings of a list
throws a variety of errors when I try to apply it to the whole list with a list comprehension
throws similar errors when I try to apply it to the whole list with a loop
after throwing those errors, stops working on the individual strings until I re-run the function definition and feed it some sample data, then it starts working again, and finally
turns out to work when I apply it to the whole list with map().
There's an ipython notebook saved as html which displays the whole mess here: http://paul-gowder.com/wtf.html (I've put a link at the top to jump past some irrelevant stuff). I've also made a[nother] gist that just has the problem code and some sample data, but since this problem seems to throw around a bunch of state somehow, I can't guarantee it'll be reproducible from it: https://gist.github.com/paultopia/402891d05dd8c05995d2
End TL/DR, begin mess
I'm doing some toy text-mining on that old enron dataset, and I have the following set of functions to clean up the emails preparatory to turning them into a document term matrix, after loading nltk stopwords and such. The following uses the email library in python 2.7
def parseEmail(document):
    # strip unnecessary headers, header text, etc.
    theMessage = email.message_from_string(document)
    tofield = theMessage['to']
    fromfield = theMessage['from']
    subjectfield = theMessage['subject']
    bodyfield = theMessage.get_payload()
    wholeMsgList = [tofield, fromfield, subjectfield, bodyfield]
    # get rid of any fields that don't exist in the email
    cleanMsgList = [x for x in wholeMsgList if x is not None]
    # now return a string with all that stuff run together
    return ' '.join(cleanMsgList)

def lettersOnly(document):
    return re.sub("[^a-zA-Z]", " ", document)

def wordBag(document):
    return lettersOnly(parseEmail(document)).lower().split()

def cleanDoc(document):
    dasbag = wordBag(document)
    # get rid of "enron" for obvious reasons, also the .com
    bagB = [word for word in dasbag if not word in ['enron','com']]
    unstemmed = [word for word in bagB if not word in stopwords.words("english")]
    return [stemmer.stem(word) for word in unstemmed]
print enronEmails[0][1]
print cleanDoc(enronEmails[0][1])
First (T-minus half an hour) running this on an email represented as a unicode string produced the expected result: print cleanDoc(enronEmails[0][1]) yielded a list of stemmed words. To be clear, the underlying data enronEmails is a list of [label, message] lists, where label is an integer 0 or 1, and message is a unicode string. (In python 2.7.)
Then at t-10, I added a couple of lines of code (since deleted and lost, unfortunately... but see below) with some list comprehensions in them to just extract the messages from enronEmails, run my cleanup function on them, and then join them back into strings for convenient conversion into a document-term matrix via sklearn. But the function started throwing errors. So I put my debugging hat on...
First I tried rerunning the original definition and test cell. But when I re-ran that cell, my email parsing function suddenly started throwing an error in the message_from_string method:
AttributeError: 'list' object has no attribute 'message_from_string'
So that was bizarre. This was exactly the same function, called on exactly the same data: cleanDoc(enronEmails[0][1]). The function had been working on the same data, and I hadn't changed it.
So I checked to make extra sure I hadn't mutated the data. enronEmails[0][1] was still a string, not a list. I have no idea why the traceback was of the opinion that I was passing a list to cleanDoc(). I wasn't.
But the plot thickens
So then I went to make a gist to create a wholly reproducible example for the purpose of posting this SO question. I started with the working part. The gist: https://gist.github.com/paultopia/c8c3e066c39336e5f3c2.
To make sure it was working, first I stuck it in a normal .py file and ran it from command line. It worked.
Then I stuck it in a cell at the bottom of my ipython notebook with all the other stuff in it. That worked too.
Then I tried the parseEmail function on enronEmails[0][1]. That worked again. Then I went all the way back up to the original cell that was throwing an error not five minutes ago and re-ran it (including the import from sklearn, and including the original definition of all functions). And it freaking worked.
BUT THEN
I then went back in and tried again with the list comprehensions and such. And this time, I kept track more carefully of what was going on. Adding the following cells:
1.
def atLeastThreeString(cleandoc):
    return ' '.join([w for w in cleandoc if len(w)>2])
print atLeastThreeString(cleanDoc(enronEmails[0][1]))
THIS works, and produces the expected output: a string with words over 2 letters. But then:
2.
justEmails = [email[1] for email in enronEmails]
bigEmailsList = [atLeastThreeString(cleanDoc(email)) for email in justEmails]
and all of a sudden it starts throwing a whole new error, same place in the traceback:
AttributeError: 'unicode' object has no attribute 'message_from_string'
which is extra funny, because I was passing it unicode strings a minute ago and it was doing just fine. And, just to thicken the plot, going back and rerunning cleanDoc(enronEmails[0][1]) now throws the same error.
This is driving me insane. How is it possible that creating a new list, and then attempting to run function A on that list, not only throws an error on the new list, but ALSO causes function A to throw an error on data that it was previously working on? I know I'm not mutating the original list...
I've posted the entire notebook in html form here, if anyone wants to see full code and traceback: http://paul-gowder.com/wtf.html The relevant parts start about 2/3 of the way down, at the cells numbered 24-5, where it works, and then the cell numbered 26, where it blows up.
help??
Another edit: I've added some more debugging efforts to the bottom of the above-linked html notebook. As you can see, I've traced the problem down to the act of looping, whether done implicitly in list comprehension form or explicitly. My function works on an individual item in the list of just e-mails, but then fails on every single item when I try to loop over that list, except when I use map() to do it. ???? Has the world gone insane?
I believe the problem is these statements:
justEmails = [email[1] for email in enronEmails]
bigEmailsList = [atLeastThreeString(cleanDoc(email)) for email in justEmails]
In Python 2, the comprehension variable email leaks out into the enclosing namespace, so you are overwriting the name of the email module, and you are then trying to call a method from that module on a Python string. I don't have nltk in Python 2, so I can't test it, but I think this must be it.
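A minimal, self-contained sketch of the leak being described (Python 2.7; none of the nltk or Enron data is needed to reproduce it):
import email

# Works here: the name `email` refers to the module
msg = email.message_from_string('To: a@example.com\n\nhello')

# In Python 2 the comprehension variable leaks into the enclosing scope,
# rebinding the name `email` to the last item of the list.
addresses = [email for email in [u'first message', u'second message']]

print type(email)   # <type 'unicode'> -- the module is now shadowed, so
                    # email.message_from_string(...) would raise AttributeError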
I am trying to contract some vertices in igraph (using the python api) while keeping the names of the vertices. It isn't clear to me how to keep the name attribute of the graph. The nodes of the graph are people and I'm trying to collapse people with corrupted names.
I looked at the R documentation and I still don't see how to do it.
For example, if I do either of the following I get an error.
smallgraph.contract_vertices([0,1,2,3,4,2,6],vertex.attr.comb=[name='first'])
smallgraph.contract_vertices([0,1,2,3,4,2,6],vertex.attr.comb=['first'])
In Python, the keyword argument you need is called combine_attrs and not vertex.attr.comb. See help(Graph.contract_vertices) from the Python command line after having imported igraph. Also, the keyword argument accepts either a single specifier (such as first) or a dictionary. Your first example is invalid because it is simply not valid Python syntax. The second example won't work because you pass a list with a single item instead of just the single item.
So, the correct variants would be:
smallgraph.contract_vertices([0,1,2,3,4,2,6], combine_attrs=dict(name="first"))
smallgraph.contract_vertices([0,1,2,3,4,2,6], combine_attrs="first")
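As a small end-to-end sketch (the graph and the names are made up), collapsing vertices 0 and 1 while keeping the first name in each merged group:
from igraph import Graph

# Toy graph with one "corrupted" duplicate of a person
g = Graph(n=4, edges=[(0, 1), (1, 2), (2, 3)])
g.vs['name'] = ['alice', 'al1ce', 'bob', 'carol']

# Mapping from old vertex ids to new ids: vertices 0 and 1 collapse into new vertex 0
g.contract_vertices([0, 0, 1, 2], combine_attrs='first')

print g.vs['name']   # ['alice', 'bob', 'carol']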
Never mind. You can just pass a dictionary without using the wording
vertex.attr.comb
I am new to Python/GDAL and am running into a perhaps trivial issue. This may stem from the fact that I don't really understand how to use GDAL properly in Python, or from something careless, but even though I think I am following the help doc, I keep getting a syntax error when trying to use "gdalbuildvrt".
What I want to do is take several (amount varies for each set, call it N) geotagged 1-band binary rasters [all values are either 0 or 1] of different sizes (each raster in the set overlaps for the most part though), and "stack" them on top of each other so that they are aligned properly according to their coordinate information. I want this "stack" simply so I can sum the values and produce a 'total' tiff that has an extent to match the exclusive extent (meaning not just the overlap region) of all the original rasters. The resulting tiff would have values ranging from 0 to N, to represent the total number of "hits" the pixel in that location received over the course of the N rasters.
I was led to gdalbuildvrt [http://www.gdal.org/gdalbuildvrt.html] and after reading about it, it seemed that by using the keyword -separate, I would be able to achieve what I need. However, each time I try to run my program, I get a syntax error. The following shows two of the several different ways I tried calling gdalbuildvrt:
gdalbuildvrt -separate -input_file_list stack.vrt inputlist.txt
gdalbuildvrt -separate stack.vrt inclassfiles
Here, inputlist.txt is a text file with a path to a tif on every line, just as the help doc specifies, and inclassfiles is a Python list of the pathnames. Every single time, no matter which way I call it, I get a syntax error on the first word after the keywords (i.e. 'inputlist' in inputlist.txt, or 'stack' in stack.vrt).
Could someone please shed some light on what I might be doing wrong? Alternatively, does anyone know how else I could use python to get what I need?
Thanks so much.
gdalbuildvrt is a GDAL command-line utility. From your example it's a bit unclear how you actually run it, but when running it from within Python you should execute it as a subprocess.
Also, in your first line you have the .vrt and the .txt in the wrong order: the text file containing the input files should follow directly after -input_file_list.
From within Python you can call gdalbuildvrt like:
import os
os.system('gdalbuildvrt -separate -input_file_list inputlist.txt stack.vrt')
Note that the command is provided as a string. Using a Python list with the files can be done with something like:
os.system('gdalbuildvrt -separate stack.vrt %s' % ' '.join(data))
The ' '.join(data) part converts the list to a string with a space between the items.
Depending on how your GDAL is built, it's sometimes possible to use wildcards as well:
os.system('gdalbuildvrt -separate stack.vrt *.tif')
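For completeness, the same call can also be made through the subprocess module, passing the arguments as a list so the command string does not have to be assembled by hand (the file names here are placeholders):
import subprocess

data = ['first.tif', 'second.tif', 'third.tif']   # placeholder input rasters
subprocess.call(['gdalbuildvrt', '-separate', 'stack.vrt'] + data)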