ldap3 library: modify attribute with multiple values - python

Trying to modify an ldap attribute that has multiple values, can't seem to figure out the syntax.
I'm using the ldap3 library with python3.
The documentation gives an example which modifies two attributes of an entry - but each attribute only has one value.
The dictionary from that example is the bit I'm having trouble with:
c.modify('cn=user1,ou=users,o=company',
         {'givenName': [(MODIFY_REPLACE, [<what do I put here>])]})
Instead of 'givenName', which has a single value, I want to modify the memberuid attribute, which can obviously hold many names as entries.
So I split all my memberuids into a list, make the modification, and then try to feed my new username/memberuid list to the MODIFY command.
Like so:
oldval = 'super.man'
newval = 'clark.kent'
existingmembers = ['super.man', 'the.hulk', 'bat.man']
newmemberlist = [newval if x==oldval else x for x in existingmembers]
# newmemberlist = ", ".join(str(x) for x in newmemberlist)
I've tried passing in newmemberlist as a list
'memberuid': [(MODIFY_REPLACE, ['clark.kent', 'the.hulk','bat.man'])]
which gives me TypeError: 'str' object cannot be interpreted as an integer
or various combinations (the commented line) of one long string, separated with spaces, commas, semicolons and anything else I can think of:
'memberuid': [(MODIFY_REPLACE, 'clark.kent, the.hulk, bat.man')]
which does the replace, but I end up with a single memberuid value that looks like this:
'clark.kent, the.hulk, bat.man'

You need to ensure you are passing in the full DN of the LDAP object you wish to modify.
c.modify(FULL_DN_OF_OBJECT, {'memberuid': [(MODIFY_REPLACE, ['clark.kent', 'the.hulk','bat.man'])]})
Then you should be able to just pass in newmemberlist instead of ['clark.kent', 'the.hulk', 'bat.man']:
c.modify(FULL_DN_OF_OBJECT, {'memberuid': [(MODIFY_REPLACE, newmemberlist )]})
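For completeness, here is a minimal end-to-end sketch of that approach; the server address, bind credentials and group DN are placeholders, not values from the question:
from ldap3 import Server, Connection, MODIFY_REPLACE

# Placeholder server, bind DN, password and group DN -- substitute your own.
server = Server('ldap://ldap.example.com')
c = Connection(server, 'cn=admin,o=company', 'secret', auto_bind=True)

oldval = 'super.man'
newval = 'clark.kent'
existingmembers = ['super.man', 'the.hulk', 'bat.man']
newmemberlist = [newval if x == oldval else x for x in existingmembers]

# MODIFY_REPLACE takes the complete list of values the attribute should end up with.
c.modify('cn=group1,ou=groups,o=company',
         {'memberuid': [(MODIFY_REPLACE, newmemberlist)]})
print(c.result)  # check the server's response to confirm the change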

I believe the MODIFY_REPLACE command would not accept multiple values, as it would not understand which values should be replaced with new ones. Instead, you should try a MODIFY_DELETE of the old values first and a MODIFY_ADD of the new values afterwards.
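If you do go that route, here is a minimal sketch of the delete-then-add variant (same placeholder group DN as above, and c is assumed to be an already-bound Connection; ldap3 accepts both operations in a single modify call):
from ldap3 import MODIFY_ADD, MODIFY_DELETE

# Remove the old value and add the new one in one modify request.
c.modify('cn=group1,ou=groups,o=company',
         {'memberuid': [(MODIFY_DELETE, ['super.man']),
                        (MODIFY_ADD, ['clark.kent'])]})
print(c.result)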

Problem transforming a variable in logs, python

I am using Python. I would like to create a new column which is the log transformation of column 'lights1992'.
I am using the following code:
log_lights1992 = np.log(lights1992)
I obtain the following error:
I have tried two things: 1) adding a 1 to each value, and 2) transforming the column 'lights1992' to numeric.
city_join['lights1992'] = pd.to_numeric(city_join['lights1992'])
city_join["lights1992"] = city_join["lights1992"] + 1
However, those two solutions have not worked. The variable 'lights1992' is of type float64. Do you know what the problem could be?
Edit:
The variable 'lights1992' comes from doing zonal statistics on a raster 'junk1992'; maybe this affects things.
zs1 = zonal_stats(city_join, junk1992, stats=['mean'], nodata=np.nan)
city_join['lights1992'] = [x['mean'] for x in zs1]
The traceback states:
'DatasetReader' object has no attribute 'log'.
Did you re-assign numpy to something else at some point? I can't find much about 'DatasetReader'; is that a custom class?
EDIT:
I think you need to pass in the whole column, because your edit doesn't show a variable named 'lights1992'.
so instead of:
np.log(lights1992)
can you try passing in the DataFrame's column to log?
np.log(city_join['lights1992'])
2ND EDIT:
Since you've reported back that it works I'll dive into the why a little bit.
In your original statement you called the log function and gave it an argument, then you assigned the result to a variable name:
log_lights1992 = np.log(lights1992)
The problem here is that when you give Python text without any quotes, it thinks you are giving it a variable name (see how you have log_lights1992 on the left of the equals sign? You wanted to assign the result of the operation on the right-hand side to the variable name log_lights1992), but in this case I don't think lights1992 had any value!
So there were two ways to make it work, either what I said earlier:
Instead of giving it a variable name, you give np.log the column of the city_join DataFrame (that's what city_join["lights1992"] is) directly.
Or
You assign the value of that column to the variable name first, then pass it in to np.log, like this:
lights1992 = city_join["lights1992"]
log_lights1992 = np.log(lights1992)
Hope that clears it up for you!
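Putting the pieces together, a minimal runnable sketch of the fix; the stand-in DataFrame below just mimics the question's city_join, which in reality comes from zonal_stats:
import numpy as np
import pandas as pd

# Stand-in for the question's city_join.
city_join = pd.DataFrame({'lights1992': [0.0, 3.5, 12.25]})

# Make sure the column is numeric, shift by 1 to avoid log(0), then take the log.
city_join['lights1992'] = pd.to_numeric(city_join['lights1992']) + 1
city_join['log_lights1992'] = np.log(city_join['lights1992'])  # element-wise natural log
print(city_join)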

Instancing maya objects with a sequential suffix, object name string not seen by cmds.instance

I have a question about string usage in lists in Python for Maya. I am writing a script meant to take a selected object, then instance it 100 times with random translate, scale, and orient attributes. The script itself works and does what it's meant to; however, I haven't been able to figure out how to instance the objects with the original object name and then add a suffix of the form "_instance#", where # assigns 1, 2, 3, etc. in order to the copies of the original mesh. This is where I'm at so far:
#Capture selected objects, sort into list
thing = MC.ls(sl=True)
print thing
#Create instances of objects
instanceObj = MC.instance(thing, name='thing' + '_instance#')
This returns a result that looks like "thing_instance1, thing_instance2".
Following this, I figured the single quotes around the string were causing it to just name the instances "thing", so I attempted to write it as follows:
MC.instance(thing, name=thing + '_instance1')
I guess because instance uses a list, it's not accepting the second usage of the string as valid and returns a concatenation error. I've tried rewriting this a few times and the closest I get is with
instanceObj = MC.instance(thing)
which results in a list of (pCube1,2,3,4), but is lacking the suffix.
I'm not sure where to go from here to end up with a result where the instanced objects are named with the convention "pCube1_instance1, pCube1_instance2" etc.
Any assistance would be appreciated.
It is not clear if you want to use only one source object or more. In any case the
MC.ls(sl=True)
returns a list of strings. And concatenating a list and a string does not work. So use thing[0] or simply
MC.ls(sl=True)[0]
If you get error messages, please always include the message in your question; it helps a lot to see what error appears.
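Here is a minimal sketch of one way to get that naming convention, assuming a single selected source object; the loop count of 100 comes from the question, while the explicit counter and the random translate call are just illustrative:
import maya.cmds as MC
import random

thing = MC.ls(sl=True)[0]  # name of the first selected object, e.g. 'pCube1'

for i in range(1, 101):
    # Build the name explicitly: 'pCube1_instance1', 'pCube1_instance2', ...
    newName = '{0}_instance{1}'.format(thing, i)
    inst = MC.instance(thing, name=newName)[0]  # instance() returns a list of new names
    # The random transforms from the original script would go here, e.g.:
    MC.xform(inst, translation=[random.uniform(-10, 10) for _ in range(3)])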

Why is this an index error?

This is my first post, so I apologize if this has been answered previously. I have tried to look through the Python 3 documentation on string formatting and lists, and reviewed similar formatting questions here on SO.
I want to take the string (data1), break it into a list (bigData), and print out a statement using the list items. Eventually, the idea would be to read in a CSV file, break it up, and print out a response, but I've tried to simplify the process since there's an error. The output I want is:
"Hello, John Doe. Your current balance is $53.44."
However, I'm not sure why the following code is throwing an IndexError, much less a tuple index error.
data1 = "John,Doe,53.44"
bigData = data1.split(",")
bigData[-1] = float(bigData[-1])
print(bigData) # test - []'s indicate a list, not tuple?
greeting = "Hello, {} {}. Your current balance is ${}."
print(greeting.format(bigData))
My guess is that bigData is heterogeneous, which implies a tuple. If I substitute a string value instead of 53.44 (so data1 and bigData are homogeneous), it throws the same error.
data1 = "John,Doe,random"
bigData = data1.split(",")
print(bigData) # test - []'s indicate a list, not tuple?
greeting = "Hello, {} {}. Your current balance is {}."
print(greeting.format(bigData))
However, if I convert the original to Python 2.x string formatting, it formats correctly without an error.
data1 = "John,Doe,53.44"
bigData = data1.split(",")
bigData[-1] = float(bigData[-1])
print(bigData) # test - []'s indicate a list, not tuple?
greeting = "Hello, %s %s. Your current balance is $%.2f."
print(greeting % tuple(bigData))
Why is it converting my string to a tuple?
How do I make this work in Python 3?
Thank you.
Use the splat (*) to unpack your arguments (your format string wants three arguments but you only give it one, a list container).
print(greeting.format(*bigData))
Also, you may want:
bigData[-1] = str(round(float(bigData[-1]), 2))
The str.format method takes positional arguments, not a single list. You need to unpack your list bigData using the * operator:
data1 = "John,Doe,random"
bigData = data1.split(",")
print(bigData) # test - []'s indicate a list, not tuple?
greeting = "Hello, {} {}. Your current balance is {}."
print(greeting.format(*bigData)) # here's the change
You're correct that bigData is a list, not a tuple; str.split returns a list.
The str.split() method returns a list, by definition.
I think you've misunderstood something you've read - heterogeneous vs. homogeneous refer to typical use cases of tuples vs. lists. Having the types of all the elements match or not does not magically cause the container to change to the other type!
I can see how this is surprising, though what surprises me is that the traceback doesn't show that the exception occurs in the format call.
Python's lists can be heterogeneous just like tuples; this is because the common type they store is object references, which all things in Python are. The tuple is actually the argument list to the format method, in this case (bigData,). It ran out of arguments when looking for things to format, since you had three {} placeholders but only one argument (the list bigData). You can use greeting.format(*bigData) to unpack the list and use its contents as arguments.
The % formatting doesn't encounter this error because it actually expects a tuple (or one item) in the right operand.
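To make the contrast concrete with the question's data, a small sketch:
bigData = ["John", "Doe", 53.44]
greeting = "Hello, {} {}. Your current balance is ${}."

# greeting.format(bigData)         # IndexError: one argument (the list) for three {} fields
print(greeting.format(*bigData))   # unpacked: three arguments, one per placeholder

# %-formatting expects a tuple on the right-hand side, so tuple(bigData) fits directly.
print("Hello, %s %s. Your current balance is $%.2f." % tuple(bigData))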
A more idiomatic and legible approach might actually be to go to the csv module already:
import csv, io
data1 = "John,Doe,random"
for row in csv.DictReader(io.StringIO(data1),
                          "givenname surname balance".split()):
    greeting = "Hello, {givenname} {surname}. Your current balance is {balance}."
    print(greeting.format(**row))
This lets us assign meaningful names to the columns, including reordering them in the format string if needed. I've left out the float conversion, and by the way, decimal.Decimal may be better for that use.

How to use Python sets and add strings to it in as a dictionary value

I am trying to create a dictionary that has values as a set object (I would like a collection of unique names associated with a unique reference). My aim is to create something like:
AIM:
Dictionary[key_1] = set('name')
Dictionary[key_2] = set('name_2', 'name_3')
Adding to SET:
Dictionary[key_2].add('name_3')
However, using the set object breaks the name string into characters, which is expected behaviour. I have tried to make the string a tuple, i.e. set(('name')) and Dictionary[key].add(('name2')), but this does not work as required because the string still gets split into characters.
Is wrapping the string in a list the only way to add it to a set without it being broken into characters like
'n', 'a', 'm', 'e'
Any other ideas would be gratefully received.
You can write a single-element tuple as #larsmans explained, but it is easy to forget the trailing comma. It may be less error-prone if you just use lists as the parameters to the set constructor, and pass a plain string to add() (which takes a single element):
Dictionary[key_1] = set(['name'])
Dictionary[key_2] = set(['name_2', 'name_3'])
Dictionary[key_2].add('name_3')  # add() takes one element, so the string is not split
should all work the way you expect.
('name') is not a tuple. It's just the expression 'name', parenthesized. A one-element tuple is written ('name',); a one-element list ['name'] is prettier and works too.
In Python 2.7 and 3.x you can also write {'name'} to construct a set.
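If the keys are created on the fly, collections.defaultdict keeps this pattern tidy; a small sketch (the dictionary and key names are just illustrative):
from collections import defaultdict

names_by_ref = defaultdict(set)                      # every new key starts as an empty set
names_by_ref['key_1'].add('name')                    # add() takes a single element
names_by_ref['key_2'].update(['name_2', 'name_3'])   # update() takes an iterable of elements

print(names_by_ref['key_2'])  # {'name_2', 'name_3'}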

Query PyTables Nested Columns

I have a table with a nested table column, route. Beneath that are two other nested datatypes, master and slave, which both have an integer id and a string type field.
I would like to run something like table.readWhere('route/master/id==0'), but I get "variable route refers to a nested column, not allowed in conditions".
Is there a method to query a nested datatype in pytables?
You have to create variables to be used inside the condition string. One option is to define a variable dictionary:
table.readWhere('rId==0', condvars={'rId': table.cols.route.master.id})
Another option is to define local variables for the columns to be used in the condition.
rId = table.cols.route.master.id
table.readWhere('rId==0')
As this pollutes the namespace, I recommend creating a function to wrap the code. I tried to reference the column itself, but it seems the interpreter fetches the whole dataset before throwing a NameError.
table.readWhere('table.cols.route.master.id==0') # DOES NOT WORK
More info on the where() method in the library reference.
Building on the answer by streeto, here is a quick way to get access to all the nested columns in a table when constructing a query
condvars = {k.replace('/', '__'): v for k, v in table.colinstances.items()}
result = table.read_where('route__master__id == 0', condvars=condvars)
table.colinstances returns a flat dict whose keys are slash-separated paths to all columns (including nested ones) in the table, and whose values are instances of the PyTables Col class located at that path. You can't use the slash-separated path in the query, but if you replace it with some other separator that is allowed within Python identifiers (in this case I chose double underscores), then everything works fine. You could choose some other separator if you like.
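For context, here is a small self-contained sketch of a nested layout like the one described, showing both query styles; the file name, column sizes and values are made up, and read_where is the newer spelling of readWhere:
import tables as tb

class Endpoint(tb.IsDescription):
    id = tb.Int32Col()
    type = tb.StringCol(16)

class Route(tb.IsDescription):
    master = Endpoint()
    slave = Endpoint()

class Record(tb.IsDescription):
    route = Route()

with tb.open_file('routes.h5', 'w') as h5:
    table = h5.create_table('/', 'links', Record)

    row = table.row
    row['route/master/id'] = 0   # nested fields are addressed with slash-separated paths
    row['route/slave/id'] = 7
    row.append()
    table.flush()

    # Explicit condvars, as in the first answer...
    hits = table.read_where('rId == 0',
                            condvars={'rId': table.cols.route.master.id})

    # ...or map every nested column automatically, as in the second answer.
    condvars = {k.replace('/', '__'): v for k, v in table.colinstances.items()}
    hits = table.read_where('route__master__id == 0', condvars=condvars)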
