Still working with LDAP...
The problem I'm submitting today is this: I'm creating a posixGroup on an LDAP server using a custom method written in Python with the Django framework. I attach the method's code below. The main issue is that the gidNumber attribute is mandatory for the posixGroup class, but it is usually not required when using a graphical LDAP client like phpLDAPadmin, since such clients fill this field automatically, like an auto-incrementing integer.
Here is the question: is gidNumber an auto-incrementing attribute by default, or only when using clients like the one quoted above? Must I specify it during the posixGroup entry creation?
def ldap_cn_entry(self):
    import ldap.modlist as modlist
    dn = u"cn=myGroupName,ou=plant,dc=ldap,dc=dem2m,dc=it"
    # A dict to help build the "body" of the object
    attrs = {}
    attrs['objectclass'] = ['posixGroup']
    attrs['cn'] = 'myGroupName'
    attrs['gidNumber'] = '508'
    # Convert our dict to nice syntax for the add-function using modlist-module
    ldif = modlist.addModlist(attrs)
    # Do the actual synchronous add-operation to the ldapserver
    self.connector.add_s(dn, ldif)
The connector is first instantiated in the constructor of the class this method belongs to. The constructor also takes care of the LDAP initialization and binding; the connection is then closed by the destructor.
To use the method, I begin by instantiating the class, which also connects to the LDAP server. Then I call the method, and finally I destroy the object I instantiated, which closes the connection. Everything does work, in fact, if I use this procedure to create a different kind of entry, or if I specify the gidNumber manually.
The fact is that I CAN'T specify the gidNumber by hand every time I want to create a group. I should either leave it to be filled automatically (if that's possible) or think of another way to populate it.
I'm not posting more of the class's code so as not to clutter the page. I'll provide more information if needed. Thank you.
The LDAP protocol has no method for auto-incrementing integers; clients that appear to fill gidNumber automatically are computing it themselves.
You need to specify the value when creating the entry.
There are some tricks that can help, though.
We often keep the last used value on an OU in LDAP (we add an AUX class with a custom attribute to the OU), then read it, increment it, and use the resulting value for the gidNumber.
I have seen this approach described elsewhere.
-jim
Following jeemster's suggestion, I found a way to manage the gidNumber.
First of all, I created a new entry on my LDAP server called "gidNumber", and I used the optional description attribute to hold the last gidNumber I used (class: organizationalUnit, ou: gidNumber, description: 500).
Then I created the following functions:
def ldap_gid_finder(self):
    # Locates the support entry with a simple query
    self.baseDN = "ou=impianti,dc=ldap,dc=dem2m,dc=it"
    self.searchScope = ldap.SCOPE_SUBTREE
    self.retrieveAttributes = None
    self.searchFilter = "ou=*gidNumber*"
    self.ldap_result = self.connector.search(
        self.baseDN, self.searchScope, self.searchFilter, self.retrieveAttributes)
    # Results are collected into a list
    result_set = []
    while 1:
        result_type, result_data = self.connector.result(self.ldap_result, 0)
        if result_data == []:
            break
        elif result_type == ldap.RES_SEARCH_ENTRY:
            result_set.append(result_data)
    # The attribute containing the gidNumber is stored in an instance variable
    self.actual_gid_number = int(result_set[0][0][1]['description'][0])
# Increments the gidNumber on the support entry
def ldap_gid_increment(self):
    dn = "ou=gidNumber,ou=impianti,dc=ldap,dc=dem2m,dc=it"
    old = {'description': str(self.actual_gid_number)}
    new = {'description': str(self.actual_gid_number + 1)}
    ldif = modlist.modifyModlist(old, new)
    self.connector.modify_s(dn, ldif)
As I said above, these methods are defined in a class whose constructor and destructor I overrode, so that binding/unbinding to the LDAP server happens automatically when I create or delete an instance.
Then I used an LDAP query to find the object called gidNumber (the OU I created before) and filled a dictionary with the resulting information. In that dictionary I found the value representing the gidNumber and cast it to an integer so I could increment it. And that's all.
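For reference, the calling sequence looks roughly like this (a sketch; the class name LdapGroupManager and the way the new gid is derived are my assumptions based on the description above):

# Binding happens in the constructor, unbinding in the destructor
manager = LdapGroupManager()
manager.ldap_gid_finder()                # reads the last used gidNumber
manager.ldap_gid_increment()             # writes the incremented value back
new_gid = manager.actual_gid_number + 1  # gid to use for the new posixGroup
del manager                              # closes the connection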
This procedure is really convenient because if the server reboots you don't lose the gidNumber information! Thank you again, jeemster.
I created a datastore object according to the guestbook tutorial:
class myDS(ndb.Model):
    a = ndb.StringProperty(indexed=True)
And I have handlers to access it and update it:
class Handler1:
    my_ds = myDS()
    my_ds.a = "abc"  # Trying to update the value

class Handler2:
    my_ds = myDS()
    self.response.write(my_ds.a)  # prints None although I changed the value in Handler1
def main():
    application = webapp.WSGIApplication([
        ('/set', Handler1),
        ('/get', Handler2)])
I call:
Myapp.com/set
Myapp.com/get : Prints None (Didn't update to "abc")
Why wasn't the value of a updated?
How can I update across the handlers?
Google Cloud Datastore (GCD) stores data objects as entities, which may have one or more properties. In your case the property value is the string 'abc'. However, each entity is identified by a key, which is a unique identifier within your app's datastore.
So in your case you would need to create a key for the my_ds object, and also define a model class (e.g. class Handler1(ndb.Model): # your code) that defines the property you are trying to access.
Additionally, you cannot expect the value to be persisted without calling the put() function (e.g. my_ds.put()). To use the second handler (Handler2) to read back property values, you will need to learn a bit more about the webapp2 request handler.
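As a minimal sketch of what this answer describes (the handler names, the fixed key id 'shared', and the URL routes are illustrative assumptions, not the asker's original code):

import webapp2
from google.appengine.ext import ndb

class MyDS(ndb.Model):
    a = ndb.StringProperty(indexed=True)

class SetHandler(webapp2.RequestHandler):
    def get(self):
        # Use a fixed key id so both handlers address the same entity
        entity = MyDS(id='shared')
        entity.a = 'abc'
        entity.put()  # without put() nothing reaches the datastore

class GetHandler(webapp2.RequestHandler):
    def get(self):
        entity = MyDS.get_by_id('shared')
        self.response.write(entity.a if entity else 'None')

application = webapp2.WSGIApplication([
    ('/set', SetHandler),
    ('/get', GetHandler),
])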
I also suggest you follow this tutorial to get started.
I'm working on a smart home project. I've got a bunch of pieces, such as a handful of XBee radios, LEDs, GPS-synched clocks, water meters, etc. I tried to use an OOP approach, so I created many classes and subclasses. Now all you have to do in code is define the hardware, connect it to a parent via a built-in class method, and enjoy.
To get an idea:
coordinator = XBee24ZBCoordinator('/dev/ttyS1', 115200,
                                  "\x00\x13\xA2\x00\x40\x53\x56\x23", 'coord')
spalnya = XBee24ZBRemote('\x00\x13\xA2\x00\x40\x54\x1D\x12', 'spalnya')
spalnya.connectToCoordinator(coordinator)
vannaya = XBee24ZBRemote('\x00\x13\xA2\x00\x40\x54\x1D\x17', 'vannaya')
vannaya.connectToCoordinator(coordinator)
led = LED()
led.connectTo(spalnya.getPin('DO4'), 'DO')
led.on()
led.off()
I, however, don't want to do that in code. I want to have an INI file that defines the topology of this 'network', and I want this file to be readable and editable by a human. The logical choice is INI (as opposed to, e.g., JSON, which is not very friendly for manual editing, at least to me).
Now, I got:
[xbee-coordinator]
type = XBee24ZBCoordinator
name = coord
comport = COM4
comspeed = 115200
I can create a function BuildNetwork('my.ini') that will read the file and create the required object instances and the connections between them. How do I do it? There's a class XBee24ZBCoordinator, but what I get from the INI file is just a string...
You have two options:
Define all these classes in a module. Modules are just objects, so you can use getattr() on them:
import devices  # hypothetical name of the module where your classes live
instance = getattr(devices, typename)(arguments)
Store them all in a dictionary and look them up by name; you don't have to type out the name in a string, the class has a __name__ attribute you can re-use:
types = {}

class XBee24ZBCoordinator():
    pass  # class definition goes here

types[XBee24ZBCoordinator.__name__] = XBee24ZBCoordinator
If these are defined in the 'current' module, the globals() function returns a dictionary too, so globals()['XBee24ZBCoordinator'] is a reference to the class definition as well.
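A minimal sketch of how BuildNetwork() could tie the INI file to the dictionary approach (the register decorator, the constructor signature, and build_network itself are illustrative assumptions):

from ConfigParser import ConfigParser  # configparser on Python 3

registry = {}

def register(cls):
    # Store each class under its own __name__, as suggested above
    registry[cls.__name__] = cls
    return cls

@register
class XBee24ZBCoordinator(object):
    def __init__(self, name, comport, comspeed):
        self.name, self.comport, self.comspeed = name, comport, int(comspeed)

def build_network(path):
    config = ConfigParser()
    config.read(path)
    devices = {}
    for section in config.sections():
        options = dict(config.items(section))
        cls = registry[options.pop('type')]  # 'XBee24ZBCoordinator' -> the class
        devices[section] = cls(**options)    # remaining keys become keyword args
    return devices

# network = build_network('my.ini')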
I'm working in an OpenERP environment, but maybe my issue can be answered from a pure Python perspective. What I'm trying to do is define a class whose "_columns" variable can be set from a function that returns the respective dictionary. So basically:
class repos_report(osv.osv):
    _name = "repos.report"
    _description = "Reposition"
    _auto = False

    def _get_dyna_cols(self):
        ret = {}
        cr = self.cr
        cr.execute('Select ... From ...')
        pass  # <- Fill dictionary
        return ret

    _columns = _get_dyna_cols()

    def init(self, cr):
        pass  # Other stuff here too, but I need to set my _columns before, as per OpenERP

repos_report()
I have tried many ways, but this code reflects my basic need. When I execute my module for installation I get the following error:
TypeError: _get_dyna_cols() takes exactly 1 argument (0 given)
When defining the _get_dyna_cols function I'm required to have self as the first parameter (even before executing). Also, I need a reference to OpenERP's 'cr' cursor in order to query the data that fills my _columns dictionary. So, how can I call this function so that its result can be assigned to _columns? What argument could I pass to it?
From an OpenERP perspective, I think I have made my need quite clear, but any other suggested approach is also welcome.
From an OpenERP perspective, the right solution depends on what you're actually trying to do, and that's not quite clear from your description.
Usually the _columns definition of a model must be static, since it will be introspected by the ORM and (among other things) will result in the creation of corresponding database columns. You could set the _columns in the __init__ method (not init, see note [1] below) of your model, but that would not make much sense, because the result must not change over time (and it will only get called once, when the model registry is initialized, anyway).
Now there are a few exceptions to the "static columns" rules:
Function Fields
When you simply want to handle read/write operations on a virtual column dynamically, you can use a column of the fields.function type. It needs to emulate one of the other field types, but can do anything it wants with the data dynamically. Typical examples store the data in other (real) columns after some pre-processing. There are hundreds of examples in the official OpenERP modules.
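For illustration, a minimal sketch of a function field (the model, method and column names are made up; OpenERP 6.1/7-era API assumed):

from openerp.osv import fields, osv  # 'from osv import fields, osv' on older releases

class product_label(osv.osv):
    _inherit = 'product.product'

    def _compute_label(self, cr, uid, ids, field_name, arg, context=None):
        # A function field's compute method returns {record id: value}
        result = {}
        for record in self.browse(cr, uid, ids, context=context):
            result[record.id] = '[%s] %s' % (record.default_code or '', record.name)
        return result

    _columns = {
        'label': fields.function(_compute_label, type='char', string='Label'),
    }

product_label()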
Dynamic columns set
When you are developing a wizard model (a subclass of TransientModel, formerly osv_memory), you don't usually care about the database storage, and simply want to obtain some input from the user and take corresponding actions.
It is not uncommon in that case to need a completely dynamic set of columns, where the number and types of the columns may change every time the model is used. This can be achieved by overriding a few key API methods to simulate dynamic columns:
fields_view_get is the API method that is called by the clients to obtain the definition of a view (form/tree/...) for the model.
fields_get is included in the result of fields_view_get but may be called separately, and returns a dict with the columns definition of the model.
search, read, write and create are called by the client in order to access and update record data, and should gracefully accept or return values for the columns that were defined in the result of fields_get
By overriding properly these methods, you can completely implement dynamic columns, but you will need to preserve the API behavior, and handle the persistence of the data (if any) yourself, in real static columns or in other models.
There are a few examples of such dynamic columns sets in the official addons, for example in the survey module that needs to simulate survey forms based on the definition of the survey campaign.
[1] The init() method is only called when the model's module is installed or updated, in order to set up or update the database backend for this model. It relies on _columns to do this.
When you write _columns = _get_dyna_cols() in the class body, that function call is made right there, in the class body, while Python is still parsing the class itself. At that point, your _get_dyna_cols method is just a function object in the local (class body) namespace, and it is called as such.
The error message you get is due to the missing self parameter, which is inserted only when you access your function as a method. But this error message is not what is really wrong here: what is wrong is that you are making an immediate function call while expecting special behavior, like late execution.
The way in Python to achieve what you want, i.e. to have the method called automatically when the _columns attribute is accessed, is to use the "property" built-in.
In this case, just do this: _columns = property(_get_dyna_cols)
This will create a class attribute named "_columns" which, through a mechanism called the "descriptor protocol", will call the desired method whenever the attribute is accessed from an instance.
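For illustration, here is a minimal standalone sketch of that mechanism (plain Python 2, names made up, outside OpenERP):

class Example(object):
    def _get_columns(self):
        # Runs each time the attribute is read on an instance
        return {'name': 'char'}
    columns = property(_get_columns)

e = Example()
print e.columns  # calls _get_columns(e) and prints {'name': 'char'}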
To learn more about the property built-in, check the docs: http://docs.python.org/library/functions.html#property
I'm trying to do:
MyModel({'text': db.Text('text longer than 500 bytes')})
But get:
BadValueError: Indexed value fb_education must be at most 500 bytes
I'm thinking this is just a carry-over from this issue with the old db API:
https://groups.google.com/forum/?fromgroups#!topic/google-appengine/wLAwrjtsuks
First, create the entity kind dynamically:
kindOfEntity = "MyTable"

class DynamicEntity(ndb.Expando):
    @classmethod
    def _get_kind(cls):
        return kindOfEntity
Then assign the Text properties at runtime/dynamically, as shown below:
dbObject = DynamicEntity()
key = "studentName"
value = "Vijay Kumbhani"
textProperties = ndb.TextProperty(key)
# Register the property and its value directly on the instance
dbObject._properties[key] = textProperties
dbObject._values[key] = value
dbObject.put()
After this, the property is stored as a TextProperty.
You're trying to use a db.Text, part of the old API, with NDB, which isn't going to work.
To the best of my knowledge, there's no good way to set unindexed properties in an Expando in NDB, currently. You can set _default_indexed = False on your expando subclass, as (briefly) documented here, but that will make the default for all expando properties unindexed.
A better solution would be to avoid the use of Expando altogether; there are relatively few compelling uses for it where you wouldn't be better served by defining a model (or even defining one dynamically).
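For example, a minimal sketch of the _default_indexed approach mentioned above (the class name is made up); note that it affects every dynamic property on the subclass:

from google.appengine.ext import ndb

class UnindexedExpando(ndb.Expando):
    _default_indexed = False  # dynamic properties default to unindexed

e = UnindexedExpando(id='demo')
e.text = 'text longer than 500 bytes ...'  # unindexed, so the 500-byte limit does not apply
e.put()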
Yeah, I know the question is old, but I also googled for the same solution and found no results.
So here is a recipe that works for me (I extend User() with a "permissions" property):
# Build an unindexed generic property and attach it to the existing entity
prop = ndb.GenericProperty("permissions", indexed=False)
prop._code_name = "permissions"
user._properties["permissions"] = prop  # register the property on the instance
prop._set_value(user, permissions)      # store the value through the property
The previous answer was VERY useful to me... Thanks!!! I just wanted to add that it appears you can also create a specific property type using this technique (if you know the data type you want to create). When the entity is later retrieved, the dynamic property is set to the specific type instead of GenericProperty. This can be handy for ndb.PickleProperty and ndb.JsonProperty values in particular (to get the in/out conversions).
prop = ndb.TextProperty("permissions", indexed=False)
prop._code_name = "permissions"
user._properties["permissions"] = prop
prop._set_value(user, permissions)
I was trying to change just one property of an entity to Text. But when you don't map your properties explicitly, Expando/Model seems to change all properties of an entity to GenericProperty (after a get).
When you put those entities again (to change the desired property), it affects the other existing TextProperties, changing them to regular strings.
Only the low-level datastore API seems to work:
https://gist.github.com/feroult/75b9ab32b463fe7f9e8a
You can call this from the remote_api_shell.py:
from yawp import *
yawp(kind).migrate(20, 'change_property_to_text', 'property_name')
I'm new to Python. I'm trying to figure out how to emulate an existing application I've coded using PHP and MS-SQL, and re-create the basic back-end functionality on Google App Engine.
One of the things I'm trying to do is emulate the current behavior of certain tables I have in MS-SQL, which have an insert/delete/update trigger that inserts a copy of the current (pre-change) record into an audit table and stamps it with a date and time. I'm then able to query this audit table at a later date to examine the history of changes that a record went through.
I've found the following code here on stackoverflow:
class HistoryEventFieldLevel(db.Model):
    # parent, you don't have to define this
    date = db.DateProperty()
    model = db.StringProperty()
    property = db.StringProperty()  # Name of changed property
    action = db.StringProperty(choices=(['insert', 'update', 'delete']))
    old = db.StringProperty()  # Old value for field, empty on insert
    new = db.StringProperty()  # New value for field, empty on delete
However, I'm unsure how this code can be applied to all objects in my new database.
Should I create get() and put() functions for each of my objects, and then in the put() function create a child object of this class and set its particular properties?
This is certainly possible, albeit somewhat tricky. Here's a few tips to get you started:
Overriding the class's put() method isn't sufficient, since entities can also be stored by calling db.put(), which won't call any methods on the class being written.
You can get around this by monkeypatching the SDK to call pre/post call hooks, as documented in my blog post here.
Alternately, you can do this at a lower level by implementing RPC hooks, documented in another blog post here.
Storing the audit record as a child entity of the modified entity is a good idea, and means you can do it transactionally, though that would require further, more difficult changes.
You don't need a record per field. Entities have a natural serialization format, Protocol Buffers, and you can simply store the entity as an encoded Protocol Buffer in the audit record. If you're operating at the model level, use model_to_protobuf to convert a model into a Protocol Buffer; see the sketch after this list.
All of the above are far more easily applied to storing the record after it's modified, rather than before it was changed. This shouldn't be an issue, though - if you need the record before it was modified, you can just go back one entry in the audit log.
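A minimal, untested sketch of that Protocol Buffer tip using the old db API's model_to_protobuf (the AuditRecord model and audit() helper are made-up names, not library API):

from google.appengine.ext import db

class AuditRecord(db.Model):
    date = db.DateTimeProperty(auto_now_add=True)
    snapshot = db.BlobProperty()  # encoded Protocol Buffer of the audited entity

def audit(entity):
    # Store the snapshot as a child of the modified entity, as suggested above
    record = AuditRecord(parent=entity.key(),
                         snapshot=db.model_to_protobuf(entity).Encode())
    record.put()
    return record

# Later, decode the snapshot back into a model instance:
# original = db.model_from_protobuf(record.snapshot)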
I am a bit out of touch with GAE and have no SDK with me to test it, so here are some guidelines to give you a hint of what you could do.
Create a metaclass AuditMeta which you set on any models you want audited.
AuditMeta, while creating a new model class, should copy the class under a new name with "_audit" appended, and should copy the attributes too, which becomes a bit tricky on GAE as attributes are themselves descriptors.
Add a put method to each such class, and on put create an audit object for that class and save it; that way, for each row in tableA you will have a history in tableA_audit.
E.g. a plain Python example (without GAE):
import new

class AuditedModel(object):
    def put(self):
        print "saving", self, self.date
        audit = self._audit_class()
        audit.date = self.date
        print "saving audit", audit, audit.date

class AuditMeta(type):
    def __new__(self, name, baseclasses, _dict):
        # create the model class, derived from AuditedModel
        klass = type.__new__(self, name, (AuditedModel,) + baseclasses, _dict)
        # create an audit class, a copy of klass
        # we need to copy attributes properly instead of just passing them like this
        auditKlass = new.classobj(name + "_audit", baseclasses, _dict)
        klass._audit_class = auditKlass
        return klass

class MyModel(object):
    __metaclass__ = AuditMeta
    date = "XXX"

# create an object
a = MyModel()
a.put()
output:
saving <__main__.MyModel object at 0x957aaec> XXX
saving audit <__main__.MyModel_audit object at 0x957ab8c> XXX
Read the audit trail code (only 200 lines) to see how they do it for Django.