Get Primary Key Value from Object in QGIS - python

Is it possible to request the primary key value of a selected object in QGIS?
My layer is stored in a PostgreSQL DB and loaded as a WFS layer in QGIS.
Python code to select an object and return its attributes:
[...]
def canvasPressEvent(self, event):
    found_features = self.identify(event.x(), event.y(), [self.layer], QgsMapToolIdentify.TopDownAll)
    for a in found_features:
        print(a.mFeature.id(), a.mFeature.attributes())
Output
13 ['doc_1658386642','m4a',NULL, PyQt5.QtCore.QDateTime(2022,7,21,8,57,28,595),NULL]
24 ['doc_1672838277',NULL,'Dies ist eine Notiz',PyQt5.QtCore.QDateTime(2023,1,4,14,23,50,17),NULL]
The same objects viewed in pgAdmin:
Id [PK] uuid                         | type varchar | name varchar   | note varchar        | geom geometry                                      | lastModified timestamp        | orientation integer
-------------------------------------+--------------+----------------+---------------------+----------------------------------------------------+-------------------------------+--------------------
017ee6fe-4fb0-4761-a2f1-eb9e6b9f19ad | m4a          | doc_1658386642 | NULL                | 0101000020E6100000CCF6F48C0D3B22403A3CFF610BAA4A40 | 2022-07-21 06:57:28.595+00    | NULL
2caaaa38-e140-4a7c-b3a6-8383c60fc07b | NULL         | doc_1672838277 | Dies ist eine Notiz | 0101000020E61000000965EA8BEA3A2240AC84C562D6A94A40 | 2023-01-04 13:23:50.017701+00 | NULL
I would like to get the id [PK, uuid] of these objects, for example 017ee6fe-4fb0-4761-a2f1-eb9e6b9f19ad. Any suggestions? Is it even possible?

Related

How to do left outer join with PeeWee and no ForeignKey?

Using PeeWee on top of SQLite, I am trying to do a left outer join between two tables that do not have a ForeignKey relation defined. I can get the data if the right table has an entry that matches the left table, but if there is no match, the columns from the right table do not make it into the returned models.
class BaseModel(Model):
    class Meta:
        database = db

class Location(BaseModel):
    location_key = CharField(primary_key=True)
    lat = FloatField(null=False)
    lon = FloatField(null=False)

class Household(BaseModel):
    name = CharField(null=True)
    location_id = CharField(null=True)
I am trying to do something like:
for h in Household.select(Household, Location).join(
        Location, on=(Household.location_id == Location.location_key),
        join_type=JOIN.LEFT_OUTER):
    print(type(h), h, h.location, h.location.lat)
This works if Household.location_id matches something in Location, but if Household.location_id is None (null), then I get an AttributeError: 'Household' object has no attribute 'location'.
I would have expected location to be present, but have a value of None.
How can I check for the existence of location before using it? I am trying to avoid using a ForeignKey; there are a lot of mismatches between Household.location_id and Location.location_key, and PeeWee really gets angry about that...
I think I understand what you're trying to do after re-reading. What I'd suggest is to use Peewee's attr keyword argument in the join, which can patch the related Location (if it exists) onto a different attribute than "location":
query = (Household
         .select(Household, Location)
         .join(Location, on=(Household.location_id == Location.location_key),
               attr='location_obj', join_type=JOIN.LEFT_OUTER))
Then you can check the "location_obj" to retrieve the related object.
for house in query:
    # If the join matched, 'location_obj' holds the related Location;
    # otherwise the attribute is absent, so default to None.
    location_obj = getattr(house, 'location_obj', None)
    # The raw location_id is still present either way.
    print(house.location_id, location_obj)
Found my own answer: implement __getattr__(self, name) in the Household model and return None if the name is 'location'. __getattr__ is only called if there is no attribute with that name.
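Outside peewee, that fallback behaviour of __getattr__ can be sketched in plain Python (the class below is a toy stand-in, not the real peewee model):

```python
class Household:
    """Toy stand-in for the peewee model, illustrating the fallback."""
    def __init__(self, location_id=None):
        self.location_id = location_id

    def __getattr__(self, name):
        # Only invoked when normal attribute lookup fails, so if the ORM
        # has attached a joined Location instance, that instance wins.
        if name == 'location':
            return None
        raise AttributeError(name)

h = Household('loc-1')
print(h.location)        # no joined row attached -> None
h.location = object()    # simulates the ORM attaching the joined row
print(h.location)        # now the attached object is returned
```

Because __getattr__ only fires on failed lookups, rows where the join matched are unaffected.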

Generate GUID when creating model object using SQLAlchemy (Python)

I'm using Postgres with SQLAlchemy. I want to create Profile objects and have them autogenerate a GUID. However, currently my profile ids don't store any values, e.g.:
profile = Profile(name='some_profile')
-> print(profile.name)
some_profile
-> print(profile.id)
None
I've looked into how others are implementing GUIDs into their models (How can I use UUIDs in SQLAlchemy?)
I understand that many people don't recommend using GUIDs as IDs, but I would like to know where I'm going wrong despite this.
Here's my current implementation:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, String
from sqlalchemy.types import TypeDecorator, CHAR
import uuid

Base = declarative_base()

class GUID(TypeDecorator):
    """Platform-independent GUID type.

    Uses PostgreSQL's UUID type, otherwise uses
    CHAR(32), storing as stringified hex values.
    """
    impl = CHAR

    def process_bind_param(self, value, dialect):
        if value is None:
            return value
        elif dialect.name == 'postgresql':
            return str(value)
        else:
            if not isinstance(value, uuid.UUID):
                return "%.32x" % uuid.UUID(value).int
            else:
                # hexstring
                return "%.32x" % value.int

    def process_result_value(self, value, dialect):
        if value is None:
            return value
        else:
            if not isinstance(value, uuid.UUID):
                value = uuid.UUID(value)
            return value

class Profile(Base):
    __tablename__ = 'profile'

    id = Column(GUID(), primary_key=True, default=uuid.uuid4)
    name = Column(String)
I'm still a beginner with Python, but as far as I understand, I'm declaring the type of my Profile id column as a GUID (set up by the GUID class). A default GUID value should therefore be stored when one is generated for that column through uuid.uuid4().
My guess is that there isn't anything wrong with the GUID class, but instead in how I'm trying to generate the default value within the id column.
Any help would be appreciated!
Your code is correct!
After you commit the profile, you can read a valid id.
profile = Profile(name='some_profile')
-> print(profile.name)
some_profile
-> print(profile.id)
None
# commit
session.add(profile)
session.commit()
# print saved profile
-> print(profile.name)
some_profile
-> print(profile.id)
ff36e5ff-16b5-4536-bc86-8ec02a53cfc8
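As a plain-Python sanity check (no database needed), the CHAR(32) path of the GUID type above is just a hex round-trip between a uuid.UUID and its 32-character hex string:

```python
import uuid

u = uuid.uuid4()
stored = "%.32x" % u.int       # what process_bind_param writes on non-Postgres backends
restored = uuid.UUID(stored)   # what process_result_value reconstructs
print(stored)                  # 32 lowercase hex chars, no dashes
assert restored == u           # the round-trip is lossless
```

This confirms the type itself is sound; the id stays None only because the default fires at flush/commit time, not at object construction.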

Geometry('POINT') column being returned as str object

I have an SQLAlchemy model object which has the following column:
gps = Column(Geometry('POINT'))
I have implemented a to_dict function in the model class, for which I need to deconstruct the gps object to give me lat and long. This successfully works for me in another model. But for some reason, in the class in question, the following piece of code results in an attribute error ('str' object has no attribute 'data'):
point = wkb.loads(bytes(self.gps.data))
I store the gps data like so:
gps = Point(longitude, latitude).wkt
Here's the table description from postgresql:
 Column |      Type       |                     Modifiers                     | Storage | Stats target | Description
--------+-----------------+---------------------------------------------------+---------+--------------+-------------
 id     | integer         | not null default nextval('pins_id_seq'::regclass) | plain   |              |
 gps    | geometry(Point) |                                                   | main    |              |
I am calling the as_dict method as soon as the Pin object gets created, like so:
gps = Point(
    float(data['longitude']),
    float(data['latitude'])
).wkt
pin = Pin(gps=gps)
# Commit pin to disk,
# otherwise fields will
# not return properly.
with transaction.manager:
    self.dbsession.add(pin)
    transaction.commit()
print(pin.as_dict())
What's driving me insane is the fact that the exact same code works for the other model. Any insight would be mucho appreciated.
Edit: Following Ilja's comment, I understood that the issue is that the object isn't getting written to disk, and apparently the Geometry column gets treated as a string until that happens. But I am getting the same error even now. Basically, at this point, the transaction.commit() call isn't doing what I think it is supposed to...
Relevant to that is the configuration of the session object. Since all this is under the Pyramid web framework, I am using the default session configuration, as described here (you can skip the first few paragraphs until they start discussing the /models/__init__.py file; Ctrl+F if need be).
In case I have left some important detail out, reproducing the problematic class here below:
from geoalchemy2 import Geometry
from sqlalchemy import (
    Column,
    Integer,
)
from shapely import wkb

from .meta import Base

class Pin(Base):
    __tablename__ = 'pins'

    id = Column(Integer, primary_key=True)
    gps = Column(Geometry('POINT'))

    def as_dict(self):
        toret = {}
        point = wkb.loads(bytes(self.gps.data))
        lat = point.x
        lon = point.y
        toret['gps'] = {'lon': lon, 'lat': lat}
        return toret
At first I thought that the cause of the
Traceback (most recent call last):
  ...
  File "/.../pyramid_test/views/default.py", line 28, in my_view
    print(pin.as_dict())
  File "/.../pyramid_test/models/pin.py", line 18, in as_dict
    point = wkb.loads(bytes(self.gps.data))
AttributeError: 'str' object has no attribute 'data'
was that zope.sqlalchemy closes the session on commit but leaves instances unexpired; that was not the case, however. That guess came from having used Pyramid some time ago, when the global transaction would still affect the ongoing transaction during a request, but now the default seems to be an explicit transaction manager.
The actual problem is that transaction.commit() has no effect on the ongoing transaction of the current session. Adding some logging will make this clear:
with transaction.manager:
    self.dbsession.add(pin)
    transaction.commit()

print("Called transaction.commit()")
insp = inspect(pin)
print(insp.transient,
      insp.pending,
      insp.persistent,
      insp.detached,
      insp.deleted,
      insp.session)
which results in output along these lines:
% env/bin/pserve development.ini
2018-01-19 14:36:25,113 INFO [shapely.speedups._speedups:219][MainThread] Numpy was not imported, continuing without requires()
Starting server in PID 1081.
Serving on http://localhost:6543
...
Called transaction.commit()
False True False False False <sqlalchemy.orm.session.Session object at 0x7f958169d0f0>
...
2018-01-19 14:36:28,855 INFO [sqlalchemy.engine.base.Engine:682][waitress] BEGIN (implicit)
2018-01-19 14:36:28,856 INFO [sqlalchemy.engine.base.Engine:1151][waitress] INSERT INTO pins (gps) VALUES (ST_GeomFromEWKT(%(gps)s)) RETURNING pins.id
2018-01-19 14:36:28,856 INFO [sqlalchemy.engine.base.Engine:1154][waitress] {'gps': 'POINT (1 1)'}
2018-01-19 14:36:28,881 INFO [sqlalchemy.engine.base.Engine:722][waitress] COMMIT
As can be seen, no commit takes place and the instance is still in the pending state, so its gps attribute still holds the text value from the assignment. If you wish to handle your serialization the way you do, you could first flush the changes to the DB and then expire the instance attribute(s):
gps = Point(
    float(data['longitude']),
    float(data['latitude'])
).wkt
pin = Pin(gps=gps)
self.dbsession.add(pin)
self.dbsession.flush()
self.dbsession.expire(pin, ['gps'])  # expire the gps attr
print(pin.as_dict())  # SQLAlchemy will fetch the value from the DB
On the other hand you could also avoid having to handle the (E)WKB representation in the application and request the coordinates from the DB directly using for example column_property() accessors:
class Pin(Base):
    __tablename__ = 'pins'

    id = Column(Integer, primary_key=True)
    gps = Column(Geometry('POINT'))
    gps_x = column_property(gps.ST_X())
    gps_y = column_property(gps.ST_Y())

    def as_dict(self):
        toret = {}
        toret['gps'] = {'lon': self.gps_y, 'lat': self.gps_x}
        return toret
With that, the manual expire(pin) becomes unnecessary, since the column properties have to refresh themselves anyway in this case. And of course, since you already know your coordinates when constructing the new Pin, you could just prefill them:
lon = float(data['longitude'])
lat = float(data['latitude'])
gps = Point(lon, lat).wkt
pin = Pin(gps=gps, gps_x=lat, gps_y=lon)
and so no flushing, expiring, or fetching is needed at all.

Odoo 8 - on_record_write() not triggered for 'state' field on stock.picking

So I am using the connector to send a status to Magento depending on the status of a stock.picking record in Odoo.
Here is the (beginning of the) function that I use for that :
@on_record_write(model_names='stock.picking')
def change_status_sale_order_sp(session, model_name, record_id, vals):
    if session.context.get('connector_no_export'):
        return
    record = session.env['stock.picking'].browse(record_id)
    # Stock pickings may also be incoming receipts from the supplier;
    # we only want the deliveries to customers.
    if "IN" in record.name:
        return
    # String containing the sale order ID + the warehouse from where
    # the order is shipped.
    origin = record.origin
    so_name = origin.split(':')[0]
    warehouse = origin.split(':')[1]
    status = record.state
    _logger.debug("STOCK PICKING --- Delivery order " + str(record_id) + " was modified : " + str(vals))
I want that function to be called whenever there is a change on a stock.picking record, hence the on_record_write decorator.
My problem is: that function is called for every write action (every time a field is modified, either manually or on the server side) on a stock.picking record, EXCEPT when it is the state field. I never see {'state': whateverthestatusis} in the vals parameter. Why is that? Am I missing something?
on_record_write() is triggered every time the write method of stock.picking is called.
But state is a fields.function field: it is computed from the related stock moves rather than written directly, so you will not get state in the vals of write.
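A hedged pure-Python sketch of why a computed field never shows up in write vals (Picking here is a toy stand-in, not the real Odoo model):

```python
class Picking:
    """Toy stand-in: 'state' is derived, like an Odoo fields.function."""
    def __init__(self):
        self.move_states = ['draft']
        self.written_vals = []

    @property
    def state(self):
        # Recomputed from the moves on every access; never stored directly.
        return 'done' if all(s == 'done' for s in self.move_states) else 'draft'

    def write(self, vals):
        # Only explicitly assigned columns ever appear in vals.
        self.written_vals.append(vals)
        for key, value in vals.items():
            setattr(self, key, value)

p = Picking()
p.write({'move_states': ['done']})  # the state flips to 'done' ...
print(p.state)
print(p.written_vals)               # ... but 'state' never appeared in vals
```

Writing the moves changes the computed state as a side effect, which is why no write({'state': ...}) is ever observed on the picking itself.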

How to compute a database field with the record id

Model:
db.define_table('orders',
    Field('customer_id', db.customer),
    Field('order_id', 'string')
)
I want to get a special order_id like XY-150012, where XY is part of the customer name, 15 is the year, and 12 is the actual record id of the orders record. I tried in the model:
db.orders.order_id.compute = lambda r: "%s-%s00%s" % (db.customer(r['customer_id']).short, str(request.now.year)[2:], r['id'])
The id is never recognized; the computation ends up as None. If I remove r['id'] from the compute line, it works.
EDIT:
After adding an extra field Field('running_number', 'integer') to the model, I can access this field's content.
Is there an easy way to set this field's default to db.orders.id?
SOLUTION:
With Anthony's input, and after reading about recursive selects, I came up with this solution:
db.define_table('orders',
    Field('customer_id', db.customer),
    Field('order_id', 'string', default=None))

def get_order_id(id, short):
    y = str(request.now.year)[2:]
    return '%s-%s00%s' % (short, y, id)

def set_id_after_insert(fields, id):
    fields.update(id=id)

def set_order_id_after_update(s, f):
    row = s.select().first()
    if row['order_id'] is None:
        s.update_naive(order_id=get_order_id(row['id'], row['customer_id'].short))
    else:
        return

db.orders._after_insert.append(lambda f, id: set_id_after_insert(f, id))
db.orders._after_update.append(lambda s, f: set_order_id_after_update(s, f))
The problem is that the record ID is not known until after the record has been inserted in the database, as the id field is an auto-incrementing integer field whose value is generated by the database, not by web2py.
One option would be to define an _after_insert callback that updates the order_id field after the insert:
def order_after_insert(fields, id):
    fields.update(id=id)
    db(db.order.id == id).update(order_id=db.order.order_id.compute(fields))

db.order._after_insert.append(order_after_insert)
You might also want to create an _after_update callback, but in that case, be sure to use the update_naive argument in both callbacks when defining the Set (see above link for details).
Depending on how the order_id is used, another option might be a virtual field.
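For reference, the XY-150012 pattern from the question can be sketched as a small standalone helper, independent of web2py (the function name and the explicit year parameter are illustrative, not part of the original code):

```python
def make_order_id(record_id, short, year):
    # Hypothetical standalone version of get_order_id: customer short
    # code, two-digit year, literal '00', then the record id.
    return '%s-%s00%s' % (short, str(year)[2:], record_id)

print(make_order_id(12, 'XY', 2015))  # XY-150012
```

Passing the year explicitly (instead of reading request.now) also makes the format easy to unit-test.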
