SQLAlchemy ORM to load specific columns in Model.query - python

I am a newbie in the Python/SQLAlchemy world. I am trying to fetch data from PostgreSQL using SQLAlchemy in Flask, where I need to fetch only selected columns instead of streaming all the columns from the database over the network. One approach is using the session (like below), which works fine:
session.query(User.user_id, User.name).all()
But for some reason I would like to stick with the Model.query method instead of using sessions. So I ended up using something like below:
User.query.options(load_only(User.user_id, User.name)).all()
The above code snippet doesn't filter down to the selected columns; instead it gives back all the columns. It looks like SQLAlchemy doesn't respect the load_only arguments. Why is that the behaviour, and is there any way to achieve my use case with Model.query instead of using sessions?
My user model looks like below,
class User(db.Model):
    __tablename__ = 'user_info'
    user_id = Column(String(250), primary_key=True)
    name = Column(String(250))
    email = Column(String(250))
    address = Column(String(512))
Version Info
Python - 3.7
SQLAlchemy - 1.3.11
Edit 1: Though I added the load_only option, it generates the following query:
SELECT user.user_id AS user_user_id, user.name AS user_name, user.email AS user_email, user.address AS user_address FROM user_info
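No answer is recorded here, but a common way to keep the Model.query style while restricting the SELECT list is with_entities(). Below is a minimal sketch using plain SQLAlchemy (1.4+) with an in-memory SQLite database standing in for Flask-SQLAlchemy and PostgreSQL; the sample row is made up for illustration:

```python
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'user_info'
    user_id = Column(String(250), primary_key=True)
    name = Column(String(250))
    email = Column(String(250))
    address = Column(String(512))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(User(user_id='u1', name='Alice', email='a@example.com', address='somewhere'))
session.commit()

# Equivalent of User.query.with_entities(...) under Flask-SQLAlchemy:
# only user_id and name appear in the emitted SELECT.
rows = session.query(User).with_entities(User.user_id, User.name).all()
```

Note the difference in result shape: with_entities() returns plain tuple-like rows of the selected columns, whereas load_only() still returns full User instances and always includes the primary key in the SELECT (the other columns are deferred, not dropped).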

Related

How would I Update an sql database I made with flask-sqlalchemy?

I'm using flask-sqlalchemy to store users and posts. The entire database is stored in this users.sqlite3 file. Let's say this is my user class:
class User(db.Model, UserMixin):
    id = db.Column("id", db.Integer, primary_key=True)
    name = db.Column(db.String(100))
    email = db.Column(db.String(100), unique=True)
    password = db.Column(db.String(100))
    status = db.Column(db.String(100))
    about = db.Column(db.String(500))
Now let's say I wanted to add another column, like a favorite number or something. I would have to add number = db.Column(db.Integer()). But then it won't work, because the file is already generated and now I'm saying there's another column that doesn't exist in there. So I would have to delete all the data in that file and start with an empty database every time I want to update it.
Is there any way to get around this? Could I do something to just make those other values empty when I add them in?
This is called a database migration. You basically need two steps:
Update your model file to the new state (add your column).
Run a "migration" - a one-time change that updates the EXISTING database to match the new model (in this case, ALTER TABLE ... ADD COLUMN ...).
The SQLAlchemy project provides the Alembic package to help with migrations; you just need to generate them and ensure they get run on each deploy to keep your database schema up to date with your code.
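As a minimal illustration of what that one-time migration step does under the hood (plain sqlite3 from the standard library here, not Alembic; the table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO user (name) VALUES ('alice')")

# The migration step: add the new column. Existing rows keep their
# data and get NULL for the new column - no need to wipe the file.
conn.execute("ALTER TABLE user ADD COLUMN number INTEGER")

row = conn.execute("SELECT name, number FROM user").fetchone()
```

Alembic generates and versions exactly this kind of DDL for you, so the same change can be replayed on every copy of the database.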

How to change Column label using SqlAlchemy ORM

I have an MS Access DB file (.accdb), and inside the file photos of persons are stored as attachments. In the table constructor I see only one field, "photos", with type "Attachment". Actually there are three hidden fields with the names photos.FileData, photos.FileName, photos.FileType. For parsing these fields I created the following class:
class Person(Base):
    __tablename__ = 'persons'
    name = Column(String(255), name='name')
    photos_data = Column(String, name='photos.FileData', quote=False)
    ....
If I try to get all attributes of Person at the same time, as follows:
persons = session.query(Person)
I get an error in the following generated piece of the SQL statement:
SELECT ... [persons].photos.FileData AS persons_photos.FileData ...;
As you can see, there is a dot in the alias, which raises an ODBC error. I can avoid this behaviour by requesting FileData as a separate value:
persons = session.query(Person.photos_data.label('photos_data'))
Or I can use raw SQL without aliases. But this is not the normal ORM way that I need, because I have to manually construct Person objects each time after a request to the DB.
Is it possible to set my own label on a Column during its declaration, or even disable the label for a selected column?
I saw this great answer, but it seems it is not applicable to me. The statement below doesn't work properly:
photos_data = Column(String, name='photos.FileData', quote=False).label('photos_data')
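No accepted answer appears here, but a Core-level sketch (compiled against SQLAlchemy's default dialect, not Access/ODBC) shows the behaviour the question works around: an explicit .label() at query time keeps the dot out of the alias, while the key= argument only renames the Python-side attribute:

```python
from sqlalchemy import Column, MetaData, String, Table, select

metadata = MetaData()
persons = Table(
    'persons', metadata,
    Column('name', String(255)),
    # dotted physical column name, exposed in Python as .photos_data
    Column('photos.FileData', String, key='photos_data', quote=False),
)

# Labelling explicitly produces a dot-free alias in the SELECT list.
stmt = select(persons.c.photos_data.label('photos_data'))
sql = str(stmt)
```

The compiled statement aliases the dotted column as plain photos_data, which is what the ODBC driver can digest; the open question of doing this once at declaration time (rather than per query) remains as asked.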

How to fix 'Invalid input syntax for integer' issue while using sqlalchemy Declarative API

I'm building a little utility which aims to take a flat csv/excel file and populate a target database on MS Access - as I'm working on a Mac, I'm developing it using Postgres...
So I developed a part which deals with messy input (csv/excel) forms (several headings, etc.), but that's not my issue at the moment.
On the other hand, I made my database model using the SQLAlchemy Declarative Base API.
I'm facing an issue when importing data into some tables:
- Split a flat record into several objects
- Check (SELECT) whether the record exists yet, based on uniqueness constraints
- If it doesn't exist, create the object; else use the existing one
- Propagate key information to the related objects
For some tables I'm using the auto-increment argument, but sometimes the record has its own ID (in the input file), so I should use it for insert/select in my tables, and sometimes there is no ID, so I have to create a new technical ID for my table.
Example: I have a record with the primary key obsr25644, and sometimes nothing, in which case I use a default value created with uuid.
So below is the stack trace when doing a select operation on my table. The same error occurs when working with existing data - obsr25644 - and a generated uuid - 'a8098c1a-f86e-11da-bd1a-00112444be1e'.
sqlalchemy.exc.DataError: (psycopg2.errors.InvalidTextRepresentation) **invalid input syntax for integer**: "obsr25644"
LINE 3: WHERE "Location"."Id_observer" = 'obsr25644'
As you can see below, "Location"."Id_observer" is declared as String(255). I don't understand why the error is related to 'integer'.
[SQL: SELECT "Location"."Id_location" AS "Location_Id_location", [...], "Location"."Id_observer" AS "Location_Id_observer",
FROM "Location"
WHERE "Location"."Id_observer" = %(Id_observer_1)s
LIMIT %(param_1)s]
[parameters: {'Id_observer_1': 'obsr25644', 'param_1': 1}]
class LocationModel(UniqueMixin, Base):
    __tablename__ = 'Location'
    # Primary key
    Id_location = Column(Integer, primary_key=True, autoincrement=True)
    [...]
    Id_observer = Column(String(255), ForeignKey('Observer.Id_observer'))
    observer = relationship("ObserverModel", load_on_pending=True, back_populates="location")

class ObserverModel(UniqueMixin, Base):
    __tablename__ = 'Observer'
    # Primary key
    Id_observer = Column(String(255), primary_key=True, default=UniqueMixin.unique_hash())
    [...]
    # Relationship
    location = relationship("LocationModel", load_on_pending=True, back_populates="observer")
Note: UniqueMixin.unique_hash() returns uuid.uuid4().hex
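One thing worth checking in the model above (an aside, not necessarily the cause of the integer error): default=UniqueMixin.unique_hash() calls the function once at class-definition time, so every row would share a single uuid. Passing the callable itself gives each INSERT its own value. A sketch with SQLite and a plain lambda standing in for UniqueMixin:

```python
import uuid
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Observer(Base):
    __tablename__ = 'Observer'
    # A callable, not a call: evaluated per INSERT, not once per class
    Id_observer = Column(String(255), primary_key=True,
                         default=lambda: uuid.uuid4().hex)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
a, b = Observer(), Observer()
session.add_all([a, b])
session.commit()
```

With the callable form, the two inserted rows receive distinct 32-character hex ids.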

inner joins vs joinloads in sqlalchemy

I read about SQLAlchemy's joinedload, as mentioned here, and I am a little confused about its benefits or special usages over simply joining two tables, as mentioned here.
I would like to know when to use each method. Currently I don't see any benefit of joinedload; can you please explain the difference, and the use cases where joinedload is preferred?
The SQLAlchemy docs say that joinedload() is not a replacement for join(), and that joinedload() doesn't affect the query result:
Query.join()
Query.options(joinedload())
Let's say you want to get some data that is related to the data you are querying, but fetching this related data won't change the result of the query - it is like an attachment. It's best to look at the SQLAlchemy docs on joinedload.
class User(db.Model):
    ...
    addresses = relationship('Address', backref='user')

class Address(db.Model):
    ...
    user_id = Column(Integer, ForeignKey('users.id'))
The code below queries for a user by filter and returns that user; optionally you can also get that user's addresses:
user = db.session.query(User).options(joinedload(User.addresses)).filter(User.id == 1).one()
Now let's look at join:
user = db.session.query(User).join(Address).filter(User.id==Address.user_id).one()
Conclusion
The query with joinedload() gets the user plus that user's addresses.
The other query queries both tables and matches the user id across them, so the result depends on the join. With joinedload(), if the user doesn't have any address you still get the user, just with no addresses. With join(), if the user doesn't have an address there will be no result at all.
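The difference above can be seen end-to-end with an in-memory SQLite sketch (simplified models following the snippet above; the single address-less user is made up for the demonstration):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, joinedload, relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    addresses = relationship('Address', backref='user')

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(User(id=1))  # a user with no addresses at all
session.commit()

# joinedload: the user comes back even with zero addresses
eager = session.query(User).options(joinedload(User.addresses)).filter(User.id == 1).one()

# inner join: no matching address means no row at all
joined = session.query(User).join(Address).filter(User.id == 1).all()
```

The eager query returns the user with an empty addresses list, while the inner join returns nothing, which is exactly the distinction drawn in the conclusion.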

Creating a case insensitive SQLAlchemy query for MS-SQL

I'm currently trying to transfer a program designed for a MySQL database onto an MS-SQL database and I've run into some trouble. I discovered that MySQL does not have case sensitivity by default, as MS-SQL has. This has led to some problems with code similar to that listed below.
class Employee(Base):
    __tablename__ = "Employees"
    Id = Column(Integer(unsigned=True),
                primary_key=True, nullable=False, unique=True)
    DisplayName = Column(String(64),
                         nullable=False)
    #more columns

def get_employees(sql_session, param, columns=None, partial_match=True):
    if not columns:
        columns = [Employee.Id, Employee.DisplayName]
    clauses = []
    if partial_match:
        clauses.append(Employee.DisplayName.startswith(param))
    whereclause = and_(*clauses)
    stmt = select(columns, whereclause)
    return sql_session.execute(stmt)
I know of the SQL keyword COLLATE but I'm not sure how to implement that, or if it's even the best option to use in this situation. What recommendations would you give to create a case insensitive LIKE query using SQLAlchemy?
Python 2.7.7
SQLAlchemy 0.7.7
That's a bit odd; in my experience MS SQL Server is case-insensitive by default, although you can optionally set it to case-sensitive using the database's collation setting.
You can use COLLATE with SQLAlchemy (see here), so you should be able to do the following (I have not tried this myself):
clauses.append(Employee.DisplayName.startswith(collate(param, 'SQL_Latin1_General_CP1_CI_AS')))
SQL Server also supports regex-like pattern matching in LIKE queries, so alternatively you could make use of this in your param value, e.g. '[vV]alue%'.
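A sketch of the collate() approach in modern SQLAlchemy (1.4+) syntax, compiled with the default dialect just to show the generated pattern; SQL_Latin1_General_CP1_CI_AS is one common SQL Server collation, but use whatever your database is actually set to:

```python
from sqlalchemy import Column, Integer, String, collate, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'Employees'
    Id = Column(Integer, primary_key=True)
    DisplayName = Column(String(64), nullable=False)

# Case-insensitive prefix match via an explicit collation on the
# search value; 'value' stands in for the user-supplied param.
stmt = select(Employee).where(
    Employee.DisplayName.startswith(collate('value', 'SQL_Latin1_General_CP1_CI_AS'))
)
sql = str(stmt)
```

Another common route is Employee.DisplayName.ilike(param + '%'): on backends without a native ILIKE, SQLAlchemy renders it as lower(col) LIKE lower(pattern), which is case-insensitive regardless of the column's collation.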
