I need to retrieve the current date and time of the database I'm connected to with SQLAlchemy (not the date and time of the machine where I'm running the Python code). I've seen these functions, but they don't seem to do what they say:
>>> from sqlalchemy import *
>>> print func.current_date()
CURRENT_DATE
>>> print func.current_timestamp()
CURRENT_TIMESTAMP
Moreover, it seems they don't need to be bound to any SQLAlchemy session or engine. It makes no sense to me...
Thanks!
I found the solution: these functions cannot be used the way I used them (with print), but need to be called inside code that interacts with the database. For instance:
print select([my_table, func.current_date()]).execute()
or assigned to a field in an insert operation.
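For example, a minimal sketch of the insert case (assuming a table my_table with a created column; the column name is illustrative):
# assumes my_table's MetaData is bound to an engine, as in the select above
my_table.insert().values(created=func.current_timestamp()).execute()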
By accident I discovered that these functions accept at least a couple of parameters:
type_, which I guess indicates the type of the value to return
bind, which indicates a binding to a SQLAlchemy engine
Two examples of use:
func.current_date(type_=types.Date, bind=engine1)
func.current_timestamp(type_=types.Time, bind=engine2)
Anyway, my tests seem to show that these parameters are not that important.
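A minimal runnable sketch of the whole idea (using the same pre-1.4 SQLAlchemy idioms as above, with an in-memory SQLite engine standing in for a real database):

from sqlalchemy import create_engine, func, select

engine = create_engine('sqlite:///:memory:')  # stand-in for a real database
# The function is rendered and executed on the database side, so the
# value returned is the database's clock, not the Python machine's.
now = engine.execute(select([func.current_timestamp()])).scalar()
print now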
So I am very confused about this weird behaviour I have with SQLAlchemy and PostgreSQL. Let's say I have a table:
create table staging.my_table(
    id integer DEFAULT nextval(...),
    name text,
    ...
);
and a stored function:
create or replace function staging.test()
returns void
language plpgsql
as $function$
begin
    insert into staging.my_table (name) values ('yay insert');
end;
$function$;
What I want to do now is call this function in Python with SQLAlchemy like this:
from sqlalchemy import create_engine
engine = create_engine('postgresql+psycopg2://foo:bar@localhost:5432/baz')
engine.execute('select staging.test()')
When I run this Python code, nothing gets inserted into my database. That's weird, because when I replace the function call with select 1 and add .fetchall() to it, it gets executed and I see the result in the console when I print it.
Let's say I run this code twice; nothing happens, but the code runs successfully without errors.
If I now switch to the database, run select staging.test(); and then select from my_table, I get: id: 3; name: yay insert.
So that means the sequence is actually increasing when I run my Python file but there is no data in my table.
What am I doing wrong? Am I missing something? I googled but didn't find anything.
This particular use case is singled out in "Understanding Autocommit":
Full control of the “autocommit” behavior is available using the generative Connection.execution_options() method provided on Connection, Engine, Executable, using the “autocommit” flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:
engine.execute(text("SELECT my_mutating_procedure()").execution_options(autocommit=True))
The way SQLAlchemy autocommit detects data-changing operations is that it matches the statement against a pattern, looking for things like UPDATE, DELETE, and the like. It is impossible for it to detect whether a stored function/procedure performs mutations, so explicit control over autocommit is provided.
The sequence is incremented even on failure because nextval() and setval() calls are never rolled back.
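An alternative to the autocommit execution option is to run the call inside an explicit transaction; a minimal sketch reusing the engine from the question:

from sqlalchemy import create_engine, text

engine = create_engine('postgresql+psycopg2://foo:bar@localhost:5432/baz')

# engine.begin() opens a transaction and commits it on successful exit,
# so the INSERT performed inside staging.test() is persisted.
with engine.begin() as conn:
    conn.execute(text("SELECT staging.test()"))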
I'm trying to implement the following MySQL query using SQLAlchemy. The table in question is a nested set hierarchy.
UPDATE category
JOIN
(
    SELECT
        node.cat_id,
        (COUNT(parent.cat_id) - 1) AS depth
    FROM category AS node, category AS parent
    WHERE node.lft BETWEEN parent.lft AND parent.rgt
    GROUP BY node.cat_id
) AS depths
ON category.cat_id = depths.cat_id
SET category.depth = depths.depth
This works just fine.
This is where I start pulling my hair out:
from sqlalchemy.orm import aliased
from sqlalchemy import func
from myapp.db import db
node = aliased(Category)
parent = aliased(Category)
stmt = db.session.query(node.cat_id,
                        func.count(parent.cat_id).label('depth_'))\
    .filter(node.lft.between(parent.lft, parent.rgt))\
    .group_by(node.cat_id).subquery()

db.session.query(Category,
                 stmt.c.cat_id,
                 stmt.c.depth_)\
    .outerjoin(stmt,
               Category.cat_id == stmt.c.cat_id)\
    .update({Category.depth: stmt.c.depth_},
            synchronize_session='fetch')
...and I get InvalidRequestError: This operation requires only one Table or entity be specified as the target. It seems to me that Category.depth adequately specifies the target, but of course SQLAlchemy trumps whatever I may think.
Stumped. Any suggestions? Thanks.
I know this question is five years old, but I stumbled upon it today. My answer might be useful to someone else. I understand that my solution is not the perfect one, but I don't have a better way of doing this.
I had to change only the last line to:
db.session.query(Category)\
    .outerjoin(stmt,
               Category.cat_id == stmt.c.cat_id)\
    .update({Category.depth: stmt.c.depth_},
            synchronize_session='fetch')
Then, you have to commit the changes:
db.session.commit()
This gives the following warning:
SAWarning: Evaluating non-mapped column expression '...' onto ORM
instances; this is a deprecated use case. Please make use of the
actual mapped columns in ORM-evaluated UPDATE / DELETE expressions.
"UPDATE / DELETE expressions." % clause
To get rid of it, I used the solution in this post: Turn off a warning in sqlalchemy
Note: For some reason, aliases don't work in SQLAlchemy update statements.
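If you'd rather sidestep the ORM update machinery (and that warning) entirely, the same statement can be issued at the Core level; a sketch assuming the same stmt subquery as above, which the MySQL dialect should render as a multiple-table UPDATE:

from sqlalchemy import update

category = Category.__table__

# Core-level equivalent of the ORM update above: the derived table is
# joined to category on cat_id and depth is set from it.
upd = (update(category)
       .values(depth=stmt.c.depth_)
       .where(category.c.cat_id == stmt.c.cat_id))
db.session.execute(upd)
db.session.commit()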
The code is quite simple, as follows:
from pony.orm import Required, Set, Optional, PrimaryKey
from pony.orm import Database, db_session
import time

db = Database('mysql', host="localhost", port=3306, user="root",
              passwd="123456", db="learn_pony")

class TryUpdate(db.Entity):
    _table_ = "try_update_record"
    t = Required(int, default=0)

db.generate_mapping(create_tables=True)

@db_session
def insert_record():
    new_t = TryUpdate()

@db_session
def update():
    t = TryUpdate.get(id=1)
    print t.t
    t.t = 0
    print t.t

if __name__ == "__main__":
    insert_record()
    update()
pony.orm reports an exception: pony.orm.core.CommitException: Object TryUpdate[1] was updated outside of current transaction. But there is no other transaction running at all.
As my experiments show, Pony works OK as long as t.t is changed to a value different from the original, but it always reports this exception when t.t is set to a value equal to the original.
I'm not sure if this is a design decision. Do I have to check whether my input value has changed every time before the assignment? Or is there anything I can do to avoid this annoying exception?
My Pony version: 0.4.8
Thanks a lot!
Pony ORM author is here.
This behavior is a MySQL-specific bug which was fixed in release Pony ORM 0.4.9, so please upgrade. The rest of my answer is the explanation of what caused the bug.
The reason for this bug is the following. In order to prevent lost updates, Pony ORM uses optimistic checks. Pony tracks which attributes were read or changed during program execution and then adds extra conditions to the WHERE section of the corresponding UPDATE query. This way Pony guarantees that no data will be lost because of a concurrent update. Let's consider the following example:
@db_session
def some_function():
    obj = MyObject[123]
    print obj.x
    obj.x = 100
Upon exit from some_function, the @db_session decorator will commit the ongoing transaction. Right before the commit, the object's data will be saved by the following UPDATE command:
UPDATE MyTable
SET x = <new_value>
WHERE id = 123 and x = <old_value>
You may wonder why the additional condition and x = <old_value> was added. This is because Pony knows that the program saw the previous value of the attribute x and may have used it to calculate the new value of the same attribute. So Pony takes steps to guarantee that this attribute is still unchanged at the moment of the UPDATE. This approach is called an "optimistic concurrency check" (see also the Wikipedia article "optimistic concurrency control"). Since the isolation level used by default in most databases is not SERIALIZABLE, without this additional check it would be possible for some other transaction to update the value of the x attribute before our transaction commits, in which case the value written by the concurrent transaction would be lost.
When the Python database driver executes the UPDATE query, it returns the number of rows which satisfy the UPDATE criteria. This way Pony knows whether the update was successful. If the result is 1, one row was successfully found and updated; if the result is 0, the row was already modified by another transaction and no longer satisfies the criteria in the WHERE section. When this happens, Pony terminates the current transaction in order to prevent a lost update.
The reason for the bug is that while all other database drivers return the number of rows found by the WHERE criteria, the MySQLdb driver by default returns the number of rows which were actually modified! Because of this, if the new value of the attribute turns out to be the same as the original value, MySQLdb reports that 0 rows were modified, and Pony (prior to release 0.4.9) mistakenly concludes that the row was modified by a concurrent transaction. Starting with release 0.4.9, Pony ORM tells the MySQLdb driver to behave in the standard way and return the number of rows which were found, not the number of rows which were actually updated.
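For reference, this driver behavior is controlled by a connection flag; a sketch (not Pony's actual internals) reusing the connection settings from the question:

import MySQLdb
from MySQLdb.constants import CLIENT

# With CLIENT.FOUND_ROWS set, an UPDATE reports the number of rows
# matched by the WHERE clause rather than the number actually changed,
# which is what an optimistic concurrency check needs.
conn = MySQLdb.connect(host="localhost", user="root", passwd="123456",
                       db="learn_pony", client_flag=CLIENT.FOUND_ROWS)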
Hope this helps :)
P.S. I found your question just by chance; to reliably get answers about Pony ORM, I recommend sending questions to our mailing list http://ponyorm-list.ponyorm.com. If you think you have found a bug, you can open an issue here: https://github.com/ponyorm/pony/issues.
Thank you for your question!
I'm using just SQLAlchemy Core and cannot get the SQL to include my WHERE clauses. I would like this very generic update code to work on all my tables. It is intended to be part of a generic insert/update function that works with every table. Doing it this way allows for extremely brief test code and simple CLI utilities that can simply pass all args and options without the complexity of separate sub-commands for each table.
It'll take a few more tweaks to get it there, but it should be doing the updates just fine now. However, while SQLAlchemy refers to generative queries, it doesn't distinguish between selects and updates. I've reviewed the SQLAlchemy documentation, Essential SQLAlchemy, Stack Overflow, and several source code repositories, and have found nothing.
u = self._table.update()
non_key_kw = {}
for column in self._table.c:
    if column.name in self._table.primary_key:
        u.where(self._table.c[column.name] == kw[column.name])
    else:
        col_name = column.name
        non_key_kw[column.name] = kw[column.name]

print u
result = u.execute(kw)
This fails; it doesn't seem to recognize the where clause:
UPDATE struct SET year=?, month=?, day=?, distance=?, speed=?, slope=?, temp=?
FAIL
And I can't find any examples of building up an update in this way. Any recommendations?
the "where()" method is generative in that it returns a new Update() object. The old one is not modified:
u = u.where(...)
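Applied to the loop in the question, the fix is simply to keep the returned object (a sketch):

u = self._table.update()
for column in self._table.c:
    if column.name in self._table.primary_key:
        # where() returns a new Update object; reassign it
        u = u.where(self._table.c[column.name] == kw[column.name])
result = u.execute(kw)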
We have SQLite databases, and datetimes are actually stored in Excel format (there is a decent reason for this: it's our system's standard representation of choice, and the SQLite databases may be accessed by multiple languages/systems).
We have been introducing Python into the mix with great success in recent months, and SQLAlchemy is a part of that. The ability of the sqlite3 DBAPI layer to swiftly bind custom Python functions where SQLite lacks a given SQL function is particularly appreciated.
I wrote an ExcelDateTime type decorator, and that works fine when retrieving result sets from the sqlite databases; Python gets proper datetimes back.
However, I'm having a real problem binding custom Python functions that expect input params to be Python datetimes; I'd have thought this was what bindparam was for, but I'm obviously missing something, as I cannot get this scenario to work. Unfortunately, modifying the functions to convert from Excel datetimes to Python datetimes is not an option, and neither is changing the representation of the datetimes in the database, as more than one system/language may access it.
The code below is a self-contained example that can be run "as-is", and is representative of the issue. The custom function "get_month" is created, but fails because it receives the raw data, not the type-converted data from the "Born" column. At the end you can see what I've tried so far, and the errors it spits out...
Is what I'm trying to do impossible? Or is there a different way of ensuring the bound function receives the appropriate python type? It's the only problem I've been unable to overcome so far, would be great to find a solution!
import sqlalchemy.types as types
from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData
from sqlalchemy.sql.expression import bindparam
from sqlalchemy.sql import select, text
from sqlalchemy.interfaces import PoolListener
import datetime

# setup type decorator for excel<->python date conversions
class ExcelDateTime( types.TypeDecorator ):
    impl = types.FLOAT

    def process_result_value( self, value, dialect ):
        lxdays = int( value )
        lxsecs = int( round((value-lxdays) * 86400.0) )
        if lxsecs == 86400:
            lxsecs = 0
            lxdays += 1
        return ( datetime.datetime.fromordinal(lxdays+693594)
                 + datetime.timedelta(seconds=lxsecs) )

    def process_bind_param( self, value, dialect ):
        if( value < 200000 ): # already excel float?
            return value
        elif( isinstance(value,datetime.date) ):
            return value.toordinal() - 693594.0
        elif( isinstance(value,datetime.datetime) ):
            date_part = value.toordinal() - 693594.0
            time_part = ((value.hour*3600) + (value.minute*60) + value.second) / 86400.0
            return date_part + time_part # time part = day fraction

# create sqlite memory db via sqlalchemy
def get_month( dt ):
    return dt.month

class ConnectionFactory( PoolListener ):
    def connect( self, dbapi_con, con_record ):
        dbapi_con.create_function( 'GET_MONTH', 1, get_month )

eng = create_engine('sqlite:///:memory:', listeners=[ConnectionFactory()])
eng.dialect.dbapi.enable_callback_tracebacks( 1 ) # show better errors from user functions

meta = MetaData()
birthdays = Table('Birthdays', meta, Column('Name',String,primary_key=True), Column('Born',ExcelDateTime), Column('BirthMonth',Integer))
meta.create_all(eng)
dbconn = eng.connect()
dbconn.execute( "INSERT INTO Birthdays VALUES('Jimi Hendrix',15672,NULL)" )

# demonstrate the type decorator works and we get proper datetimes out
res = dbconn.execute( select([birthdays]) )
tuple(res)
# >>> ((u'Jimi Hendrix', datetime.datetime(1942, 11, 27, 0, 0)),)

# simple attempt (blows up with "AttributeError: 'float' object has no attribute 'month'")
dbconn.execute( text("UPDATE Birthdays SET BirthMonth = GET_MONTH(Born)") )

# more involved attempt (blows up with "InterfaceError: (InterfaceError) Error binding parameter 0 - probably unsupported type")
dbconn.execute( text( "UPDATE Birthdays SET BirthMonth = GET_MONTH(:Born)",
                      bindparams=[bindparam('Born',ExcelDateTime)],
                      typemap={'Born':ExcelDateTime} ),
                Born=birthdays.c.Born )
Many thanks.
Instead of letting Excel/Microsoft dictate how you store dates and times, it would be less trouble and work for you to rely on the standard, "obvious way" of doing things:
Process objects according to the standards of their domain - Python's way (datetime objects) inside Python/SQLAlchemy, SQL's way inside SQLite (native date/time type instead of float!).
Use APIs to do the necessary translation between domains. (Python talks to SQLite via SQLAlchemy, Python talks to Excel via xlrd/xlwt, Python talks to other systems; Python is your glue.)
Using standard date/time types in SQLite allows you to write SQL without Python involved, in a standard, readable way (WHERE date BETWEEN '2011-11-01' AND '2011-11-02' makes much more sense than WHERE date BETWEEN 48560.9999 AND 48561.00001). It also allows you to easily port to another DBMS (without rewriting all those ad-hoc functions) when your application/database needs to grow.
Using native datetime objects in Python allows you to use a lot of freely available, well tested, and non-EEE (embrace, extend, extinguish) APIs. SQLAlchemy is one of those.
And I hope you are aware of the slight but dangerous difference between Excel datetime floats on Mac and on Windows? Who knows, one of your clients might someday submit an Excel file from a Mac and crash your application (or worse, suddenly earn a million dollars from the error).
So my suggestion is to use xlrd/xlwt when dealing with Excel from Python (there's another package out there for reading Excel 2007 and up) and to let SQLAlchemy and your database use standard datetime types. However, if you insist on continuing to store datetimes as Excel floats, it could save you a lot of time to reuse code from xlrd/xlwt: it has functions for converting Python objects to Excel data and vice versa.
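For illustration, a sketch of the suggested setup: a Birthdays table mirroring the one in the question, but with a native DateTime column (SQLAlchemy's SQLite dialect stores it as readable ISO-8601 text):

import datetime
from sqlalchemy import create_engine, Table, Column, String, DateTime, MetaData

meta = MetaData()
birthdays = Table('Birthdays', meta,
                  Column('Name', String, primary_key=True),
                  Column('Born', DateTime))

eng = create_engine('sqlite:///:memory:')
meta.create_all(eng)
# stored as '1942-11-27 00:00:00.000000' -- sortable and human-readable
eng.execute(birthdays.insert(),
            Name='Jimi Hendrix', Born=datetime.datetime(1942, 11, 27))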
EDIT: for clarity...
You have no issues reading from the database into Python because you have that class that converts the float into a Python datetime.
You have issues writing to the database through SQLAlchemy, or using other native Python functions/modules/extensions, because you are trying to force a non-standard type where they expect the standard Python datetime. From Python's point of view, the ExcelDateTime type is a float, not a datetime.
Although Python uses dynamic/duck typing, it is still strongly typed. It won't allow you to do "nonsense/silliness" like adding integers to strings, or forcing a float to act as a datetime.
At least two ways to address that:
Declare a custom type - This seems to be the path you wanted to take. Unfortunately it is the hard way. It's quite difficult to create a type that is a float but can also pretend to be a datetime. Possible, yes, but it requires a lot of study of type instrumentation. Sorry, you'll have to grok the documentation for that on your own.
Create utility functions - This should be the easier way, IMHO. You need two functions: a) float_to_datetime() for converting data from the database into a Python datetime, and b) datetime_to_float() for converting a Python datetime into an Excel float. See the sketch below.
About solution #2, as I was saying, you could simplify your life by reusing xldate_from_datetime_tuple() from xlrd/xlwt. That function converts a datetime tuple (year, month, day, hour, minute, second) to an Excel date value. Install xlrd, then look in /path_to_python/lib/site-packages/xlrd; the function is in xldate.py, and the source is well documented.
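A minimal sketch of those two helpers, built on xlrd's converters (the names float_to_datetime and datetime_to_float are illustrative, not an existing API; datemode=0 selects the Windows 1900-based epoch):

import datetime
from xlrd.xldate import xldate_as_tuple, xldate_from_datetime_tuple

def float_to_datetime(value, datemode=0):
    # datemode: 0 = Windows 1900-based Excel dates, 1 = Mac 1904-based
    return datetime.datetime(*xldate_as_tuple(value, datemode))

def datetime_to_float(dt, datemode=0):
    # round-trips the components through xlrd's converter
    return xldate_from_datetime_tuple(
        (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second), datemode)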