I have an SQL query that runs on the Postgres database of my Django-based webapp. The query runs against the data stored by Django-Notifications (a reusable app) and returns a list of email addresses that have not opted out of a specific notice type.
What I would really like is to be able to do this on demand, so I'm looking for an example of how to convert the SQL so it can run inside a Django view that returns a formatted email list. The SQL is currently thus:
gr_webapp=# select email from emailconfirmation_emailaddress where verified and user_id not in
(select user_id from notification_noticesetting s join notification_noticetype t on s.notice_type_id = t.id
where t.label = 'announcement' and not s.send);
You might have to make appropriate adjustments as far as model names go, since you didn't show them in your question:
users_to_exclude = Noticesetting.objects.filter(
    send=False, notice_type__label='announcement'
).values('user')

# Match the SQL's "verified" filter and pull out just the addresses
emails = (Emailaddress.objects.filter(verified=True)
          .exclude(user__in=users_to_exclude)
          .values_list('email', flat=True))
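Wrapped in a view, that could look something like the following (a minimal sketch; the view name and the import paths for the Noticesetting/Emailaddress models are assumptions you would adjust to your actual apps):

from django.http import HttpResponse

def announcement_email_list(request):
    # Users who explicitly opted out of the 'announcement' notice type
    users_to_exclude = Noticesetting.objects.filter(
        send=False, notice_type__label='announcement'
    ).values('user')
    # Verified addresses whose owners have not opted out
    emails = (Emailaddress.objects.filter(verified=True)
              .exclude(user__in=users_to_exclude)
              .values_list('email', flat=True))
    # One address per line; adjust the formatting to taste
    return HttpResponse("\n".join(emails), content_type="text/plain")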
I'm trying to migrate the database for an existing application from Access to SQLite. The application uses AutoNumbers to generate unique IDs in Access, and some tables reference rows from other tables by these unique IDs.
What's a good way to migrate these and keep this functionality intact?
From what I've read, SQLite uses autoincrementing rowids for this. How would I create the links between the tables? Do I have to search the other tables for the row with that unique ID and replace the reference with the SQLite-generated ID?
Example:
Table 1 has a column linkedID with a row whose value is {7F99297A-DE91-4BD6-9ED8-FC13D668CDA2}, which is linked to the row in Table 2 whose primaryKey is {7F99297A-DE91-4BD6-9ED8-FC13D668CDA2}.
Well, there's not really an automated way to do this.
But here is what I do to migrate data:
I set up a linked table in Access and double-check that the linked table works (you need to install the SQLite ODBC driver).
Assuming you have a working linked table, you can then do this in VBA to export the Access table to SQLite:
Dim LocalTable As String   ' name of the local Access table
Dim ServerTable As String  ' name of the target table in SQLite
Dim strCon As String

LocalTable = "Table1"
ServerTable = "TableSQLite"

' Steal a working connection string from an existing, valid linked
' table ("Test1" here) - I hate connection strings in code
strCon = CurrentDb.TableDefs("Test1").Connect
Debug.Print strCon

' Export the local table to SQLite over ODBC
DoCmd.TransferDatabase acExport, "ODBC Database", strCon, acTable, LocalTable, ServerTable
Debug.Print "done export of " & LocalTable
That will get you the table in SQLite. But SQLite has no DDL (data definition) command to then change that imported Access ID column into a primary key with auto increment.
However, assuming you have, say, DB Browser for SQLite? Then simply export the table(s) as per above.
Now, in DB Browser, open the table, choose Modify, and check the AI (auto increment) and PK settings; in fact, if you check AI, the PK usually selects itself. Do this after the export is done (and consider closing Access first, since you had/have linked tables).
So, for just a few tables, the above is not really hard.
However, the export (transfer) of the data does not set the PK and auto increment for you.
If you need to do this with code, and this is not a one-time export/transfer, then I don't have a good solution.
Unfortunately, SQLite does NOT allow an ALTER TABLE command to set a PK or auto increment (if that were possible, then after an export you could execute the DDL command in SQLite, or send it from your client software, to make this alteration).
SQLite does, however, store the CREATE TABLE statement for every table (in sqlite_master), so you can export the schema, get that DDL command, modify it, drop the table, re-run the CREATE TABLE command (now with the PK and auto increment), and THEN use an export or append query in Access.
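A minimal sketch of that idea using Python's stdlib sqlite3 module, assuming a table named Table1 whose imported ID column came through as plain "id INTEGER" (the file name, table name and column definition are placeholders to adjust):

import sqlite3

con = sqlite3.connect("mydata.db")

# SQLite keeps each table's CREATE TABLE statement in sqlite_master
ddl = con.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = ?",
    ("Table1",),
).fetchone()[0]

# Assumes the column definition is literally "id INTEGER"; inspect the
# DDL first and adjust the replacement to what you actually have
new_ddl = ddl.replace("id INTEGER", "id INTEGER PRIMARY KEY AUTOINCREMENT")

# Recreate the table with the fixed definition and copy the rows across
con.execute("ALTER TABLE Table1 RENAME TO Table1_old")
con.execute(new_ddl)
con.execute("INSERT INTO Table1 SELECT * FROM Table1_old")
con.execute("DROP TABLE Table1_old")
con.commit()
con.close()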
But the transfer of the table(s) in question can be done quite easily as per above; the result just does not set nor include the PK setting(s) for you.
However, if this is a one-time export? Then the export of the tables - and even the dirty work of figuring out the correct data types to use - works well with the above; you just have to open the tables in a tool like DB Browser and set the PK and auto increment.
I do the above quite often to transfer Access tables to SQLite tables, but it does then require some extra steps to set up the PK and auto increment.
Another possible way, if this has to be done more than once?
I would export as per above, and then add the PK (and auto increment).
I would then grab, say, the 8 tables' CREATE TABLE commands from SQLite, and save those commands in the client software.
Then you execute the correct CREATE TABLE command and do an append query from Access. So it really depends on whether this is a one-time export, or whether this process of creating the table(s) in SQLite is to occur over and over.
I am trying to analyse the SQL performance of our Django (1.3) web application. I have added a custom log handler which attaches to django.db.backends, and set DEBUG = True; this allows me to see all the database queries that are being executed.
However, the SQL is not valid SQL! The actual query is select * from app_model where name = %s with some parameters passed in (e.g. "admin"), but the logging message doesn't quote the params, so the SQL comes out as select * from app_model where name = admin, which is wrong. The same happens with django.db.connection.queries. AFAIK the Django debug toolbar has a complex custom cursor to handle this.
Update: For those suggesting the Django debug toolbar: I am aware of that tool, and it is great. However, it does not do what I need. I want to run a sample interaction of our application and aggregate the SQL that is used. DjDT is great for inspecting a single page, but not for aggregating and summarizing the interaction of dozens of pages.
Is there any easy way to get the real, legit, SQL that is run?
Check out django-debug-toolbar. Open a page, and a sidebar will be displayed with all SQL queries plus other information.
select * from app_model where name = %s is a prepared statement. I would recommend logging the statement and the parameters separately. To get a well-formed query you need to do something like "select * from app_model where name = %s" % quote_string("admin"), or more generally query % tuple(map(quote_string, params)).
Please note that quote_string is DB-specific, and the Python DB-API 2.0 does not define a quote_string method, so you need to write one yourself. For logging purposes I'd recommend keeping the queries and parameters separate, as that allows for far better profiling: you can easily group the queries without taking the actual values into account.
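As an illustration of keeping them separate, here is a sketch; the mogrify() call is psycopg2-specific (an assumption, since the question doesn't name the driver), and it renders the exact query the server receives:

import logging

logger = logging.getLogger("sql.profile")

def log_query(cursor, sql, params):
    # Log template and parameters separately, so queries can later be
    # grouped by shape without regard to the bound values
    logger.debug("query=%r params=%r", sql, params)
    # psycopg2 only: mogrify() returns the query bytes after parameter
    # binding, i.e. the real, legit SQL the question asks for
    logger.debug("rendered=%r", cursor.mogrify(sql, params))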
The Django Docs state that this incorrect quoting only happens for SQLite.
https://docs.djangoproject.com/en/dev/ref/databases/#sqlite-connection-queries
Have you tried another Database Engine?
Every QuerySet object has a query attribute. One way to do what you want (admittedly perhaps not an ideal one) is to chain the lookups each view produces into a kind of scripted user story, using Django's test client. For each lookup your user story contains, just append the query to a file-like object that you write out at the end, for example (using a list instead, for brevity):
queries = []
qs = Object.objects.all()
queries.append(str(qs.query))  # str() renders the SQL for this queryset
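A rough sketch of the scripted user story itself, assuming DEBUG = True so that connection.queries is populated (the URLs are hypothetical):

from collections import Counter

from django.db import connection, reset_queries
from django.test import Client

client = Client()
counts = Counter()

# Drive a sample interaction and aggregate the SQL it generates
for url in ["/", "/accounts/login/", "/reports/1/"]:
    reset_queries()
    client.get(url)
    for q in connection.queries:
        counts[q["sql"]] += 1

# The most frequently executed statements across the whole interaction
for sql, n in counts.most_common(20):
    print(n, sql)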
We're trying to enable a SQL query front-end to our Web application, which is WSGI and uses Python, with SQLAlchemy (core, not ORM) to query a PostgreSQL database. We have several data layer functions set up to assist in query construction, and we are now trying to set something up that allows this type of query:
select id from <table_name> where ... limit ...
In the front end, we have a text box which lets the user type in the where clause and the limit clause, so that the data can be queried flexibly and dynamically from the front end; that is, we want to enable ad hoc querying. So, the only thing that we know firsthand is:
select id from <table_name>
And the user will type in, for example:
where date > <some_date>
where location is not null limit 10 order by location desc
using the same back-end function. The select, column and table should be managed by the data layer (i.e. it knows what they are, and the user should not need to). However, I'm not aware of any way to get SQLAlchemy to parse both the where clause and the limit clause automatically. What we have right now is a function which can return the table name and the name of the id column; we then use that to create a text query, which is passed to SQLAlchemy as the input to a text() call.
Is there any way I can do this with SQLAlchemy, or some other library? Or is there a better pattern of which I should be aware, which does not involve parsing the SQL while still allowing this functionality from the front-end?
Thanks a lot! All suggestions will be greatly appreciated.
I'm not sure I follow, but the general SQLAlchemy usage is like:
results = db.session.query(User).filter(User.name == "Bob").order_by(User.age.desc()).limit(10)
That will query the User table and return the ten oldest members named "Bob".
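Since the question mentions Core rather than the ORM, here is a rough Core-level sketch of the text() approach the question describes (the engine URL and function names are assumptions; note that splicing user-typed SQL into a query is an injection risk inherent to this kind of ad hoc front end):

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost/mydb")

def run_adhoc_query(table_name, id_column, user_clause):
    # table_name and id_column come from the data layer; user_clause is
    # the raw "where ... limit ..." text typed into the front end
    stmt = text("select %s from %s %s" % (id_column, table_name, user_clause))
    with engine.connect() as conn:
        return conn.execute(stmt).fetchall()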
UserTable is:
id (INT)
name (STR)
last_login (DATETIME)
Serving a web page request, I have a user id in hand, and I only wish to update the last_login field to 'now'.
It seems to me that there are 2 ways:
issue a direct SQL using db_engine (losing the mapper)
OR query the user first and then update the object
Both work fine but look quite disgusting in code.
Is anyone aware of a more elegant way of doing an update-with-no-query using sqlalchemy? Is there another ORM who has got this right?
Thanks
Assuming you have a mapper UserTable in place:
DBSession.query(UserTable).filter_by(id=user_id).update(
    {"last_login": datetime.datetime.now()},
    synchronize_session=False,
)
Additional parameters in the docs.
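For completeness, a minimal self-contained sketch (the model, engine and session setup are assumptions, not from the question; SQLAlchemy 1.4+ style imports):

import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class UserTable(Base):
    __tablename__ = "user_table"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    last_login = Column(DateTime)

engine = create_engine("sqlite://")  # in-memory database for the sketch
Base.metadata.create_all(engine)
DBSession = sessionmaker(bind=engine)()

def touch_last_login(user_id):
    # Emits a single UPDATE statement; no SELECT of the row beforehand
    DBSession.query(UserTable).filter_by(id=user_id).update(
        {"last_login": datetime.datetime.now()},
        synchronize_session=False,
    )
    DBSession.commit()  # the UPDATE still runs inside a transaction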