Django hideously slow with 8000 model instances (How to drop PostgreSQL database)? - python

I have a Django installation running on the development server. I recently used python manage.py loaddata 8000models.json. Now everything is super slow. Pages won't load. python manage.py flush does not return. ModelType.objects.count() does not return.
(Or maybe they will return if I wait a sufficiently long time.)
What's going on here? Is Django truly unable to handle that much data? Or is there some other issue?
Update: I observe this issue on PostgreSQL but not SQLite with the same amount of data. Perhaps I just need to wipe the PostgreSQL database and reload the data.
Update 2: How can I wipe the PostgreSQL database and reset it? python manage.py reset appname isn't responding.
Update 3: This is how I'm trying to wipe the PostgreSQL database:
#!/bin/bash
sudo -u postgres dropdb mydb
sudo -u postgres createdb mydb
sudo -u postgres psql mydb < ~/my-setup/my-init.sql
python ~/path/to/manage.py syncdb
However, this causes the following errors:
dropdb: database removal failed: ERROR: database "mydb" is being accessed by other users
DETAIL: There are 8 other session(s) using the database.
createdb: database creation failed: ERROR: database "mydb" already exists
ERROR: role "myrole" cannot be dropped because some objects depend on it
DETAIL: owner of table mydb.mytable_mytable
# ... more "owner of table", "owner of sequence" statements, etc
How can I close out these other sessions? I don't have Apache running. I've only been using one instance of the Django development server at a time. However, when it got unresponsive I killed it with Control+Z. Perhaps that caused it to not release a database connection, thereby causing this issue? How can I get around this?

Ctrl-Z just stops (suspends) the process; it does not kill it. Assuming you're using bash, type jobs in your terminal and you should see the old processes still running.
Once you kill all the jobs that are accessing the PostgreSQL database, you should be able to drop, create, and syncdb as you expect.
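If a stray connection still holds the database open after that, you can also terminate the remaining sessions on the server side with PostgreSQL's pg_terminate_backend. A minimal sketch using psycopg2 (run it as a superuser; the database name and credentials are placeholders, and on PostgreSQL releases older than 9.2 the pid column is called procpid):
import psycopg2

# connect to the maintenance database, not the one being dropped
conn = psycopg2.connect(dbname="postgres", user="postgres")
conn.autocommit = True
with conn.cursor() as cur:
    # terminate every other session connected to mydb
    cur.execute(
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity "
        "WHERE datname = %s AND pid <> pg_backend_pid()",
        ("mydb",),
    )
conn.close()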

Have you tried checking whether your Postgres database is replying at all? From the sounds of it, it is probably still processing the data you told it to load.
When loading data, just let it finish first. If pages still take a very long time to render after that, check your queries instead.
Even in PostgreSQL (which has rather slow counts), .count() on 8000 rows should return quickly.
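If you want to see what a slow page is actually executing, Django records queries on the connection while settings.DEBUG is True. A minimal sketch (ModelType and the import path are placeholders from the question):
from django.db import connection, reset_queries
from myapp.models import ModelType  # placeholder import

reset_queries()  # clear anything recorded so far
list(ModelType.objects.all()[:10])  # force evaluation of a sample queryset
for query in connection.queries:  # only populated when settings.DEBUG is True
    print(query["time"], query["sql"])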

Related

Run a SQL Server Agent Job from Python

I am trying to trigger a SQL Server Agent job (which takes a backup of the db and places it into a directory) from Python. Unfortunately, I haven't found anything about Python triggering a SQL Server Agent job (only the other way around: a SQL Server Agent job triggering a Python script).
Once I get that backup, I want to restore this db into a different SQL Server using the same python script.
Thanks for any help!!
You can start the job with Transact-SQL, which you can execute from Python:
EXEC msdb.dbo.sp_start_job N'My Job Name';
GO
See the sp_start_job documentation for more information.
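As an illustration, a minimal sketch using pyodbc (the connection-string values are placeholders; note that sp_start_job returns as soon as the job has started, so poll msdb separately if you need to wait for the backup to finish before restoring it):
import pyodbc

# connect to the server running SQL Server Agent; all values are placeholders
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=msdb;"
    "UID=myuser;PWD=mypassword",
    autocommit=True,
)
cursor = conn.cursor()
cursor.execute("EXEC msdb.dbo.sp_start_job N'My Job Name'")
conn.close()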

Running Django on server with remote interpreter - Prevent Django from creating a test database

I'm trying to use PyCharm's remote interpreter capability to debug my Django app on our dev server. The connection is working, but when I try to run or debug using the remote interpreter I get this error from the run console:
Creating test database for alias 'default'...
Failed (ORA-01031: insufficient privileges)
Got an error creating the test database: ORA-01031: insufficient privileges
As far as I can tell, Django is trying to create a test database to use with my test cases. I don't have any test cases on our server yet and don't need them right now. I'm also using my personal schema of which I am the owner. How can I prevent Django from trying to create this database so that I can run my code?
Well, this is embarrassing, but the problem turned out to be that in my run configuration I had the 'test' box checked.

Python - when saving data to an sqlite db, then restarting the machine, db is filled with zeros

I have a Python script that creates a SQLite db, saves it, and closes the connection with close().
Afterwards, I reboot my system (OS X) with:
subprocess.call("echo mypassword | sudo -S shutdown -r now", shell=True)
What happens is that, after the system reboots, some of the time (it's not consistent) the db is filled with one or two zeroed fields instead of the fields I saved.
I tried things like creating a bash script that performs the shutdown (and calling it from Python instead of executing shutdown directly), and even calling killall python before sending the shutdown command in the bash script, but it still didn't help.
To me, it seems like Python sometimes keeps open handles to the db, even though I close it correctly and even though I can't see the db instance in memory in the PyCharm debugger (I know that doesn't mean the handle is really closed).
I don't know how to continue solving this from here.
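One thing worth ruling out is that the data never reached the disk before the reboot: commit and close flush SQLite's own buffers, but the OS page cache can still be holding the writes when shutdown fires. A minimal sketch of an explicit commit/close/sync sequence (assuming Python 3 on a Unix-like system; the file and table names are placeholders):
import os
import sqlite3
import subprocess

conn = sqlite3.connect("mydata.db")
with conn:  # the with-block commits the transaction on success
    conn.execute("CREATE TABLE IF NOT EXISTS items (value TEXT)")
    conn.execute("INSERT INTO items (value) VALUES (?)", ("hello",))
conn.close()  # release the handle before rebooting

os.sync()  # ask the OS to flush dirty buffers to disk
subprocess.call("echo mypassword | sudo -S shutdown -r now", shell=True)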

use psycopg with installed PostgreSQL Database

I would like to get started working with Python and PostgreSQL databases.
The script I have in mind will not (at least not at the beginning) run on hosts with an installed PostgreSQL database; instead, I want the script to connect remotely to the database.
Concretely (for the start):
user hosts: run the script (read xls, convert, manipulate, etc.) and write into the remote DB
DB host: this will host the db and accept connections from the user hosts and the "Sync Host"
Sync host: a cloud service which will connect to the db server to read the databases and do some "magic" with it.
From what I have read, the best Python module for a PostgreSQL connection is psycopg, but this seems to require an installed PostgreSQL database, which is something I do not have (and don't want to install) on the user hosts.
At a later stage I will remove the "user hosts" and provide a web interface for uploading the xls and doing the conversion, etc. on the db host, but for the beginning I wanted to start as mentioned above.
My questions:
Is my thinking totally wrong? Should I start with a central approach (web interface, etc.) right away?
If not, how can I implement a PostgreSQL connection in Python without installing a PostgreSQL database?
All User hosts are Mac OS X, so Python is already installed.
thanks a lot in advance
Andre
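For what it's worth, psycopg is only a client library: it does not need a PostgreSQL server installed on the machine it runs on, just network access to one. A minimal sketch of a remote connection with psycopg2 (hostname, database name, and credentials are placeholders):
import psycopg2

conn = psycopg2.connect(
    host="db.example.com",  # the remote DB host
    port=5432,
    dbname="mydb",
    user="myrole",
    password="secret",
)
with conn, conn.cursor() as cur:  # the with-block commits on success
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()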

Recovering Celery From a Database Outage

I have Celeryd/RabbitMQ running on a Fedora box, communicating with a MySQL database on a separate box. I've noticed that, on rare occasions, if there's even the slightest problem connecting to the MySQL database (even for a few seconds), celeryd will crash with the error:
OperationalError: (2003, "Can't connect to MySQL server on 'mydatabasedomain' (111)")
and fail to reconnect even when the database becomes available again. Currently, I'm forced to manually restart the celeryd service to get celery running again. Is there a more graceful and automatic way to recover from these types of event? Is there any feature of celeryd to just quietly wait, logging the OperationalError, and reconnect instead of exiting out entirely?
I don't know of any way to fix this simply with a config flag, but you could consider running your worker under supervisor (see http://supervisord.org).
This is even mentioned in the Celery docs (http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html#supervisord), including a link to some example config files.
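A minimal sketch of a supervisord program section along the lines of those examples (the program name, paths, app name, and user are placeholders; autorestart is what brings the worker back after a crash):
[program:celery]
; paths and the app name are placeholders for your environment
command=/path/to/venv/bin/celery worker --app=myproject --loglevel=INFO
directory=/path/to/project
user=celery
autostart=true
autorestart=true
; give the worker time to finish its current tasks before being killed
stopwaitsecs=600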
