postgres: relations still there even after dropping the database - python

I dropped the database that I had previously created for Django using:
dropdb <database>
but when I go to the psql prompt and run \d, I still see the relations listed.
How do I remove everything from Postgres so that I can start from scratch?

Most likely, somewhere along the line you created your objects in the template1 database (or, in older versions, the postgres database), so every time you create a new db it has all those objects in it. You can either drop the template1 / postgres database and recreate it, or connect to it and drop all those objects by hand.
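For example, a common way to get a pristine template1 back is to recreate it from template0 (a sketch; run as a superuser, and only if nothing else in your setup depends on the current template1):
UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';  -- a template database cannot be dropped directly
DROP DATABASE template1;
CREATE DATABASE template1 WITH TEMPLATE template0;
UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';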

Chances are that you never created the tables in the correct schema in the first place. Either that, or your dropdb failed to complete.
Try to drop the database again and see what it says. If that appears to work, then go into postgres, type \l, and post the output here.
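For example (the database name here is just a placeholder):
dropdb mydatabase
psql -l
psql -l lists all databases from the shell, the same as running \l inside the psql prompt.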

Related

I have 600 records in my Postgres database; now when I insert from Django it generates a primary key duplication error

I have inserted data into the table from PostgreSQL directly. Now when I try to insert data from the Django application, it generates a primary key duplication error. How can I resolve this issue?
Run
python manage.py sqlsequencereset [app_name]
and execute all of the printed SQL statements (or just the ones for the required table) in the database to reset the sequences.
Explanation:
You probably inserted rows with the primary keys already set, rather than letting PostgreSQL auto-generate the ids. That is fine in itself.
It means, however, that the internal PostgreSQL sequence used to get the next available id still has an old value. You need to reset the sequence so that it continues from the maximum id present in the table.
Django's manage.py has a command intended just for that: it prints SQL you can execute in the database to reset the sequences.
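For a Postgres backend the printed statements look roughly like this (app and table names are placeholders):
SELECT setval(pg_get_serial_sequence('"myapp_mymodel"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "myapp_mymodel";
You can also pipe the output straight into the database instead of copying it by hand:
python manage.py sqlsequencereset myapp | python manage.py dbshell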
I think the problem is not in the database. Please check your Django code; you probably use get_or_create somewhere.

Django 1.8 and Python 2.7 using PostgreSQL DB help in fetching

I'm making an application that will fetch data from an (external) PostgreSQL database with multiple tables.
Any idea how I can use inspectdb only on a SINGLE table? (I only need that table)
Also, the data in the database will be changing continuously. How do I manage that? Do I have to run inspectdb continuously? But what will happen to junk values then?
I think you have misunderstood what inspectdb does. It creates a model for an existing database table. It doesn't copy or replicate that table; it simply allows Django to talk to that table, exactly as it talks to any other table. There's no copying or auto-fetching of data; the data stays where it is, and Django reads it as normal.
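As far as I recall, in Django 1.8 inspectdb does not accept table names on the command line (later versions added that), so the usual approach is to redirect the full output and keep only the model you need:
python manage.py inspectdb > external_models.py
A sketch of what the kept model might look like (all names here are hypothetical); managed = False tells Django never to create or alter the table, only to map queries onto it:
from django.db import models

class ExternalRecord(models.Model):
    name = models.CharField(max_length=100)
    value = models.IntegerField()

    class Meta:
        managed = False
        db_table = 'external_record'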

Alembic migration acting on different versions of same database

I am trying to use alembic migrations to act on different versions of the same database. An example would be that I have two databases, one live and one for testing. Each of them might be in different states of migration. For one, the test database might not exist at all.
Say live has a table table1 with columns A and B. Now I would like to add column C. I change my model to include C and I generate a migration script that has the following code
op.add_column('table1', sa.Column('C', sa.String(), nullable=True))
This works fine with the existing live database.
If I now call alembic upgrade head against the test database, which did not previously exist, I get an (OperationalError) duplicate column name... error. I assume this is because my model already contains the C column and alembic/sqlalchemy creates the full table automatically if it does not exist.
Should I simply trap the error or is there a better way of doing this?
I would suggest that immediately after your test db is newly created you should stamp it with head
command.stamp(configs_for_test_db, 'head')
This will insert the head revision number into the appropriate alembic version table without actually running any migrations, so that the revision number reflects the state of the db (namely, that your newly created db is up to date with respect to migrations). After the db is stamped, alembic upgrade should behave properly.
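A minimal sketch of that flow for a brand-new test database (Base, test_engine, the config path and the connection URL are assumptions from your own project):
from alembic.config import Config
from alembic import command

cfg = Config('alembic.ini')
cfg.set_main_option('sqlalchemy.url', 'postgresql://user:pass@localhost/test_db')

# create the schema from the current models, then mark it as already at head
Base.metadata.create_all(test_engine)
command.stamp(cfg, 'head')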

Different Postgres users for syncdb/migrations and general database access in Django

I'm using Django 1.6 with PostgreSQL and I want to use two different Postgres users - one for creating the initial tables (syncdb) and performing migrations, and one for general access to the database in my application. Is there a way of doing this?
From ./manage.py help syncdb:
--database=DATABASE   Nominates a database to synchronize. Defaults to the "default" database.
You can add another database definition in your DATABASES configuration, and run ./manage.py syncdb --database=name_of_database_definition. You might want to create a small wrapper script for running that command, so that you don't have to type out the --database=... parameter by hand every time.
South also supports that option, so you can use it to specify the database for your migrations as well.
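A sketch of what the settings and the call could look like (role names and credentials are placeholders):
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproject',
        'USER': 'app_user',        # limited privileges for normal application access
        'PASSWORD': '...',
    },
    'schema_admin': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproject',       # same database, different role
        'USER': 'ddl_user',        # owner role used for syncdb and migrations
        'PASSWORD': '...',
    },
}
Then run ./manage.py syncdb --database=schema_admin (or the equivalent South migrate command) whenever you need to change the schema.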

How do you create a table with Sqlalchemy within a transaction in Postgres?

I'm using Sqlalchemy in a multitenant Flask application and need to create tables on the fly when a new tenant is added. I've been using Table.create to create individual tables within a new Postgres schema (along with search_path modifications) and this works quite well.
The limitation I've found is that the Table.create method blocks if there is anything pending in the current transaction. I have to commit the transaction right before the .create call or it will block. The block doesn't appear to be inside Sqlalchemy itself, because you can't Ctrl-C it; you have to kill the process. So I'm assuming it's something further down in Postgres.
I've read in other answers that CREATE TABLE is transactional and can be rolled back, so I'm presuming this should be working. I've tried starting a new transaction with the current engine and using that for the table create (vs. the current Flask one) but that hasn't helped either.
Does anybody know how to get this to work without an early commit (and risking partial dangling data)?
This is Python 2.7, Postgres 9.1 and Sqlalchemy 0.8.0b2.
(Copy from comment)
Assuming sess is the session, you can do sess.execute(CreateTable(tenantX_tableY)) instead.
EDIT: CreateTable is only one of the things being done when calling table.create(). Use table.create(sess.connection()) instead.
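A short sketch of both variants (tenant_table is assumed to be an existing sqlalchemy Table and sess the active session):
from sqlalchemy.schema import CreateTable

# emit just the CREATE TABLE statement inside the session's transaction
sess.execute(CreateTable(tenant_table))

# or run the full create sequence on the session's own connection, so the DDL
# joins the current transaction instead of waiting on a separate connection
tenant_table.create(sess.connection())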
