Every time I run:
python manage.py runserver
and load the site, Python fetches data and inserts it into my database, even though I have already put enough data in the database to get a view of what I am working on.
Instead of using the information that is already there, it keeps adding new records to the database so that it has some data to work with.
Why is the existing data in my database not being processed? And how do I stop new data from being loaded into the database?
It may be happening because of the migration files: sometimes when you migrate your models to the database, the migration file is created with the same number each time. For example,
python manage.py makemigrations
creates a file numbered 0001, and that number has to change with every new migration.
To solve your problem, delete the migration files once, then run makemigrations and migrate again for all your models, and try again.
Let me know if this works.
I have a fairly large code base in Django, with several applications, and at some point I ran into a problem: every time I change or add models and try to run a migration, an error appears: django.db.utils.ProgrammingError: relation "appname_modelname" already exists. The problem has been going on for some time - the migration file is always created under the same number, 0114 (I could not find this file, by the way), and all new changes are recorded in it along with the previous ones, which is why the problem grows like a snowball.
I did not want to dig into the root cause and just manually removed all the new additions from the database - everything that caused the "already exists" conflict. So, for a migration to succeed, I had to manually delete every model or table field I had created after this problem appeared. But now I'm starting to work in production with this code and fill it with data, and it is no longer possible to delete the data from the corresponding tables. I have no idea why this problem appeared or how to solve it, and I would really appreciate your advice.
I tried a fake migration (--fake), but then, obviously, the changes I need simply do not make it into the database.
I have a hypothesis about what caused this: I have three Docker containers, one for the API and two for Celery. In the API container's startup bash file, I wrote the command
python3 /usr/srv/h_api/src/manage.py makemigrations && python3 /usr/srv/h_api/src/manage.py migrate
instead of
python3 /usr/srv/h_api/src/manage.py makemigrations
so that I don't have to run it manually every time. I don't understand why, but judging by the timing, it seems the problem started because of this. It has been weeks since I changed that line back, but the problem remains.
How do I commit previous changes to the database and have django create the next migration file numbered 0115?
This may be a bit risky, but it has worked for me in the past. I suggest creating a copy of your project in another folder and trying this safely away from the original project. Also, if you are using a PostgreSQL database, switch to a dummy database first; it shouldn't make a difference, but it keeps you on the safer side.
Inside your app folder's migrations folder, try deleting all the files inside __pycache__ EXCEPT __init__.cpython-39.pyc and 0001_initial.cpython-39.pyc, and inside the migrations folder itself delete all files EXCEPT __init__.py and 0001_initial.py.
IMPORTANT: Delete from the __pycache__ INSIDE of migrations, NOT the one outside of it.
The approach below has often helped me out. If it is a production database, it is a good idea to back up both the database and your migrations folder before getting started.
Start by deleting your current application's migrations folder.
Temporarily add a field to your models (see the sketch after these steps).
Run makemigrations, then migrate.
Delete the temporarily added field from your model.
Run makemigrations and migrate again.
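A minimal sketch of the temporary-field step, assuming a hypothetical app and model (the names are illustrative, not from your project):
# myapp/models.py  (hypothetical app and model)
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)
    # temporary throwaway field, added only so makemigrations detects a change
    temp_marker = models.BooleanField(default=False)

# run: python manage.py makemigrations && python manage.py migrate
# then delete temp_marker above and run both commands once more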
I already have a MySQL database with a lot of data, whose tables and migrations were written in SQL. Now I want to use the same MySQL database in Django so that I can use the data in it. I am expecting that there will be no need to make migrations, since I am not going to rewrite the models in Django. What changes or modifications will I have to make, for example to middleware? Can anyone please help me with this?
As far as I know, there is no 100% automatic way to achieve that.
You can use the following command
python manage.py inspectdb
It will generate a list of unmanaged models that you can export to a models.py file and integrate into your Django project.
However, it is not magic and there are a lot of edge cases, so the generated models should be inspected manually before being integrated.
More info here: https://docs.djangoproject.com/en/3.0/ref/django-admin/#django-admin-inspectdb
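To give an idea of the output, here is a hedged sketch of what inspectdb typically generates for an existing table (the table and columns below are made up for illustration):
# generated with, for example: python manage.py inspectdb > models.py
from django.db import models

class Customer(models.Model):
    # column names and types are derived from your existing MySQL schema
    name = models.CharField(max_length=255, blank=True, null=True)
    created_at = models.DateTimeField(blank=True, null=True)

    class Meta:
        managed = False          # Django will not create or migrate this table
        db_table = 'customer'    # maps the model onto the existing table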
We have an online store web app powered by Django, PostgreSQL and Heroku.
For a specific campaign (you can think of a campaign as a product to purchase), we have sold 10k+ copies successfully. Yet some of our users encountered this error, according to our Sentry reports. What these users have in common is that none of them had address information before the purchase. Generally, users fill out the address form right after registering; if they don't, they need to fill out the form while purchasing the product and submit both together.
This is what the trace looks like:
OperationalError: cursor "_django_curs_140398688327424_146" does not exist
(66 additional frame(s) were not displayed)
...
File "store/apps/store_main/templatetags/store_form_filters.py", line 31, in render_form
return render_to_string('widgets/store_form_renderer.html', ctx)
File "store/apps/store_main/templatetags/store_form_filters.py", line 20, in render_widget
return render_to_string('widgets/store_widget_renderer.html', ctx)
File "store/apps/store_main/widgets.py", line 40, in render
attrs=attrs) + "<span class='js-select-support select-arrow'></span><div class='js-select-support select-arrow-space'><b></b></div>"
OperationalError: cursor "_django_curs_140398688327424_146" does not exist
Another weird thing they have in common: there are exception messages between the SQL queries before the failure.
I'm adding this in case it is somehow related. What may also be related is that the users who get this error are the ones who try to purchase the campaign right after a bulk mailing, so heavy traffic might be the reason, yet we are not sure.
We asked Heroku about the problem, since they host the Postgres instance, but they do not have a clue either.
I know the formal reason for this error is accessing the cursor after a commit: since the cursor is destroyed when the transaction ends, trying to reach it causes this error. But I don't see how that applies to our scenario; we are not touching the cursor in any way. What am I missing? What may produce this error? How can we prevent it? Any ideas would be appreciated.
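For readers unfamiliar with that failure mode, here is a hedged illustration of how it can happen in general; the model and code below are made up and are not from our app:
from django.db import transaction
from myapp.models import Order  # hypothetical model

with transaction.atomic():
    rows = Order.objects.all().iterator()  # .iterator() uses a named server-side cursor on Postgres
    first = next(rows)                     # the cursor is opened inside this transaction
# the atomic block has committed here, which destroys the named cursor
for row in rows:
    # fetching the next chunk of rows now raises:
    # OperationalError: cursor "_django_curs_..." does not exist
    print(row.pk)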
The reason for your error might be that you added fields to a model and forgot to run makemigrations and migrate.
Have a look at this answer: When in run test cases then I will get this error: psycopg2.OperationalError: cursor "_django_curs_140351416325888_23" does not exist
If you're using pytest-django, have enabled the --reuse-db optimization, and have made DB migrations between test runs, you need to re-create the DB tables:
pytest --create-db
Most likely you have forgotten to run makemigrations and migrate the changes to the database.
If you are sure you did make the migrations, and running python manage.py makemigrations and python manage.py migrate does not pick up the changes you made, then the database and the models are out of sync.
This situation can be very frustrating if you have a big database, and you will need to inspect your models manually.
To help out, you can try this trick, which has worked for me.
Step 1 (delete all migrations from apps)
I am going to assume you are using a Unix terminal. Run sudo rm -rv */migrations/*. This removes all the migration files and their caches.
Step 2 (recreate the migrations folder in each app)
Run the command mkdir <app-folder>/migrations && touch <app-folder>/migrations/__init__.py. Replace <app-folder> with the name of each app listed in INSTALLED_APPS in your Django settings file.
Step 3 (Make Migrations)
Here we populate the migrations folder in each app with migration files. Run python manage.py makemigrations, then run the second command: python manage.py migrate --fake. We use the --fake flag because the tables and data already exist in the database; without it, Django would try to create tables that are already there and you would be greeted with the "already exists" error.
If this does not work, you will need to tamper with additional tables in the database, such as django_migrations (or a similarly named table). This is not recommended, because other tables depend on it (such as django_content_type), and it will throw you into a chain of related tables that you have to inspect manually, which is very painful.
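Before touching anything like that, you can at least check what Django has recorded without modifying it. A read-only sketch, run from python manage.py shell (django_migrations is the default table name):
from django.db import connection

# list the migrations Django believes have been applied, without changing anything
with connection.cursor() as cursor:
    cursor.execute("SELECT app, name, applied FROM django_migrations ORDER BY applied")
    for app, name, applied in cursor.fetchall():
        print(app, name, applied)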
With this setup:
- Development environment
- Flask
- SQLAlchemy
- Postgres
- Possibly Alembic
Suppose I have a database with some tables populated with random data. As far as I know, Flask-Migrate, which uses Alembic, will not preserve the data; it only keeps the models and the database synchronized.
But what is the difference between using Alembic and just dropping and recreating all the tables?
Something like:
db.create_all()
The second question:
What happens to the data when something changes in the models? Will the data be lost, or can Alembic preserve the previously populated data?
My idea is to populate the database with some data and then avoid any loss of data when the models change. Is Alembic the solution?
Or do I need to re-import the data, from a .sql file for example, whenever I change the models and the database?
I am the Flask-Migrate author.
You are not correct. Flask-Migrate (through Alembic) will always preserve the data that you have in your database. That is the whole point of working with database migrations, you do not want to lose your data.
It seems you already have a database with data in it and you want to start using migrations. You have two options to incorporate Flask-Migrate into your project:
Only track migrations going forward, i.e. leave your initial database schema outside of migration tracking.
For this you really have nothing special to do. Just do manage.py db init to create the migrations repository and when you need to migrate your database do so normally with manage.py db migrate. The disadvantage of this method is that Flask-Migrate/Alembic do not have the initial schema of the database, so it is not possible to recreate a database from scratch.
Implement an initial migration that brings your database to your current state, then continue tracking future migrations normally.
This requires a little bit of a trick. Here you want Alembic to record an initial migration that defines your current schema. Since Alembic creates migrations by comparing your models to your database, the trick is to replace your real database with an empty database and then generate a migration. After the initial migration is recorded, you restore your database, and from then on you can continue migrating your database normally.
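For context, a minimal sketch of how Flask-Migrate is wired into an application, shown with the current flask db commands rather than the manage.py script mentioned above (the database URI and the model are placeholders):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://localhost/mydb"  # placeholder URI

db = SQLAlchemy(app)
migrate = Migrate(app, db)  # registers the 'flask db' commands

class User(db.Model):  # example model; migrations are generated from models like this
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))

# Typical workflow from the shell:
#   flask db init      -- create the migrations repository
#   flask db migrate   -- generate a migration by comparing models to the database
#   flask db upgrade   -- apply it; existing data is preserved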
I hope this helps. Let me know if you have any more questions.
I'm trying to save a Stripe (the billing service) company id [around 200 characters or so] to my database in Django.
The specific error is:
database error: value too long for type character varying(4)
How can I enable Django to allow for longer values?
I saw:
value too long for type character varying(N)
and:
Django fixture fails, stating "DatabaseError: value too long for type character varying(50)"
My database is already encoded for UTF-8, according to my web host.
EDIT: I see that one answer recommends making the column wider. Does that involve modifying the PostgreSQL database?
My specific setup is WebFaction, a CentOS shared machine, with Django running on PostgreSQL. I would really appreciate a conceptual overview of what's going on and how I can fix it.
Yes, make the column wider. The error message is quite clear: your 200 characters are too big to fit in a varchar(4).
First, update your model field's max_length attribute from 4 to a number you expect will be long enough to hold the data you're feeding it.
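For example, a sketch of the model change; the model and field names here are illustrative, not taken from the original code:
# models.py -- the field was previously something like models.CharField(max_length=4)
from django.db import models

class Company(models.Model):
    # 255 comfortably fits a ~200-character Stripe id
    stripe_id = models.CharField(max_length=255)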
Next, you have to update the database column itself, as Django will not automatically alter existing columns.
Here are a few options:
1:
Drop the database and run syncdb again. Warning: you will lose all your data.
2: Manually update the column via SQL:
Run python manage.py dbshell to get into your database shell, then execute
ALTER TABLE my_table ALTER COLUMN my_column TYPE VARCHAR(200);
3: Learn and use a database migration tool like South (or the migrations framework built into Django 1.7+), which will help keep your database in sync with your model code.
Using Django 1.11 and Postgres 9.6 I ran into this, for no apparent reason.
I set max_length=255 in the migration file and executed:
manage.py migrate
Then I set the correct length on the model's max_length, ran makemigrations again, and then ran migrate again.
JUST IN CASE SOMEONE RUNS INTO THIS ERROR FOR DJANGO 3 AND POSTGRES.
STEP 1: Go to your migration folder.
STEP 2: Navigate to your most recent migration file.
STEP 3: Check for the operations list in the migration file.
STEP 4: In the operations list, update the field's max_length to a higher number that accommodates your data (see the sketch below).
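A hedged sketch of what the edited migration file might look like; the app, model, and field names are illustrative:
# myapp/migrations/0002_widen_stripe_id.py  (illustrative name)
from django.db import migrations, models

class Migration(migrations.Migration):

    dependencies = [
        ("myapp", "0001_initial"),
    ]

    operations = [
        # widen the column so longer values fit
        migrations.AlterField(
            model_name="company",
            name="stripe_id",
            field=models.CharField(max_length=255),
        ),
    ]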