In my project, I have made some setting fields within the settings.py file configurable by exposing them to the user via an environment file. The user can modify the values in the .env file, and those values are then used to update the setting fields within the main project settings.py file.
I want to improve this by migrating some of these values to the database, so users can set them interactively via the product's UI instead of having to modify the .env. I have taken the following approach:
After the default database has been declared in the DATABASES dictionary, I instantiate a connection.cursor() to run a raw SQL query that retrieves the settings from the database, as described in the documentation.
I manipulate the results of the cursor to construct a dictionary in which keys are setting identifiers and values are the relevant values from the database, as set by the user.
This dictionary is then used to assign the appropriate value to each Django setting variable (e.g. SESSION_COOKIE_AGE, MEDIA_ROOT, etc.). So for each setting variable, instead of doing a getenv, I retrieve the value from the dictionary using the relevant key.
I have observed the code's behavior within settings.py, and I can see that each setting value gets assigned to the correct variable, exactly as it was with the previous .env approach. The problem is that when these setting variables are accessed elsewhere in the code via django.conf.settings or by direct import (from project.settings import SETTING), their value is an empty string, as if it had never been declared in the first place.
I have noticed that settings declared before the cursor is instantiated (regardless of whether their value was hardcoded or retrieved from the .env) work fine. Settings declared after the cursor seem not to maintain their state outside the settings.py file.
Can anyone please enlighten me as to why using a cursor within settings.py essentially invalidates all setting fields declared after it?
I managed to resolve the issue. The problem was that I was using Django's connection object (from django.db import connection), which performs the SQL query on the default database. Since this was running within settings.py while the project was still in its set-up process, it caused the strange behavior described above.
Solution: Instead of using django.db.connection to perform the query, I used Python's pyodbc library, which is the database driver used in the project. With pyodbc I was able to establish a connection to the database and run my query (as described here) regardless of Django's state.
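For illustration, here's a minimal sketch of that approach inside settings.py. The table and column names and the connection string are hypothetical; the point is that pyodbc talks to the database directly, independent of Django's initialization state:

import pyodbc

# Hypothetical settings table with name/value columns
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword'
)
cursor = conn.cursor()
cursor.execute('SELECT name, value FROM app_settings')
db_settings = {name: value for name, value in cursor.fetchall()}
cursor.close()
conn.close()

# Assign Django settings from the dictionary instead of os.getenv
SESSION_COOKIE_AGE = int(db_settings.get('SESSION_COOKIE_AGE', 1209600))
MEDIA_ROOT = db_settings.get('MEDIA_ROOT', '/var/media')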
I've tried many contortions on this problem to try to figure out what's going on.
My SQLAlchemy code specified tables as schema.table. I have a special connection object that connects using the specified connect string if the database is PostgreSQL or Oracle, but if the database is SQLite, it connects to a :memory: database, then attaches the SQLite file-based database using the schema name. This allows me to use schema names throughout my SQLAlchemy code without a problem.
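As a rough sketch of that connection logic (the names here are illustrative, not my actual code), a minimal version using SQLAlchemy's connect event might look like this:

from sqlalchemy import create_engine, event

def connect_sqlite(db_path, schema):
    # Connect to a :memory: database, then attach the file-based
    # database under the schema name used throughout the model code.
    engine = create_engine('sqlite://')
    @event.listens_for(engine, 'connect')
    def attach(dbapi_connection, connection_record):
        dbapi_connection.execute(f"ATTACH DATABASE '{db_path}' AS {schema}")
    return engine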
But when I try to set up Alembic to see my database, it fails completely. What am I doing wrong?
I ran into several issues that had to be worked through before I got this working.
Initially, Alembic didn't see my database at all. If I specified it in the alembic.ini file, Alembic would load the SQLite database using the default schema, but my model code specified a schema, so that didn't work. I had to change run_migrations_online() in alembic/env.py to call the connection method from my own code instead of using engine_from_config. In my case, I had created a database object with a connect() method that returns both the engine and the metadata, so I called connectable, meta = db.connect(), and retrieved the schema name with schema = db.schema(). I had to import the db class from my SQLAlchemy code to get access to these.
Now I was getting a migration that would build up the entire database from scratch, but I couldn't run that migration because my database already had those changes; apparently Alembic wasn't seeing my database. Alembic also kept telling me that my database was out of date. The problem there was that Alembic's alembic_version table was being written to my :memory: database, and as soon as the connection was dropped, so was the database. To get Alembic to remember the migration, that table needed to be created in my real database, so I added more code to env.py to pass the schema to context.configure via version_table_schema=my_schema.
When I went to generate the migration again, I still got the migration that would build the database from scratch, so Alembic STILL wasn't seeing my database. After lots more Googling, I found that I needed to pass include_schemas=True to context.configure in env.py. But after I added that, I started getting tracebacks from Alembic.
Fortunately, my configuration was set up to provide both the connection and the metadata. By changing the target_metadata=target_metadata line to target_metadata=meta (my local metadata returned from the connection), I got around these tracebacks as well, and Alembic started to behave properly.
So to recap, to get Alembic working with a SQLite database attached as a schema name, I had to import the connection script I use for my Flask code. That connection script properly attaches the SQLite database, then reflects the metadata. It returns both the engine and the metadata. I assign the engine to the "connectable" variable in env.py, the metadata to the new local variable meta, and the schema name to the local variable schema.
In the with connectable.connect() as connection: block, I then pass to context.configure additional arguments target_metadata=meta, version_table_schema=schema, and include_schemas=True where meta and schema are my new local variables set above.
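Pieced together, the relevant part of env.py looked roughly like this sketch (db is the connection class imported from my SQLAlchemy code; treat the names as placeholders rather than a drop-in file):

from alembic import context
from myproject.database import db  # hypothetical import path

def run_migrations_online():
    # db.connect() returns both the engine and the reflected metadata
    connectable, meta = db.connect()
    schema = db.schema()  # the name the SQLite file is attached under

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=meta,          # the local metadata from the connection
            version_table_schema=schema,   # keep alembic_version out of :memory:
            include_schemas=True,          # compare schema-qualified tables
        )
        with context.begin_transaction():
            context.run_migrations()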
With all of these changes, I thought I was able to work with SQLite databases attached as schemas. Unfortunately, I continued to run into problems with this, and eventually decided that I simply wouldn't work with SQLite with Alembic. Our rule now is that Alembic migrations are only for non-SQLite databases, and SQLite data has to be migrated to another database before attempting an Alembic migration of the data.
I'm documenting this so that anyone else facing this may be able to follow what I've done and possibly get Alembic working for SQLite.
Django provides the model field argument default (https://docs.djangoproject.com/en/dev/ref/models/fields/#default), but as far as I know it is only applied when a new object is created via Django.
If we insert/create records with raw queries (using django.db.connection.cursor), we get an exception because Field 'xyz' doesn't have a default value.
How can I represent a database-level default value for a column in the model, the way db_index does for indexes?
I hope you guys understand my question.
There is an open ticket 470 to include default values in the SQL schema.
Until this feature has been added to Django, you'll have to manually run alter table statements yourself or write a migration to run them if you want a default value in the SQL schema.
Note that Django allows callables as defaults, so even if this feature is added to Django, it won't be possible to have all defaults in the database.
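In the meantime, a hand-written migration could look something like this sketch (the app, table, and column names are examples, and the ALTER syntax shown is PostgreSQL-flavoured; adjust for your backend):

from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [('myapp', '0001_initial')]

    operations = [
        migrations.RunSQL(
            "ALTER TABLE myapp_mymodel ALTER COLUMN xyz SET DEFAULT 0;",
            reverse_sql="ALTER TABLE myapp_mymodel ALTER COLUMN xyz DROP DEFAULT;",
        ),
    ]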
I am unable to use a database view in test cases. On the other hand, I am able to use those database views in front-end functions, but when I try to get data from the view in a test case it returns null.
Please give me a suggestion for using database views in test cases.
By database view do you mean you are using an unmanaged model which represents an underlying database view (as described here)?
If so, I have found that, during unit testing, Django ignores the managed = False setting in the model meta and creates an actual table. Unless you explicitly populate this in your setUp, it will be empty.
A quick-and-dirty way of getting around this is to explicitly drop the table and create the view in your test case's setUp method, like this:
# Imports
from django.db import connection
from django.core.files import File
...
# Inside your test case setUp method
# Drop the table
cursor = connection.cursor()
# See note 1
cursor.execute("SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0; DROP TABLE IF EXISTS myproject_myview; SET SQL_NOTES=@OLD_SQL_NOTES;")
cursor.close()
# Create the view
# See note 2
file_handle = open('/full/path/to/myproject/sql/create_myview.sql', 'r')
sql_file = File(file_handle)
sql = sql_file.read()
cursor = connection.cursor()
cursor.execute(sql)
cursor.close()
Notes:
1. This is to get around a MySQL problem, so it might not apply to your case. The table will only exist the first time setUp is run. If you try to drop the table on a subsequent pass, MySQL will generate warnings - this code suppresses them.
2. This file contains creation code for a single view in the format CREATE OR REPLACE VIEW myproject_myview AS.... I've found that trying to execute a file containing multiple commands with the same cursor also causes problems.
I'm guessing by a database view you mean accessing a database inside a view.
That being said, I think your problem is that you don't have a test database that Django can test against.
This is how you start off with that, and it's called fixtures. (You could do this with SQL as well, but I think it's easier with fixtures.)
The easiest way is to use the dumpdata command provided by Django.
python manage.py dumpdata
This will create a file in your app's directory, which you can use in your tests. For example:
myDjangoProject/myCoreApp/fixtures/myCoreApp_views_testdata.json
NOTE: Your app won't actually be called myCoreApp; substitute your own app's name.
You could also set a FIXTURE_DIRS setting in your settings.py to tell Django where to look for fixtures in the future.
To then use a fixture in your tests, you do the following:
class SomeViewThatIWantToTest(TestCase):  # Note: you must use django.test.TestCase
    fixtures = ['core_views_testdata.json']
After this you should be able to access your data in your views as normal.
This might require some tuning to fit your exact example so I added a link to the official docs at the bottom!
Good luck and please do correct me if I'm wrong! :)
Read more about this here
I am tracking changes made to levels in a game. The way I currently track changes is in a SQLite database. Each level is supposed to have its own database, as just one database for all the levels would create complications when adding and deleting levels. So for each level, I want a database that has the same name as that level, so that changes made to level "foo" get written to database "foo". I don't need to edit the tables, just the actual name of the database. I guess I could just use a file renaming function in Python, but I would like to know if there is any way to set the name from the start.
Here's an example:
import sqlite3

connection = sqlite3.connect('database/foo.db')
cursor = connection.cursor()
Where foo is the variable part of the name.
You'll need to post some code for us to answer this completely.
You can use the ALTER TABLE ... RENAME TO command to rename the tables within your database, you can rename your SQLite db file on disk if you close it first, and you can use a variable in your Python code to represent the name of the DB you're using. But you need to be more specific if you want a more specific answer.
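For the variable-name part, something like this sketch would do (the directory layout is an example):

import os
import sqlite3

def connect_level_db(level_name, base_dir='database'):
    # Build the database path from the level name, e.g. 'foo' -> database/foo.db
    db_path = os.path.join(base_dir, level_name + '.db')
    connection = sqlite3.connect(db_path)
    return connection, connection.cursor()

connection, cursor = connect_level_db('foo')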
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct, but it's not working. When I run it with the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even though I tried catching the exception. (I haven't used try/except before, but figured this is a good time to learn.)
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL; it's much simpler.
Do the following.
Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that.
CREATE TABLE REVISION (
    RELEASE_NUMBER CHAR(20)
);
In your application, connect to your database normally.
Execute a simple query against the revision table. Here's what can happen.
The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
The query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.
The query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.
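A sketch of that check in Python (build_schema and migrate_schema are hypothetical stand-ins for your CREATE and DROP/CREATE/ALTER scripts, and the version comparison is deliberately naive):

import sqlite3

EXPECTED_RELEASE = '1.2'  # placeholder release number

def open_database(path):
    con = sqlite3.connect(path)
    try:
        row = con.execute('SELECT RELEASE_NUMBER FROM REVISION').fetchone()
    except sqlite3.OperationalError:
        # No REVISION table: the database doesn't exist yet, so build it.
        build_schema(con)  # hypothetical: runs your CREATE statements
        con.execute('INSERT INTO REVISION VALUES (?)', (EXPECTED_RELEASE,))
        con.commit()
    else:
        if row is None or row[0] < EXPECTED_RELEASE:
            # Out of date: migrate from that release to the current one.
            migrate_schema(con)  # hypothetical: DROP/CREATE/ALTER sequence
    return con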
AFAIK an SQLite database is just a file.
To check if the database exists, check for the file's existence.
When you open a SQLite database, it will automatically create one if the backing file is not in place.
If you try to open a file as a sqlite3 database that is NOT a database, you will get this:
"sqlite3.DatabaseError: file is encrypted or is not a database"
So check whether the file exists, and also make sure to try and catch the exception in case the file is not a sqlite3 database.
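In code, that boils down to something like this sketch:

import os
import sqlite3

def is_existing_sqlite_db(path):
    # A missing file means SQLite would create a fresh database.
    if not os.path.exists(path):
        return False
    con = sqlite3.connect(path)
    try:
        # Force SQLite to actually read the file header.
        con.execute('PRAGMA schema_version')
        return True
    except sqlite3.DatabaseError:
        # The file exists but is not a SQLite database.
        return False
    finally:
        con.close()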
SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS to make the commands only take effect if the table has not been created. This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you.
The main thing I would still be worried about is that executing CREATE TABLE IF NOT EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) variable recording whether it has already created the database, so it runs the CREATE TABLE script only once per run. This would still allow you to delete the database and start over during debugging.
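A sketch of that approach, using a module-level flag so the DDL only runs once per process (the table definition is an example):

import sqlite3

_schema_created = False  # run the DDL at most once per process

def get_connection(path):
    global _schema_created
    con = sqlite3.connect(path)
    if not _schema_created:
        con.execute(
            'CREATE TABLE IF NOT EXISTS stocks '
            '(date TEXT, trans TEXT, symbol TEXT, qty REAL, price REAL)'
        )
        con.commit()
        _schema_created = True
    return con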
As #diciu pointed out, the database file will be created by sqlite3.connect.
If you want to take a special action when the file is not there, you'll have to explicitly check for existence:
import os
import sqlite3

mydb_path = 'mydb.sqlite3'  # path to your database file

if not os.path.exists(mydb_path):
    # Create new DB, create table stocks
    con = sqlite3.connect(mydb_path)
    con.execute('''create table stocks
                   (date text, trans text, symbol text, qty real, price real)''')
else:
    # Use existing DB
    con = sqlite3.connect(mydb_path)
...
SQLite doesn't throw an exception if you create a new database with the same name; it will just connect to it. Since SQLite is a file-based database, I suggest you just check for the existence of the file.
About your second problem, to check if a table has already been created, just catch the exception. A "sqlite3.OperationalError: table TEST already exists" exception is thrown if the table already exists.
import sqlite3
import os

database_name = 'newdb.db'

if os.path.isfile(database_name):
    print('the database already exists')

db_connection = sqlite3.connect(database_name)
db_cursor = db_connection.cursor()
try:
    db_cursor.execute('CREATE TABLE TEST (a INTEGER);')
except sqlite3.OperationalError as msg:
    print(msg)
Writing SQL directly is painful in every language I've picked up. SQLAlchemy has proven to be the easiest to use, because querying and committing with it is so clean and trouble-free.
Here are some basic steps for actually using SQLAlchemy in your app; better details can be found in the documentation.
provide table definitions and create ORM-mappings
load database
ask it to create tables from the definitions (won't do so if they exist)
create session maker (optional)
create session
After creating a session, you can commit and query from the database.
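A compact sketch of those steps (the model and database URL are examples):

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

# 1. Table definition / ORM mapping
class Stock(Base):
    __tablename__ = 'stocks'
    id = Column(Integer, primary_key=True)
    symbol = Column(String)

# 2. Load (or create) the database
engine = create_engine('sqlite:///example.db')

# 3. Create tables from the definitions (won't do so if they exist)
Base.metadata.create_all(engine)

# 4./5. Session maker, then a session
Session = sessionmaker(bind=engine)
session = Session()

# Commit and query
session.add(Stock(symbol='ABC'))
session.commit()
print(session.query(Stock).count())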
See this solution at SourceForge, which covers your question in a tutorial manner, with instructive source code:
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
Yes, I was overcomplicating the problem. All I needed to do was check for the file and catch the IOError if it didn't exist.
Thanks for all the other answers. They may come in handy in the future.