I am unable to use a database view in my test cases. On the other hand, I am able to use the same view in a front-end function, but when I try to get data from the view in a test case it returns null.
Please give me a suggestion for using database views in test cases.
By database view do you mean you are using an unmanaged model which represents an underlying database view (as described here)?
If so, I have found that, during unit testing, Django ignores the managed = False setting in the model's Meta and creates an actual table. Unless you explicitly populate this table in your setUp, it will be empty.
A quick-and-dirty way of getting around this is to explicitly drop the table and create the view in your test case's setUp method, like this:
# Imports
from django.db import connection
...
# Inside your test case setUp method
# Drop the table Django created for the unmanaged model
cursor = connection.cursor()
# See note 1
cursor.execute("SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0; DROP TABLE IF EXISTS myproject_myview; SET SQL_NOTES=@OLD_SQL_NOTES;")
cursor.close()
# Create the view from the SQL file
# See note 2
with open('/full/path/to/myproject/sql/create_myview.sql', 'r') as sql_file:
    sql = sql_file.read()
cursor = connection.cursor()
cursor.execute(sql)
cursor.close()
Notes:
This works around a MySQL quirk, so it might not apply to your case. The table will only exist the first time setUp is run; if you try to drop it on a subsequent pass MySQL will generate warnings, and this code suppresses them.
This file contains the creation code for a single view, in the form CREATE OR REPLACE VIEW myproject_myview AS.... I've found that trying to execute a file containing multiple statements with the same cursor causes problems.
I'm guessing that by a database view you mean accessing a database inside a view.
That said, I think your problem is that you don't have a test database that Django can test against.
Here's how you get started with that; it's done with fixtures. (You could do this with SQL as well, but I think it's easier with fixtures.)
The easiest way is the dumpdata command provided by Django.
python manage.py dumpdata
This dumps the data to standard output; redirect it into a file under your app's fixtures directory, which you can then use in your tests. For example:
myDjangoProject/myCoreApp/fixtures/myCoreApp_views_testdata.json
NOTE: your app won't actually be called myCoreApp; use your own app's name.
You could also set FIXTURE_DIRS in your settings.py to tell Django where to look for fixtures in the future.
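For example, assuming your app really were called myCoreApp, you could redirect the dump straight into that file:
python manage.py dumpdata myCoreApp > myCoreApp/fixtures/myCoreApp_views_testdata.json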
To use a fixture in your tests you then do the following:
class SomeViewThatIWantToTest(TestCase):  # Note: you must use django.test.TestCase
    fixtures = ['myCoreApp_views_testdata.json']
After this you should be able to access your data in your views as normal.
This might require some tuning to fit your exact example so I added a link to the official docs at the bottom!
Good luck and please do correct me if I'm wrong! :)
Read more about this here
Related
I have a Django app and I am running some unit tests on it. The problem I am having is not when one test inserts into the test db; it is the tests that come after. Since each test's transaction is not saved, the entry from a previous test is gone, which is fine, but the auto-increment ids keep increasing as if the entries were still in the database. I need to fix this because I am inserting more data whose ids I cannot control, and I need to be able to grab that specific data for the test. If I hard-code the lookups, I will have to change the code every time I add a new test, which is not ideal.
I have multiple tests running, but for simplicity sake, I will show two.
from django.test import TestCase
from app.models import Model
class VersionMerge(TestCase):
    fixtures = ['initial_test_data.json']

    def test_model_test1(self):
        # Insert new data
        # Grab new data
        # Check data

    def test_model_test2(self):
        # Insert new data
        # Grab new data
        # Check data
The problem arises in test_model_test2, where I have to print the objects out to see their ids before I can grab the new data.
I have a solution for the actual database but not the test one: connect to the docker container and run a psql command to reset the table's id sequence.
docker exec -t $CONTAINER_ID psql --dbname=test_database_name --username=user -c "SELECT setval('modelName_appName_id_seq', 2, true)"
This sets the last id value used in the table to 2, making the next id 3. However, whenever I try to run the command from inside Python using
cmd = "command above"
os.system(cmd)
I get the following error:
sh: 1: docker: not found
sh: 1: docker: not found
Looking for any help on this, either a new solution to the problem or improvements on mine.
TLDR; I need to be able to modify data in the database that the django unit tests create.
If you need a test to reset the primary key sequence, you can issue a raw SQL query to do that. How to do that exactly is answered in this StackOverflow question.
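A rough sketch of that approach, assuming a PostgreSQL test database and reusing the sequence name from the question, would run setval through Django's own connection, so no docker/psql round-trip is needed:
from django.db import connection

def reset_id_sequence():
    # Sequence name copied from the question's psql command; adjust it
    # to match your own app and model.
    with connection.cursor() as cursor:
        cursor.execute("SELECT setval('modelName_appName_id_seq', 2, true)")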
An easier option is available if you're using pytest. We're using pytest and pytest-django in all our Django projects and it makes testing a breeze. pytest-django provides a database fixture that can take a boolean parameter to reset the sequences. Use it like so:
@pytest.mark.django_db(transaction=True, reset_sequences=True)
def mytest():
    [...]
I got this to work by replacing TestCase with TransactionTestCase and setting reset_sequences = True. However, the tests run slower.
from django.test import TransactionTestCase

class ViewTest(TransactionTestCase):
    reset_sequences = True

    def test_view_redirects(self):
        ...
Here's the doc
You are using a fixtures file: put whatever data you want in there, then edit it in your test as needed.
That is harder to maintain, though. In my opinion there are far better options that work much more like what you intend.
You're better off using something like factory_boy and generating the models (and related foreign keys) at instantiation with dummy data you provide.
That way you know exactly what is being tested and it's completely independent of everything else. The nice part is that with factory_boy you will have a factories.py file you can keep up to date much easier than working with some fixture.
There are other options like Mixer or model_mommy, although I only have experience with factory_boy and mixer.
With factory_boy it might look something like this:
def test_model_test1(self):
    factory.ModelFactory(
        some_specific_attribute='some_specific_value'
    )
    model = Model.objects.all().first()
    # Test against your model
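For completeness, a minimal factories.py sketch might look like the following; the Model import and attribute name are assumptions carried over from the example above:
import factory
from app.models import Model

class ModelFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Model

    # Default value used when a test doesn't override it
    some_specific_attribute = 'some_default_value'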
I am trying to analyse the SQL performance of our Django (1.3) web application. I have added a custom log handler which attaches to django.db.backends and set DEBUG = True, this allows me to see all the database queries that are being executed.
However the SQL is not valid SQL! The actual query is select * from app_model where name = %s with some parameters passed in (e.g. "admin"), however the logging message doesn't quote the params, so the sql is select * from app_model where name = admin, which is wrong. This also happens using django.db.connection.queries. AFAIK the django debug toolbar has a complex custom cursor to handle this.
Update: For those suggesting the Django debug toolbar: I am aware of that tool, and it is great. However, it does not do what I need. I want to run a sample interaction of our application and aggregate the SQL that's used. DjDT is great for inspecting a single page, but not for aggregating and summarizing the SQL across an interaction of dozens of pages.
Is there any easy way to get the real, legit, SQL that is run?
Check out django-debug-toolbar. Open a page, and a sidebar will be displayed with all SQL queries plus other information.
select * from app_model where name = %s is a prepared statement. I recommend logging the statement and the parameters separately. To get a well-formed query you need to do something like "select * from app_model where name = %s" % quote_string("user"), or more generally query % tuple(map(quote_string, params)).
Please note that quote_string is DB-specific, and the Python DB-API 2.0 does not define a quote_string method, so you need to write one yourself. For logging purposes I'd recommend keeping the queries and parameters separate anyway, as it allows far better profiling: you can easily group queries without taking the actual parameter values into account.
The Django Docs state that this incorrect quoting only happens for SQLite.
https://docs.djangoproject.com/en/dev/ref/databases/#sqlite-connection-queries
Have you tried another Database Engine?
Every QuerySet object has a query attribute. One way to do what you want (admittedly perhaps not ideal) is to chain the lookups each view produces into a kind of scripted user story, using Django's test client. For each lookup your user story contains, append the query to a file-like object that you write out at the end, for example (using a list instead for brevity):
l = []
o = Object.objects.all()
l.append(str(o.query))  # the SQL Django will run for this queryset
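To get the aggregate view described in the question, a rough sketch (assuming DEBUG = True so Django records queries, and a hypothetical URL for the user story) could count duplicate statements via django.db.connection.queries:
from collections import Counter
from django.db import connection, reset_queries
from django.test import Client

reset_queries()
client = Client()
client.get('/some/page/')  # hypothetical page from your user story

# connection.queries is a list of {'sql': ..., 'time': ...} dicts
counts = Counter(q['sql'] for q in connection.queries)
for sql, n in counts.most_common():
    print n, sql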
If I run my python script, it creates the tables correctly.
However, if I add/remove a column and run the script again, it doesn't modify the database.
Why is this?
I am doing this after defining everything:
mapper(Product, products)
mapper(Category, categories)
metadata.create_all(engine)
session = Session()
create_all only creates. It doesn't modify.
http://www.sqlalchemy.org/docs/core/schema.html?highlight=create_all#sqlalchemy.schema.MetaData.create_all
If you've changed the columns you need to either drop and re-create the table, or add the column manually outside of your script. There are tools to do this, like sqlalchemy-migrate, which is a full migrations manager for your project. http://code.google.com/p/sqlalchemy-migrate/
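If you go the manual route, a minimal sketch of adding a column with a raw ALTER statement might look like this (the column name and type are illustrative; engine is the same Engine used with create_all):
from sqlalchemy import text

with engine.begin() as conn:  # begin() commits on success
    conn.execute(text("ALTER TABLE products ADD COLUMN description TEXT"))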
I use SQLAlchemy within Flask, but I think the same concepts apply here. You have to commit the changes when you change something within the database:
session.commit()
As I usually don't do up-front design of my models in Django projects, I end up modifying the models a lot and thus deleting my test database every time (because syncdb won't ever alter the tables automatically for you). Below is my workflow and I'd like to hear about yours. Any thoughts welcome.
Modify the model.
Delete the test database. (always a simple sqlite database for me.)
Run "syncdb".
Generate some test data via code.
goto 1.
A secondary question regarding this: in case your workflow is like the above, how do you execute step 4? Do you generate the test data manually, or is there a proper hook point in Django apps where you can inject the test-data-generating code at server startup?
TIA.
Steps 2 & 3 can be done in one step:
manage.py reset appname
Step 4 is most easily managed, from my understanding, by using fixtures
This is a job for Django's fixtures. They are convenient because they are database independent and the test harness (and manage.py) have built-in support for them.
To use them:
Set up your data in your app (call it "foo") using the admin tool
Create a fixtures directory in your "foo" app directory
Type: python manage.py dumpdata --indent=4 foo > foo/fixtures/foo.json
Now, after your syncdb stage, you just type:
python manage.py loaddata foo.json
And your data will be re-created.
If you want them in a test case:
class FooTests(TestCase):
    fixtures = ['foo.json']
Note that you will have to recreate or manually update your fixtures if your schema changes drastically.
You can read more about fixtures in the django docs for Fixture Loading
Here's what we do.
Apps are named with a Schema version number. appa_2, appb_1, etc.
Minor changes don't change the number.
Major changes increment the number. Syncdb works. And a "data migration" script can be written.
def migrate_appa_2_to_3():
    for a in appa_2.SomeThing.objects.all():
        appa_3.AnotherThing.objects.create(this=a.this, that=a.that)
        appa_3.NewThing.objects.create(another=a.another, yetAnother=a.yetAnother)
    for b in ...
The point is that drop and recreate isn't always appropriate. It's sometimes helpful to move data from the old model to the new model without rebuilding from scratch.
South is the coolest.
Though good ol' reset works best when data doesn't matter.
http://south.aeracode.org/
To add to Matthew's response, I often also use custom SQL to provide initial data as documented here.
Django just looks for files in <app>/sql/<modelname>.sql and runs them after creating tables during syncdb or sqlreset. I use custom SQL when I need to do something like populate my Django tables from other non-Django database tables.
Personally, my development db for the project I'm working on right now is rather large, so I use dmigrations to create db migration scripts to modify the db (rather than wiping out the db every time like I did in the beginning).
Edit: Actually, I'm using South now :-)
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct, but it's not working. When I use a try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even though I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn.)
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL; it's much simpler.
Do the following.
Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that.
CREATE TABLE REVISION(
RELEASE_NUMBER CHAR(20)
);
In your application, connect to your database normally.
Execute a simple query against the revision table. Here's what can happen.
The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
The query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.
The query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.
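A minimal sketch of that check in Python; build_schema and migrate are hypothetical helpers standing in for your CREATE and migration scripts:
import sqlite3

EXPECTED_RELEASE = "2"  # whatever your current schema version is

con = sqlite3.connect("myapp.db")
try:
    row = con.execute("SELECT RELEASE_NUMBER FROM REVISION").fetchone()
except sqlite3.OperationalError:
    build_schema(con)   # query failed: database is new, run your CREATE statements
else:
    if row is None or row[0] < EXPECTED_RELEASE:
        migrate(con)    # database exists but is out of date
    # otherwise the release number matches and nothing more is needed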
AFAIK a SQLite database is just a file.
To check if the database exists, check for the file's existence.
When you open a SQLite database, it will automatically create the backing file if it is not in place.
If you try and open a file as a sqlite3 database that is NOT a database, you will get this:
"sqlite3.DatabaseError: file is encrypted or is not a database"
So check whether the file exists, and also make sure to catch the exception in case the file is not a SQLite database.
SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS so that the commands only take effect if the table has not yet been created. This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you.
The main thing I would still be worried about is that executing CREATE TABLE IF NOT EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an in-memory flag recording whether it has already created the tables, so it runs the CREATE TABLE script only once per run. This would still allow you to delete the database and start over during debugging.
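A minimal sketch of that pattern, borrowing the stocks table from the answer below:
import sqlite3

con = sqlite3.connect("mydb.db")  # creates the file if it doesn't exist
# Safe to run on every startup: a no-op if the table is already there
con.execute("CREATE TABLE IF NOT EXISTS stocks "
            "(date text, trans text, symbol text, qty real, price real)")
con.commit()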
As @diciu pointed out, the database file will be created by sqlite3.connect.
If you want to take a special action when the file is not there, you'll have to explicitly check for its existence:
import os
import sqlite3

if not os.path.exists(mydb_path):
    # Create a new DB with the stocks table
    con = sqlite3.connect(mydb_path)
    con.execute('''create table stocks
                   (date text, trans text, symbol text, qty real, price real)''')
else:
    # Use the existing DB
    con = sqlite3.connect(mydb_path)
...
SQLite doesn't throw an exception if you create a new database with the same name; it will just connect to it. Since SQLite is a file-based database, I suggest you just check for the existence of the file.
About your second problem, to check if a table has already been created, just catch the exception. A sqlite3.OperationalError: table TEST already exists is thrown if the table already exists.
import sqlite3
import os

database_name = "newdb.db"
if os.path.isfile(database_name):
    print "the database already exists"
db_connection = sqlite3.connect(database_name)
db_cursor = db_connection.cursor()
try:
    db_cursor.execute('CREATE TABLE TEST (a INTEGER);')
except sqlite3.OperationalError, msg:
    print msg
Writing SQL by hand is painful in every language I've picked up. SQLAlchemy has proven the easiest of them to use, because querying and committing with it is so clean and trouble-free.
Here's some basic steps on actually using sqlalchemy in your app, better details can be found from the documentation.
provide table definitions and create ORM-mappings
load database
ask it to create tables from the definitions (won't do so if they exist)
create session maker (optional)
create session
After creating a session, you can commit and query from the database.
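A compact sketch of those steps; the Product model and SQLite URL are illustrative, and the declarative_base import location assumes SQLAlchemy 1.4 or newer:
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

# Table definition and ORM mapping in one step
class Product(Base):
    __tablename__ = 'products'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite:///app.db')  # load (or create) the database
Base.metadata.create_all(engine)            # create tables; no-op if they exist
Session = sessionmaker(bind=engine)         # session maker

session = Session()                         # create a session
session.add(Product(name='widget'))
session.commit()
print(session.query(Product).filter_by(name='widget').count())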
See this solution at SourceForge which covers your question in a tutorial manner, with instructive source code :
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
Yes, I was overcomplicating the problem. All I needed to do was check for the file and catch the IOError if it didn't exist.
Thanks for all the other answers. They may come in handy in the future.