Python sqlite3.OperationalError: no such table

I am trying to store data about pupils at a school. I've done a few tables before, such as one for passwords and teachers, which I will later bring together in one program.
I pretty much copied the create-table function from one of those and changed the values for the pupils' information. It works fine in the other programs, but I keep getting:
sqlite3.OperationalError: no such table: PupilPremiumTable
when I try to add a pupil to the table. The error occurs on this line:
cursor.execute("select MAX(RecordID) from PupilPremiumTable")
I looked in the folder and there is a file called PupilPremiumTable.db, and the table was already created before, so I don't know why it isn't working.
Here is some of my code; if you need more, feel free to say so. As I said, it worked before, so I have no clue what isn't working:
with sqlite3.connect("PupilPremiumTable.db") as db:
    cursor = db.cursor()
    cursor.execute("select MAX(RecordID) from PupilPremiumTable")
    Value = cursor.fetchone()
    Value = str('.'.join(str(x) for x in Value))
    if Value == "None":
        Value = int(0)
    else:
        Value = int('.'.join(str(x) for x in Value))
    if Value == 'None,':
        Value = 0
    TeacherID = Value + 1
    print("This RecordID is: ", RecordID)

You are assuming that the current working directory is the same as the directory your script lives in. That is not an assumption you can make: your script is opening a new, empty database in a different directory.
Use an absolute path for your database file. You can base it on the absolute path of the script itself:
import os.path

# Directory containing this script, regardless of the current working directory
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
db_path = os.path.join(BASE_DIR, "PupilPremiumTable.db")

with sqlite3.connect(db_path) as db:
You can verify what the current working directory is with os.getcwd() if you want to figure out where you actually opened the new database file; you probably want to clean up the extra file that was created there.
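For example, a quick diagnostic sketch (the file name matches the one used in the question):

import os

print(os.getcwd())  # the directory that relative paths resolve against
# this is where a relative sqlite3.connect("PupilPremiumTable.db") creates the file
print(os.path.exists(os.path.join(os.getcwd(), "PupilPremiumTable.db")))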

I had the same problem and here's how I solved it:

1. I killed the server by pressing Ctrl+C.
2. I deleted the __pycache__ folder. You'll find this folder in your project folder.
3. I deleted the SQLite db.
4. I made migrations with python manage.py makemigrations <app_name>, where <app_name> is the specific app that contains the model causing the error. In my case it was the mail app, so I ran python manage.py makemigrations mail.
5. I migrated in the normal way.
6. Then I started the server, and it was all solved. The full command sequence is sketched below.
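Put together, the whole sequence looks roughly like this (a sketch assuming a development SQLite database and an app named mail; adjust both to your project, and note that deleting the database destroys its data):

# stop the dev server with Ctrl+C first
rm -rf __pycache__
rm db.sqlite3
python manage.py makemigrations mail
python manage.py migrate
python manage.py runserver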
I believe the issue is as Jorge Cardenas said: maybe you are loading views or queries against the database, but you haven't granted enough time for Django to migrate the models to the DB. That's why the table "doesn't exist".
This solution is based on this YouTube video.

First, you need to check that the table definitely exists in the database. You can use an online SQLite viewer for that: https://inloop.github.io/sqlite-viewer/.
If the table exists, you can write your table name in quotes, for example:

Select * from 'TableName'

Whatever your query is - I am just using Select * as an example.

I had to face the same issue, and there are a couple of approaches, but the one below seems the most probable cause to me.
Maybe you are loading views or queries against the database, but you haven't granted enough time for Django to migrate the models to the DB. That's why the table "doesn't exist".
Make sure you use this sort of initialization in your view's code:

class RegisterForm(forms.Form):
    def __init__(self, *args, **kwargs):
        super(RegisterForm, self).__init__(*args, **kwargs)

A second approach is to clean out previous migrations, delete the database, and start the migration process over.

I had the same issue when I was following the Flask blog tutorial. I had initialized the database once when it started giving me the sqlite3.OperationalError. I tried to initialize it again, and it turned out I had lots of errors in my schema and db.py file. I fixed them, initialized again, and it worked.

Adding this worked for me:

import os.path
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
db_dir = (BASE_DIR + '\\PupilPremiumTable.db')

Note the need for \\ before PupilPremiumTable.db for the code to work: the backslash must be escaped inside a Python string literal.

Related

Pytest-django: cannot delete db after tests

I have a Django application, and I'm trying to test it using pytest and pytest-django. However, quite often when the tests finish running, I get an error saying the database failed to be deleted: DETAIL: There is 1 other session using the database.
Basically, the minimal test code that I could narrow it down to is:
@pytest.fixture
def make_bundle():
    a = MyUser.objects.create(key_id=uuid.uuid4())
    return a

class TestThings:
    def test_it(self, make_bundle):
        all_users = list(MyUser.objects.all())
        assert_that(all_users, has_length(1))
Every now and again the tests will fail with the above error. Is there something I am doing wrong? Or how can I fix this?
The database that I am using is PostgreSQL 9.6.
I am posting this as an answer because I need to post a chunk of code, and because this worked. However, it looks like a dirty hack to me, and I'll be more than happy to accept anybody else's answer if it is better.
Here's my solution: add raw SQL that kicks all other users out of the given database to the method that destroys the test database, and do it by monkeypatching. To ensure that the monkeypatching happens before the tests run, add it to the root conftest.py file as an autouse fixture:
import pytest
from django.db.backends.base import creation  # location may vary across Django versions


def _destroy_test_db(self, test_database_name, verbosity):
    """
    Internal implementation - remove the test db tables.
    """
    # Remove the test database to clean up after
    # ourselves. Connect to the previous database (not the test database)
    # to do so, because it's not allowed to delete a database while being
    # connected to it.
    with self.connection._nodb_connection.cursor() as cursor:
        cursor.execute(
            "SELECT pg_terminate_backend(pg_stat_activity.pid) "
            "FROM pg_stat_activity "
            "WHERE pg_stat_activity.datname = '{}' "
            "AND pid <> pg_backend_pid();".format(test_database_name)
        )
        cursor.execute("DROP DATABASE %s"
                       % self.connection.ops.quote_name(test_database_name))


@pytest.fixture(autouse=True)
def patch_db_cleanup():
    creation.BaseDatabaseCreation._destroy_test_db = _destroy_test_db
Note that the kicking-out code may depend on your database engine, and the method that needs monkeypatching may be different in different Django versions.

Testing a Postgres DB in Python

I don't understand how to test my repositories.
I want to be sure that I really saved an object with all of its parameters into the database, and that when I execute my SQL statement I really receive what I am supposed to.
But I cannot put "CREATE TABLE test_table" in the setUp method of a unittest case, because it would be created multiple times (tests of the same test case are run in parallel). So as soon as I create two methods in the same class which need to work on the same table, it won't work (name clash of tables).
Likewise, I cannot put "CREATE TABLE test_table" in setUpModule, because then the table is created once, but since tests are run in parallel, nothing prevents the same object from being inserted multiple times into my table, which breaks the uniqueness constraint on some field.
Likewise, I cannot "CREATE SCHEMA some_random_schema_name" in every method, because I need to globally "SET search_path TO ..." for a given database, so every method run in parallel would be affected.
The only way I see is to run "CREATE DATABASE" for each test, with a unique name, and establish an individual connection to each database. This looks extremely wasteful. Is there a better way?
Also, I cannot use SQLite in memory because I need to test PostgreSQL.
The best solution for this is to use the testing.postgresql module. This fires up a db in user space, then deletes it again at the end of the run. You can put the following in a unittest suite - either in setUp, setUpClass or setUpModule - depending on what persistence you want:
import testing.postgresql

def setUp(self):
    self.postgresql = testing.postgresql.Postgresql(port=7654)
    # Get the url to connect to with psycopg2 or equivalent
    print(self.postgresql.url())

def tearDown(self):
    self.postgresql.stop()
If you want the database to persist between/after tests, you can run it with the base_dir option set to a directory - this will prevent its removal after shutdown:
name = "testdb"
port = "5678"
path = "/tmp/my_test_db"
testing.postgresql.Postgresql(name=name, port=port, base_dir=path)
Outside of testing, it can also be used as a context manager, where it will automatically clean up and shut down when the with block is exited:

with testing.postgresql.Postgresql(port=7654) as psql:
    # do something here
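As a concrete sketch of the connection step (assuming psycopg2 is installed; dsn() is the helper testing.postgresql provides that returns connection keyword arguments):

import psycopg2
import testing.postgresql

with testing.postgresql.Postgresql() as postgresql:
    # dsn() yields kwargs such as host, port, user and database
    conn = psycopg2.connect(**postgresql.dsn())
    with conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone())
    conn.close()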

Calling a Python script in the web2py framework over a webserver

I have NGINX, uWSGI and web2py installed on the server. The web2py application performs only one function: accessing the database and printing the rows of a table.
def fetch():
    import psycopg2
    conn = psycopg2.connect(database="postgres",
                            user="postgres",
                            password="qwerty",
                            host="127.0.0.1")
    cur = conn.cursor()
    cur.execute("SELECT id, name from TEST")
    rows = cur.fetchall()
    conn.close()
    return rows
When the function is called locally, the table contents are returned.
But when I try to call the function from a remote machine, I get an internal error 500.
One more interesting thing: when the function looks like this:

def hello():
    return 'hello'

the string 'hello' is returned. But as soon as I add an import directive to it, the error page is generated.
Can anyone please suggest the proper application syntax/logic?
My guess is that your MySQL service doesn't allow remote access. Could you check your MySQL configuration?

vim /etc/mysql/my.cnf

Comment out the following lines:

#bind-address = 127.0.0.1
#skip-networking

If there is no skip-networking line in your configuration file, just add it and comment it out.
Then restart the MySQL service:

service mysql restart
Forgive the stupid question, but have you checked whether the module is available on your server?
When you say that the error appears in your hello function as soon as you try to import, is it the same import psycopg2 directive?
Try this, assuming that fetch() is defined in controllers/default.py:

Open the folder views/default, create a new file called fetch.html, and paste this inside:

{{extend 'layout.html'}}
{{=rows}}

fetch.html is a view (or a template, if you prefer).
Now modify fetch() to return a dictionary with rows for the view to print:

return dict(rows=rows)

This is very basic though; you can find more information about the basic steps in the book -> http://www.web2py.com/books/default/chapter/29/03/overview#Postbacks
A complete sketch of the controller is below.
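Put together, a minimal sketch of the controller in controllers/default.py (reusing the question's connection parameters; the fetch.html view above stays unchanged):

def fetch():
    import psycopg2
    conn = psycopg2.connect(database="postgres",
                            user="postgres",
                            password="qwerty",
                            host="127.0.0.1")
    cur = conn.cursor()
    cur.execute("SELECT id, name from TEST")
    rows = cur.fetchall()
    conn.close()
    # return a dict so web2py renders views/default/fetch.html with it
    return dict(rows=rows)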

UPDATE statement on Access database fails silently under pyodbc

I have a problem with a simple UPDATE statement. I wrote a Python tool which creates a lot of UPDATE statements, and after creating them I want to execute them on my Access database, but it doesn't work. This is one statement, for example:
UPDATE FCL_B_COVERSHEET_A SET BRANCH = 0 WHERE OBJ_ID = '1220140910132011062005';
The statement syntax is not the problem; I tested it and it works.
This next code snippet shows the initialization of the connection object:

strInputPathMDB = "C:\\Test.mdb"
DRV = '{Microsoft Access Driver (*.mdb)}'
con = pyodbc.connect('Driver={0};Dbq={1};Uid={2};Pwd={3};'.format(DRV, strInputPathMDB, "administrator", ""))
After that I wrote a method which executes one SQL statement:

def executeSQLStatement(conConnection, strSQL):
    arcpy.AddMessage(strSQL)
    cursor = conConnection.cursor()
    cursor.execute(strSQL)
    conConnection.commit()

If I execute this code, everything seems to work - no error message or anything like that - but the data is not updated, and I don't know what I'm doing wrong:

for strSQL in sqlStateArray:
    executeSQLStatement(con, strSQL)

con.close()
I hope you understand what my problem is. Thanks for your help.
Chris
The issue here was that the .mdb file was in the root folder of the C: drive. Root folders often restrict normal users to read-only access, so the database file was being opened read-only. Moving the .mdb file to a public folder solved the problem.
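One way to surface this kind of silent failure earlier is to print how many rows each statement actually touched; pyodbc exposes this as cursor.rowcount (a sketch based on the question's helper):

def executeSQLStatement(conConnection, strSQL):
    cursor = conConnection.cursor()
    cursor.execute(strSQL)
    # for an UPDATE, rowcount reports how many rows were affected;
    # 0 on every statement is a strong hint that nothing is being written
    print("rows affected:", cursor.rowcount)
    conConnection.commit()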

Python and YAML - GAE app settings file

I was reading about using YAML to store settings in a GAE application, and I think I want to go this way. I'm talking about my own constants like API keys and temp variables, NOT about standard GAE config files like app.yaml.
So far I have used a simple settings.py file for this (which included separate configs for the production/testing/etc. environments), but it doesn't seem to do the job well enough.
I even had some serious problems when a git merge overwrote some settings (hard to control).
Eventually I want to store as much data as I can in the datastore, but for now I'm looking for ideas.
So does anybody have ideas or examples for simple storage and access of, sometimes protected, config data?
Storing API keys and configuration variables in static files is almost always a bad idea, for a few reasons:

- Hard to update: you need to deploy a new app in order to update the values.
- Not secure: you might commit them to the version control system, and there is no way back.
- Doesn't scale: very often you have different keys for different domains.

So why not store all these values securely in the datastore right from the very beginning, especially when it's actually so much easier? Here is how:
All you have to do is create one new model for your configuration values and keep only one record in it. By using NDB, you get caching out of the box, and because of the nature of this configuration record you can even cache it per running instance to avoid regular reads from the datastore.
class Config(ndb.Model):
    analytics_id = ndb.StringProperty(default='')
    brand_name = ndb.StringProperty(default='my-awesome-app')
    facebook_app_id = ndb.StringProperty(default='')
    facebook_app_secret = ndb.StringProperty(default='')

    @classmethod
    def get_master_db(cls):
        return cls.get_or_insert('master')
I'm also guessing that you already have a file, most likely called config.py, where you have some constants like these:
PRODUCTION = os.environ.get('SERVER_SOFTWARE', '').startswith('Google App Engine')
DEVELOPMENT = not PRODUCTION
If you don't have that file, create it; either way, add this to it as well:
import model # Your module that contains the models
CONFIG_DB = model.Config.get_master_db()
Finally, to read your config values you can simply do this anywhere in your app (after importing config.py, of course):
config.CONFIG_DB.brand_name
From now on you don't even have to create any special forms to update these values (it is recommended though), because you can do that from the admin console locally or from the Dashboard in production.
Just remember that if you're going to store that record in a local variable, you will have to restart the instances after updating the values in order to see the changes.
You can see all of the above in action in one of my open-sourced projects, called gae-init.
You should not store config values in source code, for the reasons outlined in the other answers. In my latest project, I put config data in the datastore using this class:
from google.appengine.ext import ndb

class Settings(ndb.Model):
    name = ndb.StringProperty()
    value = ndb.StringProperty()

    @staticmethod
    def get(name):
        NOT_SET_VALUE = "NOT SET"
        retval = Settings.query(Settings.name == name).get()
        if not retval:
            retval = Settings()
            retval.name = name
            retval.value = NOT_SET_VALUE
            retval.put()
        if retval.value == NOT_SET_VALUE:
            raise Exception(('Setting %s not found in the database. A placeholder ' +
                'record has been created. Go to the Developers Console for your app ' +
                'in App Engine, look up the Settings record with name=%s and enter ' +
                'its value in that record\'s value field.') % (name, name))
        return retval.value
Your application would do this to get a value:
AMAZON_KEY = Settings.get('AMAZON_KEY')
If there is a value for that key in the datastore, you will get it. If there isn't, a placeholder record will be created and an exception will be thrown. The exception will remind you to go to the Developers Console and update the placeholder record.
I find this takes the guesswork out of setting config values. If you are unsure of what config values to set, just run the code and it will tell you!
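Since each Settings.get call runs a datastore query, you could layer a simple per-instance cache on top (a hypothetical helper, not part of the class above; as with the Config approach in the other answer, cached values only refresh when the instance restarts):

_settings_cache = {}

def get_setting(name):
    # query the datastore once per instance, then serve the value from memory
    if name not in _settings_cache:
        _settings_cache[name] = Settings.get(name)
    return _settings_cache[name]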
