sqlite3.OperationalError: no such table: main.source - python

I am making an API with Python Flask and SQLite3. Most of it works. Specifically:
all GET endpoints/SELECT queries work
two POST endpoints/INSERT INTO queries work
However, the remaining POSTs/INSERT INTOs do not work. They all fail with the same sqlite3.OperationalError:
no such table: main.source
This is weird because none of the queries use a table called "source" or "main.source". I print each query before I execute it, and I have tried copy/pasting the queries into the sqlite3 command prompt, where they run without any issues.
The other weird thing is that all the INSERT INTO queries are built by the same function (which in turn calls the query-running function used by ALL queries, most of which work). Yet only some of the INSERT INTOs cause this error.
Some potentially useful information:
An excerpt from createdb.sql
CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    buyer INTEGER NOT NULL,
    seller INTEGER NOT NULL,
    amount INTEGER NOT NULL,
    currency VARCHAR(6) NOT NULL,
    fee INTEGER NOT NULL,
    source INTEGER NOT NULL,
    description TEXT NOT NULL,
    status VARCHAR(40) NOT NULL,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
The INSERT INTO that Python prints, and that throws the error in execute:
INSERT INTO transactions (status, fee, description, source, seller, currency, amount, buyer) VALUES ('initiated', '1', 'nada', '1', '2', 'USD', '1000', '1');
And some stuff from Sqlite's prompt:
sqlite> .tables
conversations sources users
messages transactions withdrawals
sqlite> SELECT id, description FROM transactions;
1|hella mulah
2|payback
3|woohoo
sqlite> INSERT INTO transactions (status, fee, description, source, seller, currency, amount, buyer) VALUES ('initiated', '1', 'nada', '1', '2', 'USD', '1000', '1');
sqlite>
sqlite> SELECT id, description FROM transactions;
1|hella mulah
2|payback
3|woohoo
4|nada
For reference, here is a POST command that has no error despite using most of the same stuff:
INSERT INTO users (session, balance, name, firebaseToken) VALUES ('ABCDEFG', '0', 'Mr Miyagi', 'ABCDEFG');
There are a lot of similar questions on SO but here's why they are not duplicates:
flask/sqlalchemy - OperationalError: (sqlite3.OperationalError) no such table: Yes, I created the tables before using them.
Why am I suddenly getting "OperationalError: no such table"?: My Flask app finds the database without a problem (and most of the queries work without a hitch). To be safe, I made the path in connect absolute. No effect.
Error: sqlite3.OperationalError: no such table: main.m: I don't do any weird indexing, and if I did, the copy/pasting wouldn't work.
Python Sqlite3 - Data is not saved permanently: I do call commit in @app.teardown_appcontext. I also tried calling commit after every query. No effect.
Other stuff I considered but ruled out:
There is a list of disallowed names (http://www.sqlite.org/lang_keywords.html), but these names are not on it: transactions is close to TRANSACTION but not the same, and source is not a keyword at all.
I'm sure this will end up being some silly mix-up, but any ideas on where to look would be much appreciated. Googling the error also turned up nothing helpful.
--- more code ---
This is database.py
import sqlite3

import flask

import backend


def dict_factory(cursor, row):
    output = {}
    for idx, col in enumerate(cursor.description):
        output[col[0]] = row[idx]
    return output


def get_db():
    if not hasattr(flask.g, 'sqlite_db'):
        flask.g.sqlite_db = sqlite3.connect("/my/absolute/path/var/data.db")
        flask.g.sqlite_db.row_factory = dict_factory
        flask.g.sqlite_db.execute("PRAGMA foreign_keys = ON;")
    return flask.g.sqlite_db


def query(query, args=(), islast=False):
    print(query)  # this is where the print from before is
    cur = get_db().execute(query, args)
    rowvector = cur.fetchall()
    if islast:
        cur.close()
    return rowvector


@backend.app.teardown_appcontext
def close_db(error):
    if hasattr(flask.g, 'sqlite_db'):
        flask.g.sqlite_db.commit()
        flask.g.sqlite_db.close()
These are selected sections from apiimpl.py:
QUERY_INSERT = "INSERT INTO"
QUERY_SELECT = "SELECT"
QUERY_UPDATE = "UPDATE"


def queryhelper(*args, **kwargs):
    sqltxt = None
    selectstr = None
    if kwargs["action"] == QUERY_INSERT:
        sqltxt = "{} {} ({}) VALUES ({});".format(
            QUERY_INSERT,
            kwargs["table"],
            ", ".join(["{}".format(x) for x in kwargs["cols"]]),
            ", ".join(["'{}'".format(x) for x in kwargs["vals"]]),
        )
        # pretty sure this next bit is not relevant but here it is anyway
        selectstr = "SELECT * FROM {} WHERE ROWID=(SELECT last_insert_rowid());".format(
            kwargs["table"],
        )
    elif kwargs["action"] == QUERY_SELECT:
        pass  # not relevant
    elif kwargs["action"] == QUERY_UPDATE:
        pass  # not relevant
    else:
        assert kwargs["action"] in [QUERY_INSERT, QUERY_SELECT, QUERY_UPDATE]
    try:
        rv = db.query(sqltxt)  # this is where the error is thrown
        if selectstr:
            return db.query(selectstr)
        else:
            return rv
    except sqlite3.OperationalError as e:
        # this is where the error is caught
        return api_error("SQL error (1): {}", str(e), code=500)


def append(tablename, args):
    tabledata = TABLES().tablenamemap[tablename]
    print("tablename: " + tablename)  # prints "tablename: transactions"
    # a bunch of error detection
    rv = queryhelper(
        action=QUERY_INSERT,
        table=tablename,
        cols=args.keys(),
        vals=args.values(),
    )
    # not shown: potentially returning json.dumps(rv)
    return rv


def transactions_post(req):
    # a lot of stuff to turn req into validargs
    # printed validargs: {'status': 'initiated', u'fee': u'1', u'description': u'nada', u'source': u'1', u'seller': u'2', u'currency': u'USD', u'amount': u'1000', u'buyer': u'1'}
    return append("transactions", validargs)


@backend.app.route("/transactions", methods=["GET", "POST", "PUT"])
def transactions_route():
    return {
        "GET": transactions_get,    # get list of transactions
        "POST": transactions_post,  # initiate a transaction
        "PUT": transactions_put,    # change transaction status
    }[flask.request.method](flask.request)
P.S. The purpose of this question is not to discuss the implementation, but if you want to leave a comment, that's OK with me.
--- in response to comment ---
sqlite> SELECT * FROM sqlite_master WHERE type="table" AND name="transactions";
table|transactions|transactions|4|CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    buyer INTEGER NOT NULL,
    seller INTEGER NOT NULL,
    amount INTEGER NOT NULL,
    currency VARCHAR(6) NOT NULL,
    fee INTEGER NOT NULL,
    source INTEGER NOT NULL,
    description TEXT NOT NULL,
    status VARCHAR(40) NOT NULL,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (buyer) REFERENCES users(id),  -- do not want to delete on CASCADE
    FOREIGN KEY (seller) REFERENCES users(id), -- do not want to delete on CASCADE
    FOREIGN KEY (source) REFERENCES source(id) -- do not want to delete on CASCADE
)

It looks like you are referencing a table that doesn't exist based on your .tables command.
sqlite> .tables
conversations sources users
messages transactions withdrawals
And this create table statement.
CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    buyer INTEGER NOT NULL,
    seller INTEGER NOT NULL,
    amount INTEGER NOT NULL,
    currency VARCHAR(6) NOT NULL,
    fee INTEGER NOT NULL,
    source INTEGER NOT NULL,
    description TEXT NOT NULL,
    status VARCHAR(40) NOT NULL,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (buyer) REFERENCES users(id),  -- do not want to delete on CASCADE
    FOREIGN KEY (seller) REFERENCES users(id), -- do not want to delete on CASCADE
    FOREIGN KEY (source) REFERENCES source(id) -- do not want to delete on CASCADE
    -- ^ there is no source table
)
If you change source(id) to sources(id) you should be good.
This also explains the two mysteries in the question: SQLite does not resolve foreign key targets when a table is created, only when it has to enforce them, and enforcement is off by default on every new connection. Your Flask code runs PRAGMA foreign_keys = ON in get_db(), so the INSERT fails there, while the sqlite3 prompt (with foreign keys off by default) happily accepts the same statement.
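Here is a minimal sketch reproducing that behavior (an in-memory database stands in for data.db, and the schema is trimmed to the one offending column):

import sqlite3

# isolation_level=None means autocommit, so the PRAGMA below takes effect
# immediately instead of being swallowed by an implicit open transaction
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
    CREATE TABLE sources (id INTEGER PRIMARY KEY);
    CREATE TABLE transactions (
        id INTEGER PRIMARY KEY,
        source INTEGER NOT NULL,
        FOREIGN KEY (source) REFERENCES source(id)  -- typo: should be sources(id)
    );
""")

# With foreign keys off (the default), the dangling reference goes unnoticed:
con.execute("INSERT INTO transactions (source) VALUES (1)")

# With foreign keys on (as get_db() does), the same insert fails:
con.execute("PRAGMA foreign_keys = ON;")
try:
    con.execute("INSERT INTO transactions (source) VALUES (1)")
except sqlite3.OperationalError as e:
    print(e)  # no such table: main.source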

Related

How can I capture the rowid of newly inserted row in SQLite/Flask?

I want to insert a new row into a table and return the newly created auto-incremented id from that row, so I can execute a second command inserting that new id into a join table.
I've tried solutions from other SO posts, but they don't work for my case (e.g., they call for cursor.x, but I'm not using "cursor").
I created a simple example to share my code:
SQLite schema for 3 tables:
CREATE TABLE trees (id INTEGER, name TEXT NOT NULL, PRIMARY KEY (id));
CREATE TABLE birds (id INTEGER, name TEXT NOT NULL, PRIMARY KEY (id));
CREATE TABLE BirdsToTrees (
    birdID INTEGER NOT NULL,
    treeID INTEGER NOT NULL,
    FOREIGN KEY (birdID) REFERENCES birds(id) ON DELETE CASCADE,
    FOREIGN KEY (treeID) REFERENCES trees(id) ON DELETE CASCADE
);
Test data in birds table:
id | name
1 | sparrow
2 | dodo
3 | cardinal
4 | bluejay
5 | woodpecker
6 | emu
7 | chicken
Flask app code:
@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "GET":
        treeList = db.execute("SELECT * FROM trees")
        birdList = db.execute("SELECT * FROM birds")
        return render_template("SQLtest.html", treeList=treeList, birdList=birdList)
    else:
        newBird = request.form.get("newBird")
        db.execute("INSERT INTO birds (name) VALUES (?)", newBird)
        newBirdID = db.execute("SELECT last_insert_rowid() FROM birds")
        print(f"You really saw a {newBird}? Its ID is now {newBirdID}")
        return redirect("/")
When I used the web form to submit "chicken" as a new bird, it was successfully inserted with the id of 7. But the printed output of my Flask console was:
You really saw a chicken? Its ID is now [{'last_insert_rowid()': 0}, {'last_insert_rowid()': 0}, {'last_insert_rowid()': 0}, {'last_insert_rowid()': 0}, {'last_insert_rowid()': 0}, {'last_insert_rowid()': 0}, {'last_insert_rowid()': 0}]
So it returned a list of 7 identical dictionaries, rather than the integer 7. Can anyone help?
Here are a few other attempts that failed, along with the error messages:
newBirdID = cursor.lastrowid
#NameError: name 'cursor' is not defined
newBirdID = db.lastrowid
#AttributeError: 'SQL' object has no attribute 'lastrowid'
newBirdID = db.execute("INSERT INTO birds (name) VALUES (?) RETURNING id", newBird)
#RuntimeError: near "RETURNING": syntax error
BTW, that RETURNING syntax works fine at the command line in SQLite, e.g.
sqlite> INSERT INTO birds (name) VALUES ('chicken') RETURNING id;
id
7
But it fails every time when I do it via db.execute with the "?" placeholder.
Your problem is that you call execute directly on the connection, not on a cursor.
The docs explain how that shortcut works:
execute(sql[, parameters])
This is a nonstandard shortcut that creates a cursor object by calling the cursor() method, calls the cursor's execute() method with the parameters given, and returns the cursor.
https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.execute
Note the end: it "returns the cursor". This means we can still use the Cursor.lastrowid you already tried!
So just... save the returned cursor and get lastrowid from it. :)
cur = db.execute("INSERT INTO birds (name) VALUES (?)", newBird)
newBirdID = cur.lastrowid
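For completeness, here is a sketch of the full two-insert flow the question describes, using the plain sqlite3 module (the db wrapper from the question is assumed away, and the treeID value is a placeholder):

import sqlite3

db = sqlite3.connect("birds.db")  # assumes the schema from the question

newBird = "chicken"
cur = db.execute("INSERT INTO birds (name) VALUES (?)", (newBird,))
newBirdID = cur.lastrowid  # auto-incremented id of the row just inserted

# second command: put the captured id into the join table
db.execute(
    "INSERT INTO BirdsToTrees (birdID, treeID) VALUES (?, ?)",
    (newBirdID, 1),  # 1 is a hypothetical treeID, purely for illustration
)
db.commit()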

stored procedure call from sqlalchemy not commit

I have tested my SP in MySQL and it works fine; I was able to insert a new entry with it. When I call it from Flask with SQLAlchemy it does run, but the insert is not made into the table, although it appears to execute the right commands.
My SP checks whether there is an existing entry: if yes, it returns 0; if no, it inserts the entry and returns 1. When I send a new query from the backend, I get 1 as the return value, but no insert is made in the table. When I send the same query again, the return value is still 1. When I send an existing query that the table holds, the return value is 0.
I have other routes with the same db.connect() and they do fetch information. I read other posts about calling an SP with the same execute function used to run raw SQL, and from the docs it seems execute doesn't require an extra commit command to confirm the transaction.
So why can't I insert from the Flask server?
This is the backend function
def add_book(info):
    try:
        connection = db.connect()
        title = info['bookTitle']
        url = info['bookUrl']
        isbn = info['isbn']
        author = info['author']
        # print("title: " + title + " url: " + url + " isbn: " + str(isbn) + " author: " + str(author))
        query = 'CALL add_book("{}", "{}", {}, {});'.format(title, url, isbn, author)
        # print(query)
        query_results = connection.execute(query)
        connection.close()
        query_results = [x for x in query_results]
        result = query_results[0][0]
    except Exception as err:
        print(type(err))
        print(err.args)
    return result
This is the table to insert
CREATE TABLE `book` (
    `isbn` int(11) DEFAULT NULL,
    `review_count` int(11) DEFAULT NULL,
    `language_code` varchar(10) DEFAULT NULL,
    `avg_rating` int(11) DEFAULT NULL,
    `description_text` text,
    `formt` varchar(30) DEFAULT NULL,
    `link` varchar(200) DEFAULT NULL,
    `authors` int(11) DEFAULT NULL,
    `publisher` varchar(30) DEFAULT NULL,
    `num_pages` int(11) DEFAULT NULL,
    `publication_month` int(11) DEFAULT NULL,
    `publication_year` int(11) DEFAULT NULL,
    `url` varchar(200) DEFAULT NULL,
    `image_url` varchar(200) DEFAULT NULL,
    `book_id` int(11) NOT NULL AUTO_INCREMENT,
    `ratings_count` int(11) DEFAULT NULL,
    `work_id` int(11) DEFAULT NULL,
    `title` varchar(200) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
    PRIMARY KEY (`book_id`),
    KEY `authors` (`authors`),
    CONSTRAINT `book_ibfk_2` FOREIGN KEY (`authors`) REFERENCES `author` (`author_id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=36485537 DEFAULT CHARSET=utf8;
This is the SP
DELIMITER $$
CREATE DEFINER=`root`@`%` PROCEDURE `add_book`(
    IN titleIn VARCHAR(200), urlIn VARCHAR(200), isbnIn INT, authorIn INT)
BEGIN
    DECLARE addSucess INT;
    DECLARE EXIT HANDLER FOR sqlexception
    BEGIN
        GET DIAGNOSTICS CONDITION 1
            @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
        SELECT @p1, @p2;
        ROLLBACK;
    END;
    DECLARE EXIT HANDLER FOR sqlwarning
    BEGIN
        GET DIAGNOSTICS CONDITION 1
            @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
        SELECT @p1 AS RETURNED_SQLSTATE, @p2 AS MESSAGE_TEXT;
        ROLLBACK;
    END;
    IF EXISTS (SELECT 1 FROM book WHERE title = titleIn) THEN
        SET addSucess = 0;
    ELSE
        INSERT INTO book (authors, title, url, book_id)
        VALUES (authorIn, titleIn, urlIn, null);
        SET addSucess = 1;
    END IF;
    SELECT addSucess;
END$$
DELIMITER ;
My user permission from show grants for current_user
[('GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOC ... (73 characters truncated) ... OW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `root`@`%` WITH GRANT OPTION',), ('GRANT APPLICATION_PASSWORD_ADMIN,CONNECTION_ADMIN,ROLE_ADMIN,SET_USER_ID,XA_RECOVER_ADMIN ON *.* TO `root`@`%` WITH GRANT OPTION',), ('REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, CREATE VIEW, CREATE ROUTINE, ALTER ROUTINE ON `mysql`.* FROM `root`@`%`',), ('REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, CREATE VIEW, CREATE ROUTINE, ALTER ROUTINE ON `sys`.* FROM `root`@`%`',), ('GRANT INSERT ON `mysql`.`general_log` TO `root`@`%`',), ('GRANT INSERT ON `mysql`.`slow_log` TO `root`@`%`',), ('GRANT `cloudsqlsuperuser`@`%` TO `root`@`%`',)]
I solved it with the Session API instead. If someone is reading: please tell me a better way of passing the params and parsing the return result.
def add_book(info):
    title = info['bookTitle']
    url = info['bookUrl']
    isbn = info['isbn']
    author = info['author']
    with Session(db) as session:
        session.begin()
        try:
            query = 'CALL insert_book("{}", "{}", {}, {});'.format(title, url, isbn, author)
            result = session.execute(text(query)).all()
        except:
            session.rollback()
            raise
        else:
            session.commit()
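Since the solution still asks for a better way to pass the params and parse the return value, here is one possible refinement, sketched under the assumption of SQLAlchemy 1.4+ and the same db engine: bind parameters in text() so the driver handles quoting, and read the SP's single value with scalar().

from sqlalchemy import text
from sqlalchemy.orm import Session

def add_book(info):
    stmt = text("CALL insert_book(:title, :url, :isbn, :author)")
    params = {
        "title": info['bookTitle'],
        "url": info['bookUrl'],
        "isbn": info['isbn'],
        "author": info['author'],
    }
    with Session(db) as session:
        # the SP's final SELECT yields exactly one value: 0 or 1
        result = session.execute(stmt, params).scalar()
        session.commit()
    return result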

Scrapy MySQL foreign key constraint failures

I'm currently writing a web scraper for a website called Mountain Project and have come across an issue while inserting items into a MySQL (MariaDB) database.
The basic flow of my crawler is this:
Get response from link extractor (I'm subclassing CrawlSpider)
Extract data from response and send extracted items down the item pipeline
Items get picked up by the SqlPipeline and inserted into the database
An important note about step 2: the crawler sends multiple items down the pipeline. The first item is the main resource (either a route or an area), and any items after that are other important data about that resource. For areas, those items are data on different weather conditions; for routes, they are the different grades assigned to those routes.
My area and route tables look like this:
CREATE TABLE `area` (
    `area_id` INT(10) UNSIGNED NOT NULL,
    `parent_id` INT(10) UNSIGNED NULL DEFAULT NULL,
    `name` VARCHAR(200) NOT NULL DEFAULT '',
    `latitude` FLOAT(12) NULL DEFAULT -1,
    `longitude` FLOAT(12) NULL DEFAULT -1,
    `elevation` INT(11) NULL DEFAULT '0',
    `link` VARCHAR(300) NOT NULL,
    PRIMARY KEY (`area_id`) USING BTREE
)
CREATE TABLE `route` (
    `route_id` INT(10) UNSIGNED NOT NULL,
    `parent_id` INT(10) UNSIGNED NOT NULL,
    `name` VARCHAR(200) NOT NULL DEFAULT '' COLLATE 'utf8_general_ci',
    `link` VARCHAR(300) NOT NULL COLLATE 'utf8_general_ci',
    `rating` FLOAT(12) NULL DEFAULT '0',
    `types` VARCHAR(50) NULL DEFAULT NULL COLLATE 'utf8_general_ci',
    `pitches` INT(3) NULL DEFAULT '0',
    `height` INT(5) NULL DEFAULT '0',
    `length` VARCHAR(5) NULL DEFAULT NULL COLLATE 'utf8_general_ci',
    PRIMARY KEY (`route_id`) USING BTREE,
    INDEX `fk_parent_id` (`parent_id`) USING BTREE,
    CONSTRAINT `fk_parent_id` FOREIGN KEY (`parent_id`) REFERENCES `mountainproject`.`area` (`area_id`) ON UPDATE CASCADE ON DELETE CASCADE
)
And here's an example of one of my condition tables:
CREATE TABLE `temp_avg` (
    `month` INT(2) UNSIGNED NOT NULL,
    `area_id` INT(10) UNSIGNED NOT NULL,
    `avg_high` INT(3) NOT NULL,
    `avg_low` INT(3) NOT NULL,
    PRIMARY KEY (`month`, `area_id`) USING BTREE,
    INDEX `fk_area_id` (`area_id`) USING BTREE,
    CONSTRAINT `fk_area_id` FOREIGN KEY (`area_id`) REFERENCES `mountainproject`.`area` (`area_id`) ON UPDATE CASCADE ON DELETE CASCADE
)
Here's where things get troublesome. If I run my crawler and extract only areas, everything works fine: the area is inserted into the database, and all the conditions data is inserted without a problem. However, when I try to extract areas and routes, I get foreign key constraint failures when inserting routes, because the area that the route belongs to (parent_id) doesn't exist. Currently, to work around this, I run my crawler twice: once to extract area data, and once to extract route data. If I do that, everything goes smoothly.
My best guess as to why this doesn't work is that the areas being inserted haven't been committed, so when I attempt to add a route that belongs to an uncommitted area, it can't find the parent area. This theory quickly falls apart, though, because I'm able to insert condition data in the same run as the area that the data belongs to.
My insertion code looks like this:
def insert_item(self, table_name, item):
    encoded_vals = [self.sql_encode(val) for val in item.values()]
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (
        table_name,
        ", ".join(item.keys()),
        ", ".join(encoded_vals)
    )
    logging.debug(sql)
    self.cursor.execute(sql)

# EDIT: As suggested by @tadman I have moved to using the built-in SQL value
# encoding. I'm leaving this here because it doesn't affect the issue
def sql_encode(self, value):
    """Encode the provided value and return a valid SQL value

    Arguments:
        value {Any} -- Value to encode

    Returns:
        str -- SQL encoded value as a str
    """
    encoded_val = None
    is_empty = False
    if isinstance(value, str):
        is_empty = len(value) == 0
    encoded_val = "NULL" if is_empty or value is None else value
    if isinstance(encoded_val, str) and encoded_val != "NULL":
        encoded_val = encoded_val.replace("\"", "\\\"")
        encoded_val = "\"%s\"" % encoded_val
    return str(encoded_val)
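The "built-in SQL value encoding" mentioned in the EDIT presumably means parameterized queries; here is a sketch of insert_item in that style (assuming item.keys() are trusted column names, since placeholders can only stand in for values, never identifiers):

def insert_item(self, table_name, item):
    columns = ", ".join(item.keys())
    placeholders = ", ".join(["%s"] * len(item))
    # identifiers are interpolated; values travel separately so the
    # driver escapes them
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (table_name, columns, placeholders)
    self.cursor.execute(sql, list(item.values()))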
The rest of the project lives in a GitHub repo if any more code/context is needed

Mysql insert lock wait timeout exceeded - auto increment

I'm having an issue with my application causing a MySQL table to be locked due to inserts that take a long time; after reviewing online articles, it seems to be related to auto increment. Info below.
The Python that inserts the data (a row at a time, unfortunately, as I need the auto-incremented id for reference in future inserts):
for i, flightobj in stats[ucid]['flight'].items():
    flight_fk = None
    # Insert flights
    try:
        with mysqlconnection.cursor() as cursor:
            sql = "insert into cb_flights(ucid,takeoff_time,end_time,end_event,side,kills,type,map_fk,era_fk) values(%s,%s,%s,%s,%s,%s,%s,%s,%s);"
            cursor.execute(sql, (
                ucid, flightobj['start_time'], flightobj['end_time'], flightobj['end_event'],
                flightobj['side'], flightobj['killnum'], flightobj['type'], map_fk, era_fk))
            mysqlconnection.commit()
            if cursor.lastrowid:
                flight_fk = cursor.lastrowid
            else:
                flight_fk = 0
    except pymysql.err.ProgrammingError as e:
        logging.exception("Error: {}".format(e))
    except pymysql.err.IntegrityError as e:
        logging.exception("Error: {}".format(e))
    except TypeError as e:
        logging.exception("Error: {}".format(e))
    except:
        logging.exception("Unexpected error: {}".format(sys.exc_info()[0]))
The above runs every 2 minutes on the same data and is supposed to insert only non-duplicates, as MySQL denies duplicates thanks to the unique ucid_takeofftime index.
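A side note, not from the original post: since the unique index already does the de-duplication, INSERT IGNORE would skip duplicates without raising IntegrityError, and cursor.lastrowid should come back as 0 for skipped rows, which the existing if cursor.lastrowid check handles. A sketch of just the changed statement:

# sketch: duplicates are skipped server-side instead of raising IntegrityError
sql = ("insert ignore into cb_flights"
       "(ucid,takeoff_time,end_time,end_event,side,kills,type,map_fk,era_fk) "
       "values(%s,%s,%s,%s,%s,%s,%s,%s,%s);")
cursor.execute(sql, (ucid, flightobj['start_time'], flightobj['end_time'],
                     flightobj['end_event'], flightobj['side'], flightobj['killnum'],
                     flightobj['type'], map_fk, era_fk))
flight_fk = cursor.lastrowid or 0  # 0 when the row was an ignored duplicate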
MySQL info, cb_flights table -
    `pk` int(11) NOT NULL AUTO_INCREMENT,
    `ucid` varchar(50) NOT NULL,
    `takeoff_time` datetime DEFAULT NULL,
    `end_time` datetime DEFAULT NULL,
    `end_event` varchar(45) DEFAULT NULL,
    `side` varchar(45) DEFAULT NULL,
    `kills` int(11) DEFAULT NULL,
    `type` varchar(45) DEFAULT NULL,
    `map_fk` int(11) DEFAULT NULL,
    `era_fk` int(11) DEFAULT NULL,
    `round_fk` int(11) DEFAULT NULL,
    PRIMARY KEY (`pk`),
    UNIQUE KEY `ucid_takeofftime` (`ucid`,`takeoff_time`),
    KEY `ucid_idx` (`ucid`) /*!80000 INVISIBLE */,
    KEY `end_event` (`end_event`) /*!80000 INVISIBLE */,
    KEY `side` (`side`)
) ENGINE=InnoDB AUTO_INCREMENT=76023132 DEFAULT CHARSET=utf8;
Inserts into the table from the Python code can now sometimes take over 60 seconds.
I believe it might be related to the auto increment creating a lock on the table; if so, I'm looking for a workaround.
innodb info -
innodb_autoinc_lock_mode 2
innodb_lock_wait_timeout 50
The buffer pool is used up to roughly 70%.
I'd appreciate any assistance with this, either on the application side or the MySQL side.
EDIT
Adding the create statement for the cb_kills table, which is also used for inserts but without any issue as far as I can see; this is in response to the comment on the 1st answer.
CREATE TABLE `cb_kills` (
    `pk` int(11) NOT NULL AUTO_INCREMENT,
    `time` datetime DEFAULT NULL,
    `killer_ucid` varchar(50) NOT NULL,
    `killer_side` varchar(10) DEFAULT NULL,
    `killer_unit` varchar(45) DEFAULT NULL,
    `victim_ucid` varchar(50) DEFAULT NULL,
    `victim_side` varchar(10) DEFAULT NULL,
    `victim_unit` varchar(45) DEFAULT NULL,
    `weapon` varchar(45) DEFAULT NULL,
    `flight_fk` int(11) NOT NULL,
    `kill_id` int(11) NOT NULL,
    PRIMARY KEY (`pk`),
    UNIQUE KEY `ucid_killid_flightfk_uniq` (`killer_ucid`,`flight_fk`,`kill_id`),
    KEY `flight_kills_fk_idx` (`flight_fk`),
    KEY `killer_ucid_fk_idx` (`killer_ucid`),
    KEY `victim_ucid_fk_idx` (`victim_ucid`),
    KEY `time_ucid_killid_uniq` (`time`,`killer_ucid`,`kill_id`),
    CONSTRAINT `flight_kills_fk` FOREIGN KEY (`flight_fk`) REFERENCES `cb_flights` (`pk`)
) ENGINE=InnoDB AUTO_INCREMENT=52698582 DEFAULT CHARSET=utf8;
You can check whether autocommit is set to 1; this forces a commit after every row, and disabling it makes things somewhat faster.
Instead of committing every insert, try a bulk insert.
For that you should check
https://dev.mysql.com/doc/refman/8.0/en/optimizing-innodb-bulk-data-loading.html
and do something like
data = [
    ('city 1', 'MAC', 'district 1', 16822),
    ('city 2', 'PSE', 'district 2', 15642),
    ('city 3', 'ZWE', 'district 3', 11642),
    ('city 4', 'USA', 'district 4', 14612),
    ('city 5', 'USA', 'district 5', 17672),
]
sql = "insert into city(name, countrycode, district, population) values(%s, %s, %s, %s)"
number_of_rows = cursor.executemany(sql, data)
db.commit()
I want to put down here some of the steps I took to find a solution to this problem. I'm not an expert in MySQL, but I think these steps can help anyone trying to find out why they are getting lock wait timeouts.
The troubleshooting steps I took are as follows:
1- Check whether the MySQL slow log contains the query that is locking the table. It's usually possible to spot long-running queries, and locks, from info like the lines below, with the query itself right after:
# Time: 2020-01-28T17:31:48.634308Z
# User@Host:  @ localhost [::1]  Id: 980397
# Query_time: 250.474040  Lock_time: 0.000000  Rows_sent: 10  Rows_examined: 195738
2- The above should give some clue on what's going on in the server and what might be waiting for a long time. Next, I ran the following 3 queries to identify what is in use:
Check which processes are running -
show full processlist;
Check which tables are currently in use -
show open tables where in_use>0;
Check running transactions -
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
3- The above 2 steps should give enough information to tell which query is locking the tables. In my case, I had an SP that ran an insert into <different table> select from <my locked table>; although it was inserting into a totally different table, this query locked my table because of the long-running SELECT.
To work around it, I changed the SP to work with temporary tables, and now, although the query is still not completely optimized, there are no locks on my table.
Adding here how I run the SP on temporary tables for async aggregated updates.
CREATE DEFINER=`username`@`%` PROCEDURE `procedureName`()
BEGIN
    drop temporary table if exists scheme.temp1;
    drop temporary table if exists scheme.temp2;
    drop temporary table if exists scheme.temp3;
    create temporary table scheme.temp1 AS select * from scheme.live1;
    create temporary table scheme.temp2 AS select * from scheme.live2;
    create temporary table scheme.temp3 AS select * from scheme.live3;
    create temporary table scheme.emptytemp (
        `cName1` int(11) NOT NULL,
        `cName2` varchar(45) NOT NULL,
        `cName3` int(11) NOT NULL,
        `cName4` datetime NOT NULL,
        `cName5` datetime NOT NULL,
        KEY `cName1` (`cName1`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
    INSERT into scheme.emptytemp
    select t1.x, t2.y, t3.z
    from scheme.temp1 t1
    JOIN scheme.temp2 t2
        ON t1.x = t2.x
    JOIN scheme.temp3 t3
        ON t2.y = t3.y;
    truncate table scheme.liveTable;
    INSERT into scheme.liveTable
    select * from scheme.emptytemp;
END
Hope this helps anyone who encounters this issue.

No data inserted after successful SQLite statement within Python

Extension from previous question
I'm attempting to insert values into a database after pulling them from an XML file, but none of them appear in the database after the INSERT statements embedded in the Python code run. Without the SQL section included, the entries print as expected. I am not getting an error in my Python environment (Anaconda Navigator), so I'm totally lost as to how the queries were processed yet nothing was entered! I tried a basic SELECT statement to display the table, but got an empty table back.
Select Query
%sql SELECT * FROM publication;
Main Python code
import sqlite3

con = sqlite3.connect("publications.db")
cur = con.cursor()

from xml.dom import minidom

xmldoc = minidom.parse("test.xml")

# loop through <pub> tags to find number of pubs to grab
root = xmldoc.getElementsByTagName("root")[0]
pubs = [a.firstChild.data for a in root.getElementsByTagName("pub")]
num_pubs = len(pubs)

count = 0
while count < num_pubs:
    # get data from each <pub> tag
    temp_pub = root.getElementsByTagName("pub")[count]
    temp_ID = temp_pub.getElementsByTagName("ID")[0].firstChild.data
    temp_title = temp_pub.getElementsByTagName("title")[0].firstChild.data
    temp_year = temp_pub.getElementsByTagName("year")[0].firstChild.data
    temp_booktitle = temp_pub.getElementsByTagName("booktitle")[0].firstChild.data
    temp_pages = temp_pub.getElementsByTagName("pages")[0].firstChild.data
    temp_authors = temp_pub.getElementsByTagName("authors")[0]
    temp_author_array = [a.firstChild.data for a in temp_authors.getElementsByTagName("author")]
    num_authors = len(temp_author_array)
    count = count + 1

    # process results into sqlite
    pub_params = (temp_ID, temp_title)
    cur.execute("INSERT INTO publication (id, ptitle) VALUES (?, ?)", pub_params)
    journal_params = (temp_booktitle, temp_pages, temp_year)
    cur.execute("INSERT INTO journal (jtitle, pages, year) VALUES (?, ?, ?)", journal_params)
    x = 0
    while x < num_authors:
        cur.execute("INSERT OR IGNORE INTO authors (name) VALUES (?)", (temp_author_array[x],))
        x = x + 1

    # display results
    print("\nEntry processed: ", count)
    print("------------------\nPublication ID: ", temp_ID)
    print("Publication Title: ", temp_title)
    print("Year: ", temp_year)
    print("Journal title: ", temp_booktitle)
    print("Pages: ", temp_pages)
    i = 0
    print("Authors: ")
    while i < num_authors:
        print("-", temp_author_array[i])
        i = i + 1

print("\nNumber of entries processed: ", count)
SQL queries
%%sql
DROP TABLE IF EXISTS publication;
CREATE TABLE publication(
    id INT PRIMARY KEY NOT NULL,
    ptitle VARCHAR NOT NULL
);
/* Author Entity set and writes_for relationship */
DROP TABLE IF EXISTS authors;
CREATE TABLE authors(
    name VARCHAR(200) PRIMARY KEY NOT NULL,
    pub_id INT,
    pub_title VARCHAR(200),
    FOREIGN KEY(pub_id, pub_title) REFERENCES publication(id, ptitle)
);
/* Journal Entity set and apart_of relationship */
DROP TABLE IF EXISTS journal;
CREATE TABLE journal(
    jtitle VARCHAR(200) PRIMARY KEY NOT NULL,
    pages INT,
    year INT(4),
    pub_id INT,
    pub_title VARCHAR(200),
    FOREIGN KEY(pub_id, pub_title) REFERENCES publication(id, ptitle)
);
/* Wrote relationship b/w journal & authors */
DROP TABLE IF EXISTS wrote;
CREATE TABLE wrote(
    name VARCHAR(100) NOT NULL,
    jtitle VARCHAR(50) NOT NULL,
    PRIMARY KEY(name, jtitle),
    FOREIGN KEY(name) REFERENCES authors(name),
    FOREIGN KEY(jtitle) REFERENCES journal(jtitle)
);
You need to call con.commit() in order to commit the data to the database. If you use the connection as a context manager (with con:), the connection will commit any changes you make (or roll them back if there is an error).
Explicitly closing the connection is also a good practice.
It looks like you are forgetting to commit and close the connection. You need to call these two functions in order to properly close the connection and to save the work you have done to the database.
con.commit()
con.close()
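A sketch combining both answers for the code in the question (standard library only; the inserted values are placeholders):

import sqlite3
from contextlib import closing

with closing(sqlite3.connect("publications.db")) as con:
    with con:  # transaction scope: commits on success, rolls back on error
        cur = con.cursor()
        cur.execute("INSERT INTO publication (id, ptitle) VALUES (?, ?)",
                    (1, "Example title"))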
