No Error, but can't input data to MySQL - python

I have this code to insert data into a MySQL database. When I run it with Python there's no error, but when I check the database, the data isn't inserted. Is there anyone who can help me? I would appreciate it. :)
This is the code:
import MySQLdb
db=MySQLdb.connect(host="localhost", user="root", passwd="", db="try")
cursor=db.cursor()
insert="INSERT INTO `try`.`try` (`nomor`, `nama`) VALUES (NULL, 'bismillah')"
cursor.execute(insert)

You're not doing a COMMIT anywhere. So, if auto-commit is not on, all you've done is create a transaction that, if later committed, will insert this row.
Since you haven't done a SET AUTOCOMMIT anywhere, whether auto-commit is on depends on how you created the database. With at least some storage types (in particular, InnoDB), you can change the default at creation time, and, because you often want auto-commit disabled with those storage types, your GUI design tool, or the sample code you copied and pasted, or whatever may have done so for you. Also, the server variable that provides the default can itself be set to a different value at server startup/configuration. (See System Server Variables.)
If you want to make sure that auto-commit is on, just execute SET autocommit=1 before any other statements.
If you want to find out whether auto-commit is on, execute SHOW VARIABLES. (And if it's disabled, you may want to try SHOW GLOBAL VARIABLES LIKE 'autocommit' and SHOW SESSION VARIABLES like 'autocommit' to see which context you've disabled it in.)
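For example, a minimal fix for the code in the question (same connection parameters) is to commit explicitly, or to turn auto-commit on right after connecting:
import MySQLdb

db = MySQLdb.connect(host="localhost", user="root", passwd="", db="try")
# db.autocommit(True)  # alternative: turn auto-commit on for this connection
cursor = db.cursor()
insert = "INSERT INTO `try`.`try` (`nomor`, `nama`) VALUES (NULL, 'bismillah')"
cursor.execute(insert)
db.commit()  # without this (or auto-commit), the INSERT is never made permanent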

If an INSERT into MySQL silently does nothing, there are several things to check:
1: check the MySQL log
2: check the structure of your table; maybe `nomor` or another field is declared NOT NULL and is rejecting the row
3: finally, when inserting into MySQL from a program, you need to commit before you close the connection. Here, that's db.commit()

Preventing writable modifications to an Oracle database using Python

Currently using the cx_Oracle module in Python to connect to my Oracle database. I would like to allow the user of the program to run read-only statements, like SELECT, and NOT INSERT/DELETE queries.
Is there something I can do to the connection/cursor variables once I establish the connection to prevent writable queries?
I am using the Python Language.
Appreciate any help.
Thanks.
One possibility is to issue the statement "set transaction read only" as in the following code:
import cx_Oracle
conn = cx_Oracle.connect("cx_Oracle/welcome")
cursor = conn.cursor()
cursor.execute("set transaction read only")
cursor.execute("insert into c values (1, 'test')")
That will result in the following error:
ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
Of course you'll have to make sure that you create a Connection class that calls this statement when it is first created and after each and every commit() and rollback() call. And it can still be circumvented by calling a PL/SQL block that performs a commit or rollback.
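Here is a minimal sketch of that wrapper idea (the connect string is a placeholder, and, as noted, it can still be circumvented from PL/SQL):
import cx_Oracle

class ReadOnlyConnection(cx_Oracle.Connection):
    # Re-issues "set transaction read only" on connect and after every
    # commit()/rollback(), so each new transaction starts read-only.
    def __init__(self, *args, **kwargs):
        super(ReadOnlyConnection, self).__init__(*args, **kwargs)
        self._set_read_only()

    def _set_read_only(self):
        cursor = self.cursor()
        cursor.execute("set transaction read only")
        cursor.close()

    def commit(self):
        super(ReadOnlyConnection, self).commit()
        self._set_read_only()

    def rollback(self):
        super(ReadOnlyConnection, self).rollback()
        self._set_read_only()

conn = ReadOnlyConnection("cx_Oracle/welcome")  # placeholder credentials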
The only other possibility that I can think of right now is to create a restricted user or role which simply doesn't have the ability to insert, update, delete, etc. and make sure the application uses that user or role. This one at least is fool proof, but a lot more effort up front!

MySQL Unread Result with Python

I use mysql.connector to do SQL operations.
I have a short script which executes the following operations (strings) on the cursor with cursor.execute(...):
"use {}".format(db)
"show tables"
command = """
ALTER TABLE Object DROP PRIMARY KEY;
ALTER TABLE Object ADD `id` bigint(20) NOT NULL PRIMARY KEY AUTO_INCREMENT FIRST;
ALTER TABLE Object ADD INDEX (`uid`);"""
The script iterates over several databases db.
The problem is that at some point I get an "Unread result found" error. It seems that when I run the script, at some point "use mydb" returns a result (cursor._have_result=True) when I didn't expect one. The weird thing is that if I rerun the full script, it gets a little further (through more databases) before giving the same error again.
Can you suggest a way to solve or investigate this problem? Is there something I can do to prevent "unread results"?
PS: When I rerun the script the ALTER commands fails for the databases which are already done. Not sure if that causes problems.
Using MySQL Connector/Python, the "Unread result found" error might happen when you use the connection object in different places without reading the result. It's not something you can simply bypass; you can use the buffered option to have each result read immediately.
As mentioned in the comments, it's best to split the statements and execute them separately.
If you want to execute multiple statements, you'll need to use the multi=True option of the MySQLCursor.execute() method (available since Connector/Python v1.0.4). Actually, if you send multiple statements without the multi option, an InterfaceError will be raised. (I do suspect a bug here as well..)
Additional remarks:
Instead of executing the USE command to change databases, you can set the MySQLConnection.database property.
It's best to group the changes into one ALTER TABLE statement, like this:
ALTER TABLE t1 DROP PRIMARY KEY, ADD id INT NOT NULL AUTO_INCREMENT KEY FIRST, ADD INDEX(c1)
You have to pass buffered=True when creating your cursor. Read more in the official docs:
cursor = conn.cursor(buffered=True)
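Putting the pieces above together, a minimal sketch (connection parameters and the database name are placeholders):
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="")
cursor = conn.cursor(buffered=True)  # results are fetched immediately

conn.database = "mydb"  # placeholder; set this instead of executing "USE ..."

# one combined ALTER TABLE instead of three separate statements
cursor.execute(
    "ALTER TABLE Object "
    "DROP PRIMARY KEY, "
    "ADD `id` BIGINT(20) NOT NULL PRIMARY KEY AUTO_INCREMENT FIRST, "
    "ADD INDEX (`uid`)")
conn.commit()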

How to avoid caching in sqlalchemy?

I have a problem with SQLAlchemy: my app runs as a constantly working (long-lived) Python application.
I have function like this:
def myFunction(self, param1):
    s = select([statsModel.c.STA_ID, statsModel.c.STA_DATE])\
        .select_from(statsModel)
    statsResult = self.connection.execute(s).fetchall()
    return {'result': statsResult, 'calculation': param1}
I think this is a clear example: one result set is fetched from the database, the second is just passed through as an argument.
The problem is that when I change data in my database, this function still returns the old data as if nothing had changed. When I change the input parameter, the returned "calculation" value is correct.
When I restart the app server, the situation returns to normal: fresh data is fetched from MySQL.
I know that there were several questions about SQLAlchemy caching like:
How to disable caching correctly in Sqlalchemy orm session?
How to disable SQLAlchemy caching?
but what else could I call this situation? It seems SQLAlchemy keeps the data fetched earlier and does not perform new queries until the application restarts. How can I avoid such behavior?
Calling session.expire_all() will evict all database-loaded data from the session. Any subsequent access of object attributes emits a new SELECT statement and gets fresh data back. Please see http://docs.sqlalchemy.org/en/latest/orm/session_state_management.html#refreshing-expiring for background.
If you still see so-called "caching" after calling expire_all(), then you need to close out transactions as described in my answer linked above.
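A minimal sketch of that call, assuming an existing ORM session and a hypothetical mapped class Stats:
session.expire_all()               # evict everything the session has loaded
rows = session.query(Stats).all()  # this now emits a fresh SELECT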
A few possibilities.
You are reusing your session improperly or at improper time. Best practice is to throw away your session after you commit, and get a new one at the last possible moment before you use it. The behavior that appears to be caching may in fact be due to a session lifetime being very long in your application.
Objects that survive longer than the session are not being merged into a subsequent session. Their state cannot be refreshed if you do not merge them back in. This is more a concern for the ORM API of SQLAlchemy, which you do not appear to be using so far.
Your changes are not committed. You say they are so we'll assume this is not it, but if none of the other avenues explain it you may want to look again.
One general debugging tip: if you want to know exactly what SQLAlchemy is doing in the database, pass echo=True to the create_engine function. The engine will print all queries it runs.
Also check out this suggestion I made to someone else, who was using ORM and had transactionality problems, which resolved their issue without ever pinpointing it. Maybe it will help you.
You need to change transaction isolation level to READ_COMMITTED
http://docs.sqlalchemy.org/en/rel_0_9/dialects/mysql.html#mysql-isolation-level
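For example (the URL is a placeholder; READ COMMITTED makes each new transaction see rows committed by other connections):
from sqlalchemy import create_engine

engine = create_engine(
    "mysql://scott:tiger@localhost/test",
    isolation_level="READ COMMITTED")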

What is the proper way to do an INSERT query in Python MySQL?

I have a python script that connects to a local MySQL db. I know it is connecting correctly because I can do this and get the proper results:
cursor.execute("SELECT * FROM reel")
But when I try to do any insert statements it just does nothing. No error messages, no exceptions. Nothing shows up in the database when I check it from sqlyog. This is what my code looks like:
self.cursor.executemany("INSERT INTO reel (etime,etext) VALUES (%s,%s)", tups)
where tups is a list of tuples looking like ('0000-00-00 00:00:00', 'text'). No errors show up, and if I copy-paste the generated SQL query into sqlyog it works. I've also tried generating the query and calling cursor.execute() on it: no errors, and no result either. Anyone know what I'm doing wrong?
You need to call commit() on the connection (not the cursor) after self.cursor.executemany("INSERT INTO reel (etime,etext) VALUES (%s,%s)", tups).
Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you'll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database.
Conversely, you can also use connection.rollback() to throw away any changes you've made since the last commit.
Important note: Some SQL statements -- specifically DDL statements like CREATE TABLE -- are non-transactional, so they can't be rolled back, and they cause pending transactions to commit.
This is covered in the MySQLdb FAQ.
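A minimal sketch for the code in the question (connection parameters are placeholders):
import MySQLdb

db = MySQLdb.connect(host="localhost", user="root", passwd="", db="mydb")
cursor = db.cursor()
tups = [('0000-00-00 00:00:00', 'first text'),
        ('0000-00-00 00:00:01', 'second text')]
cursor.executemany("INSERT INTO reel (etime, etext) VALUES (%s, %s)", tups)
db.commit()  # commit on the connection, or the rows are never written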

Using SQLite in a Python program

I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to test my logic, but it's not working. With my try statement, it just creates a new database rather than checking whether one already exists. The next time I run the script, I get an error that the table already exists, even though I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn.)
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL, it's much simpler.
Do the following.
Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that.
CREATE TABLE REVISION(
    RELEASE_NUMBER CHAR(20)
);
In your application, connect to your database normally.
Execute a simple query against the revision table. Here's what can happen (see the sketch after this list).
The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
The query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.
The query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.
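A sketch of that check in Python (the file name is a placeholder, and build_schema and migrate are hypothetical helpers holding your CREATE and migration statements):
import sqlite3

EXPECTED_RELEASE = "1.0"  # hypothetical current release number

con = sqlite3.connect("app.db")  # placeholder file name
try:
    row = con.execute("SELECT RELEASE_NUMBER FROM REVISION").fetchone()
except sqlite3.OperationalError:
    build_schema(con)  # query failed: database doesn't exist, run the CREATEs
else:
    if row is None or row[0] < EXPECTED_RELEASE:
        migrate(con)   # database exists but is out of date
    # otherwise the release number matches: nothing more to do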
AFAIK a SQLite database is just a file.
To check if the database exists, check for file existence.
When you open a SQLite database, one is automatically created if the file that backs it is not in place.
If you try and open a file as a sqlite3 database that is NOT a database, you will get this:
"sqlite3.DatabaseError: file is encrypted or is not a database"
So check whether the file exists, and also make sure to try and catch the exception in case the file is not a sqlite3 database.
SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS to make the commands only take effect if the table has not been created. This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you.
The main thing I would still be worried about is that executing CREATE TABLE IF NOT EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) variable saying whether it has created the database today, so it runs the CREATE TABLE script only once per run. This would still allow you to delete the database and start over during debugging.
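For example (the table and file names are placeholders):
import sqlite3

con = sqlite3.connect("app.db")  # the file is created automatically if missing
con.execute("CREATE TABLE IF NOT EXISTS stocks "
            "(date TEXT, trans TEXT, symbol TEXT, qty REAL, price REAL)")
con.commit()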
As #diciu pointed out, the database file will be created by sqlite3.connect.
If you want to take a special action when the file is not there, you'll have to explicitly check for existence:
import os
import sqlite3

if not os.path.exists(mydb_path):
    # create new DB, create table stocks
    con = sqlite3.connect(mydb_path)
    con.execute('''create table stocks
                   (date text, trans text, symbol text, qty real, price real)''')
else:
    # use existing DB
    con = sqlite3.connect(mydb_path)
...
Sqlite doesn't throw an exception if you create a new database with the same name; it will just connect to it. Since sqlite is a file-based database, I suggest you just check for the existence of the file.
About your second problem, to check if a table has already been created, just catch the exception. An exception "sqlite3.OperationalError: table TEST already exists" is thrown if the table already exists.
import sqlite3
import os
database_name = "newdb.db"
if not os.path.isfile(database_name):
    print "the database doesn't exist yet"
db_connection = sqlite3.connect(database_name)
db_cursor = db_connection.cursor()
try:
    db_cursor.execute('CREATE TABLE TEST (a INTEGER);')
except sqlite3.OperationalError, msg:
    print msg
Writing SQL directly is painful in every language I've picked up. SQLAlchemy has proven to be the easiest of them to use, because querying and committing with it is clean and trouble-free.
Here are some basic steps for actually using sqlalchemy in your app; better details can be found in the documentation.
provide table definitions and create ORM-mappings
load database
ask it to create tables from the definitions (won't do so if they exist)
create session maker (optional)
create session
After creating a session, you can commit and query from the database.
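A minimal sketch of those steps (all names and the SQLite URL are placeholders; in recent SQLAlchemy versions declarative_base lives in sqlalchemy.orm instead):
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Stock(Base):  # table definition + ORM mapping
    __tablename__ = 'stocks'
    id = Column(Integer, primary_key=True)
    symbol = Column(String)

engine = create_engine('sqlite:///app.db')  # load the database
Base.metadata.create_all(engine)            # create tables (skips existing ones)
Session = sessionmaker(bind=engine)         # session maker
session = Session()                         # session

session.add(Stock(symbol='ABC'))            # after that: commit and query
session.commit()
count = session.query(Stock).count()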
See this solution at SourceForge, which covers your question in tutorial manner, with instructive source code:
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
Yes, I was overcomplicating the problem. All I needed to do was check for the file and catch the IOError if it didn't exist.
Thanks for all the other answers. They may come in handy in the future.
