In sqlalchemy, I make the connection:
conn = engine.connect()
I noticed that this sets autocommit = 0 in my mysqld log.
Now I want to set autocommit = 1 because I do not want to run my queries inside a transaction.
Is there a way to do this?
From the SQLAlchemy documentation: Understanding Autocommit
conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, 'john')") # autocommits
The “autocommit” feature is only in effect when no Transaction has otherwise been declared. This means the feature is not generally used with the ORM, as the Session object by default always maintains an ongoing Transaction.
Full control of the “autocommit” behavior is available using the generative Connection.execution_options() method provided on Connection, Engine, Executable, using the “autocommit” flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:
engine.execute(text("SELECT my_mutating_procedure()").execution_options(autocommit=True))
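As a concrete sketch of that connection-scoped option, assuming SQLAlchemy 1.x (the "autocommit" execution option was removed in 2.0) and a placeholder URL:
from sqlalchemy import create_engine, text

engine = create_engine("mysql+mysqldb://user:password@localhost/mydb")  # placeholder URL

# every statement executed on this connection autocommits
conn = engine.connect().execution_options(autocommit=True)
conn.execute(text("INSERT INTO users VALUES (1, 'john')"))  # commits immediately
conn.close()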
Which dialect are you using for the MySQL connection?
You can set autocommit to true in the connection URL to solve the problem, like this: mysql+mysqldb://user:password@host:port/db?charset=foo&autocommit=true
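For example, something like this; it is only a sketch, and it assumes the underlying driver accepts an autocommit connect argument passed through the URL query string (the credentials are placeholders):
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+mysqldb://user:password@localhost:3306/mydb?charset=utf8&autocommit=true"
)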
You can use this:
from sqlalchemy import create_engine
from sqlalchemy.sql import text

engine = create_engine("mysql+mysqldb://user:password@host/dbname")
engine.execute(text(sql).execution_options(autocommit=True))
If you're configuring SQLAlchemy for a Python web application using Flask (with the Flask-SQLAlchemy extension), you can create the engine like this:
from flask_sqlalchemy import SQLAlchemy

# Configure the SQLAlchemy part of the app instance
app.config['SQLALCHEMY_DATABASE_URI'] = conn_url

session_options = {
    'autocommit': True
}

# Create the SQLAlchemy db instance
db = SQLAlchemy(app, session_options=session_options)
I might be a little late here, but for folks using SQLAlchemy >= 2.0, the solutions above may not work, as they did not work for me.
So I went through the official documentation, and the solution below worked for me.
from sqlalchemy import create_engine
db_engine = create_engine(database_uri, isolation_level="AUTOCOMMIT")
The code above works if you want to set autocommit engine-wide.
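For illustration, a minimal engine-wide sketch (the URL, table, and column are placeholders):
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@localhost/mydb",
                       isolation_level="AUTOCOMMIT")

with engine.connect() as conn:
    # each statement is committed as it executes; no explicit commit needed
    conn.execute(text("UPDATE users SET active = true"))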
But if you want to use autocommit for a particular connection, you can use the following:
from sqlalchemy import text

with engine.connect().execution_options(isolation_level="AUTOCOMMIT") as connection:
    connection.execute(text("<statement>"))
Official Documentation
This can be done using the autocommit option of the execution_options() method, applied to the statement:
from sqlalchemy import text

engine.execute(text("UPDATE table SET field1 = 'test'").execution_options(autocommit=True))
This information is available within the documentation on Autocommit
I'm using Python (and Peewee) to connect to a SQLite database. My data access layer (DAL) is a mix of peewee ORM and SQL-based functions. I would like to enable EXPLAIN PLAN for all queries upon connecting to the database and toggle it via configuration or CLI parameter ... how can I do that using the Python API?
from playhouse.db_url import connect
self._logger.info("opening db connection to database, creating cursor and initializing orm model ...")
self.__db = connect(url)
# add support for a REGEXP and POW implementation
# TODO: this should be added only for the SQLite case and doesn't apply to other vendors.
self.__db.connection().create_function("REGEXP", 2, regexp)
self.__db.connection().create_function("POW", 2, pow)
self.__cursor = self.__db.cursor()
self.__cursor.arraysize = 100
# what shall I do here to enable EXPLAIN PLANs?
That is a feature of the sqlite interactive shell. To get the query plans, you will need to request them explicitly. This is not quite straightforward with Peewee because it uses parameterized queries. You can get the SQL executed by peewee in a couple of ways:
# Print all queries to stderr.
import logging
logger = logging.getLogger('peewee')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
Or for an individual query:
query = SomeModel.select()
sql, params = query.sql()
# To get the query plan:
curs = db.execute_sql('EXPLAIN ' + sql, params)
print(curs.fetchall()) # prints query plan
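If you want the plan for every query and the ability to toggle it from the command line, one option is to wrap the database's execute_sql method behind a flag. This is only a sketch, not a built-in peewee feature; the --explain flag name is made up, and db is assumed to be the peewee database instance from your question:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--explain", action="store_true")
args = parser.parse_args()

if args.explain:
    original_execute_sql = db.execute_sql

    def execute_sql_with_plan(sql, params=None, *extra, **kwargs):
        # print the SQLite plan first, then run the real statement
        plan = original_execute_sql('EXPLAIN QUERY PLAN ' + sql, params)
        print(plan.fetchall())
        return original_execute_sql(sql, params, *extra, **kwargs)

    db.execute_sql = execute_sql_with_plan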
I would like to use data from SQL Server in PyCharm using Python. I have my database connection set up in PyCharm, but I'm not sure how to access this data within my Python code. I would like to query the data within the Python code (similar to what I would do in R using the RODBC package).
Any suggestions on what to do or where to look would be much appreciated.
I have been wrestling with this myself (databases from Python) over the last few days. For me it's in Flask, but it doesn't really seem to matter.
This isn't exactly what you asked, but I did get the following to work, and it might give you a start:
import MySQLdb

def database():
    db = MySQLdb.connect(host="localhost", port=3306, user="root", passwd="admin", db="echo")
    cursor = db.cursor()
    cursor.execute("INSERT INTO `post` (`hello`) VALUES (null), ('hello_world')")
    db.commit()
    db.close()
I had to set my database up from the command line. It's not pretty or intuitive, but it should get you started.
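Since you asked about querying rather than inserting, here is the matching read path with the same driver (the table and column names are made up):
import MySQLdb

db = MySQLdb.connect(host="localhost", port=3306, user="root", passwd="admin", db="echo")
cursor = db.cursor()
cursor.execute("SELECT `id`, `hello` FROM `post`")
for row in cursor.fetchall():
    print(row)  # each row is a tuple of column values
db.close()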
If you want to work with Python objects rather than SQL, I'd use SQLAlchemy and reflection.
from sqlalchemy import MetaData, create_engine
from sqlalchemy.orm import Session
from sqlalchemy.ext.automap import automap_base
engine = create_engine('mysql+mysqldb://...', pool_recycle=3600)
metadata = MetaData()
metadata.reflect(bind=engine)
session = Session(engine)
Base = automap_base(metadata=metadata)
Base.prepare()
# assuming I have a table named 'users'
Users = Base.classes.users
someUsers = session.query(Users).filter(Users.name.in_(['Jack', 'Bob', 'YakMan'])).all()
import mysql.connector

# plain DB-API connection; substitute your own credentials and database name
connection = mysql.connector.connect(user='root', password='daniela', host='localhost', database='girrafe')
mycursor = connection.cursor()
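To actually read data with that cursor, a short usage sketch (the table name is hypothetical):
mycursor.execute("SELECT * FROM animals")
for row in mycursor.fetchall():
    print(row)  # each row is a tuple of column values
connection.close()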
There is a concept called ORM (Object Relational Mapping) in Python, which can be used for database connections. One of the modules you need to import for this purpose is SQLAlchemy.
First, you will need to install sqlalchemy by:
pip install sqlalchemy
Now, for the database connection, SQLAlchemy has an Engine class, which is responsible for database connectivity. We create an Engine object to establish a connection.
from sqlalchemy import create_engine, MetaData, select

engine = create_engine("mysql://user:pwd@localhost/dbname", echo=True)
connection = engine.connect()
The process of reading the database and creating metadata is called Reflection.
metadata = MetaData()
metadata.reflect(bind=engine)  # read the existing table definitions from the database

student = metadata.tables['student']  # assuming the database already has a table named 'student'
query = select([student])
result = connection.execute(query)
rows = result.fetchall()  # this works like a SELECT * query
In this way, you can manipulate data through other queries too, using sqlalchemy!
I'm new to Python and SQLAlchemy. I've been playing around with retrieving things from the database, and it has worked every time, but I'm a little unsure what to do when the select statement will return multiple rows. I tried using some older code that worked before I started SQLAlchemy, but db is a SQLAlchemy object and doesn't have an execute() method.
application = Applications.query.filter_by(brochureID=brochure.id)
cur = db.execute(application)
entries = cur.fetchall()
and then in my HTML file
{% for entry in entries %}
var getEmail = {{entry.2|tojson|safe}}
emailArray.push(getEmail);
I looked in the SQLAlchemy documentation and couldn't find an equivalent of .first() for getting all the rows. Can anyone point me in the right direction? No doubt it's something very small.
Your query is correct, you just need to change the way you interact with the result. The method you are looking for is all().
application = Applications.query.filter_by(brochureID=brochure.id)
entries = application.all()
The usual way to work with ORM queries is through the Session class; somewhere you should have:
engine = sqlalchemy.create_engine("sqlite:///...")
Session = sqlalchemy.orm.sessionmaker(bind=engine)
I'm not familiar with flask, but it likely does some of this work for you.
With a Session factory, your query instead becomes:
session = Session()
entries = session.query(Application) \
                 .filter_by(...) \
                 .all()
I am inserting a lot of rows and it seems that Postgres can't keep up. I've googled a bit, and it is suggested that you can turn off autocommit. I don't need to commit the values right away; it's data that I can fetch again if something goes wrong.
Now when I search for turning off autocommit I'm not finding what I'm looking for. I've tried supplying autocommit=False in the dbpool constructor:
dbpool = adbapi.ConnectionPool('psycopg2', user="xxx", password="xxx", database="postgres", host="localhost", autocommit=False)
2013-01-27 18:24:42,254 - collector.EventLogItemController - WARNING - [Failure instance: Traceback: : invalid connection option "autocommit"
psycopg2 does not claim to support an autocommit keyword argument to connect:
connect(dsn=None, database=None, user=None, password=None, host=None, port=None, connection_factory=None, async=False, **kwargs)
Create a new database connection.
The connection parameters can be specified either as a string:
conn = psycopg2.connect("dbname=test user=postgres password=secret")
or using a set of keyword arguments:
conn = psycopg2.connect(database="test", user="postgres", password="secret")
The basic connection parameters are:
- *dbname*: the database name (only in dsn string)
- *database*: the database name (only as keyword argument)
- *user*: user name used to authenticate
- *password*: password used to authenticate
- *host*: database host address (defaults to UNIX socket if not provided)
- *port*: connection port number (defaults to 5432 if not provided)
Using the *connection_factory* parameter a different class or connections
factory can be specified. It should be a callable object taking a dsn
argument.
Using *async*=True an asynchronous connection will be created.
Any other keyword parameter will be passed to the underlying client
library: the list of supported parameters depends on the library version.
The current postgresql documentation doesn't discuss any "autocommit" parameter either:
http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING
So perhaps the problem is that this is not the correct way to disable autocommit for a psycopg2 connection. Apart from that, you won't find that turning off autocommit actually helps you at all. adbapi.ConnectionPool will begin and commit explicit transactions for you, side-stepping any behavior autocommit mode might give you.
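To make that concrete, here is a minimal sketch of how the pool runs an interaction inside its own transaction (the events table is made up):
from twisted.enterprise import adbapi

dbpool = adbapi.ConnectionPool('psycopg2', user="xxx", password="xxx",
                               database="postgres", host="localhost")

def insert_rows(txn):
    # txn wraps a cursor; the pool commits when this function returns
    # (and rolls back if it raises), regardless of any autocommit setting
    txn.execute("INSERT INTO events (payload) VALUES (%s)", ("data",))

d = dbpool.runInteraction(insert_rows)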
The problem with adbapi is that:
1) it's missing features specific to some of the database backends
2) its asynchronous API is fake: under the hood it uses a thread pool to call the blocking methods.
For Postgres I'd suggest using the txpostgres library (source is here: https://github.com/wulczer/txpostgres). It uses the asynchronous API of psycopg2 and lets you specify the connection string.
You can find an example here: http://txpostgres.readthedocs.org/en/latest/usage.html#customising-the-connection-and-cursor-factories
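For reference, a minimal txpostgres sketch along the lines of the linked usage docs (the connection parameters are placeholders):
from twisted.internet import reactor
from txpostgres import txpostgres

conn = txpostgres.Connection()
d = conn.connect('dbname=postgres user=xxx password=xxx host=localhost')

# run a query once connected; runQuery returns a Deferred that fires with the rows
d.addCallback(lambda _: conn.runQuery('SELECT tablename FROM pg_tables'))

def show(rows):
    print(rows)
    reactor.stop()

d.addCallback(show)
reactor.run()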
Use the cp_openfun option of the twisted.enterprise.adbapi.ConnectionPool constructor.
That function is called with each newly opened connection as its parameter.
In the case of psycopg2, you can then set the autocommit property of that connection to True or False, as stated here.
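A minimal sketch of that approach (the credentials are placeholders):
from twisted.enterprise import adbapi

def set_autocommit(conn):
    # called by the pool for every newly opened connection;
    # psycopg2 connections expose an autocommit attribute
    conn.autocommit = True

dbpool = adbapi.ConnectionPool(
    'psycopg2',
    user="xxx", password="xxx", database="postgres", host="localhost",
    cp_openfun=set_autocommit,
)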
Consider the following snippet of Python code:
from sqlalchemy import *
from sqlalchemy.orm import *
db = create_engine('postgresql:///database', isolation_level='SERIALIZABLE')
Session = scoped_session(sessionmaker(bind=db, autocommit=False))
s = Session()
s.add(SomeInstance())
s.flush()
raw_input('Did it work? ')
It connects to the database, adds SomeInstance to the session, flushes, and then pauses. At this point, if I psql into my database, I would see that the instance has already been inserted -- even though autocommit is False and I haven't committed the session yet!
Any idea what I might be doing wrong?
Thanks!
Never mind: there was a bug in the psycopg2 dialect implementation in SQLAlchemy 0.6.3; upgrading to 0.6.4 solved the problem.