Connecting to localhost MySQL db with Python

Fair warning: I'm a big time noob. Please handle with kid gloves.
Details:
Python 3.2
MySQL 5.5
Tornado web framework installed
pymysql installed
Windows 7
Problem:
I'm following the Tornado documentation on connecting to a MySQL database here. I only want to connect to localhost, but I'm getting the following error message:
Traceback (most recent call last):
  File "C:\Python32\DIP3\tornado-test.py", line 5, in <module>
    class Connection(localhost,re_project, user=root, password=mypassword, max_idle_time=25200):
NameError: name 'localhost' is not defined
This is the code I'm trying to run:
import tornado.ioloop
import tornado.web
import pymysql

class Connection(localhost,re_project, user=root, password=mypassword, max_idle_time=25200):
    db = database.Connection("localhost", "re_project")
    for Bogota in db.query("SELECT * FROM cities_copy"):
        print(Bogota.title)
MySQL is currently running when I execute the code, so I don't think that should be a problem. What else could I be doing wrong?

This line:
class Connection(localhost,re_project, user=root, password=mypassword, max_idle_time=25200):
makes no sense at all. You can't define a class like that. Did you mean to use def instead of class?

Okay, I think I understand the problem. In the documentation, the line class tornado.database.Connection(host, database, user=None, password=None, max_idle_time=25200) is the documented signature, not code to be copy/pasted; it describes how to write the db = database.Connection(...) bit.
The green code sample lines should work on their own, as long as 1) the tornado.database module is imported and 2) the db = line is adjusted to pass values appropriate for your database to the Connection method.
So:
from tornado import database  # you can use "import tornado.database" here, but then
                              # you will have to use "tornado.database.Connection()"
                              # instead of "database.Connection()"

db = database.Connection("localhost", "re_project", user="root", password="mypassword")
for bogota in db.query("SELECT * FROM cities_copy"):  # lower-case "bogota": by convention only classes, not objects, have capitalized names in Python
    print(bogota.title)
I haven't tested this because I do not have Python 3.2 installed, so let me know if it doesn't work and I'll try to adjust.

You're not actually defining a constructor. Look at this as a template for what you need to do:
class Connection(object):
    def __init__(self, host, project, user, password, max_idle_time):
        self.db = database.Connection(
            host, project, user=user, password=password, max_idle_time=max_idle_time)

    def some_other_method(self):
        for bogota in self.db.query("SELECT * FROM cities_copy"):
            print(bogota.title)
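Used with the hypothetical credentials from earlier in the thread (and assuming the from tornado import database import is in scope), that template would be exercised like this:
conn = Connection("localhost", "re_project", "root", "mypassword", 25200)
conn.some_other_method()  # prints the title of every row in cities_copy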

Related

How do I put a ESXi host into maintenance mode using pyvmomi?

I was asked to write some Python code that would put a VMware ESXi host into maintenance mode. I was given the name of a virtual center, test-vc, the hostname of an ESXi host, test-esxi-host, and this link ...
https://github.com/vmware/pyvmomi/blob/master/docs/vim/HostSystem.rst
... which provides some documentation on the method I am supposed to use, EnterMaintenanceMode(timeout, evacuatePoweredOffVms, maintenanceSpec).
I am really at a complete loss as to what to do and could use some help. I have tried doing this from a Python console:
from pyVmomi import vim
vim.HostSystem.EnterMaintenanceMode(timeout=0)
Which results in this error trace:
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/apps/cpm/red/env/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 574, in __call__
    return self.f(*args, **kwargs)
TypeError: _InvokeMethod() takes at least 2 arguments (1 given)
Also, I am kind of confused about how the EnterMaintenanceMode routine would know that I want to target the host test-esxi-host in virtual center test-vc.
Update: I think I have figured it out. Here's what I think I need to do:
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim
import atexit

si = SmartConnectNoSSL(host=vc_host, user=user, pwd=pwd)
cont = si.RetrieveContent()
atexit.register(Disconnect, si)  # maybe. I am not really sure what this does
objview = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vim.HostSystem], True)
objview.view[0].EnterMaintenanceMode(0)
Of course the line
objview.view[0].EnterMaintenanceMode(0)
is sure to wreak havoc as I have no idea if that is the host, 'test-esxi-host', I want to put into maintenance mode. I guess I could do this
for h in objview.view:
    if h.name == 'test-esxi-host':
        h.EnterMaintenanceMode(0)
I hope there is a better way to do the above. Something like
get_host(objview.view, 'test-esxi-host').EnterMaintenanceMode(0)

Have a look at Getting started with VMware's ESXi/vSphere API in Python.
To get a VM object or a list of objects you can use the searchIndex class. The class has methods to search for VMs by UUID, DNS name, IP address or datastore path.
Helpfully, there are a couple of ways to look up objects in vCenter:
FindByUuid (VM|Host)
FindByDatastorePath (VM)
FindByDnsName (VM|Host)
FindByIp (VM|Host)
FindByInventoryPath (managed entity: VM|Host|Resource Pools|..)
FindChild (managed entity)
Many of these also have FindAll... variants which allow a much broader lookup.
For this particular case, you could use FindByDnsName to look for your host.
searcher = si.content.searchIndex
host = searcher.FindByDnsName(dnsName='test-esxi-host', vmSearch=False)
host.EnterMaintenanceMode(0)
This code requires you to authenticate to vCenter (SmartConnectNoSSL) with a user having Host.Config.Maintenance privileges.
Finally, you can take the host out of maintenance mode with host.ExitMaintenanceMode(0).
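Putting the pieces together, a minimal end-to-end sketch (host name and credentials are the question's placeholders; a timeout of 0 means no time limit on the operation):
import atexit
from pyVim.connect import SmartConnectNoSSL, Disconnect

si = SmartConnectNoSSL(host='test-vc', user='admin', pwd='secret')  # placeholder credentials
atexit.register(Disconnect, si)  # log out of the session when the interpreter exits

searcher = si.content.searchIndex
host = searcher.FindByDnsName(dnsName='test-esxi-host', vmSearch=False)
if host is None:
    raise SystemExit("test-esxi-host not found in this vCenter")

host.EnterMaintenanceMode(0)  # put the host into maintenance mode
# ...and later, to bring it back:
host.ExitMaintenanceMode(0)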

Python Package and Methods not Importing

I built a simple class with a couple of methods to make my life a little easier when loading data into Postgres with Python. I also attempted to package it so I could pip install it (just to experiment; I've never done that before).
import io

import pandas as pd  # used by query() below
import psycopg2
from sqlalchemy import create_engine

class py_psql:
    engine = None

    def engine(self, username, password, hostname, port, database):
        connection = 'postgresql+psycopg2://{}:{}@{}:{}/{}'.format(
            username.lower(), password, hostname, port, database)
        self.engine = create_engine(connection)

    def query(self, query):
        pg_eng = self.engine
        return pd.read_sql_query(query, pg_eng)

    def write(self, write_name, df, if_exists='replace', index=False):
        mem_size = df.memory_usage().sum() / 1024**2
        pg_eng = self.engine

        def write_data():
            df.head(0).to_sql(write_name, pg_eng, if_exists=if_exists, index=index)
            conn = pg_eng.raw_connection()
            cur = conn.cursor()
            output = io.StringIO()
            df.to_csv(output, sep='\t', header=False, index=False)
            output.seek(0)
            contents = output.getvalue()
            cur.copy_from(output, write_name, null="")
            conn.commit()

        if mem_size > 100:
            validate_size = input('DataFrame is {}mb, proceed anyway? (y/n): '.format(mem_size))
            if validate_size == 'y':
                write_data()
            else:
                print("Canceling write to database")
        else:
            write_data()
My package directory looks like this:
py_psql/
    py_psql.py
    __init__.py
    setup.py
My __init__.py is empty since I read elsewhere that I was able to do that. I'm not remotely an expert here...
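(For reference, a minimal setup.py for a layout like this is only a few lines; the name and version below are assumptions, and note that setup.py conventionally sits next to the package directory rather than inside it:)
from setuptools import setup

setup(
    name='py_psql',                   # assumed distribution name
    version='0.1',                    # assumed version
    packages=['py_psql'],             # the directory holding __init__.py and py_psql.py
    install_requires=['pandas', 'psycopg2', 'sqlalchemy'],
)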
I was able to pip install that package and import it, and if I were to paste this class into a Python shell, I would be able to do something like
test = py_psql()
test.engine(ntid, pw, hostname, port, database)
and have it create the SQLAlchemy engine. However, when I import it after the pip install, I can't even initialize a py_psql object:
>>> test = py_psql()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable
>>> py_psql.engine(ntid, pw, hostname, port, database)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'py_psql' has no attribute 'engine'
I'm sure I'm messing up something obvious here, but I found the process of packaging fairly confusing while researching this. What am I doing incorrectly?

Are you sure you imported your package correctly after pip install?
For example:
from py_psql.py_psql import py_psql
test = py_psql()
test.engine(ntid, pw, hostname, port, database)
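The tracebacks fit that explanation: a bare import py_psql binds the package (a module object) rather than the class, so calling it fails. A quick hypothetical shell session shows the difference:
>>> import py_psql
>>> type(py_psql)  # the package, i.e. a module object
<class 'module'>
>>> from py_psql.py_psql import py_psql
>>> type(py_psql)  # now the class itself
<class 'type'>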

Flask SqlAlchemy/Alembic migration sends invalid charset to PyMysql

I've spent 3+ hours on this on 18 of the last 21 days. Please, someone, tell me what I'm misunderstanding!
TL;DR: My code keeps handing PyMySQL the db charset as a string, and PyMySQL's lookup of that string returns None where it expects an object with an attribute called "encoding"
Background
This is Python code running in a Docker container. A second container houses the database. The database address is stored in a .env variable called ENGINE_URL:
ENGINE_URL=mysql+pymysql://root:@database/starfinder_development?charset=utf8
I'm firing off Alembic and Flask-Alembic commands using click commands in the CLI. All of the methods below are used in CLI commands.
Models / Database Setup (works)
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

flask_app = Flask(__name__)
db_engine = SQLAlchemy(flask_app)

from my_application import models

def create_database():
    db_engine.create_all()
At this point I can open up the database container and use the MySQL CLI to see that all of my models have now been converted into tables with columns and relationships.
Attempt 1: Alembic
Create Revision Files with Alembic (works)
import os

from alembic import command as alembic_command
from alembic.config import Config

def main():  # fires prior to any CLI command
    filepath = os.path.join(os.path.dirname(__file__), "alembic.ini")
    alembic_config = Config(file_=filepath)
    alembic_config.set_main_option("sqlalchemy.url", ENGINE_URL)
    alembic_config.set_main_option("script_location", SCRIPT_LOCATION)
    migrate_cli(obj={"alembic_config": alembic_config})

def revision(ctx, message):
    alembic_config = ctx.obj["alembic_config"]
    alembic_command.revision(alembic_config, message)
At this point I have a migration file that was created exactly as expected. Then I need to upgrade the database using that migration...
Running Migrations with Alembic (fails)
def upgrade(ctx, migration_revision):
    alembic_config = ctx.obj["alembic_config"]
    migration_revision = migration_revision.lower()
    _dispatch_alembic_cmd(alembic_config, "upgrade", revision=migration_revision)
Firing this off with cli_command upgrade head causes a failure, which I've included at the bottom because its stack trace is identical to my second attempt's.
Attempt 2: Flask-Alembic
This attempt finds me completely rewriting my main and revision commands, but it doesn't get as far as using upgrade.
Create Revision Files with Flask-Alembic (fails)
from flask_alembic import Alembic

def main():  # fires prior to any CLI command
    alembic_config = Alembic()
    alembic_config.init_app(flask_app)
    migrate_cli(obj={"alembic_config": alembic_config})

def revision(ctx, message):
    with flask_app.app_context():
        alembic_config = ctx.obj["alembic_config"]
        print(alembic_config.revision(message))
This results in an error that is identical to the error from my previous attempt.
The stack trace in both cases:
(Identical failure using alembic upgrade & flask-alembic revision)
File "/Users/MyUser/.pyenv/versions/3.6.2/envs/sf/lib/python3.6/site-packages/pymysql/connections.py", line 678, in __init__
self.encoding = charset_by_name(self.charset).encoding
AttributeError: 'NoneType' object has no attribute 'encoding'
In response, I went into the above file & added a print on L677, immediately prior to the error:
print(self.charset)
utf8
Note: If I modify my ENGINE_URL to use a different ?charset=xxx, that change is reflected here.
So now I'm stumped
PyMySQL passes self.charset (a plain string) to charset_by_name() and expects to get back a charset object with an encoding attribute, but the lookup is returning None. How can I change this to behave as expected?
Help?
A valid answer would be an alternative process, though the "most correct" answer would be to help me resolve the charset/encoding problem.
My primary goal here is simply to get migrations working on my flask app.
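One diagnostic worth running, based on the traceback above: the failing line passes self.charset to charset_by_name(), which looks the string up in PyMySQL's charset table and returns None for anything it doesn't recognize (for example, a value with stray whitespace or quoting picked up from the URL). The lookup can be reproduced directly; a sketch:
from pymysql.charset import charset_by_name

print(charset_by_name('utf8'))   # a Charset object on a healthy install
print(charset_by_name('utf8 '))  # None: any stray character breaks the lookup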

Gearman + SQLAlchemy - keep losing MySQL thread

I have a Python script that sets up several Gearman workers. They call into some methods on SQLAlchemy models I have that are also used by a Pylons app.
Everything works fine for an hour or two, then the MySQL thread gets lost and all queries fail. I cannot figure out why the thread is getting lost (I get the same results on 3 different servers) when I am defining such a low value for pool_recycle. Also, why wouldn't a new connection be created?
Any ideas of things to investigate?
import gearman
import json
import ConfigParser
import sys
from sqlalchemy import create_engine

class JSONDataEncoder(gearman.DataEncoder):
    @classmethod
    def encode(cls, encodable_object):
        return json.dumps(encodable_object)

    @classmethod
    def decode(cls, decodable_string):
        return json.loads(decodable_string)

# get the ini path and load the gearman server ips:ports
try:
    ini_file = sys.argv[1]
    lib_path = sys.argv[2]
except Exception:
    raise Exception("ini file path or anypy lib path not set")

# get the config
config = ConfigParser.ConfigParser()
config.read(ini_file)
sqlachemy_url = config.get('app:main', 'sqlalchemy.url')
gearman_servers = config.get('app:main', 'gearman.mysql_servers').split(",")

# add anypy include path
sys.path.append(lib_path)
from mypylonsapp.model.user import User, init_model
from mypylonsapp.model.gearman import task_rates

# sqlalchemy setup, recycle connection every hour
engine = create_engine(sqlachemy_url, pool_recycle=3600)
init_model(engine)

# Gearman Worker Setup
gm_worker = gearman.GearmanWorker(gearman_servers)
gm_worker.data_encoder = JSONDataEncoder()

# register the workers
gm_worker.register_task('login', User.login_gearman_worker)
gm_worker.register_task('rates', task_rates)

# work
gm_worker.work()

I've seen this across the board for Ruby, PHP, and Python regardless of the DB library used. I couldn't find how to fix this the "right" way, which is to use mysql_ping, but there is a SQLAlchemy solution, as explained better here: http://groups.google.com/group/sqlalchemy/browse_thread/thread/9412808e695168ea/c31f5c967c135be0
As someone in that thread points out, setting the recycle option to equal True is equivalent to setting it to 1. A better solution might be to find your MySQL connection timeout value and set the recycle threshold to 80% of it.
You can get that value from a live server by looking up this variable: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_connect_timeout
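As a sketch of that advice against the pre-2.0 SQLAlchemy API used in the question (note the assumption here that the relevant idle cutoff is the wait_timeout server variable, the one that closes quiet connections):
from sqlalchemy import create_engine

# Probe the server for its idle-connection cutoff, then recycle pooled
# connections at 80% of it to stay safely ahead of the server.
probe = create_engine(sqlachemy_url)
wait_timeout = int(probe.execute("SHOW VARIABLES LIKE 'wait_timeout'").fetchone()[1])
probe.dispose()

engine = create_engine(sqlachemy_url, pool_recycle=int(wait_timeout * 0.8))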
Edit:
Took me a bit to find the authoritative documentation on using pool_recycle:
http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html?highlight=pool_recycle

PermanentTaskFailure: 'module' object has no attribute 'Migrate'

I'm using Nick Johnson's Bulk Update library on Google App Engine (http://blog.notdot.net/2010/03/Announcing-a-robust-datastore-bulk-update-utility-for-App-Engine). It works wonderfully for other tasks, but for some reason, with the following code:
from google.appengine.ext import db
from myapp.main.models import Story, Comment
import bulkupdate

class Migrate(bulkupdate.BulkUpdater):
    DELETE_COMPLETED_JOBS_DELAY = 0
    DELETE_FAILED_JOBS = False
    PUT_BATCH_SIZE = 1
    DELETE_BATCH_SIZE = 1
    MAX_EXECUTION_TIME = 10

    def get_query(self):
        return Story.all().filter("hidden", False).filter("visible", True)

    def handle_entity(self, entity):
        comments = entity.comment_set
        for comment in comments:
            s = Story()
            s.parent_story = comment.story
            s.user = comment.user
            s.text = comment.text
            s.submitted = comment.submitted
            self.put(s)

job = Migrate()
job.start()
I get the following error in my logs:
Permanent failure attempting to execute task
Traceback (most recent call last):
  File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 258, in post
    run(self.request.body)
  File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 122, in run
    raise PermanentTaskFailure(e)
PermanentTaskFailure: 'module' object has no attribute 'Migrate'
It seems quite bizarre to me. Clearly the class is right above the job, they're in the same file, and job.start() is clearly being called. Why can't it see my Migrate class?
EDIT: I added this update job in a newer version of the code, which isn't the default. I invoke the job with the correct URL (http://version.myapp.appspot.com/migrate). Is it possible this is related to the fact that it isn't the 'default' version served by App Engine?

It seems likely that your declaration of the 'Migrate' class is in the handler script (e.g., the one directly invoked by app.yaml). A limitation of deferred is that you can't use it to call functions defined in the handler module.
Incidentally, my bulk update library is deprecated in favor of App Engine's mapreduce support; you should probably use that instead.
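A minimal sketch of that workaround, with hypothetical file names: keep the job class in its own module so deferred can re-import it by a real dotted path, and have the handler script only import and start it.
# migrate_job.py -- a module that app.yaml does NOT invoke directly
import bulkupdate
from myapp.main.models import Story

class Migrate(bulkupdate.BulkUpdater):
    def get_query(self):
        return Story.all().filter("hidden", False).filter("visible", True)

    def handle_entity(self, entity):
        # ...copy each comment onto a new Story, as in the question...
        pass

# handler.py -- the script app.yaml points at
from migrate_job import Migrate

Migrate().start()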
