I've got a third-party Python script that apparently has to connect to a MySQL database by means of the SQLObject package.
Even though I've provided a correct DSN, the script throws:
sqlobject.dberrors.OperationalError: Unknown database 'dbname?charset=utf8'
I've traced the problem to this piece of code:
ar['charset'] = 'utf8'
conn = connectionForURI(uri, **ar)
which calls this function.
It connects fine when the ar['charset'] = 'utf8' line is commented out, so that no query string is appended to the URI.
I have this issue on Windows, with:
MySQL 5.5.25
Python 2.7.2
MySQL-python 1.2.5
SQLObject 3.0.0a1dev-20150327
What exactly is going on there, and how is it supposed to be fixed? Does the problem lie in the dependencies or in the script itself?
I have done some research and found out that recent versions of SQLObject use the following code to extract connection parameters from the URI. Unfortunately, the urlparse function works in such a way that the path, which is used as the DB name, keeps the query string attached, so 'dbname?charset=utf8' is treated as the database name.
As a workaround for this issue, I suggest passing the DB encoding parameter explicitly to the connection object, as follows:
conn = connectionForURI(uri)
conn.dbEncoding = 'utf-8'
It should help, but it's also worth making a pull request to fix the DB name extraction from the URI.
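For reference, a minimal sketch of this workaround in context; the URI and credentials below are placeholders, not values from the original script:

from sqlobject import connectionForURI, sqlhub

uri = 'mysql://user:password@localhost/dbname'  # no ?charset=utf8 query string
conn = connectionForURI(uri)  # parses cleanly without a query string
conn.dbEncoding = 'utf-8'     # set the encoding explicitly instead
sqlhub.processConnection = conn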
UPD: Older versions like 2.x use different code to parse the connection URL, which works well.
Comma, not question mark: pass charset as a separate keyword argument, not as part of the URI:
db = MySQLdb.connect(host=DB_HOST, user=DB_USER, passwd=DB_PASS,
db=DB_NAME, charset="utf8", use_unicode=True)
If you can't get past the third-party software, abandon it.
I was trying to create a password manager for myself using Python and MariaDB. After creating a table named pw, which contains three columns (Name, Account, and Passwords), I tried to create a function, Search_Passwords(app_name), which I can use to enter a keyword to search in the database, and it will give me the right passwords. However, I ran into this error message:
Commands out of sync; you can't run this command now
I'm new to Python and MariaDB (using it because for some reason MySQL doesn't work for me). I tried to look up some answers but still can't figure it out. Can anyone help, please? Below is other code I think might be related.
(Screenshots omitted: the pw table in MariaDB, the Search_Passwords() function, the UseDataBase class, and the reference version of Search_Passwords() I found online.)
Sorry if my code is not perfect... :(
MariaDB Connector/Python uses unbuffered result sets by default, which means that before executing another cursor, all pending result sets need to be fetched or the cursor needs to be closed.
For example, the following script
import mariadb

conn = mariadb.connect()
cursor1 = conn.cursor()
cursor1.execute("select 1 from dual")
cursor2 = conn.cursor()
cursor2.execute("select 2 from dual")
will throw the exception mariadb.InterfaceError: Commands out of sync; you can't run this command now.
To avoid this, you need to create a buffered cursor:
cursor1 = conn.cursor(buffered=True)
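Applied to the question, here is a minimal sketch of how Search_Passwords() might look with a buffered cursor; the connection parameters and the LIKE-based matching are assumptions, since the original code is only shown in screenshots:

import mariadb

def Search_Passwords(app_name):
    # Placeholder credentials; adjust to your setup.
    conn = mariadb.connect(user="root", password="secret", database="mydb")
    # buffered=True fetches the whole result set up front, so no pending
    # results are left on the connection when another cursor executes.
    cursor = conn.cursor(buffered=True)
    cursor.execute("SELECT Passwords FROM pw WHERE Name LIKE ?", (f"%{app_name}%",))
    rows = cursor.fetchall()
    cursor.close()
    conn.close()
    return rows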
I'm fetching all of the data from a MongoDB collection, and after a while (like 30 or 60 minutes) the script raises the following error:
pymongo.errors.CursorNotFound: cursor id 1801580172063793986 not found, full error: {'ok': 0.0, 'errmsg': 'cursor id 1801580172063793986 not found', 'code': 43, 'codeName': 'CursorNotFound'}
This error occurs after about 24k documents. I'm using Django and PyMongo connected to the database on the local server. The collection has about 60k documents.
This is how I'm getting the data:
from pymongo import MongoClient
from django.conf import settings

client = MongoClient(settings.MONGO_HOST, settings.MONGO_PORT)
collection = client[settings.MONGO_DB].collection
cursor = collection.find(no_cursor_timeout=True)
for document in cursor:
    ...  # getting the data from each document
Just in case, I'm using:
Python 3.8
Django 3.1.4
Pymongo 3.11.0
Mongod 4.4.2 (for the local server)
Ubuntu 20.04
This is not a solution to the error itself, but it is a way to avoid it.
In order to keep the cursor open for the minimum time possible, exhaust it right away and save all the data in a list or something like that.
client = MongoClient(settings.MONGO_HOST, settings.MONGO_PORT)
collection = client[settings.MONGO_DB].collection
cursor = collection.find(no_cursor_timeout=True)
collection_data = [document for document in cursor]
cursor.close()  # close promptly; no_cursor_timeout cursors are not reaped automatically
for document in collection_data:
    ...  # using the data
Try setting the cursor timeout globally via the cursorTimeoutMillis server parameter.
In your terminal, type:
$ mongod --setParameter cursorTimeoutMillis=600000
See https://jira.mongodb.org/browse/SERVER-36808 and the related linked tickets.
The server can destroy sessions that are being used by open cursors, which has the effect of rendering those cursors unusable.
https://jira.mongodb.org/browse/PYTHON-1626 might be helpful, though it seems to me the issue is on the server side and is as of now unresolved.
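A further mitigation, sketched under my own assumptions (placeholder host, port, and collection; not from the linked tickets): fetch in smaller batches so each getMore round-trip happens well within the timeout, and close the cursor explicitly when done.

from pymongo import MongoClient

client = MongoClient("localhost", 27017)
collection = client["mydb"].collection

# Smaller batches mean the server-side cursor is touched more frequently,
# keeping it alive while each batch is being processed.
cursor = collection.find(no_cursor_timeout=True).batch_size(500)
try:
    for document in cursor:
        ...  # process each document
finally:
    cursor.close()  # no_cursor_timeout cursors must be closed explicitly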
I have a SQL Server database hosted on Azure. I have put a string with smart quotes ('“test”') in the database. I can connect to it and run a simple query:
import pymssql
import json

conn = pymssql.connect(
    server='coconut.database.windows.net',
    user='kingfish@coconut',
    password='********',
    database='coconut',
    charset='UTF-8',
)
sql = """
SELECT * FROM messages WHERE id = '548a72cc-f584-7e21-2725-fe4dd594982f'
"""
cursor = conn.cursor()
cursor.execute(sql)
row = cursor.fetchone()
json.dumps(row[3])
When I run this query on my Mac (macOS 10.11.6, Python 3.4.4, pymssql 2.1.3) I get back the string:
"\u201ctest\u201d"
This is correctly interpreted as smart quotes and displays properly.
When I run this query on an Azure web deployment (Python 3.4, Azure App service) I get back a different (and incorrect) encoding for that same string:
"\u0093test\u0094"
I specified the charset as 'UTF-8' on the pymssql connection. Why does the Windows/Azure environment get back a different charset?
(note: I have put the pre-built binary pymssql-2.1.3-cp34-none-win32.whl in the wheelhouse of my project repo on Azure. This is the same as the pymssql pre-built binary pymssql-2.1.3-cp34-cp34m-win32.whl on PyPI only I had to rename the 'cp34m' to 'none' to convince pip to install it.)
According to your description, it seems that the issue is caused by the default charset encoding of the SQL Database on Azure. To verify my thought, I did some testing in Python 3, shown below.
The default charset encoding of SQL Database on Azure is Windows-1252 (CP-1252).
SQL Server Collation Support
The default database collation used by Microsoft Azure SQL Database is SQL_LATIN1_GENERAL_CP1_CI_AS, where LATIN1_GENERAL is English (United States), CP1 is code page 1252, CI is case-insensitive, and AS is accent-sensitive. It is not possible to alter the collation for V12 databases. For more information about how to set the collation, see COLLATE (Transact-SQL).
>>> u"\u201c".encode('cp1252')
b'\x93'
>>> u"\u201d".encode('cp1252')
b'\x94'
As the code above shows, \u0093 and \u0094 can be obtained by CP-1252-encoding \u201c and \u201d.
And:
>>> u"\u0093".encode('utf-8')
b'\xc2\x93'
>>> u"\u0093".encode('utf-8').decode('cp1252')[1]
'“' # It's `\u201c`
>>> u"\u201c" == u"\u0093".encode('utf-8').decode('cp1252')[1]
True
So I think the charset encoding your current SQL Database uses for data storage is Latin-1, not UTF-8. When you created the SQL Database, as shown on the Azure portal (figure omitted), the default Collation property was SQL_Latin1_General_CP1_CI_AS. Please try a collation that supports UTF-8 instead of the default one.
I ended up recasting the column type from VARCHAR to NVARCHAR. This solved my problem: characters are correctly interpreted regardless of platform.
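For reference, a hedged sketch of that recast; the messages table is from the question, but the column name message_text is an assumption (the question only accesses row[3]):

import pymssql

conn = pymssql.connect(
    server='coconut.database.windows.net',
    user='kingfish@coconut',
    password='********',
    database='coconut',
)
cur = conn.cursor()
# NVARCHAR stores Unicode, so the smart quotes survive regardless of the
# database's CP-1252 default collation.
cur.execute("ALTER TABLE messages ALTER COLUMN message_text NVARCHAR(MAX)")
conn.commit()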
I keep running into an odd error when attempting to connect Python SQLAlchemy to an MSSQL server/database. I need to use SQLAlchemy as it is (from what I've been told) the only way to connect pandas DataFrames to MSSQL.
I have tried connecting SQLAlchemy in two different ways:
using a full connection string:
import sqlalchemy as sa
import urllib.parse as ulp
usrCnnStr = r'DRIVER={SQL Server};SERVER=myVoid\MYINSTANCE;Trusted_Connection=yes;'
usrCnnStr = ulp.quote_plus(usrCnnStr)
usrCnnStr = "mssql+pyodbc:///?odbc_connect=%s" % usrCnnStr
engine = sa.create_engine(usrCnnStr)
connection = engine.connect()
connection.execute("select getdate() as dt from mydb.dbo.dk_rcdtag")
connection.close()
using a DSN:
import sqlalchemy as sa
import urllib.parse as ulp
usrDsn = 'myDb'
params = ulp.quote_plus(usrDsn)
engine = sa.create_engine("mssql+pyodbc://cryo:pass#myDb")
conn = engine.connect()
conn.execute('select getdate() as dt')
conn.close()
Both methods return the same error:
sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('ODBC data type -150 is not supported. Cannot read column .', 'HY000') [SQL: "SELECT SERVERPROPERTY('ProductVersion')"]
I am not sure how to get around this error; when I execute SELECT SERVERPROPERTY('ProductVersion') in MSSQL directly, it works fine but comes back with a data type of "sql_variant".
Is there any way to get around this?
This is most certainly a bug introduced in Issue 3814, new in SQLAlchemy 1.1.0, where they introduced SELECT SERVERPROPERTY('ProductVersion') to fetch the server version for the pyodbc MSSQL driver. Downgrading to 1.0.15 will make the code work again, but hopefully the SQLAlchemy devs will make the new version-lookup scheme work better in a new patch release.
(There is an issue already reported in the SQLAlchemy issue tracker; I would add this comment there, but Bitbucket can't log me in.)
I upgraded to SQLAlchemy 1.1 today and ran into a similar issue with connections that were working before. Bumped back to 1.0.15 (pip install SQLAlchemy==1.0.15) and had no problems. Not the best answer, more of a workaround, but it may work if you are on 1.1 and need to get rolling.
If you are unsure of your version:
>>> import sqlalchemy
>>> sqlalchemy.__version__
IIRC, this is because you can't select non-cast functions directly, since they don't return a data type pyodbc recognizes.
Try this:
SELECT CAST(GETDATE() AS DATETIME) AS dt
Also, you may want to use CURRENT_TIMESTAMP, which is ANSI-standard SQL, instead of GETDATE(): Retrieving date in sql server, CURRENT_TIMESTAMP vs GetDate()
I'm not sure where your product-version SELECT is coming from, but hopefully this gets you on the right path. I'll amend the answer if we figure out more.
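For example, applying the CAST workaround with the connection code from the question, assuming a SQLAlchemy version on which the connection succeeds (e.g. 1.0.15, per the answers above):

import sqlalchemy as sa
import urllib.parse as ulp

usrCnnStr = r'DRIVER={SQL Server};SERVER=myVoid\MYINSTANCE;Trusted_Connection=yes;'
usrCnnStr = ulp.quote_plus(usrCnnStr)
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s" % usrCnnStr)
connection = engine.connect()
# CAST gives pyodbc a concrete DATETIME instead of sql_variant.
row = connection.execute("SELECT CAST(CURRENT_TIMESTAMP AS DATETIME) AS dt").fetchone()
print(row.dt)
connection.close()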
There is one row in a MySQL table, as follows:
1000, Intel® Rapid Storage Technology
The table was created with charset='utf8'.
When I used Python code to read it, it became the following:
Intel® Rapid Storage Technology
My Python code is as follows:
db = MySQLdb.connect(db, user, passwd, dbName, port, charset='utf8')
The weird thing is that when I removed the charset='utf8', as follows:
db = MySQLdb.connect(db, user, passwd, dbName, port)
the result became correct.
Why did I get the wrong result when I specified charset='utf8' in my code?
Have you tried leaving off the charset in the connect call and then setting it afterwards?
db = MySQLdb.connect(db, user, passwd, dbName, port)
db.set_character_set('utf8')
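A slightly fuller sketch of that approach; the SET NAMES statement is a common companion step, and the connection parameters, table, and column names are placeholders:

import MySQLdb

db = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
db.set_character_set('utf8')
cur = db.cursor()
cur.execute('SET NAMES utf8;')  # make the session character set explicit server-side
cur.execute("SELECT name FROM products WHERE id = 1000")
print(cur.fetchone()[0])  # expect: Intel® Rapid Storage Technology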
When trying to use utf8/utf8mb4, if you see Mojibake, check the following (a sketch tying these items together appears after the list).
This discussion also applies to Double Encoding, which is not necessarily visible.
The bytes to be stored need to be utf8-encoded.
The connection when INSERTing and SELECTing text needs to specify utf8 or utf8mb4.
The column needs to be declared CHARACTER SET utf8 (or utf8mb4).
HTML should start with <meta charset=UTF-8>.
See also Python notes
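Tying the checklist together, a minimal end-to-end sketch assuming MySQLdb/mysqlclient and placeholder names throughout:

import MySQLdb

# Item 2: the connection specifies utf8mb4.
conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                       db="mydb", charset="utf8mb4", use_unicode=True)
cur = conn.cursor()
# Item 3: the column is declared CHARACTER SET utf8mb4.
cur.execute("CREATE TABLE IF NOT EXISTS t (s VARCHAR(100) CHARACTER SET utf8mb4)")
# Item 1: Unicode strings are utf8-encoded by the driver on INSERT.
cur.execute("INSERT INTO t (s) VALUES (%s)", (u"Intel\u00ae",))
conn.commit()
cur.execute("SELECT s FROM t")
print(cur.fetchone()[0])  # prints Intel® with no Mojibake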