I have data in a text file which I need to upload into a table. My script is in Python 3 and uses mysql.connector (https://launchpad.net/myconnpy) to connect to the DB and execute commands. I have been able to successfully use mysql.connector in the past without any problems, but I am now facing a problem with the command that uploads a file to a table. My code is as follows:
def TableUpload(con2):
    cur = con2.cursor()  # Connect to destination server with table
    res_file = 'extend2'
    cur.execute("TRUNCATE TABLE data.results")  # Clear table before writing
    cur.execute("LOAD DATA LOCAL INFILE './extend2' INTO TABLE data.results FIELDS TERMINATED BY ','")
The code clears the table and then tries to upload data from the text file to the table. It successfully clears the table but generates the following error while filling the table:
Traceback (most recent call last):
File "cl3.py", line 575, in <module>
TableUpload(con2)
File "cl3.py", line 547, in TableUpload
cur.execute("LOAD DATA LOCAL INFILE './extend2' INTO TABLE kakrana_data.mir_page_results FIELDS TERMINATED BY ','")
File "/usr/local/lib/python3.2/site-packages/mysql/connector/cursor.py", line 333, in execute
res = self.db().protocol.cmd_query(stmt)
File "/usr/local/lib/python3.2/site-packages/mysql/connector/protocol.py", line 137, in deco
return func(*args, **kwargs)
File "/usr/local/lib/python3.2/site-packages/mysql/connector/protocol.py", line 495, in cmd_query
return self.handle_cmd_result(self.conn.recv())
File "/usr/local/lib/python3.2/site-packages/mysql/connector/connection.py", line 180, in recv_plain
errors.raise_error(buf)
File "/usr/local/lib/python3.2/site-packages/mysql/connector/errors.py", line 84, in raise_error
raise get_mysql_exception(errno,errmsg)
mysql.connector.errors.NotSupportedError: 1148: The used command is not allowed with this MySQL version
When I use the command to upload the file directly from the terminal, it works well. It is just that the command is not working from the script. The error says the command is not allowed with this MySQL version, even though it works from the terminal. Please suggest what mistake I am making, or an alternative way to upload data to a table from a local file.
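For context, one difference between the mysql terminal client and a script is whether the client side enables LOCAL INFILE at all. Below is a minimal sketch of enabling it when connecting with mysql.connector; the host, credentials, and flag support are assumptions (older myconnpy releases may not expose the same options), and the server's local_infile setting also has to allow it.
import mysql.connector
from mysql.connector.constants import ClientFlag

# Hypothetical connection details; the important part is the LOCAL_FILES client flag,
# which asks the client library to permit LOAD DATA LOCAL INFILE.
con2 = mysql.connector.connect(
    host='localhost', user='user', password='secret', database='data',
    client_flags=[ClientFlag.LOCAL_FILES],
)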
I am trying to push data to GCP Datastore. The code snippet below works fine in a Jupyter Notebook, but it throws an error in VS Code.
def load_data_json(self, kind_name, data_with_qp_ID, qp_id):
    # Load the data in JSON format to upload into the DataStore
    data_with_qp_ID_as_JSON = self.convert_DF_to_JSON(data_with_qp_ID, qp_id)
    # Loop to iterate through the JSON format and upload into the GCS Storage
    for data in data_with_qp_ID_as_JSON.keys():
        with self.client.transaction():
            incomplete_key = self.client.key(kind_name)
            task = datastore.Entity(key=incomplete_key)
            task.update(data_with_qp_ID_as_JSON[data])
            self.client.put(task)
    return 'Ingestion Successful - Data Store Repository'
I have defined the name of the bucket in kind_name; data_with_qp_ID is a pandas DataFrame, and qp_id is the name of a column in that DataFrame. Please see the error message I get below:
Traceback (most recent call last):
File "/Users/ajaykrishnan/Desktop/Projects/Sprint 3/Data Migration/DataMigration_v1.1/main2.py", line 139, in <module>
write_datastore_db.load_data_json(ds_kindname, bookmarks_data_with_qp_ID, qp_id)
File "/Users/ajaykrishnan/Desktop/Projects/Sprint 3/Data Migration/DataMigration_v1.1/pkg/repository/ds_repository.py", line 50, in load_data_json
self.client.put(task)
File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/client.py", line 597, in put
self.put_multi(entities=[entity], retry=retry, timeout=timeout)
File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/client.py", line 634, in put_multi
current.put(entity)
File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/transaction.py", line 315, in put
super(Transaction, self).put(entity)
File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/batch.py", line 227, in put
_assign_entity_to_pb(entity_pb, entity)
File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/batch.py", line 373, in _assign_entity_to_pb
bare_entity_pb = helpers.entity_to_protobuf(entity)
File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/helpers.py", line 208, in entity_to_protobuf
key_pb = entity.key.to_protobuf()
File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/key.py", line 298, in to_protobuf
key.path.append(element)
TypeError: Parameter to MergeFrom() must be instance of same class: expected google.datastore.v1.Key.PathElement got PathElement.
My environment is as follows:
macOS Monterey V12.06
Python - Conda 3.9.12
I was able to clear this error. It was an issue with the protobuf library my environment was using. I downgraded protobuf from 4.x.x to 3.20.1 and it worked.
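For anyone verifying the same fix, a quick diagnostic (not part of the original fix) is to print which protobuf version the interpreter that VS Code launches actually resolves:
# Diagnostic only: after the downgrade (e.g. pip install protobuf==3.20.1)
# this should report 3.20.1 in the environment VS Code is using.
import google.protobuf
print(google.protobuf.__version__)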
We sometimes have a problem when trying to connect to the Hive Metastore via HiveMetastoreHook in Apache Airflow:
thrift.transport.TTransport.TTransportException: b'Error in sasl_decode (-1) SASL(-1): generic failure: Unable to find a callback: 32775'
We googled this issue but still have not found any answer.
As a temporary fix, we recreate the Hive external table on which the issue happens and restart the task. Then after a few days the problem happens again and again. We have no idea where we need to fix it.
NOTE:
This happens only on one big table.
We have Hive 3.1.0 and Airflow 1.10.5.
The issue reproduces from the python3 CLI by importing airflow (see the sketch after this list).
Selecting this big table from Hive works fine (the data is fine too).
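Roughly how we reproduce it from the python3 CLI (a sketch; the connection id, table, and schema names are placeholders matching the log and stack trace below):
# Hypothetical reproduction of the hm.get_table(...) call shown in the stack trace;
# the connection id, table, and schema names are placeholders.
from airflow.hooks.hive_hooks import HiveMetastoreHook

hm = HiveMetastoreHook(metastore_conn_id='our_hive_metastore_connection')
hm.get_table(table_name='some_big_table', db='some_schema')  # raises TTransportException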
Full stack trace:
[2021-06-30 18:05:41,143] {{base_hook.py:84}} INFO - Using connection to: id: our_hive_metastore_connection. Host: server_name, Port: someport, Schema: our_schema, Login: some_login, Password: None, extra: {'authMechanism': 'GSSAPI', 'kerberos_service_name': 'some_name'}
>>> hm.get_table(table_name='some_big_table', db='some_schema')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/airflow/hooks/hive_hooks.py", line 608, in get_table
return client.get_table(dbname=db, tbl_name=table_name)
File "/usr/local/lib/python3.6/site-packages/hmsclient/genthrift/hive_metastore/ThriftHiveMetastore.py", line 2253, in get_table
return self.recv_get_table()
File "/usr/local/lib/python3.6/site-packages/hmsclient/genthrift/hive_metastore/ThriftHiveMetastore.py", line 2266, in recv_get_table
(fname, mtype, rseqid) = iprot.readMessageBegin()
File "/usr/local/lib64/python3.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 134, in readMessageBegin
sz = self.readI32()
File "/usr/local/lib64/python3.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 217, in readI32
buff = self.trans.readAll(4)
File "/usr/local/lib64/python3.6/site-packages/thrift/transport/TTransport.py", line 62, in readAll
chunk = self.read(sz - have)
File "/usr/local/lib/python3.6/site-packages/thrift_sasl/__init__.py", line 166, in read
self._read_frame()
File "/usr/local/lib/python3.6/site-packages/thrift_sasl/__init__.py", line 180, in _read_frame
message=self.sasl.getError())
thrift.transport.TTransport.TTransportException: b'Error in sasl_decode (-1) SASL(-1): generic failure: Unable to find a callback: 32775'
Does anybody have an idea what we need to fix?
Any help is appreciated!
I am currently executing a Python script in Azure ML with the following code (Python 2.7.11), in which results obtained from MongoDB are returned as a DataFrame using PyMongo.
I got an error like:
"C:\pyhome\lib\site-packages\pymongo\topology.py", line 97, in select_servers
self._error_message(selector))
ServerSelectionTimeoutError: ... ('The write operation timed out',)
Please let me know if you know the cause of the error and what to improve.
My source code:
import pymongo as m
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    uri = "mongodb://xxxxx:yyyyyyyyyyyyyyy#zzz.mongodb.net:xxxxx/?ssl=true&replicaSet=globaldb"
    client = m.MongoClient(uri, connect=False)
    db = client['dbName']
    coll = db['colectionName']
    cursor = coll.find()
    df = pd.DataFrame(list(cursor))
    return df,
Error Details:
Error 0085: The following error occurred during script evaluation, please view the output log for more information:
---------- Start of error message from Python interpreter ----------
Caught exception while executing function: Traceback (most recent call last):
File "C:\server\invokepy.py", line 199, in batch
odfs = mod.azureml_main(*idfs)
File "C:\temp\55a174d8dc584942908423ebc0bac110.py", line 32, in azureml_main
result = pd.DataFrame(list(cursor))
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 977, in next
if len(self.__data) or self._refresh():
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 902, in _refresh
self.__read_preference))
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 813, in __send_message
**kwargs)
File "C:\pyhome\lib\site-packages\pymongo\mongo_client.py", line 728, in _send_message_with_response
server = topology.select_server(selector)
File "C:\pyhome\lib\site-packages\pymongo\topology.py", line 121, in select_server
address))
File "C:\pyhome\lib\site-packages\pymongo\topology.py", line 97, in select_servers
self._error_message(selector))
ServerSelectionTimeoutError: xxxxx-xxx.mongodb.net:xxxxx: ('The write operation timed out',)
Process returned with non-zero exit code 1
As I know, there is a limitation of the Execute Python Script module that causes this issue; please refer to the Limitations section quoted below.
Limitations
The Execute Python Script currently has the following limitations:
Sandboxed execution. The Python runtime is currently sandboxed and, as a result, does not allow access to the network or to the local file system in a persistent manner. All files saved locally are isolated and deleted once the module finishes. The Python code cannot access most directories on the machine it runs on, the exception being the current directory and its subdirectories.
For the reason above, you cannot directly import data from Azure Cosmos DB online via the pymongo driver in the Execute Python Script module. However, you can use the Import Data module with the connection and parameter information of your Azure Cosmos DB and connect its output to the input of Execute Python Script to get the data.
For more information about importing data online, please refer to the section Import from online data sources of the official document Import your training data into Azure Machine Learning Studio from various data sources.
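To make the suggested wiring concrete, here is a minimal sketch of the Execute Python Script entry point once the Import Data module feeds it (this assumes Import Data is connected to the first input port; nothing here talks to Cosmos DB directly):
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # dataframe1 already holds the rows delivered by the Import Data module,
    # so no pymongo or network access is needed inside the sandbox.
    return dataframe1,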
I am using Flask-Migrate for database creation & migration in Flask with Flask-SQLAlchemy.
Everything was working fine until I changed my database user password to one containing '#'; then it stopped working, so I updated my code based on
Writing a connection string when password contains special characters
It is working for the application but not for Flask-Migrate; it shows an error while migrating,
i.e. on python manage.py db migrate:
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword#localhost/testdb' at position 15
Here the password is p#ssword and it is escaped by urlquote (see the question linked above).
Full error stack:
Traceback (most recent call last):
File "manage.py", line 20, in <module>
manager.run()
File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/usr/local/lib/python2.7/dist-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask_migrate/__init__.py", line 177, in migrate
version_path=version_path, rev_id=rev_id)
File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 117, in revision
script_directory.run_env()
File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 407, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "migrations/env.py", line 22, in <module>
current_app.config.get('SQLALCHEMY_DATABASE_URI'))
File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 218, in set_main_option
self.set_section_option(self.config_ini_section, name, value)
File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 245, in set_section_option
self.file_config.set(section, name, value)
File "/usr/lib/python2.7/ConfigParser.py", line 752, in set
"position %d" % (value, tmp_value.find('%')))
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword#localhost/testdb' at position 15
Please help
In the migrations/env.py file, you will find the code that is responsible for this issue.
config.set_main_option('sqlalchemy.url',
                       current_app.config.get('SQLALCHEMY_DATABASE_URI'))
If there are % signs in the SQLALCHEMY_DATABASE_URI, this will cause an error.
You can solve this by editing the migrations/env.py file and changing the offending line as follows:
db_url_escaped = current_app.config.get('SQLALCHEMY_DATABASE_URI').replace('%', '%%')
config.set_main_option('sqlalchemy.url', db_url_escaped)
Also see the documentation of set_main_option:
Note that this value is passed to ConfigParser.set, which supports variable interpolation using pyformat (e.g. %(some_value)s). A raw percent sign not part of an interpolation symbol must therefore be escaped, e.g. %%. The given value may refer to another value already in the file using the interpolation format.
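As a small illustration of that note (not from the original answer), doubling the percent sign is what lets ConfigParser.set accept the value, and interpolation turns it back into a single % when the option is read:
# Illustration only: a raw '%40' makes set() raise "invalid interpolation syntax",
# while the doubled '%%40' is accepted and read back as '%40'.
from configparser import ConfigParser  # the ConfigParser module on Python 2

cp = ConfigParser()
cp.add_section('alembic')
cp.set('alembic', 'sqlalchemy.url', 'mysql://user:p%%40ssword@localhost/testdb')
print(cp.get('alembic', 'sqlalchemy.url'))  # -> mysql://user:p%40ssword@localhost/testdb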
I have a solution for this issue after experiencing it as well.
There's an issue with '%' (percent signs) in the db connection URI after you urlencode the string.
I tried substituting the percent sign with double percent signs ('%%') which gets me past the interpolation error. However, that resulted in not being able to connect to the database because of an incorrect password.
The solution I'm going with for now is to avoid using '%' in my db password. Not a satisfactory solution, but it will do for now. I'll file a note about the issue on Alembic's GitHub. It seems using RawConfigParser in their package could help avoid this issue.
You may want to look at http://docs.sqlalchemy.org/en/latest/dialects/mysql.html#mysql-unicode
I was having the same issue with my password and the mysql connector. Using the mysql+pymysql connector allowed me to connect both in the application and in the migration scripts.
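For example, a sketch of the URI shape being described (placeholder credentials; the special character in the password is URL-quoted and the pymysql driver is named explicitly):
# Sketch only: quote the special character in the password and select the pymysql driver.
from urllib.parse import quote_plus  # urllib.quote_plus on Python 2

password = quote_plus('p#ssword')  # -> 'p%23ssword'
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://user:%s@localhost/testdb' % password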
My web2py application returned an error today, which is quite odd.
Traceback (most recent call last):
File "/var/www/web2py/gluon/restricted.py", line 212, in restricted
exec ccode in environment
File "/var/www/web2py/applications/1MedCloud/controllers/default.py", line 475, in <module>
File "/var/www/web2py/gluon/globals.py", line 194, in <lambda>
self._caller = lambda f: f()
File "/var/www/web2py/applications/1MedCloud/controllers/default.py", line 63, in patient_register
rows = db(db.patientaccount.email==email).select()
File "/var/www/web2py/gluon/dal.py", line 7837, in __getattr__
return ogetattr(self, key)
AttributeError: 'DAL' object has no attribute 'patientaccount'
I am using MySQL as the database, and the table 'patientaccount' does exist. There is no connection issue, as I can create tables; I just cannot fetch them from the server.
I have been using the very same code for the database access; here is my code:
db = DAL('mysql://###:$$$#^^^^^^:3306/account_info', pool_size=0)
rows = db(db.patientaccount.email==email).select()
I did not change any code in my default.py file, but I accidentally deleted some files inside the "database" folder in my application. I doubt that could cause the error, though, since the module fetches tables from the server rather than using local files.
Please help! Thanks in advance!
The DAL does not inspect the MySQL database to discover its tables and fields. You must define the data models explicitly. So, somewhere in your code, you must do:
db.define_table('patientaccount',
    Field('email'),
    ...)
That will define the db.patientaccount table so the DAL knows it exists and what fields it includes.
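If the table already exists in MySQL, as in the question, a common pattern (just a sketch, not part of the answer above) is to define the model without letting the DAL try to create or alter the table:
# Sketch, assuming the table already exists in MySQL:
# migrate=False tells web2py's DAL to use the table as-is instead of managing it.
db.define_table('patientaccount',
    Field('email'),
    migrate=False)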