Error calling Python module function in MySQL Workbench - python

I'm kind of at my wit's end here, and so far have had no feedback from the MySQL Workbench bug reporting site, so I thought I'd throw this question out to more sites.
I'm attempting to migrate from an MSSQL server on a Windows Server 2003 machine to a MySQL server running on a CentOS 6.5 VM. I can connect to the source and target databases, select a schema, and run through one pass of retrieving tables. After this the process fails and throws the following errors:
Traceback (most recent call last):
File "/usr/lib64/mysql-workbench/modules/db_mssql_grt.py", line 409, in reverseEngineer
reverseEngineerProcedures(connection, schema)
File "/usr/lib64/mysql-workbench/modules/db_mssql_grt.py", line 1016, in reverseEngineerProcedures
for idx, (proc_count, proc_name, proc_definition) in enumerate(cursor):
MemoryError
Traceback (most recent call last):
File "/usr/share/mysql-workbench/libraries/workbench/wizard_progress_page_widget.py", line 192, in thread_work
self.func()
File "/usr/lib64/mysql-workbench/modules/migration_schema_selection.py", line 160, in task_reveng
self.main.plan.migrationSource.reverseEngineer()
File "/usr/lib64/mysql-workbench/modules/migration.py", line 353, in reverseEngineer
self.state.sourceCatalog = self._rev_eng_module.reverseEngineer(self.connection, self.selectedCatalogName, self.selectedSchemataNames, self.state.applicationData)
SystemError: MemoryError(""): error calling Python module function DbMssqlRE.reverseEngineer
ERROR: Reverse engineer selected schemata: MemoryError(""): error calling Python module function DbMssqlRE.reverseEngineer
Failed
I initially thought this was a memory error, so I've upped the memory on the box to 16 GiB. The error also occurs on databases of any size; I've tried very small ones with hardly any tables.
Any thoughts? Thanks for looking.

Just in case anyone else runs into this: I had the same problem and fixed it by getting rid of non-ASCII characters in schemas, tables, and basically all MSSQL object names. This was compounded by the fact that I had SQL# (www.sqlsharp.com) installed, which adds a number of functions and stored procedures under a schema called SQL#. You can remove that with this command:
EXEC SQL#.SQLsharp_Uninstall
Once you get rid of non-ASCII chars, the migration works.
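If you want to locate the offending names before migrating, something like the following could help. This is just a minimal sketch, assuming pyodbc, the SQL Server 2005+ system views, and a placeholder connection string; none of it comes from the original answer.
import pyodbc

# Placeholder connection string; adjust driver, server, and credentials to your setup.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
cursor = conn.cursor()

# Pull schema-qualified object names from the system catalog.
cursor.execute("""
    SELECT s.name, o.name
    FROM sys.objects o
    JOIN sys.schemas s ON o.schema_id = s.schema_id
""")

for schema_name, object_name in cursor.fetchall():
    full_name = schema_name + "." + object_name
    # Flag any object whose name contains non-ASCII characters.
    if any(ord(ch) > 127 for ch in full_name):
        print(full_name)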

The OP (I assume) closed their bug report with this message:
[...] I figured out a work around, or the flaw in the system perhaps. Turns out that Null values were not allowed inside the Datetime fields when doing a migration. I turned every Datetime field in my database to a default value and migrated it successfully after that.
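That workaround boils down to replacing NULLs in datetime columns with a default value before migrating. A rough sketch of automating it, again assuming pyodbc, with the connection string and default date as placeholders:
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
cursor = conn.cursor()

# Find nullable datetime columns via the standard metadata views.
cursor.execute("""
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE DATA_TYPE IN ('datetime', 'smalldatetime') AND IS_NULLABLE = 'YES'
""")

for schema, table, column in cursor.fetchall():
    # '1900-01-01' is an arbitrary placeholder; pick a default that suits your data.
    cursor.execute("UPDATE [%s].[%s] SET [%s] = '1900-01-01' WHERE [%s] IS NULL"
                   % (schema, table, column, column))
conn.commit()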


pyars issue when getting data from a Remedy database

Part of my code:
from pyars import erars
ars = erars.erARS()
self._db_login(ars)
(entries, num) = ars.GetListEntryWithFields(schema, query, fields, None)
The code fails in the GetListEntryWithFields call. Error info:
File "c:\users\admini~1\appdata\local\temp\easy_install-x_sfek\pyars-1.8.2-py2.7-win32.egg.tmp\pyars\erars.py", line 3546, in GetListEntryWithFields
File "c:\users\admini~1\appdata\local\temp\easy_install-x_sfek\pyars-1.8.2-py2.7-win32.egg.tmp\pyars\erars.py", line 470, in conv2QualifierStruct
pyars.erars.ARError: An WARNING (0) occured: None
Sometimes I hit the issue above, and then later it succeeds without any code change. I think the root cause is that the DB connection is not stable, but it is annoying that this happens several times. Does anyone know how to fix it?
Instead of hitting the DB directly, which BMC does not recommend, you can use a web service to perform your action.
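If you do keep querying the DB directly, a simple retry wrapper can at least paper over the intermittent failures. A sketch built around the call from the question; the retry count and delay are arbitrary:
import time
from pyars import erars

def get_entries_with_retry(ars, schema, query, fields, retries=3, delay=2):
    # Retry the flaky call a few times before giving up for good.
    for attempt in range(retries):
        try:
            return ars.GetListEntryWithFields(schema, query, fields, None)
        except erars.ARError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)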

Azure ML Studio: How to change input value with Python before it goes through data process

I am currently attempting to change the value of the input as it goes through the data process in Azure ML. However, I cannot find any clue about how to access the input data with Python.
For example, in ordinary Python you can access a column of a dataframe with
print(dataframe1["Hello World"])
I tried to change the name of the Web Service Input and to access it the way I did for other dataframes (e.g. sample):
print(dataframe["sample"])
But it returns an error with no luck, and from what I can read in the error, the input is not a dataframe:
object of type 'NoneType' has no len()
I tried to look up a solution for the NoneType error, but found no good one.
The whole error message:
requestId = 1f0f621f1d8841baa7862d5c05154942 errorComponent=Module. taskStatusCode=400. ErrorId: FailedToEvaluateScript, ErrorCode: 0085, ExceptionType: ModuleException
Error 0085: The following error occurred during script evaluation, please view the output log for more information:
---------- Start of error message from Python interpreter ----------
Caught exception while executing function: Traceback (most recent call last):
File "C:\server\invokepy.py", line 211, in batch
xdrutils.XDRUtils.DataFrameToRFile(outlist[i], outfiles[i], True)
File "C:\server\XDRReader\xdrutils.py", line 51, in DataFrameToRFile
attributes = XDRBridge.DataFrameToRObject(dataframe)
File "C:\server\XDRReader\xdrbridge.py", line 40, in DataFrameToRObject
if (len(dataframe) == 1 and type(dataframe[0]) is pd.DataFrame):
TypeError: object of type 'NoneType' has no len()
Process returned with non-zero exit code 1
---------- End of error message from Python interpreter ----------
Process exited with error code -2
I have also tried passing a Python script into the data flow, but it cannot change the Web Service Input value the way I want.
I have looked on forums like MSDN and SO, but it's been difficult to find any information about this. Please let me know if you need any more information. I would greatly appreciate your help!
tl;dr: You also need to link the dataset you used for training to the same port you link the Web service input to, so that the Execute Python Script module has something to work on.
You need to keep in mind that the Predictive experiment has some conventions that need to be followed (or learned the hard way :) ). One of them is that in order to use the Web service input, you need to pair it with an actual dataset, which Azure ML Studio can then use to infer structure and to provide you with some data while testing your predictive experiment. You can see it as some sort of 'ghost' module that doesn't do anything by itself.
Hope this helps.
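For reference, the Execute Python Script module calls an azureml_main entry point. A minimal defensive sketch; the 'sample' column is just the placeholder name from the question:
def azureml_main(dataframe1=None, dataframe2=None):
    # If the Web service input port is not paired with a dataset, dataframe1
    # can arrive as None, which is what the NoneType error above complains about.
    if dataframe1 is None:
        raise ValueError("No input data: pair the port with a training dataset")

    # Example change to the input before it flows on through the experiment.
    if "sample" in dataframe1.columns:
        dataframe1["sample"] = dataframe1["sample"].astype(str)

    # Azure ML Studio expects a dataframe (or tuple of dataframes) back.
    return dataframe1,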

Code for gensim Word2vec as an HTTP service: 'KeyedVectors' attribute error

I am using the w2v_server_googlenews code from the word2vec HTTP server running at https://rare-technologies.com/word2vec-tutorial/#bonus_app. I changed the loaded file to a file of vectors trained with the original C version of word2vec. I load the file with
gensim.models.KeyedVectors.load_word2vec_format(fname, binary=True)
and it seems to load without problems. But when I test the HTTP service with, let's say
curl 'http://127.0.0.1/most_similar?positive%5B%5D=woman&positive%5B%5D=king&negative%5B%5D=man'
I get an empty result with only the execution time:
{"taken": 0.0003361701965332031, "similars": [], "success": 1}
I put a traceback.print_exc() in the except clause of the related method (in this case def most_similar(self, *args, **kwargs)) and got:
Traceback (most recent call last):
File "./w2v_server.py", line 114, in most_similar
topn=5)
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 304, in most_similar
self.init_sims()
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 817, in init_sims
self.syn0norm = (self.syn0 / sqrt((self.syn0 ** 2).sum(-1))[..., newaxis]).astype(REAL)
AttributeError: 'KeyedVectors' object has no attribute 'syn0'
Any idea why this might happen?
Note: I use Python 2.7 and installed gensim using pip, which gave me gensim 2.1.0.
FYI, that demo code was based on gensim 0.12.3 (from 2015, as listed in its requirements.txt) and would need updating to work with the latest gensim.
It might be sufficient to add a line to w2v_server.py at line 70 (just after the load_word2vec_format() call) to force the creation of the needed syn0norm property (which older gensims auto-created on load) before the raw syn0 values are discarded. Specifically:
self.model.init_sims(replace=True)
(You would leave out the replace=True if you were going to be doing operations other than most_similar(), that might require raw vectors.)
If this works to fix the problem for you, a pull-request to the w2v_server_googlenews repo would be favorably received!
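Put together, the load sequence with the extra line would look roughly like this (the vectors path is a placeholder):
import gensim

# Placeholder path; point this at your C-tool-trained vectors file.
fname = "vectors.bin"

model = gensim.models.KeyedVectors.load_word2vec_format(fname, binary=True)

# Force creation of the syn0norm property that most_similar() needs;
# replace=True overwrites the raw vectors to save memory.
model.init_sims(replace=True)

print(model.most_similar(positive=["woman", "king"], negative=["man"], topn=5))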

Django matching query does not exist

I was just checking whether a change I had made to my models had taken effect when I started to get this for some (but not all) of my models. I've never seen this before, and I'm fairly certain I've had no issues querying these models in the past.
>>> record = Record.objects.get(id=1)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/alittlesquid/grocerygod/fratgroceries/ggenv/local/lib/python2.7/site-packages/django/db/models/manager.py", line 143, in get
return self.get_query_set().get(*args, **kwargs)
File "/usr/local/alittlesquid/grocerygod/fratgroceries/ggenv/local/lib/python2.7/site- packages/django/db/models/query.py", line 404, in get
self.model._meta.object_name)
DoesNotExist: Record matching query does not exist.
After more digging I discovered that a Record.objects.all() query works as expected. Can anyone shed light on why this would be happening to some of my models? A fix would also be tremendously helpful, thanks.
There is probably no Record with id 1 (maybe you meant pk?). You can easily verify this by running Record.objects.values("id") and checking the output manually.
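To make the lookup robust either way, a small sketch (Record is the model from the question; both patterns work on the older Django shown in the traceback):
# Check which ids actually exist.
print(Record.objects.values_list("id", flat=True))

# Guard the lookup so a missing row doesn't raise into the calling code.
try:
    record = Record.objects.get(pk=1)
except Record.DoesNotExist:
    record = None

# Or test for existence first without fetching the row.
if Record.objects.filter(pk=1).exists():
    record = Record.objects.get(pk=1)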

py2neo Cypher transactions failing

I'm trying to batch import millions of nodes through Py2Neo.
I don't know which is faster, the batch write API or a Cypher transaction, but the latter seemed the better option, as I need to split my batches.
However, when I try to execute a simple transaction, I receive a weird error.
The Python code:
session = cypher.Session("http://127.0.0.1:7474/db/data/")  # error also occurs without /db/data/

def init():
    tx = session.create_transaction()
    for ngram, one_grams in data.items():
        tx.append("CREATE "+str(n)+":WORD {'word': "+ngram+", 'rank': "+str(ngram_rank)+", 'prob': "+str(ngram_prob)+", 'gram': '0gram'}")
    tx.execute()  # line 69 in the error below
The error:
Traceback (most recent call last):
File "Ngram_neo4j.py", line 176, in <module>
init(rNgram_file="dataset_id.json")
File "Ngram_neo4j.py", line 43, in init
data = probability_items(data)
File "Ngram_neo4j.py", line 69, in probability_items
tx.execute()
File "D:\datasets\GOOGLE~1\virtenv\lib\site-packages\py2neo\cypher.py", line 224, in execute
return self._post(self._execute or self._begin)
File "D:\datasets\GOOGLE~1\virtenv\lib\site-packages\py2neo\cypher.py", line 209, in _post
raise TransactionError(error["code"], error["status"], error["message"])
KeyError: 'status'
I tried catching the exception:
except cypher.TransactionError as e:
    print("--------------------------------------------------------------------------------------------")
    print(e.status)
    print(e.message)
But it never gets called (maybe an error on my part?).
Regular inserts using graph_db.create({"node": node}) do work, but are incredibly slow (36 hours for 2.5M nodes).
Note that the dataset consists of a series of JSON files, each with a structure 5 levels deep.
I'd like to batch the last 2 levels (around 100 to 20,000 nodes per batch).
--- EDIT ---
I'm using Py2Neo 1.6.1, Neo4j 2.0.0. Currently on Windows 7 (but also OSX Mav., CentOS 6)
The problem you're seeing is due to a last-minute alteration in the way Cypher transaction errors are reported by the Neo4j server. Py2neo 1.6 was built against M05/M06, and when a few features changed in RC1/GA, py2neo broke in a few places.
This has been fixed for Py2neo 1.6.2 (https://github.com/nigelsmall/py2neo/issues/224) but I do not yet know when I will get a chance to finish and release this version.
What neo4j and py2neo versions are you using?
You should use parameters for your CREATE statements (see the sketch at the end of this answer).
Can you check the server logs in data/logs and data/graph.db/messages.log for errors?
If you have so much data to insert then perhaps direct batch-insertion would make more sense?
See: http://neo4j.org/develop/import
Two tools I wrote for this:
https://github.com/jexp/neo4j-shell-tools/tree/20
https://github.com/jexp/batch-import/tree/20#binary-download
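On the parameters point above: a sketch of what the question's loop could look like with parameterized statements under the py2neo 1.6 transaction API (data, ngram_rank, and ngram_prob are taken from the question's code):
from py2neo import cypher

session = cypher.Session("http://127.0.0.1:7474")
tx = session.create_transaction()
for ngram, one_grams in data.items():
    # Pass values as parameters instead of concatenating them into the query;
    # this avoids quoting problems and lets the server reuse the query plan.
    tx.append("CREATE (n:WORD {word: {w}, rank: {r}, prob: {p}, gram: '0gram'})",
              {"w": ngram, "r": ngram_rank, "p": ngram_prob})
tx.commit()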
