pyars issue when getting data from Remedy database - python

Part of my code:
from pyars import erars
ars = erars.erARS()
self._db_login(ars)
(entries, num) = ars.GetListEntryWithFields(schema, query, fields, None)
The code fails in GetListEntryWithFields.
Error info:
File "c:\users\admini~1\appdata\local\temp\easy_install-x_sfek\pyars-1.8.2-py2.7-win32.egg.tmp\pyars\erars.py", line 3546, in GetListEntryWithFields
File "c:\users\admini~1\appdata\local\temp\easy_install-x_sfek\pyars-1.8.2-py2.7-win32.egg.tmp\pyars\erars.py", line 470, in conv2QualifierStruct
pyars.erars.ARError: An WARNING (0) occured: None
Sometimes I hit the issue above, and then it succeeds without any code change. I think the root cause is that the DB connection is not stable, but it is annoying that this happens several times. Does anyone know how to fix it?

Instead of hitting the database directly, which BMC does not recommend, you can use the web service to perform your action.
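Until you can switch, wrapping the call in a simple retry can work around the transient failure; a minimal sketch (the retry count, delay, and the get_entries_with_retry helper are my own, not part of pyars):
import time
from pyars import erars

def get_entries_with_retry(ars, schema, query, fields, retries=3, delay=2):
    # Retry a few times, since the failure appears to be transient.
    for attempt in range(retries):
        try:
            return ars.GetListEntryWithFields(schema, query, fields, None)
        except erars.ARError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)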


bingads V13 report request fails in python sdk

I am trying to download a bingads report using the Python SDK, but I keep getting an error saying "Type not found: 'Aggregation'" after submitting a report request. I've tried all 4 options mentioned in the following link:
https://github.com/BingAds/BingAds-Python-SDK/blob/master/examples/v13/report_requests.py
Authentication process prior to request works just fine.
I execute the following:
report_request = get_report_request(authorization_data.account_id)
reporting_download_parameters = ReportingDownloadParameters(
    report_request=report_request,
    result_file_directory=FILE_DIRECTORY,
    result_file_name=RESULT_FILE_NAME,
    overwrite_result_file=True,  # set this value to True if you want to overwrite the same file
    timeout_in_milliseconds=TIMEOUT_IN_MILLISECONDS
)
output_status_message("-----\nAwaiting download_report...")
download_report(reporting_download_parameters)
After careful debugging, it seems that the program fails when trying to execute a command within "reporting_service_manager.py". Here is the workflow:
def download_report(self, download_parameters):
    report_file_path = self.download_file(download_parameters)
then:
def download_file(self, download_parameters):
    operation = self.submit_download(download_parameters.report_request)
then:
def submit_download(self, report_request):
    self.normalize_request(report_request)
    response = self.service_client.SubmitGenerateReport(report_request)
SubmitGenerateReport starts a sequence of events ending with a call to the "_ServiceCall.init" function within "service_client.py", returning the exception "Type not found: 'Aggregation'":
try:
    response = self.service_client.soap_client.service.__getattr__(self.name)(*args, **kwargs)
    return response
except Exception as ex:
    if need_to_refresh_token is False \
            and self.service_client.refresh_oauth_tokens_automatically \
            and self.service_client._is_expired_token_exception(ex):
        need_to_refresh_token = True
    else:
        raise ex
Can anyone shed some light? Thanks.
Please be sure to set Aggregation, e.g., as shown here:
aggregation = 'Daily'
If the report type does not use aggregation, you can set Aggregation=None.
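For reference, a sketch of where that assignment goes, assuming an account performance report built with the SDK's suds factory (names follow the v13 example scripts; authorization_data is the object already set up in the question):
from bingads.service_client import ServiceClient

reporting_service = ServiceClient(
    service='ReportingService',
    version=13,
    authorization_data=authorization_data,  # as already configured in the question
)
report_request = reporting_service.factory.create('AccountPerformanceReportRequest')
report_request.Aggregation = 'Daily'  # or None if the report type does not aggregate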
Does this help?
This may be a bit late, two months after the fact, but maybe this will help someone else. I had the same error (though I suppose it may not be the same issue). It looks like you did what I did (and I'm sure others will as well): copy-pasted the Microsoft example code and tried to run it, only to find that it didn't work.
I spent quite some time trying to debug the issue, and it looked to me like the XML wasn't being searched correctly. I was using suds-py3 for the script at the time, so I tried suds-community, and everything just worked after that.
I also re-read the Bing Ads API getting-started walkthrough and found that they recommend suds-jurko instead.
Long story short: if you want to use the bingads API, don't use suds-py3; use either suds-community (which I can confirm works for everything I've used the API for) or suds-jurko (which is the one recommended by Microsoft).

Error Message when trying to delete features in Feature Service Python API 1.7 for ArcGIS

I am trying to delete features using the delete_features method on the FeatureLayer Object and I keep getting the following error: "This SqlTransaction has completed; it is no longer usable."
The code is below. The error seems to come from the last line, where="OBJECTID >= 0", but I'm not 100% sure that this is the problem. Unfortunately, I'm not very good at programming.
gis = arcgis.GIS("http://gfcgis.maps.arcgis.com", "UserName", "Password")
feature_layer_item = gis.content.search(FeatureLayer, item_type='Feature Service')[0]
flayers = feature_layer_item.layers
flayer = flayers[0]
flayer.delete_features(where="OBJECTID >= 0", rollback_on_failure=True)
Any help would be greatly appreciated.
Michael
This sounds like a zombie transaction. Check with your DBA whether there's a query, most likely a stored procedure, that is being called when your code runs. This message usually shows up when the application code tries to commit on the DB after the SP has already committed; that SP's transaction is the SQL transaction which has already completed.
Come to find out, it was a simple syntax error causing the issue: I hadn't put quotation marks around 'True' for the rollback_on_failure parameter.
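For anyone landing here later, a minimal end-to-end sketch of the usual pattern; note that content.search expects a query string as its first argument (the item title below is a placeholder), and rollback_on_failure takes a plain boolean:
import arcgis

gis = arcgis.GIS("http://gfcgis.maps.arcgis.com", "UserName", "Password")
# content.search takes a query string, not a class; 'My Layer' is a placeholder title.
item = gis.content.search("My Layer", item_type="Feature Service")[0]
flayer = item.layers[0]
flayer.delete_features(where="OBJECTID >= 0", rollback_on_failure=True)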

Azure ML Studio: How to change input value with Python before it goes through data process

I am currently attempting to change the value of the input as it goes through the data process in Azure ML. However, I cannot find a clue about how to access the input data with Python.
For example, if you use Python, you can access a column of the data with
print(dataframe1["Hello World"])
I tried to change the name of the Web Service Input and to access it the way I did other dataframes (e.g. sample):
print(dataframe["sample"])
But it returns an error, and from what I can tell from the message, the input is not a dataframe:
object of type 'NoneType' has no len()
I tried to look up a solution for the NoneType error, but found no good solution.
The whole error message:
requestId = 1f0f621f1d8841baa7862d5c05154942 errorComponent=Module. taskStatusCode=400.
Error 0085: The following error occurred during script evaluation, please view the output log for more information:
---------- Start of error message from Python interpreter ----------
Caught exception while executing function: Traceback (most recent call last):
  File "C:\server\invokepy.py", line 211, in batch
    xdrutils.XDRUtils.DataFrameToRFile(outlist[i], outfiles[i], True)
  File "C:\server\XDRReader\xdrutils.py", line 51, in DataFrameToRFile
    attributes = XDRBridge.DataFrameToRObject(dataframe)
  File "C:\server\XDRReader\xdrbridge.py", line 40, in DataFrameToRObject
    if (len(dataframe) == 1 and type(dataframe[0]) is pd.DataFrame):
TypeError: object of type 'NoneType' has no len()
Process returned with non-zero exit code 1
---------- End of error message from Python interpreter ----------
Process exited with error code -2
I have also tried to find a way to pass a Python script into the data process, but it was not able to change the Web Service Input value the way I want.
I have tried looking on forums like MSDN or SO, but it's been difficult to find any information about this. Please let me know if you need any more information. I would greatly appreciate your help!
tl;dr: You need to also link the dataset you used for training to the same port you link the Web service input to, so that the Execute Python Script module has something to work on - see the image below for how this should look.
You need to keep in mind that the Predictive experiment has some conventions that need to be followed (or learned the hard way :) ). One of them is that in order to use the Web service input, you need to pair it with an actual dataset, which Azure ML Studio can then use to infer the structure and to provide you with some data while testing your predictive experiment. You can see it as a sort of 'ghost' module that doesn't do anything by itself.
Hope this helps.
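For completeness, once the dataset is linked, the Execute Python Script module hands it to azureml_main as the first argument; a minimal sketch (the "Hello World" column comes from the question, the transformation itself is arbitrary):
import pandas as pd  # the module passes inputs as pandas DataFrames

def azureml_main(dataframe1=None, dataframe2=None):
    # dataframe1 holds whatever is connected to the first input port
    # (the training dataset paired with the Web service input).
    dataframe1["Hello World"] = dataframe1["Hello World"].astype(str).str.upper()
    return dataframe1,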

How to catch specific Postgres exceptions in Python psycopg2

Default psycopg2 error messages are too broad. Most of the time it simply throws:
psycopg2.OperationalError
without any additional information, so it is hard to guess the real reason for the error: incorrect user credentials, or simply that the server is not running. I need more fine-grained error handling, like the error codes in the pymysql library.
I've seen this page, but it does not help. When I do
except Exception as err:
    print(err.pgcode)
it always prints None. As for errorcodes, it is simply undefined; I tried to import it but failed. So I need some help.
For anyone looking for a quick answer:
Short Answer
import traceback  # just to show the full traceback
from psycopg2 import errors

InFailedSqlTransaction = errors.lookup('25P02')

try:
    feed = self._create_feed(data)
except InFailedSqlTransaction:
    traceback.print_exc()
    self._cr.rollback()
    pass  # continue / throw ...
Long Answer
Go to psycopg2 -> errors.py and you will find the lookup function, lookup(code).
Go to psycopg2\_psycopg\__init__.py; in sqlstate_errors you will find all the codes that can be passed as strings to the lookup function.
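As a side note, psycopg2 2.8+ also exposes the common SQLSTATEs as ready-made exception classes, so you can often skip the lookup entirely; a sketch (the connection string and table name are placeholders):
import psycopg2
from psycopg2 import errors

conn = psycopg2.connect("dbname=test")  # placeholder connection string
cur = conn.cursor()
try:
    cur.execute("SELECT * FROM missing_table")
except errors.UndefinedTable:  # SQLSTATE 42P01
    conn.rollback()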
This is not an answer to the question but merely my thoughts that don't fit in a comment. This is a scenario in which I think having pgcode set in the exception object would be helpful, but unfortunately that is not the case.
I'm implementing a pg_isready-like Python script to test if a Postgres service is running on the given address and port (e.g., localhost and 49136). The address and port may or may not be used by any other program.
pg_isready internally calls internal_ping(). Pay attention to the comment of Here begins the interesting part of "ping": determine the cause...:
/*
* Here begins the interesting part of "ping": determine the cause of the
* failure in sufficient detail to decide what to return. We do not want
* to report that the server is not up just because we didn't have a valid
* password, for example. In fact, any sort of authentication request
* implies the server is up. (We need this check since the libpq side of
* things might have pulled the plug on the connection before getting an
* error as such from the postmaster.)
*/
if (conn->auth_req_received)
return PQPING_OK;
So pg_isready also uses the fact that an authentication error on connection means the connection itself succeeded, so the service is up, too. Therefore, I can implement it as follows:
import psycopg2

ready = True
try:
    conn = psycopg2.connect(
        host=address,
        port=port,
        password=password,
        connect_timeout=timeout,
    )
    conn.close()
except psycopg2.OperationalError as ex:
    # An authentication error still means the server is up.
    ready = "fe_sendauth: no password supplied" in str(ex)
However, when the exception psycopg2.OperationalError is caught, ex.pgcode is None. Therefore, I can't use the error codes to check whether the exception is about authentication/authorization. I have to check whether the exception carries a certain message (as @dhke pointed out in the comments), which I think is fragile, because the error message may change in a future release while the error codes are much less likely to change.

Error calling Python module function in MySQL Workbench

I'm kind of at my wit's end here, and so far I have had no feedback from the MySQL Workbench bug reporting site, so I thought I'd throw this question out to more sites.
I'm attempting to migrate from an MSSQL server on a Windows Server 2003 machine to a MySQL server running on a CentOS 6.5 VM. I can connect to the source and target databases, select schemata, and run through one pass of retrieving tables. After this the process fails and throws the following errors:
Traceback (most recent call last):
  File "/usr/lib64/mysql-workbench/modules/db_mssql_grt.py", line 409, in reverseEngineer
    reverseEngineerProcedures(connection, schema)
  File "/usr/lib64/mysql-workbench/modules/db_mssql_grt.py", line 1016, in reverseEngineerProcedures
    for idx, (proc_count, proc_name, proc_definition) in enumerate(cursor):
MemoryError
Traceback (most recent call last):
  File "/usr/share/mysql-workbench/libraries/workbench/wizard_progress_page_widget.py", line 192, in thread_work
    self.func()
  File "/usr/lib64/mysql-workbench/modules/migration_schema_selection.py", line 160, in task_reveng
    self.main.plan.migrationSource.reverseEngineer()
  File "/usr/lib64/mysql-workbench/modules/migration.py", line 353, in reverseEngineer
    self.state.sourceCatalog = self._rev_eng_module.reverseEngineer(self.connection, self.selectedCatalogName, self.selectedSchemataNames, self.state.applicationData)
SystemError: MemoryError(""): error calling Python module function DbMssqlRE.reverseEngineer
ERROR: Reverse engineer selected schemata: MemoryError(""): error calling Python module function DbMssqlRE.reverseEngineer
Failed
I initially thought this was a memory error, so I upped the memory on the box to 16 GiB. The error also occurs with DBs of any size; I've tried very minimal ones with hardly any tables.
Any thoughts? Thanks for looking.
Just in case anyone else runs into this: I had the same problem and fixed it by getting rid of non-ASCII characters in schemas, tables... basically all MSSQL objects. This was confounded by the fact that I had SQL# (www.sqlsharp.com) installed, which adds a number of functions and stored procs under a schema called SQL#. You can remove it with this command:
EXEC SQL#.SQLsharp_Uninstall
Once you get rid of the non-ASCII characters, the migration works.
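If you want to find the offending objects before re-running the migration, a quick sketch using pyodbc against the source server (the DSN is a placeholder for whatever connection settings you use):
import pyodbc

conn = pyodbc.connect("DSN=mssql_source")  # placeholder DSN
cur = conn.cursor()
cur.execute(
    "SELECT s.name, o.name FROM sys.objects o "
    "JOIN sys.schemas s ON s.schema_id = o.schema_id"
)
for schema, name in cur.fetchall():
    full_name = schema + "." + name
    if any(ord(ch) > 127 for ch in full_name):
        print(full_name)  # object with non-ASCII characters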
The OP (I assume) closed their bug report with this message:
[...] I figured out a work around, or the flaw in the system perhaps. Turns out that Null values were not allowed inside the Datetime fields when doing a migration. I turned every Datetime field in my database to a default value and migrated it successfully after that.
