Default psycopg2 error messages are too broad. Most of the time it simply throws:
psycopg2.OperationalError
without any additional information, so it is hard to tell the real reason for the error: incorrect user credentials, or simply the fact that the server is not running. I need more precise error handling, like the error codes in the pymysql library.
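(For comparison, this is the kind of error-code handling pymysql provides; a minimal sketch, with placeholder connection parameters:)
import pymysql

try:
    pymysql.connect(host='localhost', user='user',
                    password='secret', database='mydb')
except pymysql.err.OperationalError as e:
    # pymysql exceptions carry (errno, message),
    # e.g. (1045, "Access denied ...") or (2003, "Can't connect ...")
    code, message = e.args
    print(code, message)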
I've seen this page, but it does not help. When I do
except Exception as err:
    print(err.pgcode)
it always prints None. And as for errorcodes, it is simply undefined; I tried to import it, but failed. So, I need some help.
For anyone looking for a quick answer:
Short Answer
import traceback  # Just to show the full traceback
from psycopg2 import errors

InFailedSqlTransaction = errors.lookup('25P02')

try:
    feed = self._create_feed(data)
except InFailedSqlTransaction:
    traceback.print_exc()
    self._cr.rollback()
    pass  # Continue / throw ...
Long Answer
Go to psycopg2 -> errors.py and you will find the lookup function, lookup(code).
Go to psycopg2\_psycopg\__init__.py; in sqlstate_errors you will find all the codes you can pass as strings into the lookup function.
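There is also the psycopg2.errorcodes module (the one the question tried to import), which maps in both directions between SQLSTATE strings and symbolic names; a minimal sketch:
from psycopg2 import errorcodes

print(errorcodes.lookup('25P02'))            # -> 'IN_FAILED_SQL_TRANSACTION'
print(errorcodes.IN_FAILED_SQL_TRANSACTION)  # -> '25P02'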
This is not an answer to the question, but merely my thoughts that don't fit in a comment. This is a scenario in which I think having pgcode set in the exception object would be helpful, but unfortunately it isn't.
I'm implementing a pg_isready-like Python script to test if a Postgres service is running on the given address and port (e.g., localhost and 49136). The address and port may or may not be used by any other program.
pg_isready internally calls internal_ping(). Pay attention to the comment that starts with "Here begins the interesting part of "ping": determine the cause...":
/*
* Here begins the interesting part of "ping": determine the cause of the
* failure in sufficient detail to decide what to return. We do not want
* to report that the server is not up just because we didn't have a valid
* password, for example. In fact, any sort of authentication request
* implies the server is up. (We need this check since the libpq side of
* things might have pulled the plug on the connection before getting an
* error as such from the postmaster.)
*/
if (conn->auth_req_received)
    return PQPING_OK;
So pg_isready also uses the fact that an authentication error on connection means the connection itself succeeded, so the service is up, too. Therefore, I can implement it as follows:
ready = True
try:
    psycopg2.connect(
        host=address,
        port=port,
        password=password,
        connect_timeout=timeout,
    )
except psycopg2.OperationalError as ex:
    ready = ("fe_sendauth: no password supplied" in str(ex))
However, when psycopg2.OperationalError is caught, ex.pgcode is None, so I can't compare error codes to see whether the exception is about authentication/authorization. I'll have to check whether the exception carries a certain message (as #dhke pointed out in the comment), which I think is fragile: the error message may be changed in a future release, while the error codes are much less likely to change.
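A minimal sketch of making that message check at least easier to maintain, by collecting the known fragments in one place (the second fragment is an assumption based on the usual libpq wording):
import psycopg2

AUTH_ERROR_FRAGMENTS = (
    "fe_sendauth: no password supplied",
    "password authentication failed",   # assumed libpq wording
)

def is_ready(address, port, password=None, timeout=3):
    try:
        psycopg2.connect(host=address, port=port, password=password,
                         connect_timeout=timeout).close()
        return True
    except psycopg2.OperationalError as ex:
        # Any authentication error still means the server answered,
        # which is exactly the heuristic pg_isready uses.
        return any(fragment in str(ex) for fragment in AUTH_ERROR_FRAGMENTS)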
Related
I'm trying to sniff a network adapter (TP-LINK, 0bda:b711) with Python 3, but I get an OSError: Could not activate the pcap handler
from scapy.all import *
from scapy.config import conf
from scapy.layers.dot11 import Dot11

conf.use_pcap = True

def callBack(pkg):
    if pkg.haslayer(Dot11):
        if pkg.type == 0 and pkg.subtype == 8:
            print("dBm_AntSignal=", pkg.dBm_AntSignal)
            print("dBm_AntNoise=", pkg.dBm_AntNoise)

sniff(iface='wlp1s1', monitor='True', prn=callBack)
I think there is something wrong with libpcap. I want to get dBm_AntSignal and dBm_AntNoise from sniff; the code runs on a MacBook according to other people (you can browse my last question). Can somebody solve this issue?
If you posted issue #1136 on the libpcap issues list, then you presumably somehow managed to determine that pcap_activate() returned PCAP_ERROR. If you did that by modifying the Scapy code, try modifying it further so that, if pcap_activate() returns PCAP_ERROR, it reports the result of pcap_geterr(), in order to find out why, in this particular instance, pcap_activate() returned PCAP_ERROR. The problem is that PCAP_ERROR can be returned for a number of different reasons, and it's difficult if not impossible to guess which one it was.
(And then file an issue on Scapy's issue list indicating that the error message for pcap_activate() failing should be based on both the return value from pcap_activate() and, for certain errors, the result of pcap_geterr(). They should also distinguish between error returns from pcap_activate(), which are negative numbers, and warning returns from pcap_activate(), which indicate that the "pcap handler" could be activated, but something unexpected happened, and are positive numbers.)
Update:
No need to file the Scapy issue; I've already submitted a pull request for the change to fix the error reporting, and it's been merged. Apply the changes from that pull request to Scapy and try again.
I was deploying my FastAPI application to GitHub using CI/CD, and the job failed with an error saying that a default connection was not defined. (In this app I use FastAPI for the API itself, mongoengine for most of the DB work, and pytest for the tests; I only use pymongo to delete the test DB content after the tests, because that is easier to do without the errors about the default connection not being set up.)
Every single route has the same dependency on the connect function, so that when I am testing I can replace it with the testing DB. (If I were not going to test, I could just call the connect() function inside main.py and it would work just fine; I tested it, gentlemen.)
@router.method('/route')
def route_function(db=Depends(connect)):
    # do something
    return {something}
The connect function consists of:
def connect():
    try:
        mongoengine.connect(host=connection_url, db=db_name,
                            alias=db_name, uuidRepresentation='standard')
    except Exception as e:
        print('Failed to connect to production db')
        print(e)
    finally:
        disconnect(alias=db_alias)
And in every test I do:
app.dependency_overrides[connect] = test_connect
(do tests)
app.dependency_overrides.clear()
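(A minimal sketch of wrapping that override in a pytest fixture so the cleanup always runs; the module paths and names here are assumptions:)
import pytest
from fastapi.testclient import TestClient
from main import app, connect             # hypothetical module layout
from tests.db_utils import test_connect   # hypothetical helper location

@pytest.fixture
def client():
    app.dependency_overrides[connect] = test_connect
    yield TestClient(app)   # tests run against the overridden dependency
    app.dependency_overrides.clear()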
And the test_connect function is:
def test_connect():
    try:
        disconnect(db_alias)
        mongoengine.connect(host=test_connection_url,
                            db=test_db_name, alias=test_db_alias,
                            uuidRepresentation='standard')
    except:
        print('Failed to connect to testing db')
    finally:
        disconnect(alias=test_db_alias)
And now that you have read this far, PLEASE read until the end so that I can explain the whole problem.
When I run the tests, the override does not replace the connect function with test_connect, even when I use tricks like lambda: test_connect to try to make it work. It doesn't...
And to complete the picture of my doubts: in the meta dictionary of each model I now only use the "db" keyword to say in which database it is going to be stored, because when I used "db_alias" it raised an error saying that the alias I used in production was not defined (the testing DB alias was different, of course).
And when I used the same alias, it raised an error saying that I did not set a default connection (and even when I used the alias "default" in the connect function, or even in the model, it still gave this missing-default-connection error, or still just didn't replace the prod DB with the test one).
So, tired of these problems, after a lot of research I found out that using only "db" could work around them.
I had an idea that if I could transform a pymongo connection into a mongoengine connection, I could solve this problem, but I don't know how to do that at all.
So after all this (first, THANK YOU SO MUCH for reading this far; second...) any ideas, please?
I'd like to set up a "timeout" for the ldap library (python-ldap-2.4.15-2.el7.x86_64) with Python 2.7.
I'm forcing my /etc/hosts to resolve to a non-existent IP address in order to trigger the timeout.
I've followed several examples and taken a look at the documentation and at questions like this one, without luck.
So far I've tried forcing global timeouts before the initialize:
ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, 1)
ldap.set_option(ldap.OPT_TIMEOUT, 1)
ldap.protocol_version = ldap.VERSION3
Forced the same values at object level:
ldap_obj = ldap.initialize("ldap://%s:389" % LDAPSERVER, trace_level=9)
ldap_obj.set_option(ldap.OPT_NETWORK_TIMEOUT, 1)
ldap_obj.set_option(ldap.OPT_TIMEOUT, 1)
I've also tried with:
ldap.network_timeout = 1
ldap.timelimit = 1
And I used both methods, search and search_st (the synchronous form with timeout).
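For example, the timed synchronous call looked like this (a sketch reusing the base DN and filter from the code below):
ldap_result = ldap_obj.search_st("ou=people,c=this,o=place",
                                 ldap.SCOPE_SUBTREE,
                                 "cn=user",
                                 timeout=1)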
Finally, this is the code:
def testLDAPConex(LDAPSERVER):
    """
    Check ldap
    """
    ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, 1)
    ldap.set_option(ldap.OPT_TIMEOUT, 1)
    ldap.protocol_version = ldap.VERSION3
    ldap.network_timeout = 1
    ldap.timelimit = 1
    try:
        ldap_obj = ldap.initialize("ldap://%s:389" % LDAPSERVER, trace_level=9)
        ldap_result_id = ldap_obj.search("ou=people,c=this,o=place", ldap.SCOPE_SUBTREE, "cn=user")
    except ldap.LDAPError as err:  # except clause added here so the snippet is complete
        print(err)
I've printed the object's OPT_NETWORK_TIMEOUT and OPT_TIMEOUT constants, and the values were assigned correctly.
The execution time is 56 s every time, and I'm not able to control the number of seconds for the timeout.
Btw, the same code in python3 does work as intended:
real 0m10,094s
user 0m0,072s
sys 0m0,013s
After some testing I decided to roll back the VM to a previous state.
The networking team had been making changes across the network configuration, which might have changed the behaviour of the interface and introduced some kind of bug when the packets were being sent over the wire in:
ldap_result_id = ldap_obj.search("ou=people,c=this,o=place", ldap.SCOPE_SUBTREE, "cn=user")
After restoring the VM, which included a restart, the error of not being able to reach LDAP is raised as expected.
I'm trying to download a Bing Ads report using the Python SDK, but I keep getting an error that says "Type not found: 'Aggregation'" after submitting a report request. I've tried all 4 options mentioned in the following link:
https://github.com/BingAds/BingAds-Python-SDK/blob/master/examples/v13/report_requests.py
The authentication process prior to the request works just fine.
I execute the following:
report_request = get_report_request(authorization_data.account_id)
reporting_download_parameters = ReportingDownloadParameters(
    report_request=report_request,
    result_file_directory=FILE_DIRECTORY,
    result_file_name=RESULT_FILE_NAME,
    overwrite_result_file=True,  # Set this value true if you want to overwrite the same file.
    timeout_in_milliseconds=TIMEOUT_IN_MILLISECONDS
)
output_status_message("-----\nAwaiting download_report...")
download_report(reporting_download_parameters)
After careful debugging, it seems that the program fails when trying to execute a command within "reporting_service_manager.py". Here is the workflow:
def download_report(self, download_parameters):
    report_file_path = self.download_file(download_parameters)
then:
def download_file(self, download_parameters):
    operation = self.submit_download(download_parameters.report_request)
then:
def submit_download(self, report_request):
    self.normalize_request(report_request)
    response = self.service_client.SubmitGenerateReport(report_request)
SubmitGenerateReport starts a sequence of events ending with a call to the _ServiceCall.__init__ function within "service_client.py", which raises the exception "Type not found: 'Aggregation'":
try:
    response = self.service_client.soap_client.service.__getattr__(self.name)(*args, **kwargs)
    return response
except Exception as ex:
    if need_to_refresh_token is False \
            and self.service_client.refresh_oauth_tokens_automatically \
            and self.service_client._is_expired_token_exception(ex):
        need_to_refresh_token = True
    else:
        raise ex
Can anyone shed some light?
Thanks
Please be sure to set Aggregation, e.g., as shown here.
aggregation = 'Daily'
If the report type does not use aggregation, you can set Aggregation=None.
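For illustration, a minimal sketch of where Aggregation fits, following the linked SDK examples (the report type here is an assumption):
report_request = reporting_service.factory.create('AccountPerformanceReportRequest')
report_request.Aggregation = 'Daily'  # or None for report types without aggregation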
Does this help?
This may be a bit late, 2 months after the fact, but maybe it will help someone else. I had the same error (though I suppose it may not be the same issue). It looks like you did what I did (and I'm sure others will as well): copy-pasted the Microsoft example code and tried to run it, only to find that it didn't work.
I spent quite some time trying to debug the issue and it looked to me like the XML wasn't being searched correctly. I was using suds-py3 for the script at the time so I tried suds-community and everything just worked after that.
I also re-read the Bing Ads API getting-started walkthrough and found that they recommend suds-jurko instead.
Long story short: If you want to use the bingads API don't use suds-py3, use either suds-community (which I can confirm works for everything I've used the API for) or suds-jurko (which is the one recommended by Microsoft).
I have a database table with a UNIQUE key. When I want to insert a record, there are two possible cases. First, the unique item doesn't exist yet; that's OK, just return the new id. Second, the item already exists, and I need to get the id of this existing record.
The problem is that, whatever I try, I always get some exception.
Here's an example of the code:
def __init__(self, host, user, password, database):
    # set basic attributes
    super().__init__(host, user, password, database)
    # open connection
    try:
        self.__cnx = mysql.connector.connect(
            database=database, user=user, password=password, host=host)
        #self.__cursor = self.__cnx.cursor()
    except ...

def insert_domain(self, domain):
    insertq = "INSERT INTO `sp_domains` (`domain`) VALUES ('{0}')".format(domain)
    cursor = self.__cnx.cursor()
    try:
        cursor.execute(insertq)
        print("unique")
    except (mysql.connector.errors.IntegrityError) as err:
        self.__cnx.commit()
        print("duplicate")
        s = "SELECT `domain_id` FROM `sp_domains` WHERE `domain` = '{0}';".format(domain)
        try:
            id = cursor.execute(s).fetchone()[0]
        except AttributeError as err:
            print("Unable to execute the query:", err, file=sys.stderr)
        except mysql.connector.errors.ProgrammingError as err:
            print("Query syntax error:", err, file=sys.stderr)
        else:
            self.__cnx.commit()
    cursor.close()
but whatever I try, on the first duplicate record I get either 'MySQL Connection not available' or 'Unread result'. The code is just an example to demonstrate the problem.
This is my first program using Connector/Python, so I don't know all the rules about fetching results, committing queries, and so on.
Could anyone help me with this issue, please? Or is there a more efficient way to handle such a task (because this one doesn't seem like the best solution to me)? Thank you for any advice.
I can't fix your code, because you've given us two different versions of the code and two partially-described errors without full information, but I can tell you how to get started.
From a comment:
In previous version it was type error I guess, something like "NoneType has no attribute 'fetchone'.
Looking at your code, the only place you call fetchone is here:
id = cursor.execute(s).fetchone()[0]
So obviously, cursor.execute(s) returned None. Why did it return None? Well, that's what it's supposed to return, according to the documentation.*
What you want to do is:
cursor.execute(s)
id = cursor.fetchone()[0]
… as all of the sample code does.
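For the record, here is a sketch of the question's insert_domain with that fix applied, switched to parameterized queries (an extra improvement, not something the original code did):
def insert_domain(self, domain):
    cursor = self.__cnx.cursor()
    try:
        cursor.execute("INSERT INTO `sp_domains` (`domain`) VALUES (%s)", (domain,))
        domain_id = cursor.lastrowid  # new row: use the generated id
    except mysql.connector.errors.IntegrityError:
        cursor.execute("SELECT `domain_id` FROM `sp_domains` WHERE `domain` = %s", (domain,))
        domain_id = cursor.fetchone()[0]  # duplicate: fetch the existing id
    self.__cnx.commit()
    cursor.close()
    return domain_id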
And for future reference, it's a lot easier to debug an error like this if you first note which line it happens on instead of throwing away the traceback, and then break that line into pieces and log the intermediate values. Usually, you'll find one that isn't what you expected, and the problem will be much more obvious at that point than three steps later, when you get a bizarre exception.
* Technically, the documentation just says that "Return values are not defined" for cursor.execute, so it would be perfectly legal for a DB-API module to return self here. Then again, it would also be legal to return some object that erases your hard drive when you call a method on it.