This question is related to How do we handle Python xmlrpclib Connection Refused?
When I try to use the following code with my RPC server down, _get_rpc() returns False and I'm good to go. However, if the server is running, it fails with an unknown-method fault. Is it trying to execute .connect() on the remote server? How can I get around this, when I need to use .connect() to detect whether the returned proxy works (see the related question)?
import xmlrpclib
import socket

def _get_rpc():
    try:
        a = xmlrpclib.ServerProxy('http://dd:LNXFhcZnYshy5mKyOFfy@127.0.0.1:9001')
        a.connect()  # Try to connect to the server
        return a.supervisor
    except socket.error:
        return False

if not _get_rpc():
    print "Failed to connect"
Here is the issue:
ahiscox@lenovo:~/code/dd$ python xmlrpctest2.py
Failed to connect
ahiscox@lenovo:~/code/dd$ supervisord -c ~/.supervisor # start up RPC server
ahiscox@lenovo:~/code/dd$ python xmlrpctest2.py
Traceback (most recent call last):
File "xmlrpctest2.py", line 13, in <module>
if not _get_rpc():
File "xmlrpctest2.py", line 7, in _get_rpc
a.connect() # Try to connect to the server
File "/usr/lib/python2.6/xmlrpclib.py", line 1199, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.6/xmlrpclib.py", line 1489, in __request
verbose=self.__verbose
File "/usr/lib/python2.6/xmlrpclib.py", line 1253, in request
return self._parse_response(h.getfile(), sock)
File "/usr/lib/python2.6/xmlrpclib.py", line 1392, in _parse_response
return u.close()
File "/usr/lib/python2.6/xmlrpclib.py", line 838, in close
raise Fault(**self._stack[0])
xmlrpclib.Fault: <Fault 1: 'UNKNOWN_METHOD'>
Well, I was just looking into it; my old method is no good because xmlrpclib.ServerProxy only tries to connect to the XML-RPC server when you call a method, not before!
Try this instead:
import xmlrpclib
import socket

def _get_rpc():
    a = xmlrpclib.ServerProxy('http://dd:LNXFhcZnYshy5mKyOFfy@127.0.0.1:9001')
    try:
        a._()  # Call a fictive method.
    except xmlrpclib.Fault:
        # Connected to the server and the method doesn't exist, which is expected.
        pass
    except socket.error:
        # Not connected; a socket error means the service is unreachable.
        return False, None
    # Just in case the method is registered on the XML-RPC server.
    return True, a

connected, server_proxy = _get_rpc()
if not connected:
    print "Failed to connect"
    import sys
    sys.exit(1)
To summarize, we have 3 cases:

1. The XML-RPC server is up and it defines a method called _():
(EDIT: I chose the name _ because it is unlikely that a method with this name exists, but this case can still happen.)
In this case no exception will be caught and the code will reach the return True.

2. The XML-RPC server is up and no method called _() is defined:
This time xmlrpclib.Fault will be raised and we will also reach the return True.

3. The XML-RPC server is down:
Now the socket.error exception will be raised when we call a._(), so we should return False.

I don't know if there is an easier way to do this and I would love to see it; until then, I hope this fixes things this time :)
N.B.: When you do if a:, Python will again look for a __nonzero__() method to test the boolean value of a, and that will fail too, because the proxy turns the lookup into another remote call.
N.B. 2: Some XML-RPC services offer an RPC path dedicated to authentication; on that path the service exposes methods like login(). This kind of method can replace the _() method in our case: just calling login() is enough to know whether the service is up or down (socket.error), and at the same time it authenticates the user if the service is up.
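For example, a rough sketch of that idea (the /auth path, the login() name, and the credentials are hypothetical; adapt them to whatever your service actually exposes):

import xmlrpclib
import socket

def _get_rpc_via_login(user, password):
    a = xmlrpclib.ServerProxy('http://127.0.0.1:9001/auth')  # hypothetical auth path
    try:
        a.login(user, password)  # probes the server and authenticates in one call
    except socket.error:
        # The service is unreachable.
        return False, None
    except xmlrpclib.Fault:
        # The server is up, but it rejected the call (bad credentials or a different method name).
        pass
    return True, a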
I am trying to check whether a domain name has MX records using the dnspython module. I am getting the following error while looking up the MX records. Can anyone explain why I am facing this issue?
Traceback (most recent call last):
File "c:\Users\iamfa\OneDrive\Desktop\test\email_mx.py", line 26, in <module>
dns.resolver.resolve("cmrit.ac.in", 'MX')
File "c:\Users\iamfa\OneDrive\Desktop\test\env1\lib\site-packages\dns\resolver.py", line 1193, in resolve
return get_default_resolver().resolve(qname, rdtype, rdclass, tcp, source,
File "c:\Users\iamfa\OneDrive\Desktop\test\env1\lib\site-packages\dns\resolver.py", line 1066, in resolve
timeout = self._compute_timeout(start, lifetime,
File "c:\Users\iamfa\OneDrive\Desktop\test\env1\lib\site-packages\dns\resolver.py", line 879, in _compute_timeout
raise LifetimeTimeout(timeout=duration, errors=errors)
dns.resolver.LifetimeTimeout: The resolution lifetime expired after 5.001 seconds: Server 10.24.0.1 UDP port 53 answered The DNS operation timed out.; Server 198.51.100.1 UDP port 53 answered The DNS operation timed out.; Server 10.95.11.110 UDP port 53 answered The DNS operation timed out.
This is my code:
import dns.resolver

if dns.resolver.resolve("cmrit.ac.in", 'MX'):
    print(True)
else:
    print(False)
However, it was working fine until yesterday; when I run the same code today I am facing this issue.
If the remote DNS server takes a long time to respond, or accepts the connection but does not respond at all, the only thing you can really do is move along. Perhaps try again later. You can catch the error with try/except:
import dns.resolver

try:
    if dns.resolver.resolve("cmrit.ac.in", 'MX'):
        print(True)
    else:
        print(False)
except dns.resolver.LifetimeTimeout:
    print("timed out, try again later maybe?")
If you want to apply a longer timeout, the resolve method accepts a lifetime keyword argument which is documented in the Resolver.resolve documentation.
The Resolver class (documented at the top of the same page) also has a timeout parameter you can tweak if you build your own resolver.
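For example, a short sketch of both knobs (the numbers are arbitrary):

import dns.resolver

# Per-call: allow up to 20 seconds for the whole resolution.
answers = dns.resolver.resolve("cmrit.ac.in", "MX", lifetime=20)

# Or build your own resolver and tune it.
resolver = dns.resolver.Resolver()
resolver.timeout = 10    # seconds to wait on each individual nameserver
resolver.lifetime = 30   # total seconds allowed for the whole query
answers = resolver.resolve("cmrit.ac.in", "MX")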
For production code, you should probably add the other possible errors to the except clause; the documentation linked above shows precisely which exceptions resolve can raise.
...
except (dns.resolver.LifetimeTimeout, dns.resolver.NXDOMAIN,
        dns.resolver.YXDOMAIN, dns.resolver.NoAnswer,
        dns.resolver.NoNameservers) as err:
    print("halp, something went wrong:", err)
Probably there is a base exception class which all of these inherit from; I was too lazy to go back and check. Then you only have to list the base class in the except statement.
It's probably more useful to extract the actual MX record and display it, rather than just print True, but that's a separate topic.
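If you do want that, here is a minimal sketch (dns.exception.DNSException is dnspython's common base class, so it also serves as the broad catch mentioned above):

import dns.exception
import dns.resolver

try:
    answers = dns.resolver.resolve("cmrit.ac.in", "MX")
    for rdata in answers:
        # Each MX answer carries a preference value and the mail exchange host name.
        print(rdata.preference, rdata.exchange)
except dns.exception.DNSException as err:
    print("lookup failed:", err)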
Your error message indicates that you were able to connect to your own resolver at 10.24.0.1 but in the general case, this error could also happen if your network (firewall etc) prevents you from accessing DNS for some reason.
I am attempting to send an email with AWS SES using a Dockerized Python program. When I try to connect to SES by making an SMTP instance, the program hangs and times out.
To reproduce:
Start the Python 3.6.8 REPL
Import smtplib
>>> import smtplib
Try to connect to SES, and observe how the statement hangs.
>>> conn = smtplib.SMTP(host='email-smtp.us-west-2.amazonaws.com', port=25)
# terminal hangs for 10 seconds
Try using the connection, which fails.
>>> conn.noop()
(451, b'4.4.2 Timeout waiting for data from client.')
>>> conn.noop()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python3.6/smtplib.py", line 514, in noop
return self.docmd("noop")
File "/usr/lib64/python3.6/smtplib.py", line 421, in docmd
return self.getreply()
File "/usr/lib64/python3.6/smtplib.py", line 394, in getreply
raise SMTPServerDisconnected("Connection unexpectedly closed")
smtplib.SMTPServerDisconnected: Connection unexpectedly closed
Unsurprisingly, step (3) sends a DNS query for the SES endpoint and then connects; the SES endpoint responds with 220 Service ready almost instantaneously. No further traffic is exchanged until 10 seconds later, when SES closes the connection with 451 Timeout waiting for data from client. The statement then completes and the rest of my program runs, which, of course, doesn't work.
For context, the rest of my script is as follows:
smtp_host = 'email-smtp.us-west-2.amazonaws.com'
smtp_port = 25
smtp_proto = 'tls'

with smtplib.SMTP(host=smtp_host, port=smtp_port) as connection:
    try:
        ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23 if smtp_proto == 'ssl' else ssl.PROTOCOL_TLS)
        connection.starttls(context=ctx)
        connection.ehlo()
        connection.login(smtp_user, smtp_pass)
        connection.send_message(from_addr=sender, to_addrs=recipients, msg=message)
    except smtplib.SMTPHeloError as e:
        print(f"SMTP HELO Failed: {e}")
    except smtplib.SMTPAuthenticationError as e:
        print(f"SMTP AUTH Failed: {e}")
    except smtplib.SMTPException as e:
        print(f"Failed to send email: {e}")
I've attempted to connect on ports 25, 587, 2587, and 465. I've done the same with the SMTP_SSL object instead, as well as changing the context in the starttls call to various SSL / TLS versions (the SSL versions result in this error - but this isn't relevant since I can't get to this portion of the script, anyway).
I've tested my connection to SES according to this article. I've also tried parts of this and this SO post (as well as a myriad of others that are lost in my browser history).
To actually send emails, I need to connect a second time. Connecting, then waiting for a timeout, then connecting again, seems wrong. What is the proper way to do this?
I'm getting the error below when trying to call a stub method.
Any idea what is causing it?
[bolt.api.handlers] 2019-08-21 20:07:57,792 ERROR handlers:1066: 'ResourceHandler' object has no attribute 'ontology_service_handler'
Traceback (most recent call last):
File "/bolt-webserver/bolt/api/onse/onse_handlers/ontology_service.py", line 17, in post
ontology_id = await self.onse_stub.createOntology()
File "/bolt-webserver/bolt/api/onse/onse_stub.py", line 41, in createOntology
return self.stub.CreateOntology(ontology_messages_pb2.Ontology())
File "/usr/local/lib/python3.6/site-packages/grpc/_channel.py", line 565, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.6/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"#1566418077.791002345","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3818,"referenced_errors":[{"created":"#1566418077.790965749","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}"
I've tried providing an IP address instead of a hostname but am still getting the same error. The OnseStub class is initialized right before calling the createOntology method.
The service is up and running.
The failing call is made from a Tornado web app (in case that matters).
class OnseStub:
    def __init__(self, ontology_service_backend):
        self.channel = grpc.insecure_channel('localhost:51051')
        self.stub = ontology_service_pb2_grpc.OntologyServiceStub(self.channel)

    def __del__(self):
        if self.channel != None:
            self.channel.close()  # close grpc channel

    async def createOntology(self):
        return self.stub.CreateOntology(ontology_messages_pb2.Ontology())
What fixed it for me was adding the following option to the client channel.
grpc.insecure_channel('localhost:50051', options=(('grpc.enable_http_proxy', 0),))
This is missing in the grpc quickstarts as well and is not really highlighted.
This is a common error and it can occur in different cases, but the most usual case, described in https://github.com/grpc/grpc/issues/9987, can be fixed by unsetting the http_proxy environment variable:
import os

if os.environ.get('https_proxy'):
    del os.environ['https_proxy']
if os.environ.get('http_proxy'):
    del os.environ['http_proxy']
If you are using unix domain sockets, then it is possible you have stale socket files. Be sure to remove old files before making grpc calls.
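For example, a small sketch of that cleanup (the socket path is hypothetical; the removal belongs in the process that binds the socket, before it starts serving):

import os

SOCKET_PATH = "/tmp/my_grpc_service.sock"  # hypothetical path

# Remove a stale socket file left over from a crashed or killed server;
# otherwise binding (and subsequent client calls) can fail.
if os.path.exists(SOCKET_PATH):
    os.remove(SOCKET_PATH)

# server.add_insecure_port("unix://" + SOCKET_PATH)
# channel = grpc.insecure_channel("unix://" + SOCKET_PATH)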
I'm implementing gRPC client and server in python. Server receives data from client successfully, but client receives back "RST_STREAM with error code 2".
What does it actually mean, and how do I fix it?
Here's my proto file:
service MyApi {
    rpc SelectModelForDataset (Dataset) returns (SelectedModel) {
    }
}

message Dataset {
    // ...
}

message SelectedModel {
    // ...
}
My servicer implementation looks like this:
class MyApiServicer(my_api_pb2_grpc.MyApiServicer):
    def SelectModelForDataset(self, request, context):
        print("Processing started.")
        selectedModel = ModelSelectionModule.run(request, context)
        print("Processing Completed.")
        return selectedModel
I start server with this code:
import grpc
from concurrent import futures
#...
server = grpc.server(futures.ThreadPoolExecutor(max_workers=100))
my_api_pb2_grpc.add_MyApiServicer_to_server(MyApiServicer(), server)
server.add_insecure_port('[::]:50051')
server.start()
My client looks like this:
channel = grpc.insecure_channel(target='localhost:50051')
stub = my_api_pb2_grpc.MyApiStub(channel)
dataset = my_api_pb2.Dataset()
# fill the object ...
model = stub.SelectModelForDataset(dataset) # call server
After client makes it's call, server starts the processing until completed (takes a minute, approximately), but the client returns immediately with the following error:
Traceback (most recent call last):
File "Client.py", line 32, in <module>
run()
File "Client.py", line 26, in run
model = stub.SelectModelForDataset(dataset) # call server
File "/usr/local/lib/python3.5/dist-packages/grpc/_channel.py", line 484, in __call__
return _end_unary_response_blocking(state, call, False, deadline)
File "/usr/local/lib/python3.5/dist-packages/grpc/_channel.py", line 434, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.INTERNAL, Received RST_STREAM with error code 2)>
If I do the request asynchronously and wait on future,
model_future = stub.SelectModelForDataset.future(dataset) # call server
model = model_future.result()
the client waits until completion, but after that still returns an error:
Traceback (most recent call last):
File "AsyncClient.py", line 35, in <module>
run()
File "AsyncClient.py", line 29, in run
model = model_future.result()
File "/usr/local/lib/python3.5/dist-packages/grpc/_channel.py", line 276, in result
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.INTERNAL, Received RST_STREAM with error code 2)>
UPD: After enabling tracing GRPC_TRACE=all I discovered the following:
Client, immediately after request:
E0109 17:59:42.248727600 1981 channel_connectivity.cc:126] watch_completion_error: {"created":"@1515520782.248638500","description":"GOAWAY received","file":"src/core/ext/transport/chttp2/transport/chttp2_transport.cc","file_line":1137,"http2_error":0,"raw_bytes":"Server shutdown"}
E0109 17:59:42.451048100 1979 channel_connectivity.cc:126] watch_completion_error: "Cancelled"
E0109 17:59:42.451160000 1979 completion_queue.cc:659] Operation failed: tag=0x7f6e5cd1caf8, error={"created":"@1515520782.451034300","description":"Timed out waiting for connection state change","file":"src/core/ext/filters/client_channel/channel_connectivity.cc","file_line":133}
...(last two messages keep repeating 5 times every second)
Server:
E0109 17:59:42.248201000 1985 completion_queue.cc:659] Operation failed: tag=0x7f3f74febee8, error={"created":"@1515520782.248170000","description":"Server Shutdown","file":"src/core/lib/surface/server.cc","file_line":1249}
E0109 17:59:42.248541100 1975 tcp_server_posix.cc:231] Failed accept4: Invalid argument
E0109 17:59:47.362868700 1994 completion_queue.cc:659] Operation failed: tag=0x7f3f74febee8, error={"created":"@1515520787.362853500","description":"Server Shutdown","file":"src/core/lib/surface/server.cc","file_line":1249}
E0109 17:59:52.430612500 2000 completion_queue.cc:659] Operation failed: tag=0x7f3f74febee8, error={"created":"@1515520792.430598800","description":"Server Shutdown","file":"src/core/lib/surface/server.cc","file_line":1249}
... (last message kept repeating every few seconds)
UPD2:
The full content of my Server.py file:
import ModelSelectionModule
import my_api_pb2_grpc
import my_api_pb2
import grpc
from concurrent import futures
import time


class MyApiServicer(my_api_pb2_grpc.MyApiServicer):
    def SelectModelForDataset(self, request, context):
        print("Processing started.")
        selectedModel = ModelSelectionModule.run(request, context)
        print("Processing Completed.")
        return selectedModel


# TODO(shalamov): what is the best way to run a python server?
def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=100))
    my_api_pb2_grpc.add_MyApiServicer_to_server(MyApiServicer(), server)
    server.add_insecure_port('[::]:50051')
    server.start()
    print("gRPC server started\n")
    try:
        while True:
            time.sleep(24 * 60 * 60)  # run for 24h
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    serve()
UPD3:
It seems that ModelSelectionModule.run is causing the problem. I tried to isolate it in a separate thread, but it didn't help. The selectedModel is eventually calculated, but the client is already gone by that time. How do I prevent this call from messing with grpc?
pool = ThreadPool(processes=1)
async_result = pool.apply_async(ModelSelectionModule.run(request, context))
selectedModel = async_result.get()
The call is rather complicated; it spawns and joins lots of threads and calls different libraries like scikit-learn, SMAC, and others. It would be too much to post all of it here.
While debugging, I discovered that after the client's request, the server keeps 2 connections open (fd 3 and fd 8). If I manually close fd 8 or write some bytes to it, the error I see in the client becomes Stream removed (instead of Received RST_STREAM with error code 2). It seems that the socket (fd 8) somehow becomes corrupted by the child processes. How is that possible? How can I protect the socket from being accessed by child processes?
This is a result of using fork() in the process handler. gRPC Python doesn't support this use case.
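If the fork-heavy work itself can't be changed, one possible workaround (a sketch, not a guaranteed fix) is to run it in a process started with the "spawn" method, so the worker is a fresh interpreter and does not inherit the server's gRPC sockets. It assumes the request is the only input ModelSelectionModule.run really needs and that its result is picklable:

import multiprocessing as mp

def _run_model_selection(dataset_bytes):
    # Runs in a freshly spawned interpreter: re-import and rebuild the request there.
    import my_api_pb2
    import ModelSelectionModule
    dataset = my_api_pb2.Dataset()
    dataset.ParseFromString(dataset_bytes)
    return ModelSelectionModule.run(dataset)  # assumes the gRPC context is not required

class MyApiServicer(my_api_pb2_grpc.MyApiServicer):
    def SelectModelForDataset(self, request, context):
        ctx = mp.get_context("spawn")  # no fork, so no inherited file descriptors
        with ctx.Pool(1) as pool:
            return pool.apply(_run_model_selection, (request.SerializeToString(),))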
I met this problem and solved it just now. Did you use the with_call() method?
Failing code:
response = stub.SayHello.with_call(request=request, metadata=metadata)
And the response is a tuple.
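That is, with with_call() you have to unpack two values (a sketch built on this answer's example; the reply field name depends on your proto):

# with_call() returns (response, call): the reply message plus RPC details.
response, call = stub.SayHello.with_call(request=request, metadata=metadata)
print(call.code())       # grpc.StatusCode of the completed RPC
print(response.message)  # the actual reply field (name depends on your proto)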
Working code: don't use with_call():
response = stub.SayHello(request=request, metadata=metadata)
Here the response is a plain response object.
I'm trying to figure out how to deploy a Bokeh slider chart over an IIS server.
I recently finished up a Flask application, so I figured I'd try the route where you embed through flask:
https://github.com/bokeh/bokeh/tree/master/examples/howto/server_embed
It's nice and easy when I launch the script locally, but I can't seem to set it up properly over IIS. I believe the complexity stems from the fact that the wfastcgi.py module I'm using to deploy over IIS can't easily multi-thread without some sort of hack-like workaround.
So, my second attempt was to wrap the Flask app in Tornado as in OPTION B of the post below (without much success, but I still think this is my best lead here):
Run Flask as threaded on IIS 7
My third attempt was to run the Bokeh server standalone on a specific port. I figured I'd be able to run the server via standalone_embed.py using wfastcgi.py on, say, port 8888 while using port 5000 for the server callbacks. However, the Server function:
from bokeh.server.server import Server
still launches it locally on the host machine
server = Server({'/': bokeh_app}, io_loop=io_loop, port=5000)
server.start()
So this actually works if I go to http://localhost:5000/ on the host,
but fails if I go to http://%my_host_ip%:5000/ from a remote machine.
I even tried manually setting the host but get an "invalid host" error:
server = Server({'/': bokeh_app}, io_loop=io_loop, host='%my_host_ip_address_here%:5000')
server.start()
ERR:
Error occurred while reading WSGI handler:

Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\bokeh\server\server.py", line 45, in _create_hosts_whitelist
    int(parts[1])
ValueError: invalid literal for int() with base 10: ''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\WebsitesFlask\bokehTest\wfastcgi.py", line 711, in main
    env, handler = read_wsgi_handler(response.physical_path)
  File "C:\WebsitesFlask\bokehTest\wfastcgi.py", line 568, in read_wsgi_handler
    return env, get_wsgi_handler(handler_name)
  File "C:\WebsitesFlask\bokehTest\wfastcgi.py", line 537, in get_wsgi_handler
    handler = __import__(module_name, fromlist=[name_list[0][0]])
  File ".\app.py", line 41, in <module>
    server = Server({'/': bokeh_app}, io_loop=io_loop, host='%my_host_ip_address_here%:5000')
  File "C:\Python34\lib\site-packages\bokeh\server\server.py", line 123, in __init__
    tornado_kwargs['hosts'] = _create_hosts_whitelist(kwargs.get('host'), self._port)
  File "C:\Python34\lib\site-packages\bokeh\server\server.py", line 47, in _create_hosts_whitelist
    raise ValueError("Invalid port in host value: %s" % host)
ValueError: Invalid port in host value: :

StdOut:

StdErr:
First off, the --host parameter should no longer be needed in the next 0.12.5 release. It's probably been the most confusing stumbling block for people trying to deploy a Bokeh server app in a "production" environment. You can follow the discussion on this issue on GitHub for more details.
Looking at the actual implementation in Bokeh that generates the error you are seeing, it is just this:
parts = host.split(':')
if len(parts) == 1:
    if parts[0] == "":
        raise ValueError("Empty host value")
    hosts.append(host+":80")
elif len(parts) == 2:
    try:
        int(parts[1])
    except ValueError:
        raise ValueError("Invalid port in host value: %s" % host)
The exception you are reporting states that int(parts[1]) is failing:
Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\bokeh\server\server.py", line 45, in _create_hosts_whitelist
    int(parts[1])
ValueError: invalid literal for int() with base 10: ''
So, there is something amiss with the string you are passing for host that is causing the part after the colon to fail to convert to an int. But without seeing the actual string, it's impossible to say much more. Maybe there is some encoding issue that needs to be handled differently or better. If you can provide a concrete string example that reproduces the problem, I can take a closer look.
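For reference, a minimal sketch of a host value that passes that check (the address is a placeholder; use your machine's real IP or hostname plus a numeric port, with bokeh_app and io_loop as in your question):

from bokeh.server.server import Server

# The part after the colon must parse as an integer port, e.g. "192.0.2.10:5000".
server = Server({'/': bokeh_app}, io_loop=io_loop, host='192.0.2.10:5000')
server.start()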