I am using the kubernetes-client library in Python and, looking at the various examples, it appears we don't need to explicitly close the client connection to the API server. Does the client connection get terminated automatically, or are the examples missing the call to close the connection? I also found the docs page for the APIs (AppsV1 for example), and the examples shown there use a context manager for the calls, so the connection gets closed automatically there, but I still have questions about the scripts that don't use the context-manager approach.
Kubernetes's API is HTTP-based, so you can often get away without explicitly closing a connection. If you have a short script, things should get cleaned up automatically at the end of the script and it's okay to not explicitly close things.
The specific documentation page you link to shows a safe way to do it:
with kubernetes.client.ApiClient(configuration) as api_client:
    api_instance = kubernetes.client.AppsV1Api(api_client)
    api_instance.create_namespaced_controller_revision(...)
The per-API-version client object is stateless if you pass in an ApiClient to its constructor, so it's safe to create these objects as needed.
The ApiClient class includes an explicit close method, so you could also do this (less safely) without the context-manager syntax:
api_client = kubernetes.client.ApiClient(configuration)
apps_client = kubernetes.client.AppsV1Api(api_client)
...
api_client.close()  # not reached if an exception is raised above
The library client front-page README suggests a path that doesn't explicitly create an ApiClient. Looking at the generated code for one of the models, if you don't pass an ApiClient explicitly, a new one, including its own connection pool, is created for each API-version client object. That can leak local memory and open extra connections to the cluster, but this may not matter for small scripts.
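For illustration, here is a minimal sketch of the two patterns, assuming a kubeconfig that load_kube_config() can find (the listing call is just an example):
import kubernetes

kubernetes.config.load_kube_config()

# Implicit: each API-version object builds its own ApiClient,
# and therefore its own connection pool.
apps = kubernetes.client.AppsV1Api()
core = kubernetes.client.CoreV1Api()

# Explicit: both API-version objects share a single ApiClient
# (one connection pool), which is closed when the block exits.
with kubernetes.client.ApiClient() as api_client:
    apps = kubernetes.client.AppsV1Api(api_client)
    core = kubernetes.client.CoreV1Api(api_client)
    deployments = apps.list_deployment_for_all_namespaces()
    print(len(deployments.items))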
Related
I am using SolrClient for Python with Solr 6.6.2. It works as expected, but I cannot find anything in the documentation about closing the connection after opening it.
def getdocbyid(docidlist):
    for id in docidlist:
        solr = SolrClient('http://localhost:8983/solr', auth=("solradmin", "Admin098"))
        doc = solr.get('Collection_Test', doc_id=id)
        print(doc)
I do not know if the client closes it automatically or not. If it doesn't, wouldn't it be a problem if several connections are left open? I just want to know if there is any way to close the connection. Here is the link to the documentation:
https://solrclient.readthedocs.io/en/latest/
The connections are not kept around indefinitely. The standard idle timeout for a persistent HTTP connection in Jetty is five seconds, as far as I remember, so you do not have to worry about the number of kept-alive connections exploding.
The Jetty server will also just drop the connection if required, as it is not required to keep it around as a guarantee for the client. SolrClient uses a requests session internally, so it should reuse the underlying connection (keep-alive) for subsequent queries. If you run into issues with this, you can keep a set of clients available as a pool in your application instead, then request an available client rather than creating a new one each time.
However, I'm pretty sure you won't run into any issues with the default settings.
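If you do end up wanting a pool, here is a minimal sketch, reusing the URL and credentials from the question (the pool size is an arbitrary assumption):
import queue
from SolrClient import SolrClient

POOL_SIZE = 4
_pool = queue.Queue()
for _ in range(POOL_SIZE):
    _pool.put(SolrClient('http://localhost:8983/solr',
                         auth=("solradmin", "Admin098")))

def getdocbyid(docidlist):
    solr = _pool.get()                # block until a client is free
    try:
        for doc_id in docidlist:
            print(solr.get('Collection_Test', doc_id=doc_id))
    finally:
        _pool.put(solr)               # return the client for reuse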
I am writing python code on top of the openstack shade library.
Connecting to a stack is pretty straightforward:
return shade.openstack_cloud(cloud='mycloud', **auth_data)
Now I am simply wondering: is there a canonical way to disconnect when I am done?
Or is the assumption that my script ending will do a "graceful" shutdown of that connection, not leaving anything behind?
OpenStack works on a RESTful API model. This means the connections are stateless: it makes an HTTP connection when you issue a request and closes that connection when the request finishes.
The above code simply initialises things by reading your config, authentication data, etc. A connection is not made until you do something with that object, e.g. create an image:
image = cloud.create_image('ubuntu-trusty',
                           filename='ubuntu-trusty.qcow2',
                           wait=True)
In summary, no, you don't need to disconnect, shade's underlying code will take care of closing connections.
High-level overview:
I have a server.py file and a WorkTask.py file containing a class with an execute function, both stored on my server. I also have a client.py that runs remotely and connects to the server using Pyro. Is there any way that I can pass the WorkTask class from the server to the client and then run WorkTask.execute() on the client side?
Only if you have a copy of WorkTask.py on the client already, and are using the pickle serializer.
You could also perhaps look at Pyro4.utils.flame.createModule(). See https://pythonhosted.org/Pyro4/flame.html
It's a big security risk, though, because using pickle allows arbitrary code execution if you connect to an untrusted remote party.
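Here is a minimal sketch of that pattern with Pyro4, assuming WorkTask.py defines a WorkTask class and is importable on both machines (the names and URI handling are illustrative):
# server.py -- hands WorkTask instances to clients
import Pyro4
from WorkTask import WorkTask

# accept pickle on the server; dangerous with untrusted clients
Pyro4.config.SERIALIZERS_ACCEPTED.add('pickle')

@Pyro4.expose
class Dispatcher(object):
    def get_task(self):
        return WorkTask()            # pickled and shipped to the client

daemon = Pyro4.Daemon()
print(daemon.register(Dispatcher))   # prints the URI for the client
daemon.requestLoop()

# client.py -- needs its own copy of WorkTask.py to unpickle the object
import Pyro4

Pyro4.config.SERIALIZER = 'pickle'
dispatcher = Pyro4.Proxy('PYRO:...')  # the URI printed by the server
task = dispatcher.get_task()
task.execute()                        # runs locally, on the client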
I have a Flask app that accepts HTTP requests. When certain HTTP requests come in, I want to trigger a message on a ZeroMQ stream. I'd like to keep the ZeroMQ stream open all the time, and I'm wondering what the appropriate way to do this is. Since it is recommended to use gunicorn with Flask in production, doesn't that mean there will be multiple instances of the Flask app, and if I put the ZeroMQ connection in the same place as the Flask app, won't only one of those instances be able to connect while the others fail?
I use a threading.local() object to store the ZeroMQ context and socket objects.
That way I can reuse the already-connected sockets inside a thread, while ensuring each thread has its own socket objects.
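Here is a minimal sketch of that pattern, assuming PUSH sockets connecting to a fixed endpoint (the address, socket type and route are assumptions):
import threading
import zmq
from flask import Flask

app = Flask(__name__)
_ctx = zmq.Context.instance()   # one context per process is fine
_tls = threading.local()

def get_socket():
    # Reuse this thread's socket, connecting it on first use.
    sock = getattr(_tls, 'socket', None)
    if sock is None:
        sock = _ctx.socket(zmq.PUSH)
        sock.connect('tcp://127.0.0.1:5555')
        _tls.socket = sock
    return sock

@app.route('/event')
def event():
    get_socket().send_string('event fired')
    return 'ok'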
Is the ZMQ socket in your app connect()-ing, or is it bind()-ing? If your app is considered the client and it's connecting, then multiple instances should be able to connect without issue. If it's considered the server and it's binding, then yes, you'll have problems... but in your case, it seems like you should consider your Flask app to be more transient, and thus the client, and the other end to be more reliable, and thus the server.
But it's hard to really give any concrete advice without code, there's only so much I can intuit from the little information you've given.
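To make the bind/connect distinction concrete, here is a minimal sketch of the stable end, which binds once and receives from however many transient Flask workers connect (the socket types and port are assumptions):
import zmq

ctx = zmq.Context.instance()
pull = ctx.socket(zmq.PULL)
pull.bind('tcp://*:5555')       # the stable end binds

while True:
    print(pull.recv_string())   # messages from all connected workers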
A ZeroMQ context is thread-safe and can be shared across threads, but sockets must not be shared between threads. If you keep a socket used exclusively by one worker thread, you can reuse that socket.
Anyway, I would start by creating a new socket for every request and see if there is any need to go into the complexities of sharing a ZeroMQ connection. Setting up a ZeroMQ socket is often rather fast.
I am looking for an RPC library in Java or Python (Python is preferred) that uses TCP. It should support:
Asynchronous
Bidirectional
RPC
Some sort of event loop (with callbacks or similar)
Any recommendations? I have looked at things like bjsonrpc, which seemed to be the right sort of thing; however, the server didn't seem able to identify which connection a request came from. So if a user has identified himself/herself and a request comes in from another user to send a message to the first user, it doesn't expose that user's connection so we can send the message.
You should definitely check out Twisted. It's an event-based Python networking framework that has an implementation of an event loop (called the "reactor") supporting select, poll, epoll, kqueue and I/O completion ports, and it mediates asynchronous calls with objects called Deferreds.
As for your RPC requirement, perhaps you should check out Twisted's PB library or AMP.
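As a rough illustration of Perspective Broker, here is a minimal sketch (the port and method names are arbitrary):
# pbserver.py -- exposes an 'echo' method over TCP
from twisted.spread import pb
from twisted.internet import reactor

class Echoer(pb.Root):
    def remote_echo(self, msg):
        return msg               # the result travels back asynchronously

reactor.listenTCP(8789, pb.PBServerFactory(Echoer()))
reactor.run()

# pbclient.py -- calls it and receives a Deferred
from twisted.spread import pb
from twisted.internet import reactor

factory = pb.PBClientFactory()
reactor.connectTCP('localhost', 8789, factory)
d = factory.getRootObject()
d.addCallback(lambda root: root.callRemote('echo', 'hello'))

def done(result):
    print(result)
    reactor.stop()

d.addCallback(done)
reactor.run()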
I'm not entirely sure what you meant by "Event loop", but you should check out RPyC (Python)
RPyC Project page
I'm the author of bjsonrpc. I'm sure it's possible to do what you want with it.
Some things may be poorly documented, or maybe some examples are needed.
But, in short, handlers can store internal state (like whether a user is authenticated, or maybe the username). From any handler you can access the "Connection" class, which holds the socket itself.
It seems you want something like a chat as an example. I did something similar in the past; I'll try to add a chat example for a new release.
Internal states are explained here:
http://packages.python.org/bjsonrpc/tutorial1/index.html#stateful-server
They should be used for authentication (but no standard auth method is provided yet).
How to reach the connection class from the handler isn't documented yet (sorry), but it is used in some of the examples inside the source code. For example, example1-server.py contains this public function:
def gettotal(self):
    self._conn.notify.notify("total")
    return self.value_total
BaseHandler._conn represents the connection for that user, and it is exactly the same class you get when you connect:
conn = bjsonrpc.connect(host=host, port=port, handler_factory=MyHandler)
So, you can store the connections for logged-in users in a global variable and later call any client method you want.
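Here is a minimal sketch of that idea; the 'message' notification on the client side and the createserver() defaults are assumptions:
import bjsonrpc
from bjsonrpc.handlers import BaseHandler

connections = {}   # username -> Connection, the global registry

class ChatHandler(BaseHandler):
    def login(self, username):
        self.username = username            # internal handler state
        connections[username] = self._conn  # remember this user's connection
        return True

    def send_to(self, username, text):
        conn = connections.get(username)
        if conn is not None:
            conn.notify.message(text)       # invoke 'message' on that client

server = bjsonrpc.createserver(handler_factory=ChatHandler)
server.serve()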
I am involved in developing Versile Python (VPy), which provides the capabilities you are requesting. It is currently available as development releases intended primarily for testing; however, you may want to check it out.
Regarding identifying users, you can configure remote methods to receive a context object, which enables the method to receive information about an authenticated user, using a syntax similar to this draft code:
from versile.quick import *

@doc
class MessageBox(VExternal):
    """Dispatches IM messages."""
    @publish(show=True, doc=True, ctx=True)
    def send_message(self, msg, ctx=None):
        """Sends a message to the message box"""
        if ctx.identity is None:
            raise VException('No authenticated user')
        else:
            # do something ...
            pass