I'm working on a REST client library, and recently started working on adding support for sending batch messages.
However, I don't like the current design. It requires you to maintain large Client and RequestMessage classes with identical method signatures.
I am looking for a way to consolidate the two classes.
The original Client class's methods contained all the code needed to prepare and send the request:
class Client(object):

    def __init__(self, config):
        self.request = Request(config, "application/json")

    def create_vertex(self, data):
        path = "vertices"
        params = remove_null_values(data)
        self.request.post(path, params)
To add support for batch messages, I pulled the guts of the code out of each Client method and put it into a separate RequestMessage class, so that you can add messages to a batch without sending them until you're ready:
class Client(object):

    def __init__(self, config):
        self.request = Request(config, "application/json")
        self.message = RequestMessage(config)

    def create_vertex(self, data):
        message = self.message.create_vertex(data)
        return self.request.send(message)

    # ...more REST client methods...

    def batch(self, messages):
        path = "batch"
        params = messages
        return self.request.post(path, params)

class RequestMessage(object):

    def __init__(self, config):
        self.config = config

    def create_vertex(self, data):
        path = "vertices"
        params = remove_null_values(data)
        return POST, path, params

    # ...more REST client methods...
But I don't like this design either, because now you have to maintain both the Client and RequestMessage classes -- two large classes with identical method signatures.
Here's what the Batch class looks like:
class Batch(object):

    def __init__(self, client):
        self.client = client
        self.messages = []

    def add(self, message):
        self.messages.append(message)

    def send(self):
        return self.client.batch(self.messages)
Here's example usage for creating a vertex on the server:
>>> client = Client(config)
>>> vertex = client.create_vertex({'name':'James'})
Here's example usage for creating a batch of vertices on the server:
>>> message1 = client.message.create_vertex({'name':'James'})
>>> message2 = client.message.create_vertex({'name':'Julie'})
>>> batch = Batch(client)
>>> batch.add(message1)
>>> batch.add(message2)
>>> batch.send()
Batch is used less frequently than Client, so I want to make the normal Client interface the easiest to use.
Here's one idea, but I'm not quite sure how to achieve it or if something else would be better:
>>> vertex = client.create_vertex(data)
>>> message = client.create_vertex(data).message()
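One way to get something like that without maintaining two parallel classes is to define each message builder exactly once on Client and derive both behaviours from it. This is only a rough sketch of the idea, not the library's actual API; the sends decorator and MessageProxy names are mine, and Request, remove_null_values and POST are assumed to be the ones from the original code:

def sends(build):
    """Wrap a message builder so that calling it on Client sends immediately."""
    def method(self, *args, **kwargs):
        return self.request.send(build(self, *args, **kwargs))
    method.build = build  # keep the raw builder reachable for batching
    return method

class MessageProxy(object):
    """client.message.create_vertex(...) returns the message without sending it."""
    def __init__(self, client):
        self._client = client
    def __getattr__(self, name):
        build = getattr(type(self._client), name).build
        return lambda *args, **kwargs: build(self._client, *args, **kwargs)

class Client(object):
    def __init__(self, config):
        self.request = Request(config, "application/json")
        self.message = MessageProxy(self)

    @sends
    def create_vertex(self, data):
        # each REST method is written only once and just returns the message
        return POST, "vertices", remove_null_values(data)

With this, client.create_vertex(data) sends right away, while client.message.create_vertex(data) just returns the (verb, path, params) tuple for Batch.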
My personal preference is an API like:
client.create_vertex({'name':'James'}) # single
client.create_vertex([{'name':'James'}, {'name':'Julie'}]) # batch
That way you use the exact same function, you're just giving it more data. Typical usage would probably look more like:
batch = []
batch.append({'name':'James'})
batch.append({'name':'Julie'})
client.create_vertex(batch)
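If it helps, here is a rough sketch of how that dispatch could look inside Client, reusing the Request and RequestMessage helpers from the question (illustrative only, not tested against the real library):

class Client(object):
    def __init__(self, config):
        self.request = Request(config, "application/json")
        self.message = RequestMessage(config)

    def create_vertex(self, data):
        if isinstance(data, list):
            # batch: build one message per item and POST them all at once
            messages = [self.message.create_vertex(item) for item in data]
            return self.request.post("batch", messages)
        # single: build one message and send it immediately
        return self.request.send(self.message.create_vertex(data))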
I would agree with #Karl, with some modifications:
client.create_vertex({'name':'James'}) # single
client.create_vertex({'name':'James'}, {'name':'Julie'}) # batch
batch = []
batch.append({'name':'James'})
batch.append({'name':'Julie'})
client.create_vertex(*batch)
That way you don't have to check the type of your input. Easier to write, easier to use.
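A sketch of that varargs version, again reusing the helpers from the question, so no isinstance check is needed:

class Client(object):
    # __init__ as in the question

    def create_vertex(self, *items):
        messages = [self.message.create_vertex(item) for item in items]
        if len(messages) == 1:
            # single item: send it immediately
            return self.request.send(messages[0])
        # several items: POST them as one batch
        return self.request.post("batch", messages)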
Related
I have a class that inherits from another class in which we build a client:
class Client(ClientLibrary):

    def __init__(self, hosts=[{'host': <HOST_ADDRESS>, 'port': <PORT>}], **kwargs):
        '''Alternative constructor, where I'd pass in some defaults to simplify connection.'''
        super().__init__(hosts, **kwargs)

    def some_method(self):
        ...
I want to test this class, and already have a test server set up that I want to connect to for testing. My initial approach was to create a MockClient that inherits from the original Client but swaps out the hosts parameter for the test host like so:
# I create a mock client that inherits from the original `Client` class,
# but passes in the host and port of the test server.
class MockClient(Client):

    def __init__(self, hosts=[{'host': MOCK_HOST, 'port': MOCK_PORT}]):
        super().__init__(hosts=hosts)
The idea was then that I'd use this mock client in the tests. However, I have faced a lot of issues when testing functions that encapsulate the original Client class. I have tried patching it but keep running into issues.
Is there a better way to approach this? And can this be done using pytest fixtures?
I want to be able to perform the following sorts of tests:
class TestFunctionThatUtilisesClient:

    def test_in_which_class_is_constructed_explicitly(self):
        client = Client()
        r = client.some_method()
        assert r == 'something'

    def test_in_which_class_is_constructed_implicitly(self):
        r = another_method()  # Client() is called somewhere in here
        assert r == 'something else'
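One way to do this with pytest is a fixture that uses monkeypatch to substitute a test-server client wherever Client is looked up. A minimal sketch, assuming Client lives in a module named app; the module name and the test-server address are placeholders, not from the question:

# conftest.py
import pytest
import app  # the module that defines Client (placeholder name)

@pytest.fixture
def patched_client(monkeypatch):
    test_hosts = [{'host': 'localhost', 'port': 4444}]  # test server (placeholder)

    class TestClient(app.Client):
        def __init__(self, hosts=None, **kwargs):
            super().__init__(hosts=hosts or test_hosts, **kwargs)

    # Anything that constructs app.Client() during the test now talks to the
    # test server, including code called indirectly by another_method().
    monkeypatch.setattr(app, "Client", TestClient)
    return TestClient()

Note that monkeypatch has to target the name the code under test actually uses: if another_method does `from app import Client` in its own module, patch that module's Client attribute instead.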
I want to test my Tornado Python application with pytest.
For that purpose, I want to have a mock database for Mongo and use a "fake" Motor client to simulate the calls to MongoDB.
I found a lot of solutions for pymongo, but not for Motor.
Any ideas?
I do not clearly understand your problem — why not just have hard-coded JSON data?
If you just want to have a class that would mock the following:
from motor.motor_tornado import MotorClient
client = MotorClient(MONGODB_URL)
my_db = client.my_db
result = await my_db['my_collection'].insert_one(my_json_load)
So I recommend creating a class:
class Collection():
    database = []

    async def insert_one(self, data):
        self.database.append(data)
        data['_id'] = "5063114bd386d8fadbd6b004"  ## You may make it random or sequential
        ...
        ## Also, you may save the 'database' list to a pickle on disk to preserve data between runs
        return data

    async def find_one(self, data):
        ## Search the 'database' list here
        return data

    async def delete_one(self, data_id):
        ## Remove the matching document from the 'database' list here; a real
        ## delete_one returns a result object with a 'deleted_count' attribute
        self.deleted_count = 1
        return self

## Then create a collection:
my_db = {}
my_db['my_collection'] = Collection()
### The following is part of 'views.py'
from tornado.web import RequestHandler, authenticated
from tornado.escape import xhtml_escape

class UserHandler(RequestHandler):

    async def post(self, name):
        getusername = xhtml_escape(self.get_argument("user_variable"))
        my_json_load = {'username': getusername}
        result = await my_db['my_collection'].insert_one(my_json_load)
        ...
        return self.write(result)
If you would clarify your question, I will develop my answer.
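For what it's worth, here is a small sketch of how the fake collection could be exercised from a pytest test. It uses the pytest-asyncio plugin for brevity (with Tornado you could equally use pytest-tornado or tornado.testing) and assumes the Collection class above is importable in the test module:

import pytest

@pytest.mark.asyncio
async def test_insert_and_find():
    collection = Collection()
    doc = await collection.insert_one({'username': 'alice'})
    assert '_id' in doc  # the fake insert_one stamps an _id onto the document
    found = await collection.find_one({'username': 'alice'})
    assert found is not None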
I'm new to Python, and I think I'm trying to do something simple. However, I am confused with the results I am getting. I am declaring a class that has 2 class methods, add and remove, which in my simple example add or remove a client from a list class variable. Here's my code:
Service.py
from Client import Client

class Service:
    clients = []

    @classmethod
    def add(cls, client):
        cls.clients.append(client)

    @classmethod
    def remove(cls, client):
        if client in cls.clients:
            cls.clients.remove(client)

if __name__ == '__main__':
    a = Client()
    b = Client()
    c = Client()
    Service.add(a)
    Service.add(b)
    Service.add(c)
    print(Service.clients)
    c.kill()
    print(Service.clients)
    Service.remove(c)
    print(Service.clients)
Client.py
class Client:

    def kill(self):
        from Service import Service
        Service.remove(self)
I would expect calling c.kill() to remove the instance from the clients list.
However, when I evaluate the clients list, it shows 0 items. When I call Service.remove(c) instead, it shows the correct list and removes the item as expected. I am not sure what I am missing here.
If it matters, I am currently using PyCharm with my code running in a Virtualenv with Python 3.6.5.
Your current code uses circular imports, as both files utilize each other. Also, instead of relying on the client to destroy the connections, use a context manager to facilitate updating the clients and, at the end of the procedure, empty clients:
import contextlib

class Client:
    pass

class Service:
    clients = []

    @classmethod
    def add(cls, client):
        cls.clients.append(client)

    @classmethod
    @contextlib.contextmanager
    def thread(cls):
        yield cls
        cls.clients = []

with Service.thread() as t:
    t.add(Client())
    t.add(Client())
How can I receive data from the client while bypassing the standard Protocol class methods? For example:
class TW(protocol.Protocol):

    def get_data(self, delim='\n'):
        # some code
        return data
I.e., without using the dataReceived function, and without freezing all the other clients of the server?
You can't bypass dataReceived unless you like doing things the hard way :D. You can do whatever you're doing in get_data() inside dataReceived(). Alternatively, you could add a data parameter to your get_data() and call it back from dataReceived():
class TW(Protocol):

    def get_data(self, data, delim='\n'):
        # some code
        return result

    def dataReceived(self, data):
        result = self.get_data(data, delim='\r\n')
        # do some more stuff
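If the point of delim is line-based framing, it may also be worth knowing that Twisted ships twisted.protocols.basic.LineReceiver, which buffers incoming data and calls you back once per complete line (it still uses dataReceived under the hood). A minimal sketch:

from twisted.protocols.basic import LineReceiver

class TW(LineReceiver):
    delimiter = b'\r\n'  # split incoming bytes on this delimiter

    def lineReceived(self, line):
        # called once per complete line; other clients are not blocked
        ...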
I'm writing a chat feature (like the Facebook.com one) for a Google App Engine site. I need a way to keep track of which users have new messages. I'm currently trying to use Memcache:
class Message():

    def __init__(self, from_user_key, message_text):
        self.from_user_key = from_user_key
        self.message_text = message_text

class NewMessages():

    def __init__(self):
        self.messages = []

    def add_message(self, message):
        self.messages.append(message)

    def get_messages(self):
        return self.messages

    def messages_sent(self):
        self.messages = []  # Clear all messages

class ChatUserManager():

    @staticmethod
    def load(user_key):
        manager = memcache.get("chat_user_%s" % user_key)
        if manager is not None:
            return manager
        else:
            manager = ChatUserManager(user_key)
            memcache.set("chat_user_%s" % user_key, manager)
            return manager

    def save(self):
        memcache.set("chat_user_%s" % self.user_key, self)

    def __init__(self, user_key):
        self.online = True
        self.new_messages = NewMessages()
        self.new_data = False
        self.user_key = user_key

    def receive_message(self, message):
        self.new_data = True
        self.new_messages.add_message(Message(message.from_user_key, message.message_text))

    def send_message(self, message):
        to_manager = ChatUserManager.load(message.from_user_key)
        to_manager.receive_message(message)

    def client_receive_success(self):
        self.new_data = False
        self.new_messages.messages_sent()
This chat is user to user, like Facebook or an IM session, not group chat.
Each user will poll a URL with Ajax every x seconds to get new messages addressed to them. The chat manager will be loaded on that page (ChatUserManager.load(user_key)) and checked for new messages. Once they are delivered, the manager will be told that the messages have been sent (manager.client_receive_success()) and then saved back to memcache (manager.save()).
When a user sends a message in the JavaScript client, it will send an Ajax request to a URL. That URL will load the client's ChatUserManager and call .send_message(Message(to_user_key, message_string)).
I'm concerned about the practicality of this model. If everything is in memcache, how will it be synchronized across different pages?
Is there a better way to do this?
I do admit that I'm not a Python pro yet, so the code might not be very Pythonic. Are there any best practices I'm missing?
The problem isn't so much how to share data between "pages" as how the usability of the service will be impacted by using memcache.
There are no guarantees associated with data persistence in memcache: one moment the data is there, the next it might not be.
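One common way to cope with that is to treat memcache purely as a cache in front of the datastore, so an eviction only costs a datastore read. This is a rough sketch of my own, not part of the code above; ChatUserState is a hypothetical ndb model, and __init__ and the per-message methods stay as in the question:

from google.appengine.api import memcache
from google.appengine.ext import ndb

class ChatUserState(ndb.Model):
    payload = ndb.PickleProperty()  # pickled ChatUserManager, keyed by user_key

class ChatUserManager(object):
    # __init__, receive_message, send_message, client_receive_success as in the question

    @staticmethod
    def load(user_key):
        manager = memcache.get("chat_user_%s" % user_key)
        if manager is None:
            # memcache miss: rebuild from the datastore, or start fresh
            entity = ChatUserState.get_by_id(user_key)
            manager = entity.payload if entity else ChatUserManager(user_key)
            memcache.set("chat_user_%s" % user_key, manager)
        return manager

    def save(self):
        # write-through: keep memcache fast path, datastore as the durable copy
        memcache.set("chat_user_%s" % self.user_key, self)
        ChatUserState(id=self.user_key, payload=self).put()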