Querying objects from MySQL with Python

Since I can't explain clearly what I don't understand, I'll use an example.
Let's say I have a client application and a server application. The server waits, and the client sends some keyword to the server so the server knows what should be queried. Let's say the client requests a product object, so the server queries the database and gets back the row the client needs as a result set. So every time I need some object, do I have to send it to the client as a string and then instantiate it there?
Am I missing something? Isn't it expensive to instantiate objects on every query?
TIA!

Your question is very vague and doesn't really ask anything specific, but I'll try to give you a generic answer on how a client and a server interact.
When a user requests an item in the client, you should provide the client with an API to the server, something like http://example.com/search?param=test. The client will use this API in either an AJAX call or a direct call.
The server should parse the param, connect to the database, retrieve the requested item and return it to the client. The most common data formats for this exchange are JSON and plain text.
The client will then parse the response, generate an object from it if required, and finally show the user the requested data.
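For illustration only (this is not from the question), here is a minimal client-side sketch of that flow in Python; the URL, the field names and the Product class are assumptions:
import json
import urllib.parse
import urllib.request
from dataclasses import dataclass

@dataclass
class Product:
    id: int
    name: str
    price: float

def fetch_products(query):
    # Call the hypothetical search API and parse the JSON response.
    url = "http://example.com/search?param=" + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        rows = json.loads(resp.read().decode("utf-8"))
    # Building a small object per row is cheap next to the network round trip
    # and the database query, so instantiation is not the expensive part.
    return [Product(**row) for row in rows]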
If this is not what you need, please update your question to describe the specific issue you have, and maybe provide some code where you have the problem; I'll update my answer accordingly.

MySQL Server uses a custom protocol over TCP. If you don't want to use any library, you will have to parse the TCP messages yourself. MySQL Connector/Python does exactly that - you can look at its source code if you wish.
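For example, a minimal sketch using MySQL Connector/Python (the connection details and table are placeholders); the library speaks the wire protocol for you, so you never touch raw TCP:
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="shop")
try:
    cur = conn.cursor(dictionary=True)  # rows come back as dicts instead of tuples
    cur.execute("SELECT id, name, price FROM products WHERE id = %s", (1,))
    print(cur.fetchone())
finally:
    conn.close()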

Related

Django Websocket Send Text and bytes at the same time

I have a client and a server in my project. In the client part, the user will upload their own Excel file, and this file will come to my server for processing. My artificial intelligence Python code will run on my server and make changes to the Excel file. Every time it makes a change, I want to send the updated version to the client so that the client can see the change live. For example, let's say I have 10 functions on the server side, and each function changes some cells in the Excel file (I can get the indexes of the changed cells). When each function finishes, I will send the changed indexes to the client, and those places will be updated in the table on the client (C++, Qt).
At first I made the server with PHP, but calling my artificial intelligence Python code externally (shell_exec) was not a good method. That's why I want to do the server part with Python.
Is Django the best way for me?
What I've tried with Django:
I wanted to send data continuously from the server to the client with a StreamingHttpResponse object, but even though I used iter_content to receive the incoming data on the client, everything arrived at once, only after all the code had finished. When I set the chunk_size value of iter_content to a small value, I could get data instantly, but not as complete words. So I decided to use websockets.
I have a problem with websockets: I can't send text and byte data at the same time.
While the client is uploading the Excel file, I need to send some text data as a parameter to my server.
Waiting for your help, thank you!
You can send the bytes as a hexadecimal string.
Check this out: binascii.hexlify
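A small sketch of that idea, assuming the file travels inside a JSON text frame (the file name and message layout are illustrative):
import binascii
import json

with open("report.xlsx", "rb") as f:
    payload = f.read()

# Hex-encode the bytes so they fit in the same text message as the other parameters.
message = json.dumps({
    "filename": "report.xlsx",
    "params": {"sheet": "Sheet1"},
    "data": binascii.hexlify(payload).decode("ascii"),
})

# Receiving side: turn the hex string back into the original bytes.
received = json.loads(message)
raw = binascii.unhexlify(received["data"])
assert raw == payload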

Protobuf how to use Any type with homebrew proto message

I'm currently building a Python gRPC server that serializes tons of different proto messages into JSON to store them in a NoSQL DB. I'd like to simplify extending this server so that we can add new types without rewriting the gRPC server and redeploying. Ideally, we would like to define a new message, put it in a proto file and update only the client. The server should accept any type at first, but knows a .proto file or folder to look in when it comes to serializing/deserializing.
I've read about the Any type and I'm exploring whether this is the way to do it. There is some documentation on it but very few examples to work with. One thing that I don't quite get is how to store/retrieve the type of an "Any" field.
All the documentation uses a web URL as the type of an Any field (e.g. type.googleapis.com/google.protobuf.Duration); this is also the default. What would it look like if I used the local file system instead? How would I store this in the proto message on the client side?
How can I retrieve the type on the server side?
Where can I find a similar example?
Apologies, this is only a partial answer.
I've recently begun using Any in a project and can provide some perspective. I have a similar (albeit simpler) requirement to what you outline: enveloped message content, but in my case clients are required to ship a descriptor to the server and identify a specific method to help it (un)marshal, etc.
I've been using Google's new Golang API v2 and am only familiar with it from Golang and Rust (not Python). The documentation is lacking, but the Golang docs will hopefully help:
anypb
protoregistry
I too struggled with understanding the concept (and implementation) of the global registry, and so I hacked together the above solution. The incoming message metadata provides sufficient context for the server to construct the message type and marshal the bytes into it.
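Since the question is about Python, here is a hedged sketch (not part of the answer above) of how the type travels with an Any field, using the well-known Duration message as a stand-in for a custom type:
from google.protobuf import any_pb2
from google.protobuf import duration_pb2

# Client side: pack a concrete message into Any. Pack() records the type URL
# (here type.googleapis.com/google.protobuf.Duration) alongside the serialized bytes.
duration = duration_pb2.Duration(seconds=30)
wrapper = any_pb2.Any()
wrapper.Pack(duration)
print(wrapper.type_url)

# Server side: inspect the type URL, then unpack once the concrete class is known.
if wrapper.Is(duration_pb2.Duration.DESCRIPTOR):
    restored = duration_pb2.Duration()
    wrapper.Unpack(restored)
    print(restored.seconds)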

Django, global variables and tokens

I'm using Django to develop a website. On the server side, I need to transfer some data that must be processed on a second server (on a different machine). I then need a way to retrieve the processed data. I figured that the simplest approach would be to send a POST request back to the Django server, which would then be handled by a view dedicated to that job.
But I would like to add some minimal security to this process: when I transfer the data to the other machine, I want to attach a randomly generated token to it. When I get the processed data back, I expect to also get the same token back; otherwise the request is ignored.
My problem is the following: how do I store the generated token on the Django server?
I could use a global variable, but browsing here and there on the web I got the impression that global variables should not be used, for safety reasons (not that I really understand why).
I could store the token on disk or in the database, but that seems like an unjustified waste of performance (even if in practice it would probably not change much).
Is there a third solution, or a canonical way to do such a thing with Django?
You can store your token in the Django cache; in most cases it will be faster than database or disk storage.
Another approach is to use Redis.
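A minimal sketch of the cache approach, assuming Django's cache framework is configured (the key name and timeout are arbitrary):
import secrets
from django.core.cache import cache

def issue_token():
    # Generate a random token and keep it server-side for 10 minutes.
    token = secrets.token_hex(20)
    cache.set("processing_token", token, timeout=600)
    return token

def is_valid_token(received):
    # Compare the token coming back from the other machine with the stored one.
    return received is not None and received == cache.get("processing_token")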
You can also calculate your token:
save some random secret token in the settings of both servers
calculate a token based on the current timestamp rounded to 10 seconds, for example (Python 3, where hashlib wants bytes):
import hashlib
token = hashlib.sha1(secret_token.encode())
token.update(str(rounded_timestamp).encode())
token = token.hexdigest()
If the token generated on the remote server when POSTing the request matches the token generated on the local server when receiving the response, the request is valid and can be processed.
The simple, obvious solution would be to store the token in your database. Other possible solutions are Redis or something similar. Finally, you can have a look at distributed async task queues like Celery.

Socket.io connections distribution between several servers

I'm working on a DB design tool (Python, gevent-socket.io). In this tool, multiple users can discuss one DB model, receiving changes at runtime. To support this feature, I'm using socket.io. I'd like to be able to easily scale the number of servers that handle socket.io connections. The simplest way to do it is to set up nginx to choose a server based on the model ID.
I'd like a modulo approach, where the model ID is taken modulo the number of servers. So if I have 3 nodes, model 1 will be handled on the first, 2 on the second, 3 on the third, 4 on the first again, etc.
My request for model loading looks like /models/, so no problem here - the argument can be parsed to find the server to handle it. But after the model page is loaded, JS tries to establish a connection:
var socket = io.connect('/models', {
    'reconnection limit': 4000
});
It accesses default endpoint, so server receives following requests:
http://example.com/socket.io/1/xhr-pooling/111111?=1111111
To handle it, I create the application this way:
SocketIOServer((app.config['HOST'], app.config['PORT']), app, resource='socket.io', transports=transports).serve_forever()
and then
@bp.route('/<path:remaining>')
def socketio(remaining):
    app = current_app._get_current_object()
    try:
        # Hack: set app instead of request to make it available in the namespace.
        socketio_manage(request.environ, {'/models': ModelsNamespace}, app)
    except Exception:
        app.logger.error("Exception while handling socket.io connection", exc_info=True)
    return Response()
I'd like to change it to
http://example.com/socket.io/<model_id>/1/xhr-pooling/111111?=1111111
to be able to choose the right server in nginx. How can I do that?
UPDATE
I'd also like to check user permissions when a client tries to establish a connection. I'd like to do it in the socketio(remaining) method, but, again, I need to know which model the client is trying to access.
UPDATE 2
I implemented a permission validator, taking model_id from HTTP_REFERER. It seems to be the only part of the request that contains the identifier of the model (example value: http://example.com/models/1/).
The first idea is to tell the client side which servers are available at the moment.
Furthermore, you can generate the server list for the client side by priority - just put the servers into the generated JavaScript array in that order.
This answer assumes that your servers can answer for any model; you can control server load by changing the order of servers in the list generated for new clients.
I think this is the more flexible way. But if you want, you can parse the query string in nginx and route the request to any underlying server - just keep a table of "model id - server port" relations.
Upd: I thought about your task some more and found another solution. When you generate the client web page, you can inline the server count in the JS somewhere. Then, when requesting model updates, just add another parameter computed as
serverId = modelId % ServersCount;
which will be the server identifier for routing in nginx.
Then in the nginx config you can parse the query string and route the request to the server identified by the serverId parameter.
In "metalanguage" it would be:
get parameter serverId into var $servPortSuffix
route request to localhost:80$servPortSuffix
or another routing idea.
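A tiny Python sketch of that modulo mapping (the port list is made up); nginx would apply the same rule to the serverId parameter:
SERVER_PORTS = [8001, 8002, 8003]  # one made-up port per socket.io node

def server_port_for_model(model_id):
    # Deterministic mapping: the same model always lands on the same server.
    return SERVER_PORTS[model_id % len(SERVER_PORTS)]

print(server_port_for_model(4))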
You can add additional GET parameters to socket.io via
io.connect(url, {query: "foo=bar"})
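On the server side, a hedged sketch of reading such a parameter from request.environ inside the socketio(remaining) view (the model_id parameter name is an assumption):
from urllib.parse import parse_qs

def model_id_from_environ(environ):
    # A query parameter sent via io.connect(url, {query: "model_id=1"}) shows up
    # in the handshake's query string.
    params = parse_qs(environ.get("QUERY_STRING", ""))
    values = params.get("model_id")
    return int(values[0]) if values else None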

Amazon SQS - Communicating URL between servers

I was wondering if I could get some help with Amazon SQS. In my example I am trying to set up a queue on Server A and query it from Server B. The issue I’m having is that when I create a queue on server A it provides me with a URL like this:
https://sqs.us-east-1.amazonaws.com/599169622985/test-topic-queue
Then on my other server I apparently need to query this URL for information on the queue. The trouble is, my server B doesn't know the URL that I created on server A. This seems like a bit of a flaw; do I really need to find a way to also communicate the URL to server B before it can connect to the queue, and if so, does anyone have any good solutions for this?
I have tried asking on Amazon and didn’t get any replies.
For sure servers A and B must share some kind of information regarding the queue. If not the full URL, you can just share the name, and retrieve the queue URL on server B using the GetQueueUrl API endpoint:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/Query_QueryGetQueueUrl.html
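For example, a minimal boto3 sketch of that lookup on server B (region and queue name taken from the question, credentials assumed to be configured):
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Both servers only need to agree on the queue name; the URL is looked up.
queue_url = sqs.get_queue_url(QueueName="test-topic-queue")["QueueUrl"]
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
print(queue_url, response.get("Messages", []))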
Queues should be treated like any other resource (caches, datastores, etc.) and defined ahead of time in some kind of application configuration file.
If your use case involves queue endpoints that change on a regular basis, then you might want to store the queue endpoint somewhere that both instances can check. It could be a database, or it could be a config file pulled from S3.
