For my final year project I plan to code a cloud in Python. The client will be written in Java by the other member of my team. The client will have a tabbed interface and will provide a text editor, a media player, a couple of small Java-based games, and maybe a few more services.
The server will work like this:
1) Validate the user.
2) Send a file called "dump" to the user. Dump will contain the names and types of all the files the user created, plus any files the user has read/write access to. This info will be fetched from the database (see the sketch after this list).
3) The tabs in the client will display the file types associated with each tab's application, e.g. the media tab will only select and show the media files from the dump readable by the user, and the text editor tab will show only the txt files from the dump readable by the user.
4) A request to open a file will send the file back to the client, where the associated application will open it.
5) All the changes made to the files and all the actions (overwriting, saving, deleting etc.) will be sent back to the server along with the new object. Something similar will be done for newly created objects.
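Roughly, I imagine the dump from step 2 containing something like this (a sketch only; nothing is final):

    # rough shape of the per-user dump (field names not final)
    dump = {
        "user": "alice",
        "files": [
            {"name": "report", "type": "txt", "access": "rw"},
            {"name": "song",   "type": "mp3", "access": "r"},
        ],
    }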
My Questions are:
What are the best approaches for the communication between the client and the server? For the dump I plan to use some sort of encrypted XML file. For the other way round, I don't have a clue :/.
For easy integration with the database, I was planning to use Django (which I started learning a few days back). How can I send my requests from the client to the server (without Django I'd use SQL queries), and the files from the server to the client? Maybe GET and POST will work for the former problem? Any other suggestions?
Q1: How should I transfer data between client and server securely?
A: HTTPS for encryption plus JSON to serialize objects between languages (Python/Java) seems the most natural combination. You could experiment with XML-RPC over SSL or TLS if you want to be creative.
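A minimal sketch of the JSON half on the Python side (the payload shape is made up; the Java client would parse the same bytes with any JSON library, e.g. Jackson or Gson):

    import json

    dump = {"files": [{"name": "notes", "type": "txt"}]}  # illustrative payload
    wire = json.dumps(dump).encode("utf-8")  # bytes sent over the HTTPS channel
    restored = json.loads(wire)              # identical structure on the other end
    assert restored == dump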
Q2: How do I send queries to the server's db?
A: My first response is to say talk to the person coding the server, and see what's easiest on that end. However, I think that your client should stick to HTTP. The server developer would ensure the server supports RESTful URIs; then your client need only access a URI and let the server process the request.
At its most raw, this could be implemented like this:
https://www.example.com/db?q="SELECT * FROM docs"
There are smarter ways to do it, but you get the idea.
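One smarter way is to keep the SQL on the server and expose a resource-style URL instead. A rough sketch with Django (which the question mentions; the model and field names here are invented):

    from django.http import JsonResponse

    from myapp.models import Document  # hypothetical model

    def list_docs(request):
        # GET https://www.example.com/docs/ -> JSON list; no SQL in the URL
        docs = Document.objects.filter(owner=request.user) \
                               .values("id", "name", "type")
        return JsonResponse(list(docs), safe=False)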
If you're going to use a web framework on the server, it makes sense to use an HTTP-based protocol. The downside is that only the client can initiate a connection (e.g., the client needs to first ask for the "dump" file), but a simple GET request will suffice (remember, the server can send anything in the HTTP response, including your XML file).
Regarding encryption, it's best to use an existing protocol like HTTPS. There are well-vetted libraries that will correctly establish a secure connection between your client and the server.
Overall, I'm advocating the highest-level protocols that are appropriate for your application. HTTP(S) goes hand-in-hand with your web-based architecture, so make use of it.
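To make the GET-for-the-dump idea concrete, here is a sketch in Python using the requests library (the real client would be Java, but the HTTP exchange is identical; the URL and credentials are placeholders):

    import requests

    resp = requests.get("https://server.example.com/dump",
                        auth=("alice", "secret"))  # or a session token
    resp.raise_for_status()
    manifest = resp.json()  # the server returns the whole dump in the response body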
Stick to Django. It's really productive. I would use JSON instead of XML; it's more convenient, and Python's built-in json module should cover the client-server communication.
Also, "cloud computing" is just a recent word that's thrown around for (client + server + some services). Oh, by the way, everything you want to do can be done completely in Django itself. No need to go to Java.
Django is Cool :)
I’ve got a standard client-server set-up with ReScript (ReasonML) on the front-end and a Python server on the back-end.
The user is running a separate process on localhost:2000 that I’m connecting to from the browser (UI). I can send requests to their server and receive responses.
Now I need to issue those requests from my back-end server, but cannot do so directly. I’m assuming I need some way of doing it through the browser, which can talk to localhost on the user’s computer.
What are some conceptual ways to implement this (ideally with GraphQL)? Do I need to have a subscription or web sockets or something else?
Are there any specific libraries you can recommend for this (perhaps as examples from other programming languages)?
I think the easiest solution with GraphQL would indeed be to use Subscriptions. The most common ReScript GraphQL clients already have such a feature; at least ReasonRelay, Reason Apollo Hooks, and Reason-URQL have it.
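Conceptually, the back-end pushes its requests to the browser over a subscription, the browser forwards each one to localhost:2000, and a mutation carries the response back. A rough Python-side sketch using Strawberry (my choice for illustration; every name below is a hypothetical assumption):

    # Backend queues "commands"; the browser receives them over a subscription,
    # forwards them to localhost:2000, and answers via a mutation.
    import asyncio
    from typing import AsyncGenerator

    import strawberry

    pending: asyncio.Queue = asyncio.Queue()  # requests the backend wants relayed
    results: asyncio.Queue = asyncio.Queue()  # responses coming back from the browser

    @strawberry.type
    class Query:
        @strawberry.field
        def ping(self) -> str:
            return "pong"

    @strawberry.type
    class Mutation:
        @strawberry.mutation
        async def submit_result(self, payload: str) -> bool:
            # The browser posts the localhost:2000 response back here
            await results.put(payload)
            return True

    @strawberry.type
    class Subscription:
        @strawberry.subscription
        async def commands(self) -> AsyncGenerator[str, None]:
            # The browser subscribes once; each queued command is pushed to it
            while True:
                yield await pending.get()

    schema = strawberry.Schema(query=Query, mutation=Mutation,
                               subscription=Subscription)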
I have JS running, essentially getting user entries from my HTML session storage and pushing them to a DB. I also need to use an HTTP request to pass a JSON object containing the entries to a Python file hosted somewhere else.
Does anyone have any idea of documentation I could look at, or perhaps how to get JSON objects from JS to Python?
My client does not want me to grab the variables directly from the DB.
You have to create some sort of communication channel between the JavaScript and Python code. This could be anything: SOAP, HTTP, RPC, any number and flavor of message queue.
If nothing like that is in place, it's quite the long way around. A complex application might warrant doing this; think microservices communicating across some sort of service bus. It's a sound strategy, and perhaps that's why your client is asking for it.
You already have Firebase, though! Firebase is a real-time database that already has many of the characteristics of a queue. The simplest and most idiomatic thing to do would be to let the Python code be notified of changes by Firebase: Firebase as a service bus is a nice strategy too!
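A minimal sketch of the Firebase-notifies-Python idea, using the firebase-admin package's Realtime Database listener (the credential file, database URL, and path are placeholders):

    import firebase_admin
    from firebase_admin import credentials, db

    cred = credentials.Certificate("serviceAccount.json")  # service account key
    firebase_admin.initialize_app(cred, {
        "databaseURL": "https://your-project.firebaseio.com",
    })

    def on_change(event):
        # Fires whenever the JS side writes a new entry under /entries
        print("changed:", event.path, event.data)

    db.reference("entries").listen(on_change)  # blocks and streams change events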
Actual situation:
The client downloads a small executable Python script.
The client executes it. The script gathers information from the computer and sends the data to the web server via a POST request.
Wanted situation:
After the web server receives the data, it should forward the information to the client's website session, and the website should display it.
(The original question included a diagram illustrating this principle.)
There is also an example of this principle on Can You Run It.
How can I realize this?
A common way of implementing this is a RESTful API. Basically the API does not care whether a request comes from a script or a web browser; it just passes data structures to and from clients. The only tricky part in your example is handling multiple users, because a secret must be shared between the browser and the script. I believe Can You Run It puts this unique secret into the program they ask you to download.
Look into Django Rest Framework for examples of how to implement this.
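For a sense of the shape this could take, here is a rough sketch with Django REST Framework (the view names, token handling, and in-memory store are all illustrative; a real app would persist to a database):

    # views.py -- urls.py would route e.g. path("report/<str:token>/", report)
    from rest_framework.decorators import api_view
    from rest_framework.response import Response

    REPORTS = {}  # token -> submitted data; use a real datastore in practice

    @api_view(["POST"])
    def submit(request, token):
        # The downloaded script POSTs its gathered data here, using the unique
        # token baked into it to tie the upload to one browser session
        REPORTS[token] = request.data
        return Response({"status": "ok"})

    @api_view(["GET"])
    def report(request, token):
        # The browser polls this endpoint with the same token until data arrives
        data = REPORTS.get(token)
        return Response({"ready": data is not None, "data": data})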
I'm planning an iOS app that requires a server backend capable of efficiently serving image files and performing some dynamic operations based on the requests it gets (like reading and writing into a data store, such as Redis). I'm most comfortable with, and would thus prefer to write the backend in Python.
I've looked at a lot of Python web framework/server options, Flask, Bottle, static and Tornado among them. The common thread seems to be that either they support serving static files as a development-time convenience only, discouraging it in production, or are efficient static file servers but not really geared towards the dynamic framework-like side of things. This is not to say they couldn't function as the backend, but at a quick glance they all seem a bit awkward at it.
In short, I need a web framework that specializes in serving JPEGs instead of generating HTML. I'm pretty certain no such thing exists, but right now I'm hoping that someone could suggest a solution that works without bending the Python applications involved in ways they are not meant for.
Specifications and practical requirements
The images I'd be serving to the clients live in the file system in a shallow directory hierarchy. The actual file names would be invisible to the clients. The server would essentially read the directory hierarchy at startup, assigning a numeric ID for each file, and would route the requests to controller methods that then actually serve the image files. Here are a few examples of ways the client would want to access the images in different circumstances:
Randomly (example URL path: /image/random)
Randomly, each file only once (/image/random_unique), produces some suitable non-200 HTTP status code when the files are exhausted
Sequentially in either direction (/image/0, /image/1, /image/2 etc.)
and so on. In addition, there would be URL endpoints for things like ratings, image info and other metadata, some client-specific information as well (the client would "register" with the server, so that needs some logic, too). This data would live in a Redis datastore, most likely.
All in all, the backend needs to be good at serving image/jpeg and application/json (which it would also generate). The scalability and concurrency requirements are modest, at least to start with (this is not an App Store app, going for ad-hoc or enterprise distribution).
I don't want the app to rely on redirects. That is, I don't want a model where a request to a URL would return a redirect to another URL that is backed by, say, nginx as a separate static file server, leaving only the image selection logic for the Python backend. Instead, a request to a URL from the client should always return image/jpeg, with metadata in custom HTTP headers where necessary. I specify this because it is a way of avoiding serving static files from Python that I thought of, and someone else might think of too ;-)
Given this information, what sort of solution would you consider a good choice, and why? Or is this something for which I need to code non-trivial extensions to existing projects?
EDIT: I've been thinking about this a bit more. I don't want redirects due to the delay inherent in the multiple requests they entail, plus I'd like to abstract the file names away from the client, but I was wondering if something like the following would be possible. (The original post illustrated this with a diagram.) The idea is that the Python program is given the request info by nginx (or whatever serves that role), mulls it over, and then tells nginx to respond to the client's request with a specific file from the file system, which it does. The client is none the wiser about how the request was fulfilled; it just receives a response with the correct content type.
This would be pretty optimal in my view, but is it possible? If not with nginx, perhaps something else?
I've been using Django for well over a year now, and it is the hammer I use for all my nails. You could probably do this with a bit of database-backed image storage and Django's built-in ORM and URL routing (with regexes). If you store the images in the database, you will automatically get unique IDs assigned. According to this Stack Overflow answer, you can use Redis with Django.
I don't want a model where a request to a URL would return a redirect to another URL that is backed by, say, nginx as a separate static file server, leaving only the image selection logic for the Python backend.
I think nginx for serving the static files and Python for figuring out the image URL is the better solution.
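Incidentally, what the EDIT describes is essentially nginx's X-Accel-Redirect (internal redirect) mechanism: the Python app picks the file and nginx sends it, with the real file name hidden from the client. A rough sketch, assuming Flask and made-up paths:

    # Matching nginx location (files reachable only through internal redirects):
    #     location /protected/ {
    #         internal;
    #         alias /var/images/;
    #     }
    import random

    from flask import Flask, Response

    app = Flask(__name__)
    IMAGES = ["a1.jpg", "b2.jpg", "c3.jpg"]  # built by scanning the directory at startup

    @app.route("/image/random")
    def random_image():
        resp = Response(mimetype="image/jpeg")
        # Tell nginx which file to send; the client never sees the real name
        resp.headers["X-Accel-Redirect"] = "/protected/" + random.choice(IMAGES)
        return resp

    @app.route("/image/<int:image_id>")
    def image_by_id(image_id):
        if image_id >= len(IMAGES):
            return Response(status=404)
        resp = Response(mimetype="image/jpeg")
        resp.headers["X-Accel-Redirect"] = "/protected/" + IMAGES[image_id]
        return resp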
Still, if you do not want to do that, I would suggest you use any Python web framework (like Django), write your models, and convert them into REST resources (e.g. using django-tastypie), and/or return a base64-encoded image which you can then decode in your iOS client.
Refs:
Decoding a Base64 image
Tastypie returns the file path by default; you might have to do extra work to either store the image blob in the table or write more code to return a base64-encoded image string (a sketch follows below).
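A rough sketch of inlining a base64 image in a Tastypie resource (the Image model and its path field are hypothetical):

    import base64

    from tastypie.resources import ModelResource

    from myapp.models import Image  # hypothetical model

    class ImageResource(ModelResource):
        class Meta:
            queryset = Image.objects.all()
            resource_name = "image"

        def dehydrate(self, bundle):
            # Read the file from disk and inline it into the JSON payload
            with open(bundle.obj.path, "rb") as f:
                bundle.data["image_b64"] = base64.b64encode(f.read()).decode("ascii")
            return bundle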
You might want to look at one of the async servers like Tornado or Twisted.
I have a program that I wrote in Python that collects data. I want to be able to store the data on the internet somewhere and allow another user to access it from another computer somewhere else, anywhere in the world with an internet connection. My original idea was to use an e-mail client, such as Gmail, to store the data by sending pickled strings to the address. This would allow anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds, so the method fell through because of Gmail's limits on e-mails, among other reasons, such as my being unable to completely delete old e-mails.
Now I want to try a different idea, but I do not know very much about network programming with Python. I want to set up a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferable to be able to store multiple strings, so there is no chance of the master updating while a remote is reading.
I do not know if this is a feasible task in Python, but any and all ideas are welcome. Also, if you have any ideas on how to do this a different way, I am all ears, well, eyes in this case.
I would suggest taking a look at setting up a simple site on Google App Engine. It's free and you can use Python to build the site. Then it would just be a matter of creating a simple RESTful service that you could send a POST to with your pickled data and store it in a database. Then just create a simple web front end onto the database.
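A minimal sketch of such a service using Flask (which recent App Engine Python runtimes support; the routes and in-memory list are illustrative, and JSON replaces pickle for the payload, which also sidesteps the security concern raised further down):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    readings = []  # swap for Datastore/Cloud SQL in a real deployment

    @app.route("/data", methods=["POST"])
    def store():
        # The master program POSTs each new reading here
        readings.append(request.get_json(force=True))
        return jsonify({"stored": len(readings)})

    @app.route("/data/latest", methods=["GET"])
    def latest():
        # Remote programs read the newest entry
        return jsonify({"latest": readings[-1] if readings else None})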
Another option in addition to what Casey already provided:
Set up a remote MySQL database somewhere with user access levels that allow remote connections. Your Python program could then simply access the database and INSERT the data you're trying to store centrally (e.g. through the MySQLdb or pyodbc packages). Your users could then either read the data through a client that supports MySQL, or you could write a simple front end in Python or PHP that displays the data from the database.
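A rough sketch of the INSERT side with MySQLdb (the host, credentials, and readings table are placeholders):

    import json

    import MySQLdb

    conn = MySQLdb.connect(host="db.example.com", user="collector",
                           passwd="secret", db="telemetry")
    cur = conn.cursor()
    payload = json.dumps({"reading": 42})  # whatever the master program collected
    # Parameterized query so the driver escapes the data safely
    cur.execute("INSERT INTO readings (payload) VALUES (%s)", (payload,))
    conn.commit()
    conn.close()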
Adding this as an answer so that OP will be more likely to see it...
Make sure you consider security! If you just blindly accept pickled data, it can open you up to arbitrary code execution.
I suggest you use good middleware like ZeroC Ice, Pyro4, or Twisted.
Pyro4 uses pickle to serialize data.
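For a taste of Pyro4, here is a minimal server-side sketch (the class and method names are made up); a remote program would connect with Pyro4.Proxy(uri) and call the same methods:

    import Pyro4

    @Pyro4.expose
    class DataStore:
        def __init__(self):
            self.latest = None

        def put(self, payload):
            self.latest = payload  # master program pushes new data

        def get(self):
            return self.latest     # remote programs read it back

    daemon = Pyro4.Daemon(host="0.0.0.0")            # listen for remote calls
    uri = daemon.register(DataStore(), "datastore")  # stable object name
    print("Server URI:", uri)
    daemon.requestLoop()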