Does GAE support Server-Sent Events (SSE)?
I tried using SSE but it did not work, so I switched to the Channel API. But is it still possible to implement SSE on GAE?
I've been trying like crazy to pull this one off, but the GAE response is being buffered and compressed.
I'd be very happy if someone has an idea how to write the code/headers so the PHP file is streamed.
FYI, these are the headers I'm using:
header("Content-Type: text/event-stream; charset=utf-8");
header("Accept-Encoding: identity");
header("Cache-Control: no-cache");
header("Access-Control-Allow-Origin: https://mail.google.com");
header("Access-Control-Allow-Credentials: true");
header('Access-Control-Allow-Methods: PUT, GET, POST, DELETE, OPTIONS');
[UPDATE]
From: http://grokbase.com/t/gg/google-appengine/15623azjjf/server-sent-events-using-channels-api
What this means in practice is that your stream will not be
"keep-alive" and will close each time one response is sent. Or, if you
implement your server-sent event code server-side as most people do,
it will buffer up all of its responses and finally send them all only
when it terminates.
Please read: https://cloud.google.com/appengine/docs/php/requests#PHP_Responses
In summary: there is no way to do SSE on GAE.
Related
Sony's website provides an example of using WebSockets to work with their API in Node.js:
https://developer.sony.com/develop/audio-control-api/get-started/websocket-example#tutorial-step-3
It worked fine for me. But when I tried to implement it in Python, it does not seem to work.
I use websocket-client:
import ssl
import websocket

ws = websocket.WebSocket()
ws.connect("ws://192.168.0.34:54480/sony/avContent",
           sslopt={"cert_reqs": ssl.CERT_NONE})
gives
websocket._exceptions.WebSocketBadStatusException: Handshake status 403 Forbidden
but in their example code, there is not any kind of authorization or authentication.
I recently had the same problem. Here is what I found out:
Normal HTTP responses can contain Access-Control-Allow-Origin headers to explicitly allow other websites to request data. Otherwise, web browsers block such "cross-origin" requests, because the user could be logged in there for example.
This "same-origin-policy" apparently does not apply to WebSockets and the handshakes can't have these headers. Therefore any website could connect to your Sony device. You probably wouldn't want some website to set your speaker/receiver volume to 100% or maybe upload a defective firmware, right?
That's why the audio control API checks the Origin header of the handshake. It always contains the website the request is coming from.
The Python WebSocket client you use assumes http://192.168.0.34:54480/sony/avContent as the origin by default in your case. However, it seems that the API ignores the content of the Origin header and just checks whether it's there.
The WebSocket#connect method has a parameter named suppress_origin which can be used to exclude the Origin header.
TL;DR
The Sony audio control API doesn't accept WebSocket handshakes that contain an Origin header.
You can fix it like this:
ws.connect("ws://192.168.0.34:54480/sony/avContent",
           sslopt={"cert_reqs": ssl.CERT_NONE},
           suppress_origin=True)
So I need to build an HTTP server that will contact a client and send it data like pictures or calculations and create a page with those things. I guess you understood that I don't really know what I'm doing... :(
I know Python and the basics(+) of client-server programming, but I don't understand the HTTP protocol and didn't understand anything from what I read on the internet...
Can anyone explain to me how to work with this protocol? What is the form of HTTP packets?
Here is an example of one problem I don't understand: I have been asked to receive a packet (which I did) and figure out what the request in it is, then send back the name of the file the client wants, followed by the file itself. I printed the packet and couldn't tell where the request is or what the client wants...
Thank you very very much!
Can anyone explain to me how to work with this protocol? What is the form of HTTP packets?
The specification might be helpful.
Concerning the web, you can find a lot of the specifications in the RFCs.
More to HTTP below.
(Since you seem to be new to programming, I figured I might want to tell you the following:)
Usually one doesn't directly interact with HTTP(S) packets. Instead you use a framework, such as flask, django, aiohttp and many more. The choice of framework depends on the use-case. E.g.:
You need a database, authentication and any imaginable feature? Go with Django.
You just want to create a WebApplication without a bloated framework? Go with Flask.
You need the bare minimum or want to act as a client? Go with aiohttp.
More frameworks are listed here.
The advantage of using such frameworks is that they usually include useful, battle-tested components (i.e. usually no bugs), while you don't have to figure out the peculiarities of certain protocols.
You just import the framework and write awesomeness! :)
(Anyway, here is a very oversimplified overview, for completeness.)
So, HTTP is a text protocol over TCP, which basically means that you send text over a simple TCP socket. When you receive a request you have to "parse" it (i.e. comprehend its contents). Luckily for us the requests are standardized and follow the same scheme.
The smallest request would look like this:
GET / HTTP/1.0
Host: www.server.com
The first line starts with a verb (also called the request method); in our example the verb is GET. The / denotes the path. Think of file paths on your HDD. The last part of the first line, namely HTTP/1.0, tells the receiver which version of HTTP we are using. Currently there is HTTP 1.0 and HTTP 1.1; however, I wouldn't bother with HTTP 1.1 yet and would stick with HTTP 1.0 if you're implementing the requests yourself.
Lastly, the Host: www.server.com line tells the server which host we want to talk to, since multiple instances of an HTTP server could be running under the same IP. This is used to resolve the subdomain.
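To see how little is involved, the minimal request above can be built and sent with nothing but Python's socket module. This is only an illustrative sketch (the host name is a placeholder and error handling is omitted):

```python
import socket

def build_request(host, path="/"):
    # HTTP requests are plain text: a request line, headers, then a blank
    # line ("\r\n\r\n") that marks the end of the header section.
    return "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)

def fetch(host, port=80):
    # Open a plain TCP connection and write the request text into it.
    sock = socket.create_connection((host, port))
    sock.sendall(build_request(host).encode("ascii"))
    chunks = []
    while True:
        data = sock.recv(4096)  # with HTTP/1.0 the server closes when done
        if not data:
            break
        chunks.append(data)
    sock.close()
    return b"".join(chunks)

# fetch("www.example.com") would return the raw response bytes.
```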
If you send this request to an HTTP server, you're likely to receive a response like this:
HTTP/1.0 200 OK
Server: Apache/1.3.29 (Unix) PHP/4.3.4
Content-Length: 1337
Connection: close
Content-Type: text/html
<DATA>
This response contains the status in the first line, HTTP/1.0 200 OK. The number and the 'OK' represent a status code, telling us that everything is fine. There are many status codes, each with its own meaning and usage.
The lines following the first are so-called Response-Headers. They provide additional useful information about the response. For instance, when we open a site like 'stackoverflow.com', the server transmits an HTML file to us for the browser to interpret. Before we can do that, we need to know the size of the HTML file.
Luckily the server tells us beforehand, with the Content-Length: 1337 line, that the file is 1337 bytes big. The file itself would be present where the <DATA> placeholder stands.
There are, yet again, many of these headers.
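To make the layout concrete, here is a small sketch that splits such a raw response into the status line, the headers, and the body (the sample response is made up for illustration):

```python
def parse_response(raw):
    # Headers and body are separated by an empty line ("\r\n\r\n").
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    status_line = lines[0]  # e.g. "HTTP/1.0 200 OK"
    headers = {}
    for line in lines[1:]:
        # Each header line has the form "Name: value".
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return status_line, headers, body

raw = ("HTTP/1.0 200 OK\r\n"
       "Content-Length: 5\r\n"
       "Content-Type: text/html\r\n"
       "\r\n"
       "hello")
status, headers, body = parse_response(raw)
```

A real client would additionally use the Content-Length value to decide how many body bytes to read from the socket.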
As you can see, there are many things to account for when working with HTTP, which shows that it is not feasible, without a very good reason, to implement an HTTP client/server from scratch.
Instead it's preferred to use one of the frameworks (for python) listed above.
As a last note:
In the process of trying to explain the concepts as simply as possible, I probably left out or oversimplified some things. If you find any mistake, please let me know.
I searched but did not find any example showing how to convert a CoAP request or response to an HTTP request.
Basically, what I want to do is: a CoAP POST sends some data from a device to a server, which translates it and does an HTTP POST to another server, where it is saved in a database.
While the part that saves the data is not a major problem right now, I have not managed to find any example script showing how to convert from CoAP to HTTP.
I already looked at coapthon and aiocoap, but since aiocoap requires Python 3.5 (I use Python 2.7), that leaves me with coapthon. Unfortunately, coapthon only has an HTTP-to-CoAP proxy, while CoAP-to-HTTP is still in development.
If anyone knows another project for this, or has any opinion on how to solve it, I would be glad if you could share it. Thank you.
That is called protocol interoperability. You need a CoAP-to-HTTP and HTTP-to-CoAP proxy that can translate the messages between them.
Here is californium-proxy on GitHub; I am using it already. Here is an example that shows how to use it.
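Whatever framework ends up receiving the CoAP request, the core of the translation is a small mapping step. Below is a minimal, framework-agnostic sketch (the method-code table comes from RFC 7252; the function name and structure are my own invention, not part of any library) that turns an already-parsed CoAP request into a raw HTTP/1.1 request, which a forwarding server could then send on:

```python
# CoAP method codes (RFC 7252) mapped to the equivalent HTTP verbs.
COAP_METHODS = {1: "GET", 2: "POST", 3: "PUT", 4: "DELETE"}

def coap_to_http(method_code, uri_path, payload, host):
    """Translate one parsed CoAP request into a raw HTTP/1.1 request string.

    method_code -- numeric CoAP method code (1=GET, 2=POST, 3=PUT, 4=DELETE)
    uri_path    -- joined CoAP Uri-Path options, e.g. "sensor/data"
    payload     -- request body as a string (may be empty)
    host        -- HTTP server to forward the request to
    """
    verb = COAP_METHODS[method_code]
    lines = [
        "%s /%s HTTP/1.1" % (verb, uri_path),
        "Host: %s" % host,
        "Content-Length: %d" % len(payload),
        "",  # blank line separates headers from body
        payload,
    ]
    return "\r\n".join(lines)

req = coap_to_http(2, "measurements", '{"temp": 21}', "db.example.com")
```

A real proxy would also have to map response codes in the other direction (e.g. CoAP 2.01 Created to HTTP 201) and translate content-format options into Content-Type headers.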
I need to intercept an HTTP response packet from the server and replace it with my own response, or at least modify it, before it arrives at my browser.
I'm already able to sniff this response and print it, the problem is with manipulating/replacing it.
Is there a way to do so with the Scapy library?
Or do I have to connect my browser through a proxy to manipulate the response?
If you want to work from your ordinary browser, then you need a proxy between the browser and the server in order to manipulate the traffic. See e.g. https://portswigger.net/burp/, a proxy created specifically for penetration testing, with easy replacing of responses/requests (which is scriptable, too).
If you want to script your whole session in Scapy, then you can create requests and responses to your liking, but the response does not go to the browser. Also, you can record an ordinary web session (with tcpdump/wireshark/scapy) into a pcap file, then use Scapy to read the pcap, modify it, and send similar requests to the server.
My webapp has two parts:
a GAE server which handles web requests and sends them to an EC2 REST server
an EC2 REST server which does all the calculations given information from GAE and sends back results
It works fine when the calculations are simple. Otherwise, I get a timeout error on the GAE side.
I realize there are some approaches to this timeout issue. But after some research, I found (please correct me if I am wrong):
The task queue would not fit my needs, since some of the calculations could take more than half an hour.
A 'GAE backend instance' would work if I reserved another instance all the time. But since I have already reserved an EC2 instance, I would like to find a "cheap" solution (not paying for a GAE backend instance and EC2 at the same time).
'GAE asynchronous requests' are also not an option, since the handler still waits for the response from EC2, even though users can send other requests while they wait.
Below is a simplified version of my code. It:
asks users to upload a csv
parses the csv and sends its information to EC2
generates an output page given the response from EC2
OutputPage.py

import cgi

from google.appengine.ext import webapp
from przm import przm_batchmodel
from przm import przm_batchoutput_backend  # assumed to live in the same package

class OutputPage(webapp.RequestHandler):
    def post(self):
        form = cgi.FieldStorage()
        thefile = form['upfile']
        # This is where the uploaded file is processed and sent to EC2 for computing.
        html = przm_batchmodel.loop_html(thefile)
        przm_batchoutput_backend.przmBatchOutputPageBackend(thefile)
        self.response.out.write(html)

app = webapp.WSGIApplication([('/.*', OutputPage)], debug=True)
przm_batchmodel.py # This is the code which sends the info to EC2

import csv

from google.appengine.api import urlfetch

def loop_html(thefile):
    # Parses the uploaded csv and sends its contents to the REST server;
    # the returned value is an HTML page.
    # REST_server and http_headers are defined elsewhere in the module.
    data = csv.reader(thefile.file.read().splitlines())
    response = urlfetch.fetch(url=REST_server, payload=data,
                              method=urlfetch.POST, headers=http_headers,
                              deadline=60)
    return response.content  # the HTML body, not the response object
At this moment, my questions are:
Is there a way on the GAE side that allows me to just send the request to EC2 without waiting for its response? If this is possible, then on the EC2 side I can send users emails to notify them when the results are ready.
If question 1 is not possible, is there a way to create a monitor on EC2 which will invoke the calculation once the information is received from the GAE side?
I appreciate any suggestions.
Here are some points:
For Question 1 : You do not need to wait on the GAE side for EC2 to complete its work. You are already using URLFetch to send the data across to EC2. As long as it is able to send that data across over to the EC2 side within 60 seconds and its size is not more than 10MB, then you are fine.
You will need to make sure that you have a Receipt Handler on the EC2 side that is capable of collecting this data from above and sending back an Ack. An Ack will be sufficient for the GAE side to track the activity. You can then always write some code on the EC2 side to send back the response to the GAE side that the conversion is done or as you mentioned, you could send an email off if needed.
I suggest that you create your own little tracker on the GAE side. For example, when the file is uploaded, create a Task and send back the Ack immediately to the client. Then you can use a Cron Job or Task Queue on the App Engine side to send the work off to EC2. Do not wait for EC2 to complete its job. Then let EC2 report back to GAE that its work is done for a particular Task Id and send off an email (if required) to notify the users that the work is done. In fact, EC2 can even report back with a batch of Task Ids that it completed, instead of sending a notification for each Task Id.
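The tracker idea in point 3 boils down to a tiny state machine per upload. Here is a deliberately simplified, framework-agnostic sketch (all names are made up; on App Engine the dict would be a Datastore entity, dispatch would run from a Cron Job or Task Queue, and send would be the urlfetch call to EC2):

```python
import uuid

class TaskTracker(object):
    """Minimal in-memory stand-in for a Datastore-backed task tracker."""

    def __init__(self):
        self.tasks = {}

    def create(self, payload):
        # 1. Record the work and ack the client immediately.
        task_id = str(uuid.uuid4())
        self.tasks[task_id] = {"payload": payload, "status": "pending"}
        return task_id  # returned to the client as the Ack

    def dispatch(self, task_id, send):
        # 2. A Cron Job / Task Queue later pushes the work to EC2;
        #    `send` stands in for the HTTP call to the REST server.
        send(task_id, self.tasks[task_id]["payload"])
        self.tasks[task_id]["status"] = "sent"

    def complete(self, task_id):
        # 3. EC2 reports back, possibly with a whole batch of ids at once.
        self.tasks[task_id]["status"] = "done"
```

The point of the pattern is that the client-facing request only ever runs step 1, so it returns well within GAE's request deadline regardless of how long the calculation takes.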