I have used this link and successfully run a Python script using uWSGI, although I just followed the doc line by line.
I have a GPS device which is sending data to a remote server. The device's documentation says that it connects to the server using TCP, which I assumed would be HTTP, since a simple device like a GPS tracker would not be able to do HTTPS (I hope I am right here). I have now configured my Nginx server to forward all incoming HTTP requests to a Python script for processing via uWSGI.
What I want to do is simply print the URL or query string on the HTML page. As I don't have control of the device side (I can only configure the device to send data to an IP + port), I have no clue how the data is coming in. Below is my access log:
[23/Jan/2016:01:50:32 +0530] "(009591810720BP05000009591810720160122A1254.6449N07738.5244E000.0202007129.7200000000L00000008)" 400 172 "-" "-" "-"
I have looked at this link on how to get URL parameter values, but I don't have a clue what the parameters are here.
I tried to modify my wsgi.py file as follows:
import requests

r = requests.get("http://localhost.com/")
# or r = requests.get("http://localhost.com/?") as I am directly routing the incoming
# HTTP request to the Python script, and the incoming request might not have any
# parameters, just data
text1 = r.status_code

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return ["<h1 style='color:blue'>Hello There shailendra! %s </h1>" % (text1)]
But when I restarted Nginx, I got an internal server error. Can someone help me understand what I am doing wrong here? (Literally, I have no clue about the parameters of the application function. I tried to read this link, but what I got from it is that the environ argument carries many CGI environment variables.)
Can someone please help me figure out what I am doing wrong and point me to a doc or resource?
Thanks.
Why are you using localhost ".com"? Since you are running the web server on the same machine, you should change the line to:
r = requests.get("http://localhost/")
Also, move the lines below out of wsgi.py and put them in testServerConnection.py:
import requests

r = requests.get("http://localhost/")
# or r = requests.get("http://localhost/?"), as the incoming HTTP request is routed
# directly to the Python script and might not have any parameters, just data
text1 = r.status_code
print(text1)  # print the response status code
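After moving those lines out, wsgi.py would be left with just the application callable, roughly like this (a sketch; environ['QUERY_STRING'] is where any query string sent by the device would show up):

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    # environ['QUERY_STRING'] holds the query string of the incoming request, if any
    return [b"<h1 style='color:blue'>Hello There shailendra!</h1>"]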
Start NGINX
and you also might have to run (I am not sure how uWSGI is set up with Nginx):
uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi
Run testServerConnection.py to send a test request to the localhost web server and print the response.
I got the answer to my question. Basically, to process a raw TCP request, you need to open a socket and accept the TCP connection on a specific port (the one you specified on the hardware):
import SocketServer


class MyTCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print "{} wrote:".format(self.client_address[0])
        # data which was received
        print self.data


if __name__ == "__main__":
    # replace <IP of the server> with your server's IP
    HOST, PORT = "<IP of the server>", 8000
    # Create the server, binding to HOST on port 8000
    server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)
    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()
After you get the data, you can do anything with it. As the data format was given in the GPS datasheet, I was able to parse the string and get the latitude and longitude out of it.
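For illustration, here is a rough sketch of the kind of parsing involved, based on the sample line from the access log above. The exact field offsets come from the device's datasheet, so treat the slicing below as an assumption rather than the actual protocol:

# Sample payload as it appeared in the nginx access log (parentheses stripped).
raw = "009591810720BP05000009591810720160122A1254.6449N07738.5244E000.0202007129.7200000000L00000008"

# Assumed layout: the coordinates follow the 'A' marker as ddmm.mmmmN dddmm.mmmmE,
# i.e. degrees plus decimal minutes.
pos = raw.index("A") + 1
lat_raw, lat_hem = raw[pos:pos + 9], raw[pos + 9]          # e.g. "1254.6449", "N"
lon_raw, lon_hem = raw[pos + 10:pos + 20], raw[pos + 20]   # e.g. "07738.5244", "E"

lat = float(lat_raw[:2]) + float(lat_raw[2:]) / 60.0
lon = float(lon_raw[:3]) + float(lon_raw[3:]) / 60.0
if lat_hem == "S":
    lat = -lat
if lon_hem == "W":
    lon = -lon

print(lat, lon)  # roughly 12.9107 N, 77.6420 E for the sample above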
Related
So I want to send a request to a website through a proxy. The script looks like this, and it's made with the socket library in Python:
import socket
TargetDomainName="www.stackoverflow.com"
TargetIP="151.101.65.69"
TargetPort=80
ProxiesIP=["107.151.182.247"]
ProxiesPort=[80]
Connect=f"CONNECT {TargetDomainName} HTTP/1.1"
Connection=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
Connection.connect((ProxiesIP[0],ProxiesPort[0]))
Connection.sendto(str.encode(Connect),(TargetIP, TargetPort))
Connection.sendto(("GET /" + TargetIP + " HTTP/1.1\r\n").encode('ascii'), (TargetIP, TargetPort))
Connection.sendto(("Host: " + ProxiesIP[0] + "\r\n\r\n").encode('ascii'), (TargetIP, TargetPort))
print (Connection.recv(1028))
Connection.close()
My question is: why do I get the 400 Bad Request error?
You did not indicate whether the 400 reply is coming from the proxy or the target server. But both of your commands are malformed.
Your CONNECT command is missing a port number, a Host header since you are requesting HTTP 1.1, and trailing line breaks to terminate the command properly.
Your GET command is sent to the target server (if CONNECT is successful) and should not be requesting a resource by IP address. It is also sending the wrong value for the Host header. The command is relative to the target server, so it needs to specify the target server's host name.
Also, you should be using send()/sendall() instead of sendto().
Try something more like this instead:
import socket
TargetDomainName="www.stackoverflow.com"
TargetIP="151.101.65.69"
TargetPort=80
ProxiesIP=["107.151.182.247"]
ProxiesPort=[80]
Connection=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
Connection.connect((ProxiesIP[0], ProxiesPort[0]))
Connection.sendall((f"CONNECT {TargetDomainName}:{TargetPort} HTTP/1.1\r\n").encode("ascii"))
Connection.sendall((f"Host: {TargetDomainName}:{TargetPort}\r\n\r\n").encode("ascii"))
print (Connection.recv(1028))
Connection.sendall(("GET / HTTP/1.1\r\n").encode('ascii'))
Connection.sendall((f"Host: {TargetDomainName}\r\n").encode('ascii'))
Connection.sendall(("Connection: close\r\n\r\n").encode('ascii'))
print (Connection.recv(1028))
Connection.close()
You really need to read the proxy's reply before sending the GET command. The proxy will send its own HTTP reply indicating whether it successfully connected to the target server or not.
You really should not be implementing HTTP manually, though; there are plenty of HTTP libraries for Python that can handle these details for you. Python even has one built in: http.client.
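For example, http.client can drive the same proxy CONNECT for you. A minimal sketch, using the same (assumed) proxy and target addresses as above:

import http.client

# Connect to the proxy, then tunnel through it to the target; CONNECT is sent for us.
conn = http.client.HTTPConnection("107.151.182.247", 80)
conn.set_tunnel("www.stackoverflow.com", 80)

conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)
print(resp.read(1028))

conn.close()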
Currently I have something called a client, and a server.py that is running a Flask server. I also have a different machine running a different Flask server (with the host being the IP address of the machine).
On one of the app routes, it will reroute the request to the different machine (because of reasons) to process.
Essentially, it looks something like this:
server.py
@app.route('/endpointapi', methods=['POST'])
def doStuff():
    f = request.form['message']
    correctURL = "http://somenumber.2.3:port/endpointapi"
    r = requests.post(correctURL, data={'message': f})
    return r.text
differentMachineOneSameNetwork.py
@app.route('/endpointapi', methods=['POST'])
def doStuff():
    # does things correctly
    return updated_message
The somenumber.2.3 is a valid IP address of the machine that is running differentMachineOneSameNetwork.py, along with the port. Both machines are running a Flask web server, but the client can only send requests to server.py. However, it is not connecting. Is there any reason why it might not be working?
EDIT
The ports are artificially chosen: 5555 for server.py, and 8090 for the different machine. It works when we test connections from a machine to itself using curl commands, but it does not work when we try to curl from the server machine to the other machine.
What's working:
Curl from server to server flask
Curl from differentMachine to differentMachine flask
Curl from differentMachine to server flask
Not working:
Curl from server to differentMachine doesn't work. Connection just times out. Example command that was sent:
curl -X POST -F "message=some bojangle" http://correct.ip.address:8090/endpointapi
I get:
curl: (7) Failed to connect to correct.ip.address port 8090: Connection timed out
It just times out after trying to establish a connection, giving an HTTP response code of 500.
I am trying to make a server in Python using sockets that I can connect to with any web browser. I am using "localhost" as the host and 8888 as the port.
When I attempt to connect to it, the stuff I want to be shown shows up for a split-second, and then it goes away with the browser saying "The connection was reset".
I've made it do something very simple to test if it still does it, and it does.
Is there a way to stop this?
import time
import socket

HOST = "localhost"
PORT = 8888

def function(sck):
    sck.send(bytes("test", "UTF-8"))
    sck.close()

ssck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssck.bind((HOST, PORT))
ssck.listen(1)
while True:
    sck, addr = ssck.accept()
    function(sck)
Probably the same problem as Perl: Connection reset with simple HTTP server, Ultra simple HTTP socket server, written in PHP, behaving unexpectedly, HTTP Server Not Sending Complete File To WGET, and Firefox. Connection reset by peer?. That is, you don't read the HTTP request from the browser but simply send your response and close the connection.
tl;dr
Your function should be:
def function(sck):
    sck.send(bytes("HTTP/1.1 200 OK\n\n<header><title>test page</title></header><body><h1>test page!</h1></body>", "UTF-8"))
    sck.close()
With a server as simple as that, you're only creating a TCP socket.
The HTTP protocol expects the client to ask for a page with a request like:
GET /somepath/somepage.html HTTP/1.1
Host: somehost.com
OtherHeader: look at the http spec
The response should then be:
HTTP/1.1 200 OK
some: headers
<header></header><body></body>
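Putting the two together, a minimal version of the server from the question that first reads the browser's request and then sends a complete HTTP response might look like this (a sketch, reusing the same host and port):

import socket

HOST = "localhost"
PORT = 8888

RESPONSE = ("HTTP/1.1 200 OK\r\n"
            "Content-Type: text/html\r\n"
            "Connection: close\r\n"
            "\r\n"
            "<html><body><h1>test page!</h1></body></html>")

def function(sck):
    sck.recv(4096)                      # read (and ignore) the browser's request first
    sck.send(bytes(RESPONSE, "UTF-8"))  # then send a complete HTTP response
    sck.close()

ssck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssck.bind((HOST, PORT))
ssck.listen(1)
while True:
    sck, addr = ssck.accept()
    function(sck)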
Below is the simple script I'm using to redirect regular HTTP requests on port 8080; it redirects them (causes them to be redirected, at least) right away, depending on the source IP address.
It works (for HTTP); however, I would like to have the same behavior for HTTPS requests coming in over port 443. Assume that if the redirection were not present, incoming clients to this simple server would be able to handshake with the target they are being redirected to via a self-signed certificate.
import SimpleHTTPServer
import SocketServer

LISTEN_PORT = 8080
source = "127.0.0.1"
target = "http://target/"


class simpleHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_POST(self):
        clientAddressString = ''.join(str(self.client_address))
        if source in clientAddressString:
            # redirect incoming request
            self.send_response(301)
            new_path = '%s%s' % (target, self.path)
            self.send_header('Location', new_path)
            self.end_headers()


handler = SocketServer.TCPServer(("", LISTEN_PORT), simpleHandler)
handler.serve_forever()
I can use a self-signed certificate and have access to the files "server.crt" and "server.key" that are normally used for this connection (without the redirecting Python server in the middle). I am not sure what happens when I put a redirection in between like this, although I assume it has to be part of the handshaking chain.
How can I achieve this behavior?
Is there anything I should modify apart from the new target and the response code within request headers?
I will split my answer into Networking and Python parts.
On the Networking side, you cannot redirect at the SSL layer, so you need a full HTTPS server and must redirect the GET/POST request once the SSL handshake is complete. The response code and the actual do_POST or do_GET implementation would be exactly the same for both HTTP and HTTPS.
As a side note, don't you get any issues with redirecting POSTs? When you do a 301 on POST, the browser will not resend the POST data to your new target, so something is likely to break at the application level.
On the Python side, you can augment an HTTP server into an HTTPS one by wrapping the socket:
import BaseHTTPServer, SimpleHTTPServer
import ssl
handler = BaseHTTPServer.HTTPServer(("", LISTEN_PORT), simpleHandler)
# certfile should be a PEM file containing the certificate followed by the private key
handler.socket = ssl.wrap_socket(handler.socket, certfile='path/to/combined/cert-and-key.pem', server_side=True)
handler.serve_forever()
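If you prefer to keep the server.crt and server.key files mentioned in the question separate, ssl.wrap_socket also accepts a keyfile argument; a sketch assuming those two files sit next to the script:

handler.socket = ssl.wrap_socket(handler.socket,
                                 certfile='server.crt',
                                 keyfile='server.key',
                                 server_side=True)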
Hope this helps.
I have a web server using gevent.pywsgi.WSGIServer (http://www.gevent.org/gevent.pywsgi.html) and I need to handle a non-HTTP request as well as normal HTTP requests.
Server:
import gevent.pywsgi

web_server = gevent.pywsgi.WSGIServer(('', 8080), viewer_command_server)
web_server.serve_forever()
Handler:
def viewer_command_server(env, start_response):
    if env['REQUEST_METHOD'].upper() == "PUT":
        path = env["PATH_INFO"]
        start_response("200 OK", [("Content-Type", "text/html"), ("Cache-Control", "no-cache"), ("Connection", "keep-alive")])
        return [""]
This handles normal PUT requests, but I would also like to serve the crossdomain.xml file used by a Flash application. The problem is that I get this when the Flash application tries to retrieve its crossdomain.xml file:
"socket fileno=13 sock=66.228.55.170:9090 peer=96.54.202.251:63380: Invalid HTTP method: '<policy-file-request/>\x00'
96.54.202.251 - - [2012-05-21 22:58:53] "<policy-file-request/>" 400 0 2.940527
"
Is there any way to handle this request as well?
Adobe recommends running a separate TCP server on port 843 to serve this file.
I would like to keep everything on port 8080.
The protocol spoken on port 843 is not HTTP. See http://www.adobe.com/devnet/flashplayer/articles/socket_policy_files.html.
A valid HTTP request looks like
GET /path HTTP/1.0
(See e.g. http://www.jmarshall.com/easy/http/#sample for more examples.)
If there's a way to tell the Flash Player client to look for the policy file on some port other than 843, then maybe there's a way to tell it to use HTTP instead of this custom XML-ish "<policy-file-request/>" message, and then and only then could you handle this from your HTTP server.
Anything is possible but I don't think it sounds like a good idea at all to handle non-HTTP requests as part of your WSGI server on the same port 8080 that it uses for HTTP.
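For reference, the separate port-843 policy server that Adobe recommends can be very small. A rough sketch (the policy XML below is a permissive example, so tailor it to your domain; binding to port 843 normally requires root):

import socket

POLICY = (b'<?xml version="1.0"?>'
          b'<cross-domain-policy>'
          b'<allow-access-from domain="*" to-ports="*"/>'
          b'</cross-domain-policy>\x00')

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 843))
server.listen(5)

while True:
    conn, addr = server.accept()
    request = conn.recv(1024)  # the client sends '<policy-file-request/>' followed by a null byte
    if request.startswith(b'<policy-file-request/>'):
        conn.sendall(POLICY)   # reply with the policy XML, also null-terminated
    conn.close()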
I managed to peel this one back a bit further today. Buried in the Adobe documentation is a note that if you are using a raw socket, Flash will go looking for your cross-domain file using their raw XML query. It does appear to work if you specify 'http', and it does go and get the cross-domain file via HTTP. The problem for me was that I was using a raw TCP socket in my Flash script, so it went off and tried to get the cross-domain file from that server.
So to keep things simple, I will change the network calls to use HTTP. That is what they are doing anyway (I was using a sample I found that does streaming with an HTTP multipart response).