I have a Node.js Socket.IO server with clients listening from JS, PHP & Python. It works like a charm when the communication happens over a plain HTTP/WS channel.
Now, when I try to secure this communication, the websocket transport no longer works; it falls back to the xhr-polling (long-polling) transport. Xhr-polling still works for the JS client, but not for Python, which depends purely on the websocket transport.
Things I tried:
On Node, using HTTPS (with commercial certificates) instead of HTTP. Works fine for serving pages via Node, but not for Socket.IO.
Proxying via HAProxy (1.5-dev19), from HTTPS (HAProxy) to HTTP (Node). Couldn't get the websocket transport working; it falls back to xhr-polling on JS, and Python gets a 502 on handshake.
Proxying via STunnel (for HTTPS) -> HAProxy (websocket proxy) -> Node (Socket.IO). This doesn't work either; the Python client still gets a 502 on handshake.
Proxying via STunnel (HTTPS) -> Node (Socket.IO). This doesn't work either. I'm not sure STunnel supports websocket proxying.
node-http-proxy: throws a 500 (An error has occurred: {"code":"ECONNRESET"}) on websocket and falls back to xhr-polling.
I'm sure this is a common use case and a solution exists. I would really appreciate any help.
Thanks in advance!
My case seems to be a rare one. I built this whole environment on an EC2 instance based on Amazon Linux. As almost all of the yum packages there are out of date, I had to install pretty much everything from source. In doing so I may have left some configuration unchanged or missed a setting, or the libraries HAProxy depends on may not have been the latest.
In any case, I tried building the environment again on an Ubuntu 12.04-based EC2 instance. HAProxy worked like a charm with a few configuration tweaks. I can now connect to my Socket.IO server from JS, Python & PHP over SSL without any problem. I could also create a secured TCP Amazon ELB that listens on 443 and proxies to a non-standard port (8xxx).
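For reference, a minimal sketch of the kind of HAProxy configuration that makes this work; the certificate path, ports, and backend address are placeholders for your own values:

global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend https_in
    # Terminate SSL here; the PEM file contains both certificate and key.
    bind *:443 ssl crt /etc/haproxy/certs/mysite.pem
    default_backend socketio_nodes

backend socketio_nodes
    # Keep long-lived websocket tunnels open instead of timing them out.
    timeout tunnel 1h
    server node1 127.0.0.1:8080

The key detail is the tunnel timeout: without it, idle websocket connections get cut off by the ordinary server timeout.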
Let me know if anyone else encounters a similar problem; I will be happy to help!
Related
So I have this web application running with Python's Flask, and I use gevent.pywsgi.WSGIServer to make my application ready for production. My website is accessible from the Internet on all my devices, and even on others from different networks.
However, I tried to add HTTPS support by running the certbot (Let's Encrypt) challenge. I passed the tests and obtained the certfile and keyfile, but when I pass them as arguments in the following call:
import gevent.pywsgi

# app is the Flask application; CONFIG is my own configuration dict.
app_server = gevent.pywsgi.WSGIServer(
    (CONFIG['Flask']['host'], int(CONFIG['Flask']['port'])),
    app,
    certfile="fullchain.pem",
    keyfile="privkey_rsa.pem",
)
Well, I get this error: ssl.SSLError: [SSL] PEM lib.
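One way to narrow this down, before involving gevent at all, is to load the pair directly with the standard-library ssl module; a minimal sketch, reusing the file names above:

import ssl

# Load the pair the same way a server would; a mismatched or badly
# formatted key raises the same "[SSL] PEM lib" error here too.
ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.load_cert_chain(certfile="fullchain.pem", keyfile="privkey_rsa.pem")
print("certificate and key load cleanly")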
PS: I opened my port for the HTTPS server.
That's why I wonder if the problem comes from:
the domain name passed for the letsencrypt test
the domain name registrar
...
Or something else?
Thank you in advance.
I know this question is old and I came here looking for an answer to something else, but I have been through your exact situation and couldn't help answering it.
I actually resolved the situation the proper way of building a website: I registered with a free DNS service and routed all traffic coming to my router on ports 80/443 to a virtual machine on the LAN running NGINX, where I had already set up Let's Encrypt certs. With this setup I don't have to enable SSL on any other machine in the network. (Off topic, but to enable SSL at the local-network level you would need certificates on every machine in the network.) Instead, you can simply forward requests from NGINX to any machine on your local network over plain HTTP, while all traffic to the outside world happens over SSL.
The NGINX configuration for a LAN is simple, and you can create one yourself with a little Google search. The basic structure contains one server block and several child location blocks, where each location block corresponds to one web application on the LAN; see the sketch below.
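A minimal sketch of that structure; the server name, certificate paths, and upstream addresses are placeholders for your own values:

# One server block terminates SSL for the whole LAN; each location
# proxies to a different internal web application over plain HTTP.
server {
    listen 443 ssl;
    server_name example.dynu.net;    # placeholder free-DNS name

    ssl_certificate     /etc/letsencrypt/live/example.dynu.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.dynu.net/privkey.pem;

    location /app1/ {
        proxy_pass http://192.168.1.10:5000/;   # e.g. the Flask app
    }

    location /app2/ {
        proxy_pass http://192.168.1.11:8080/;   # another internal app
    }
}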
Hope this helps a bit. I can put a more detailed answer with specific steps if you are still looking.
I am using the Azure IoT Hub Client SDK for Python, with a slightly modified version of the sample script from the GitHub repo to upload files to the IoT Hub. Everything works fine as long as I do not have to use a proxy for outgoing connections.
I tried to understand how to configure a proxy for this, but I did not find anything for the Python SDK. I also searched the other SDKs and found some ProxySettings in iothub_client_options.h in the C SDK, but I do not know how to set these settings from the Python client (assuming the settings actually work).
I also found an issue saying that connections over websockets need the Linux environment variables in a special format, but I do not use websockets.
I tried running my script in both Windows and Linux environments where the proxy system settings are configured correctly (Windows: Internet settings; Linux: environment variables).
Is there any documentation on this topic? Does anybody know how to configure a proxy on either Windows or Linux?
In my experience, a Python script using the Azure IoT Hub Client SDK can communicate with Azure IoT Hub without any proxy settings of its own, provided the OS has the proxy configured correctly.
However, there are some points to keep in mind depending on which protocols (HTTP, SOCKS, etc.) the proxy server supports:
Normally a proxy server is configured for the HTTP protocol and only allows HTTP communication. In that case a script using the IoT Hub client in HTTP mode works fine, but not in AMQP/MQTT mode.
If the proxy server supports a SOCKS protocol such as SOCKS4/SOCKS5, the script works in any mode, because SOCKS simply relays the traffic without inspecting the application protocol.
So please check which protocols your proxy server supports, and then either use HTTP mode or configure a SOCKS proxy so that the script works.
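For example, on Linux you can point the standard proxy environment variables at the proxy before the client is created. This is only a sketch, assuming the transport underneath honors those variables; the proxy address is a placeholder:

import os

# Placeholder corporate proxy; set these before creating the IoT Hub
# client so any HTTP-based transport underneath can pick them up.
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"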
I'm running Anaconda Python 2.7 and the latest Requests library on a Windows 7 desktop connected to a corporate network with an outbound proxy server at 10.0.0.255.
My python script reads as follows:
import requests
r = requests.get("http://google.com")
I've also tried many different intranet and internet URLs, HTTP and HTTPS, all with the same result: a 503 error.
I suspected the proxy was at fault, so I added the proxies=prox argument with the following definition:
prox = {
    "http": "http://10.0.0.255:80",
    "https": "http://10.0.0.255:443",
}
This made no difference, but it's entirely possible that my ports are wrong, as the documentation is a bit sparse on this argument (only one example).
I did try localhost and it gave me a different error:
ConnectionError: ('Connection aborted.', error(10061, 'No connection could be made because the target machine actively refused it'))
My machine hates me. Great.
At this point I'm stumped. It's probably something related to all the security c_rp on this machine, but I'm not sure what my next move is.
I am a n00b to Python and haven't coded in 20 years. That said, I wrote hard-core C and ran memory debugs deep in architectures to find leaks, so I'm not completely dumb, just very, very rusty.
Doing a GET request on localhost won't do anything unless there is a webserver running on localhost:80. Set up a Node.js webserver on localhost and then try again.
Most corporate proxies use port 8080 for all traffic.
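For example, a sketch that routes both schemes through the proxy on port 8080; the address is the one from the question, and the port is only an assumption to verify with your IT department:

import requests

# Route both HTTP and HTTPS traffic through the corporate proxy.
# Port 8080 is a common default, not a confirmed value.
prox = {
    "http":  "http://10.0.0.255:8080",
    "https": "http://10.0.0.255:8080",
}
r = requests.get("http://google.com", proxies=prox)
print(r.status_code)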
I have a Django development server running on a remote CentOS VM on another LAN. I have set up port forwarding using SecureCRT to access the web page in my browser from my desk PC. I am not currently using Apache with the development server; it is shut down.
I start the server by running python manage.py runserver 0.0.0.0:80.
When I type either the IP or www.localhost.com into the web browser, the URL is read as if it had been doubled, with the host also being treated as the path:
Page not found (404)
Request Method: GET
Request URL: http://www.localhost.com/http://www.localhost.com/
When I try to access the development server from within the same LAN the page loads up fine.
I have been searching through the Django documentation and Stack Overflow, but I have yet to find a similar problem. Does anyone have any thoughts on why this may be happening and what a possible solution could be?
Thank you very much in advance!
It looks like the request URL is incorrect:
http://www.localhost.com/http://www.localhost.com/ should probably be http://actual_machine_IP.com/
I'd start searching there. You won't be able to access the VM's port 80 from a different LAN using localhost as the hostname, since localhost is almost certainly already set in your hosts file.
If you want to test your dev environment remotely, can I suggest either setting up Apache properly on port 80 (as opposed to using Django's dev server; the privilege restrictions and all that can be circumvented with sudo and other bad practice) or using a pre-built shared dev service like Vagrant Share.
For the last few days I have been trying to install the Native Client SDK for Chrome on Windows and/or Ubuntu.
I'm behind a corporate network, and the only internet access is through an HTTP proxy with authentication involved.
When I run naclsdk update on Ubuntu, it shows:
"urlopen error Tunnel connection failed: 407 Proxy Authentication Required"
Can anyone please help?
Try to download this file:
http://commondatastorage.googleapis.com/nativeclient-mirror/nacl/nacl_sdk/naclsdk_manifest2.json
It is the Native Client update summary, but in the URL I replaced https with http. If you view the JSON file, you will see the different pepper_xx versions available. Use the links to download the one you want, but again replace https with http.
The naclsdk update tool is very difficult to use for those of us behind a strict firewall. It would be nice if Google provided a direct link to the latest SDK.
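If you would rather script the download through the authenticated proxy, here is a minimal sketch with requests; the proxy host, port, and credentials are placeholders:

import requests

# Placeholder credentials and proxy address; a 407 means the proxy
# expects exactly this kind of authentication on each request.
proxies = {
    "http": "http://user:password@proxy.example.com:8080",
}
url = ("http://commondatastorage.googleapis.com/nativeclient-mirror/"
       "nacl/nacl_sdk/naclsdk_manifest2.json")
r = requests.get(url, proxies=proxies)
r.raise_for_status()
print(r.text)  # lists the available pepper_xx bundles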
I got a solution, though not a direct one: I managed to use a program to redirect the HTTPS traffic through the HTTP proxy.
I used a program called Proxifier. Works great.