For the last few days I have been trying to install the Native Client SDK for Chrome on Windows and/or Ubuntu.
I'm behind a corporate network, and the only internet access is through an HTTP proxy that requires authentication.
When I run "naclsdk update" in Ubuntu, it shows
"urlopen error Tunnel connection failed: 407 Proxy Authentication Required"
Can anyone please help?
Try to download this file:
http://commondatastorage.googleapis.com/nativeclient-mirror/nacl/nacl_sdk/naclsdk_manifest2.json
It is the Native Client update manifest, but with https replaced by http in the URL. If you view the JSON file, you will see the different pepper_xx versions available. Use the links to download the one you want, again replacing https with http.
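If you'd rather script the fetch through the authenticated proxy, urllib2's ProxyHandler accepts credentials embedded in the proxy URL; a minimal sketch (the proxy address and credentials below are placeholders) is:

    import urllib2

    # Placeholder proxy credentials; replace with your own.
    proxy = urllib2.ProxyHandler({
        "http": "http://user:password@proxy.example.com:8080",
    })
    opener = urllib2.build_opener(proxy)

    url = ("http://commondatastorage.googleapis.com/nativeclient-mirror/"
           "nacl/nacl_sdk/naclsdk_manifest2.json")
    # Prints the manifest listing the available pepper_xx bundles.
    print(opener.open(url).read())

The same opener can then fetch the bundle archive linked in the manifest, again with https swapped for http.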
The naclsdk update tool is very difficult to use for those of us behind a strict firewall. It would be nice if Google provided a direct link to the latest SDK.
I got a solution, though not a direct one: I managed to redirect the HTTPS traffic through the HTTP proxy using a program called Proxifier. It works great.
So I have this web application running on Python's Flask, and I use gevent.pywsgi.WSGIServer to make it ready for production. The website is accessible from the Internet on all my devices, and even on others connected to different networks.
However, I then tried to add HTTPS support: I ran the certbot (Let's Encrypt) challenge, passed it, and obtained the certfile and keyfile. But when I pass them as arguments in the following call:
import gevent.pywsgi

# Serve the Flask app over TLS using the Let's Encrypt files
app_server = gevent.pywsgi.WSGIServer(
    (CONFIG['Flask']['host'], int(CONFIG['Flask']['port'])),
    app,
    certfile="fullchain.pem",
    keyfile="privkey_rsa.pem"
)
app_server.serve_forever()
Well, I get this error: ssl.SSLError: [SSL] PEM lib.
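To check whether the PEM files themselves are the problem, independent of gevent, they can be loaded directly with the standard ssl module; a minimal sketch, using the same file names:

    import ssl

    # A malformed PEM file or a mismatched cert/key pair raises
    # ssl.SSLError here, before gevent is involved at all.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="fullchain.pem", keyfile="privkey_rsa.pem")
    print("certificate and key loaded cleanly")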
PS: I have opened the port for the HTTPS server.
That's why I wonder if the problem comes from:
the domain name passed for the Let's Encrypt challenge,
the domain name registrar,
...
or something else?
Thank you in advance.
I know this question is old and I came here looking for an answer to something else, but I have gone through your exact situation and couldn't help answering it.
I actually resolved the situation the proper way to build a website. I registered with a free DNS service and routed all traffic coming to my router on ports 80/443 to a virtual machine on the LAN running NGINX, where I had already set up the Let's Encrypt certs. With this setup I don't have to enable SSL on any other machine in the network. (Off topic, but to enable SSL at the local network level you would need certificates on every machine in the network.) Instead, NGINX forwards requests over plain HTTP to whichever machine on the local network runs the application, while to the outside world all the traffic happens over SSL.
The NGINX configuration for a LAN is simple, and you can create one yourself with a little googling; the basic structure contains one server block and several child location blocks, where each location block corresponds to one web application on the LAN, as in the sketch below.
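A rough illustration (the hostname, certificate paths, and internal addresses are all hypothetical):

    server {
        listen 443 ssl;
        server_name example.dyndns.org;  # hypothetical free-DNS hostname

        ssl_certificate     /etc/letsencrypt/live/example.dyndns.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.dyndns.org/privkey.pem;

        # One location block per web application on the LAN,
        # each forwarded over plain HTTP.
        location /app1/ {
            proxy_pass http://192.168.1.10:5000/;
        }

        location /app2/ {
            proxy_pass http://192.168.1.11:8080/;
        }
    }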
Hope this helps a bit. I can put a more detailed answer with specific steps if you are still looking.
When I run a Python micro-service in a Docker or Kubernetes container, it works just fine. But with the Istio service mesh, it is not working.
I have added a ServiceEntry for two of my outbound external HTTP APIs (following the usual shape, sketched below). I can access the URL content with curl from inside the container, which is inside the service mesh, so I think the service entries are fine and working.
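For reference, a minimal ServiceEntry of that shape (the hostname is a placeholder) looks like:

    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: external-api
    spec:
      hosts:
      - api.example.com   # placeholder for the real external API host
      ports:
      - number: 80
        name: http
        protocol: HTTP
      resolution: DNS
      location: MESH_EXTERNAL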
But when I try it from the micro-service, which uses Python's xml.sax parser, I get upstream connect error or disconnect/reset before headers, though the same application works fine without Istio.
I think it is something related to Istio or Envoy or Python.
Update: I did inject the Istio-proxy sidecar. I have also added a ServiceEntry for the external MySQL database, and MySQL is reachable from the micro-service.
I have found the reason this was not working. My Python service uses the xml.sax parser library to parse XML from the internet, and that parser uses the legacy urllib package, which issues HTTP/1.0 requests.
Envoy doesn't support HTTP/1.0, hence the failure. As a workaround I set global.proxy.includeIPRanges="10.x.0.1/16" for Istio using helm, which bypasses the Envoy proxy entirely for all outgoing connections outside the given IP ranges.
But I would prefer not to globally bypass Istio.
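A narrower workaround, sketched here under the assumption that the service simply hands a URL to the parser, is to fetch the document with an HTTP/1.1 client and feed the bytes to the parser, so Envoy never sees an HTTP/1.0 request:

    import xml.sax
    import urllib2  # speaks HTTP/1.1, unlike the legacy urllib used by xml.sax.parse

    # Hypothetical handler; the real service would do useful work here.
    class PrintingHandler(xml.sax.ContentHandler):
        def startElement(self, name, attrs):
            print(name)

    url = "http://api.example.com/feed.xml"  # placeholder external API
    data = urllib2.urlopen(url).read()       # HTTP/1.1 request, accepted by Envoy
    xml.sax.parseString(data, PrintingHandler())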
I have a Django development server running on a remote CentOS VM on another LAN. I have set up port forwarding using SecureCRT to access the web page in a browser from my desk PC. I am not currently using Apache with the development server; it is shut down.
I start the server by running python manage.py runserver 0.0.0.0:80.
When I type either the IP or www.localhost.com into the web browser, the URL is read as if it has been doubled, with the host also being treated as the path:
Page not found (404)
Request Method: GET
Request URL: http://www.localhost.com/http://www.localhost.com/
When I try to access the development server from within the same LAN, the page loads fine.
I have been searching through the Django documentation and Stack Overflow, but I have yet to find a similar problem. Does anyone have any thoughts on why this may be happening and what a possible solution could be?
Thank you very much in advance!
It looks like the request URL is incorrect:
http://www.localhost.com/http://www.localhost.com/ should probably be http://actual_machine_IP.com/
I'd start searching there. You won't be able to access the VM's port 80 from a different LAN using localhost as the hostname, since localhost is probably already set in your hosts file.
If you want to test your dev environment remotely, can I suggest either setting up Apache properly on port 80 (as opposed to using Django's dev server; its privilege restrictions and the like can be circumvented with sudo, but that's bad practice) or using a pre-built shared dev service like vagrant share.
I have a NodeJS-SocketIO server that has clients listening from JS, PHP & Python. It works like a charm when the communication happens over a plain HTTP/WS channel.
Now, when I try to secure this communication, the websocket transport no longer works: it falls back to the xhr-polling (long polling) transport. Xhr-polling still works for the JS client, but not for the Python client, which depends purely on the websocket transport.
Things I tried:
On Node, using https (with commercial certificates) instead of http. Works well for serving pages via Node, but not for SocketIO.
Proxying via HAProxy (1.5-dev19), from HTTPS (HAProxy) to HTTP (Node). Couldn't get the websocket transport working; it falls back to xhr-polling on JS, and Python gets a 502 on handshake.
Proxying via STunnel (for HTTPS) -> HAProxy (websocket proxy) -> Node (SocketIO). This doesn't work either; the Python client still gets a 502 on handshake.
Proxying via STunnel (HTTPS) -> Node (SocketIO). This doesn't work either. I'm not sure STunnel supports websocket proxying.
node-http-proxy: throws a 500 (An error has occurred: {"code":"ECONNRESET"}) on websocket and falls back to xhr-polling.
I'm sure this is a common use case and a solution exists. I would really appreciate any help.
Thanks in advance!
My case seems to be a rare one. I built this whole environment on an EC2 instance based on Amazon Linux. As almost all the yum packages were out of date, I had to install pretty much every package from source, and in doing so I may have left some configuration unchanged or missed an addition, or the libs HAProxy requires may not have been the latest.
In any case, I tried building the environment again on an Ubuntu 12.04 based EC2 instance, and HAProxy worked like a charm with a few configuration tweaks. I can now connect to my SocketIO server from JS, Python & PHP over SSL without any problem. I could also create a secure TCP Amazon ELB that listens on 443 and proxies to a non-standard port (8xxx).
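For anyone attempting the same, a minimal HAProxy setup for SSL-terminated Socket.IO (the certificate path, backend address, and port are hypothetical) looks something like this:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend https_in
        bind *:443 ssl crt /etc/haproxy/certs/site.pem   # combined cert + key
        default_backend socketio_nodes

    backend socketio_nodes
        timeout tunnel 1h    # keep long-lived websocket connections open
        server node1 127.0.0.1:8000 check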
Let me know if anyone else encounters a similar problem, I will be happy to help!
I'm trying to access a server via SOAP with Python and I'm running into errors that tell me that I need to use the correct certificate - but I'm sure that I'm doing that.
I have some instructions that I'm following for this exercise: I need to connect to a WSDL link over HTTPS and invoke one method on the server. I converted the key that I received, using OpenSSL, to key.pem and cert.pem; I also tried SUDS and some other workarounds. With everything I try, though, the service tells me that I need to use the correct certificate.
I know that the certificates are actually correct, because I installed a SOAP client for Firefox, and when I connect to the server using the same certificates and the same arguments, I get valid data instead of the error message. Because of this, I think there's something wrong with how SUDS handles certificates. I tried the workarounds from a "SUDS over HTTPS with cert" question, but they aren't working for me.
Is there a way around this, or is there another Python SOAP client that works with certificates correctly (even if I have to manually create the XML for the request)?
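For what it's worth, the manual-XML route I mention can be sketched with the requests library, which accepts a client certificate directly (the endpoint, SOAPAction, and envelope below are placeholders; I haven't confirmed it against this service):

    import requests

    # Placeholder endpoint and envelope; substitute the real service values.
    url = "https://service.example.com/soap"
    envelope = """<?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <SomeMethod/>
      </soapenv:Body>
    </soapenv:Envelope>"""

    response = requests.post(
        url,
        data=envelope,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "SomeMethod"},
        cert=("cert.pem", "key.pem"),  # the converted client certificate and key
    )
    print(response.status_code)
    print(response.text)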