I am working on Google Cloud IoT core and there I came across a problem in one of the samples (end-to-end example) provided in the online docs: google cloud iot exercise
There are two scripts, for Server and Device, and while running the device script I am facing this problem:
From the command-line args, it looks like you're passing in rsa_cert.pem for --ca_certs; that file is your device's certificate, not a CA bundle. You need to wget the Google root certificate (wget https://pki.google.com/roots.pem) and then pass the path to the downloaded roots.pem for the --ca_certs argument.
Traceback (most recent call last):
  File "cloudiot_pubsub_example_mqtt_device.py", line 249, in <module>
    main()
  File "cloudiot_pubsub_example_mqtt_device.py", line 213, in main
    client.connect(args.mqtt_bridge_hostname, args.mqtt_bridge_port)
  File "/usr/local/lib/python2.7/dist-packages/paho/mqtt/client.py", line 768, in connect
    return self.reconnect()
  File "/usr/local/lib/python2.7/dist-packages/paho/mqtt/client.py", line 927, in reconnect
    sock.do_handshake()
  File "/usr/lib/python2.7/ssl.py", line 788, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
It's possible that a firewall (e.g. on the Cloud Shell machine) is blocking Python from connecting via port 8883. Can you try calling the cloudiot_pubsub_example_mqtt_device.py script with the port set to 443, e.g.
python <your_existing_parameters> --mqtt_bridge_port=443
You may also want to try the HTTP device sample for publishing messages, since it doesn't use port 8883, which may be blocked on your network.
In my tests, I was only able to run the exercise from the Google Cloud Shell after setting the port to 443; hopefully this resolves the issue for you.
Note: If you're encountering issues verifying the server certificate, you need to download the Google root certificate by calling:
wget https://pki.google.com/roots.pem
Update: You may also want to try setting the Python version in your virtual environment to Python 2 by creating the environment as:
virtualenv env --python=python2
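For reference, what a correct --ca_certs value buys you can be sketched with the standard ssl module (a modern Python 3 sketch, even though the sample above runs on 2.7; the helper name make_verifying_context is my own, not part of the sample):

```python
import ssl

def make_verifying_context(ca_certs_path=None):
    """Build a client-side TLS context that verifies the server.

    ca_certs_path: path to a CA bundle such as the downloaded roots.pem.
    If omitted, the system's default trust store is used. Passing a device
    key or certificate here instead of a CA bundle is the kind of mix-up
    that produces CERTIFICATE_VERIFY_FAILED errors like the one above.
    """
    # PROTOCOL_TLS_CLIENT enables hostname checking and CERT_REQUIRED.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if ca_certs_path is not None:
        context.load_verify_locations(cafile=ca_certs_path)
    else:
        context.load_default_certs()
    return context
```

paho-mqtt's `client.tls_set(ca_certs='roots.pem')` does essentially this internally, which is why pointing --ca_certs at roots.pem fixes the handshake.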
I've created a web application using Django 3.2.3 and Python 3.6.8. The application runs without errors in my development environment on Windows 10 and on Windows Server 2019. The problem starts when I try to integrate my app with IIS for production: I get two different errors.
The first error happens when I initially submit the form, and the second after I click the browser's back button and re-submit the form:
The first error is:
[WinError 5] Access is denied: 'C:\\windows\\system32\\config\\systemprofile\\AppData\\Local\\.certifi'
If I click the browser back button and re-submit the form, I then get this error:
Error occurred: Traceback (most recent call last):
  File "C:\Tools\VirtualEnvironments\podbv2\Lib\site-packages\wfastcgi.py", line 849, in main
    for part in result:
  File ".\PODBWeb\PODBInterfaces\pod.py", line 19, in search
    SPs.get_file("Monthly Reports/", "Inventory Report.xlsx", output_location='c:/!test/')
  File ".\POD\PODInterfaces\sharepoint.py", line 134, in get_file
    r = self.retry_loop(self.session.get(rest_call))
  File "C:\Tools\VirtualEnvironments\pod\lib\site-packages\requests\sessions.py", line 555, in get
    return self.request('GET', url, **kwargs)
  File "C:\Tools\VirtualEnvironments\pod\lib\site-packages\requests\sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Tools\VirtualEnvironments\pod\lib\site-packages\requests\sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "C:\Tools\VirtualEnvironments\pod\lib\site-packages\requests\adapters.py", line 416, in send
    self.cert_verify(conn, request.url, verify, cert)
  File "C:\Tools\VirtualEnvironments\pod\lib\site-packages\requests\adapters.py", line 228, in cert_verify
    "invalid path: {}".format(cert_loc))
OSError: Could not find a suitable TLS CA certificate bundle, invalid path: C:\windows\system32\config\systemprofile\AppData\Local\.certifi\cacert.pem
StdOut:
StdErr:
I could be wrong, but something about the IIS configuration is preventing Python from reading the Windows certificate store, which is why it's complaining about not being able to find a certificate bundle. My app reads the Windows certificate store natively because I have the python-certifi-win32 library installed, so it shouldn't be trying to find a certificate bundle on the file system.
Can anyone help me understand why this might be happening, and how I can fix the issue without having to deploy a certificate bundle, i.e. how I can get my app working "as is"? It works without IIS, but of course all the docs suggest I shouldn't deploy my Django app into production without a proper web server fronting it.
Please respect the fact that the business prefers I use IIS and Windows for support purposes.
Thank you in advance for your help.
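As a debugging aid, requests resolves its CA bundle in a documented order: an explicit verify= path, then the REQUESTS_CA_BUNDLE or CURL_CA_BUNDLE environment variables, then certifi's bundled cacert.pem. A rough sketch of that lookup (resolve_ca_bundle is my own name, not a requests API), which suggests one workaround: point REQUESTS_CA_BUNDLE at a path the IIS application-pool identity can actually read:

```python
import os

def resolve_ca_bundle(explicit_path=None):
    """Roughly how requests picks the CA bundle used for TLS verification.

    An explicit verify=/path wins, then REQUESTS_CA_BUNDLE, then
    CURL_CA_BUNDLE; otherwise requests falls back to certifi's bundled
    cacert.pem (represented by None here for illustration). The error
    above means that final fallback resolved to an unreadable path under
    the system profile's AppData.
    """
    if explicit_path:
        return explicit_path
    for var in ("REQUESTS_CA_BUNDLE", "CURL_CA_BUNDLE"):
        value = os.environ.get(var)
        if value:
            return value
    return None  # real requests would call certifi.where() here
```

Setting REQUESTS_CA_BUNDLE (e.g. in the FastCGI environment variables for the application pool) short-circuits the unreadable .certifi path, though it does still rely on a bundle file existing somewhere readable.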
Context:
We run Cypress tests which use instances of our application started using manage.py test package.test.suite. The test fixtures and environment are all set up using a class extended from django.contrib.staticfiles.testing.StaticLiveServerTestCase. A unit test/method is added to the extended class which invokes Cypress as a subprocess to run the tests and asserts against the exit code of the subprocess.
Versions:
Python: 3.6.8
Django: 2.2.3
macOS: Mojave 10.14.6
The problem:
This worked well until yesterday, when I updated Cypress via npm. Now when I start a server using manage.py test some.test.suite, the server fails to serve all of the static resources requested by the browser. Specifically, it almost always fails to serve some .woff fonts and a random JavaScript/CSS file or two. The rest of the files are served, but the browser never receives a response for those five or six. Eventually I'll get a ConnectionResetError: [Errno 54] Connection reset by peer in the terminal (stack trace below). Additionally, if I enable the cache in my browser and attempt a few refreshes, things work fine, almost as if there's a limit to the number of files that can be served at once, and once some files are cached in the browser the number of requested files falls below that limit.
When I do python manage.py runserver, however, I don't seem to have this problem at all with or without caching enabled.
Stack Trace:
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 52655)
Traceback (most recent call last):
File "/Users/myuser/.pyenv/versions/3.6.8/lib/python3.6/socketserver.py", line 320, in _handle_request_noblock
self.process_request(request, client_address)
File "/Users/myuser/.pyenv/versions/3.6.8/lib/python3.6/socketserver.py", line 351, in process_request
self.finish_request(request, client_address)
File "/Users/myuser/.pyenv/versions/3.6.8/lib/python3.6/socketserver.py", line 364, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/Users/myuser/.pyenv/versions/3.6.8/lib/python3.6/socketserver.py", line 724, in __init__
self.handle()
File "/Users/myuser/Path/To/Project/venv/lib/python3.6/site-packages/django/core/servers/basehttp.py", line 171, in handle
self.handle_one_request()
File "/Users/myuser/Path/To/Project/venv/lib/python3.6/site-packages/django/core/servers/basehttp.py", line 179, in handle_one_request
self.raw_requestline = self.rfile.readline(65537)
File "/Users/myuser/.pyenv/versions/3.6.8/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [Errno 54] Connection reset by peer
----------------------------------------
Additional Notes/Observations
The problem also occurs when I run the tests headless.
Installing Cypress did run into a hitch that required me to reinstall Electron and Cypress, clear the npm cache and delete the Cypress cache folder.
It's also strange that the problem seems to be with the python environment and not Cypress itself since accessing the web server via my browser and not Cypress produces the same behaviour; so perhaps Cypress is just a red herring and some sort of shared dependency was changed/updated in the process.
If I attempt to access one of the files that doesn't get served during a request (displayed as 'pending' in the Chrome DevTools network tab) directly via a second tab, it will succeed (but sometimes lagged by several seconds).
After closing the test server and attempting to run it again, I will get an OSError: [Errno 48] Address already in use error for up to a minute or two. Previously the server relinquished the address/port immediately upon closing (or so I assume, since I had never seen this before despite rapidly closing and re-running tests to check fixture changes in the past).
Things I've Tried:
Rebuilding my virtualenv from scratch
Copying my old venv folder from a Time Machine backup from a time when things worked
Reverting the version of Cypress back to what it was prior to the problem
Looking into timeouts and connection limits of using manage.py test vs manage.py runserver (Didn't find anything).
Toggling DEBUG mode and profiling on/off.
Switching between Chrome and Chromium in Cypress
Building new environments with different python versions (3.6.5, 3.6.7 and 3.6.8)
Switching Django back to 2.2.2 from 2.2.3
Any help would be appreciated!
Update
Looking into this further, it looks like the requests for the files that never get a response don't make it from the socket layer to the WSGIHandler (assuming they reach the socket layer at all, which I'd expect they do).
Update 2
I see the same behaviour with manage.py runserver if I include the --nothreading switch. I had a co-worker give it a test and he indeed saw the same behaviour with manage.py runserver --nothreading but manage.py test test.to.run still functioned fine for him.
Also, removing the font references from the CSS/templates just results in a different set of five files that aren't served.
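The --nothreading observation above suggests the test server is handling one connection at a time while the browser opens several in parallel for static assets. A minimal stdlib sketch of the difference (these are not Django's actual server classes): mixing in socketserver.ThreadingMixIn is what lets a server service concurrent requests instead of queueing them on one socket:

```python
import http.server
import socketserver
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

class ThreadedHTTPServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
    """One thread per connection, so parallel asset requests don't stall
    behind each other the way they can on a plain single-threaded server."""
    daemon_threads = True

def start_server():
    # Port 0 asks the OS for a free port; the real one is in server_address.
    server = ThreadedHTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Django's own LiveServerThread internals differ by version, but the same idea (a threading server class behind the live-server test case) is the usual direction for this class of symptom.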
I have a script that uses Neo4j to track user taste preferences for alcohol types. Basically, when a user sets their preferences via an API endpoint, the response is buffered to Kafka and I pick it up from there. I am getting the following error when trying to read/write to Neo4j via the neo4j Python driver:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/ssl.py", line 787, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/local/lib/python2.7/ssl.py", line 657, in read
    v = self._sslobj.read(len, buffer)
socket.error: [Errno 104] Connection reset by peer
Exception socket.error: error(104, 'Connection reset by peer') in 'neo4j.bolt._io.ChunkedInputBuffer.receive' ignored
INFO:UserSettingsProcessorProduction:2018-09-25 13:01:34 Type:<class 'neo4j.exceptions.ServiceUnavailable'> Filename:user_settings_processor.py Line:258 ERROR: Failed to write to closed connection Address(host='54.225.50.91', port=24786)
The strange thing is that I cannot reproduce it locally, but I get it often while it is running in a Docker container. I read somewhere that it could be a Docker configuration issue, if the container is private, for example, or something like that. I deployed it via AWS ECS (Elastic Container Service) and it is running on an EC2 instance with the Amazon Linux AMI. If you have any suggestions on what might fix it, I will be very thankful!
I will keep the thread updated if I find an answer also.
We decided to use GraphQL for communicating with Neo4j, and this no longer seems to be an issue. I couldn't find any solution or a clue as to why it behaves like this while working perfectly fine on my computer. The service is deployed in a Docker container, using ECS.
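While the root cause was never pinned down, transient resets like this are often papered over by retrying the failed call with backoff. A generic sketch of such a wrapper (the names are mine, not part of the neo4j driver; in practice you would also recreate the driver session on failure):

```python
import time

def call_with_retry(fn, retries=3, base_delay=0.5, retriable=(OSError,)):
    """Call fn(), retrying with exponential backoff on transient network
    errors such as 'Connection reset by peer' (ConnectionResetError is an
    OSError subclass). Re-raises the last error if all attempts fail."""
    for attempt in range(retries):
        try:
            return fn()
        except retriable:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage would be something like call_with_retry(lambda: session.run(query)), wrapping whichever driver call was failing.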
I have a Python program that has been running well for several years on a laptop in my office. The laptop is a Windows device, and Python 2.7.8 was installed on it using a Windows installer. The program checks an email address for specific emails and, if it finds them, processes the data in them, re-packages the data into PNGs, and FTPs them to a specific folder on my GoDaddy shared hosting account, where my website displays the images. It has worked fine for years.

However, that laptop has now died, so it's time I put this Python program in the cloud. GoDaddy does have Python, but its default version is 2.6.6; you can access their 2.7.2 version with #!/usr/local/bin/python2.7, but since that is only 2.7.2, I decided to install Python 2.7.8 on my shared hosting account, as that is the version I have been successful with. Now that I have manually installed Python, I am running my program via SSH to see where it chokes and to add the necessary packages. I am at a point where I am a bit stumped: I am getting a connection refused error:
Traceback (most recent call last):
File "CTWRT.py", line 627, in <module>
myEmailConnection = poplib.POP3_SSL(mailServer)
File "/home/tcimaglia/python27/lib/python2.7/poplib.py", line 350, in __init__
raise socket.error, msg
error: [Errno 111] Connection refused
mailServer = "pop.secureserver.net"
Does this have anything to do with running the program on the GoDaddy shared host and trying to access an email account linked to that same host, something@my-domain.com? The call works just fine when running on my laptop. Should mailServer be something else?
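A quick way to tell whether the shared host is blocking outbound POP3-over-SSL (port 995, the default for poplib.POP3_SSL) is a plain socket probe; a refusal here points at the hosting provider's firewall rather than at the mail code itself. The helper name is mine:

```python
import socket

def can_reach(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds.

    'Connection refused' from this probe means the network path or the
    host's firewall is the problem (or nothing is listening), rather
    than anything in the mail-handling code."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running can_reach("pop.secureserver.net", 995) from the GoDaddy SSH session would show whether the POP3 port is reachable from there at all.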
I'm trying to run the RabbitMQ Python tutorial, but with the sender on the VirtualBox host machine and the receiver and queue on the VirtualBox guest machine. So I modified the send.py code only by replacing localhost with 192.168.1.5. When I run it, I receive the following error:
...
File "/home/damian/.virtualenvs/kivy_1.9/local/lib/python2.7/site-packages/pika/adapters/base_connection.py", line 153, in _check_state_on_disconnect
raise exceptions.ProbableAuthenticationError
pika.exceptions.ProbableAuthenticationError
rabbitmq-server seems to be running, because when I stop it, send.py gives me:
...
File "/home/damian/.virtualenvs/kivy_1.9/local/lib/python2.7/site-packages/pika/adapters/blocking_connection.py", line 301, in _adapter_connect
raise exceptions.AMQPConnectionError(error)
pika.exceptions.AMQPConnectionError: Connection to 192.168.1.5:5672 failed: [Errno 111] Connection refused
which makes perfect sense.
How to fix that ProbableAuthenticationError?
The host machine is Debian 7 with Python 2.7.3 and pika 0.9.14; the guest is Ubuntu 15.04 with rabbitmq-server 3.4.3-2.
This is because you are trying to authenticate remotely using the default username and password, guest. Starting with RabbitMQ 3.3, the guest/guest credentials can only be used locally, so you need to create a new account for remote access.
This is taken from the change log here.
25603 prevent access using the default guest/guest credentials except via localhost (since 1.0.0)
It's possible to modify the RabbitMQ configuration to allow remote access with the guest account by removing guest from loopback_users, but it's recommended to create a new user instead, following best practices:
[{rabbit, [{loopback_users, []}]}].
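Creating such a user typically looks like this (myuser/mypassword are placeholders; run on the guest machine where rabbitmq-server lives):

```shell
# Create a non-default user and grant it full configure/write/read
# permissions on the default vhost; the administrator tag is optional
# and only needed for management-UI access.
rabbitmqctl add_user myuser mypassword
rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"
rabbitmqctl set_user_tags myuser administrator
```

On the pika side, pass the new credentials via pika.PlainCredentials('myuser', 'mypassword') in the ConnectionParameters instead of relying on the guest default.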