I'm trying to set up the webtatic yum repository for php-fpm in an Ansible playbook.
My code is:
- name: Setup webtatic yum source for php-fpm
  yum: name=https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
It fails with the error:
fatal: [test.example.com]: FAILED! => {"changed": false, "msg": "Failed to validate the SSL certificate for mirror.webtatic.com:443. Make sure your managed systems have a valid CA certificate installed. You can use validate_certs=False if you do not need to confirm the servers identity but this is unsafe and not recommended. Paths checked for this platform: /etc/ssl/certs, /etc/pki/ca-trust/extracted/pem, /etc/pki/tls/certs, /usr/share/ca-certificates/cacert.org, /etc/ansible. The exception msg was: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)."}
How can I write it correctly?
This tends to happen when your managed node does not have the CA root certificate bundle installed.
A possible fix would be to verify it is present before trying to install your rpm:
- name: Setup webtatic yum source for php-fpm
  yum:
    name: "{{ packages }}"
  vars:
    packages:
      - ca-certificates  # This package contains the required CA root certificate bundle
      - https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
The root of the problem might also be an incorrectly synchronized clock on the managed node. Assuming the ca-certificates package is already installed, CA certificate errors are sometimes caused by incorrect system time. You can check the server certificate against the local trust store with:
openssl s_client -host mirror.webtatic.com -port 443 \
-CAfile /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt
Look for Verify return code: 9 (certificate is not yet valid) or notBefore=...
Please try to install the ntp and ntpdate packages, then synchronize your time. Here is an example for CentOS of how to do it: https://thebackroomtech.com/2019/01/17/configure-centos-to-sync-with-ntp-time-servers/
This should fix your problem if it was due to unsynchronized time.
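The notBefore comparison above can also be done from stdlib Python: ssl.cert_time_to_seconds() parses the timestamp format that openssl prints, so you can compare it against the system clock. A small sketch (the notBefore string is a made-up example):

```python
import ssl
import time

# A made-up notBefore value in the format openssl prints for certificates.
not_before = "Jan 17 00:00:00 2019 GMT"

# ssl.cert_time_to_seconds() converts that string to a Unix timestamp.
valid_from = ssl.cert_time_to_seconds(not_before)

if time.time() < valid_from:
    print("certificate not yet valid -> the system clock is probably behind")
else:
    print("system clock is past notBefore")
```

If the script reports the certificate as not yet valid while the certificate looks fine from another machine, the clock is the culprit.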
Related
I'm trying to add a certificate into a Dockerfile, needed for Python requests package:
FROM python:3.9-slim-buster
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PYTHONPATH="$PYTHONPATH:/app"
WORKDIR /app
COPY ./app .
COPY ./certs/*.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
RUN pip3 install requests
CMD ["python3", "main.py"]
With the above Dockerfile, I get the following error:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain
Based on my tests, that is because requests is using certifi and is looking only inside /usr/local/lib/python3.9/site-packages/certifi/cacert.pem. If I add my certificates inside cacert.pem, everything works as expected and the errors are gone.
What is the pythonic way to deal with this issue? Ideally, I would prefer to place certificates into a directory instead of modifying a file. Is there a way to "force" Python requests to look inside /etc/ssl/certs for certificates, as well as into certifi's cacert.pem file? If I list the /etc/ssl/certs directory contents, it contains my .pem certificates.
Running apt-get update will not update ca-certificates; I'm already using the latest version. When I execute update-ca-certificates, the new certificates are detected:
STEP 10/11: RUN update-ca-certificates
Updating certificates in /etc/ssl/certs...
2 added, 0 removed; done.
Thank you for your help.
The only reasonable solution I found is:
from json import dumps  # dumps was used below but never imported

from requests import post
from requests.exceptions import HTTPError, RequestException, SSLError

try:
    # url, data and headers are defined elsewhere in the application
    result = post(url=url, data=dumps(data), headers=headers, verify='/etc/ssl/certs')
except (HTTPError, RequestException, SSLError):
    raise
Setting verify='/etc/ssl/certs' makes requests trust the certificates in that directory, so the self-signed certificates are picked up.
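Another option worth knowing about: requests also honours the REQUESTS_CA_BUNDLE environment variable, which saves passing verify= on every call. A sketch, assuming the Debian/Ubuntu default bundle path:

```python
import os

# Point requests at the distro bundle that update-ca-certificates regenerates.
# /etc/ssl/certs/ca-certificates.crt is the Debian/Ubuntu default path.
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/ca-certificates.crt"

# Any subsequent requests.get()/post() call in this process now verifies
# against that bundle instead of certifi's cacert.pem.
```

Setting the variable in the Dockerfile (ENV REQUESTS_CA_BUNDLE=...) has the same effect without touching the application code.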
I'm trying to integrate ElasticSearch with my Django project using the package django-elasticsearch-dsl, and I am getting this error:
>> $ curl -X GET http://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
I downloaded django-elasticsearch-dsl using the commands:
pip install https://github.com/sabricot/django-elasticsearch-dsl/archive/6.4.0.tar.gz
and
pip install django-elasticsearch-dsl
but both produced the same result.
I don't believe this is a duplicate question, because every other question I have read pertaining to this error has dealt only with the Elasticsearch library, not the django-elasticsearch-dsl library. The latter is built on top of the former, but I can't seem to find an elasticsearch.yml file as detailed in all other posts.
Here is what is installed in my virtual environment:
>> pip freeze
Django==2.2.2
django-elasticsearch-dsl==6.4.0
elasticsearch==7.0.2
elasticsearch-dsl==7.0.0
lazy-object-proxy==1.4.1
mccabe==0.6.1
pylint==2.3.1
python-dateutil==2.8.0
pytz==2019.1
requests==2.22.0
typed-ast==1.4.0
urllib3==1.25.3
According to this tutorial, the command curl http://127.0.0.1:9200 should return what looks like a JSON response, but instead I get the error:
curl: (7) Failed to connect to localhost port 9200: Connection refused
Have you created a documents.py for each app? You need it to push data into the Elasticsearch database. But first you need to install Elasticsearch itself properly (in your case it is not installed properly).
Try this tutorial for installing Elasticsearch; I used it recently and it worked like a charm.
Link
The path for elasticsearch.yml is /etc/elasticsearch/elasticsearch.yml.
And don't forget to start the service using sudo systemctl start elasticsearch (check its status using sudo systemctl status elasticsearch).
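A "Connection refused" from curl means nothing is listening on port 9200 at all, so the failure is independent of the Python client. One way to sanity-check this from Python before debugging django-elasticsearch-dsl (es_listening is a hypothetical helper):

```python
import socket

def es_listening(host="localhost", port=9200, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts and DNS failures alike.
        return False

# If this prints False, fix the Elasticsearch service before touching Django.
print(es_listening())
```

Only once this returns True is it worth looking at the Django side of the integration.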
I have Anaconda 3 and TensorFlow set up and they work well from the Anaconda command line. I would like to use PyCharm but cannot add the Conda interpreter.
I followed the instructions from:
https://www.jetbrains.com/help/pycharm/configuring-python-interpreter.html
I tried different things. First, with anaconda.exe as the conda executable:
C:\Logiciels\Anaconda3\Scripts\anaconda.exe create -p C:\Logiciels\Anaconda3\envs\Ex_Files_TensorFlow -y python=3.7
I obtain the error:
anaconda: error: argument : invalid choice: 'create' (choose from 'auth', 'label', 'channel', 'config', 'copy', 'download', 'groups', 'login', 'logout', 'move', 'notebook', 'package', 'remove', 'search', 'show', 'upload', 'whoami')
I then tried conda.exe as the executable:
C:\Logiciels\Anaconda3\Scripts\conda.exe create -p C:\Logiciels\Anaconda3\envs\Ex_Files_TensorFlow -y python=3.5
But I obtain the output:
Collecting package metadata: ...working... failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/r/noarch/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file a support request with your network engineering team.
SSLError(MaxRetryError('HTTPSConnectionPool(host=\'repo.anaconda.com\', port=443): Max retries exceeded with url: /pkgs/r/noarch/repodata.json.bz2 (Caused by SSLError("Can\'t connect to HTTPS URL because the SSL module is not available."))'))
I also tried:
C:\Logiciels\Anaconda3\python.exe create -p C:\Users\hel\.conda\envs\Ex_Files_TensorFlow -y python=3.7
The command output is then:
C:\Logiciels\Anaconda3\python.exe: can't open file 'create': [Errno 2] No such file or directory
But the file exists and is there. Why doesn't PyCharm see it?
I also tried version 3.5 instead of 3.7 and a different folder to set the environment in. Do you have any suggestions?
As suggested in the comments, I looked for existing environments in the conda prompt with the command:
conda info --envs
which lists the environments present on the machine.
Then I copied the path into the Interpreter field under Add Interpreter > Conda > Existing Environment > Interpreter.
I found my conda.exe buried in the folder
C:\Users\MYUSER\AppData\Local\Continuum\anaconda3\Scripts
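The "SSL module is not available" part of the CondaHTTPError above means the Python inside that environment cannot load its OpenSSL bindings (on Windows this is often a PATH/DLL issue that goes away when running from the Anaconda Prompt). You can test a given interpreter directly with a one-liner sketch:

```python
# Run this with the interpreter you are adding to PyCharm. If the import
# fails, that interpreter cannot load OpenSSL, and conda/pip downloads
# will fail with "SSL module is not available".
import ssl

print(ssl.OPENSSL_VERSION)
```

If this raises ImportError for the environment's python.exe but works for the base installation, the environment itself is broken, not PyCharm.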
I want to use miniconda3 to install a package but it fails with this error:
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file a support request with your network engineering team.
SSLError(SSLError(SSLError("bad handshake: SysCallError(104, 'ECONNRESET')",),),)
BTW, miniconda2 runs well; only miniconda3 has the problem.
P.S. I also tried to use wget --no-check-certificate to download the file and still got an error:
wget https://repo.anaconda.com/pkgs/main/linux-64/repodata.json.bz2
--2019-03-14 02:03:44-- https://repo.anaconda.com/pkgs/main/linux-64/repodata.json.bz2
Resolving repo.anaconda.com (repo.anaconda.com)... 104.16.130.3,
104.16.131.3, 2606:4700::6810:8303, ...
Connecting to repo.anaconda.com (repo.anaconda.com)|104.16.130.3|:443... connected.
Unable to establish SSL connection.
The system is CentOS and I have already used sudo yum update to update the system files and libraries.
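The ECONNRESET during the handshake suggests something between the machine and repo.anaconda.com is cutting TLS connections off. A stdlib sketch to reproduce the failure independently of conda and wget (tls_handshake_ok is a hypothetical helper):

```python
import socket
import ssl

def tls_handshake_ok(host, port=443, timeout=5.0):
    """Attempt a verified TLS handshake; return (ok, error-message-or-None)."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True, None
    except OSError as exc:
        # ECONNRESET, timeouts and certificate failures all end up here
        # (ssl.SSLError is a subclass of OSError).
        return False, str(exc)

# Example: tls_handshake_ok("repo.anaconda.com") should return (True, None)
# on a healthy network; a reset mid-handshake points at a proxy or firewall.
```

If this fails for repo.anaconda.com but succeeds for other HTTPS hosts, the problem is network-side filtering rather than the conda installation.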
I set up a Mosquitto broker on a Raspberry Pi and created a self-signed TLS server certificate with OpenSSL. The configuration works, as I can connect successfully with the mosquitto client from the terminal, as well as from MQTTBox and MQTT.fx.
However, when trying to connect with Python and paho-mqtt, I run into an error. My code:
import paho.mqtt.client as mqtt
# SETTINGS & CONSTANTS
(...)
TLS_CA = "./tls/mqtt.crt"
# MQTT CALLBACKS
(...)
# INIT & CONNECT CLIENT
client = mqtt.Client(DEVICE_ID)
(...)
client.tls_set(TLS_CA)
client.username_pw_set(MQTT_USER, MQTT_PSWD)
client.connect(MQTT_HOST, MQTT_PORT, MQTT_KEEPALIVE)
I get the following error:
File "/usr/lib/python3.4/ssl.py", line 804, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)
I've tried many things:
1) Insert self-signed certificate into Raspbian ca-certificates
sudo mkdir /usr/local/share/ca-certificates/extra
sudo cp mqtt.crt /usr/local/share/ca-certificates/extra/mqtt.crt
sudo update-ca-certificates
2) Play with Paho's tls_set() options. I think ca_certs=mqtt.crt and tls_version=ssl.PROTOCOL_TLSv1 should be enough.
3) Use tls_insecure_set(True). I know this is not a valid solution, but I just wanted to see if anything changed. The result is still the CERTIFICATE_VERIFY_FAILED error.
4) Use Python 2.7.9 and Python 3.4.2
I've actually run out of ideas.
After a long time of trying and reading everywhere, I realized the problem was caused by how the self-signed certificates were generated. I generated new certificates with different Common Names for the CA and the broker, and everything seems to work fine.
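For anyone debugging a similar CERTIFICATE_VERIFY_FAILED, one more stdlib check can rule out a malformed CA file: try loading it into an SSLContext. A sketch (ca_file_usable is a hypothetical helper; ./tls/mqtt.crt is the path from the question):

```python
import ssl

def ca_file_usable(path):
    """Return True if `path` parses as a PEM bundle that ssl can load."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    try:
        ctx.load_verify_locations(cafile=path)
    except (ssl.SSLError, OSError):
        # Garbage PEM raises SSLError; a missing file raises OSError.
        return False
    return ctx.cert_store_stats()["x509"] > 0

# Example: ca_file_usable("./tls/mqtt.crt")
```

If the file loads cleanly but verification still fails, the problem lies in the certificate chain itself (as it did here), not in how Python reads the file.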