I am getting this error: signalling support is unavailable because the blinker library is not installed.
I am running Django 1.6.5 under Python 2.6.9.
Is it possible that the error will go away if I update Python on the server to 2.7.x?
If so, how can I update the server without losing everything I have done up to this point creating my website on the instance?
Thanks so much in advance.
Just install blinker by typing pip install blinker in the console.
If you use a virtualenv, be sure to install it there by activating the virtualenv before running the pip command.
You may also want to review your staging procedure so that project dependencies are installed correctly.
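For example, a minimal sketch assuming your virtualenv lives in a venv directory next to the project and you keep a requirements.txt:
source venv/bin/activate            # activate the virtualenv so blinker lands in the right environment
pip install blinker
echo "blinker" >> requirements.txt  # optionally pin it so future deployments install it too
pip freeze | grep -i blinker        # confirm it is now available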
I am trying to run a Django project on an EC2 server. However, when I run python3 manage.py runserver, it returns this error: django.core.exceptions.ImproperlyConfigured: SQLite 3.9.0 or later is required (found 3.7.17). I then checked which version of SQLite3 my Python installation on the EC2 server uses by running sqlite3.sqlite_version, and it returned 3.7.17. So I tried to update SQLite3 using the default AWS EC2 Amazon Linux package manager, yum, by running yum install sqlite. It returned Package sqlite-3.7.17-8.amzn2.1.1.x86_64 already installed and latest version, even though that is not the latest version. How can I install the latest version of SQLite3 to fix this?
I had the same problem. Since my app is very small with few dependencies, I was able to quickly switch to an EC2 server running Ubuntu. You do need to learn how to use Ubuntu's package manager (apt).
At the time of writing, the installation provides:
Package: sqlite3
Version: 3.31.1-4ubuntu0.2
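A rough sketch of that route, assuming a fresh Ubuntu EC2 instance where the project runs under python3:
sudo apt update
sudo apt install -y sqlite3 python3-pip                        # Ubuntu ships a SQLite newer than 3.9.0
python3 -c "import sqlite3; print(sqlite3.sqlite_version)"     # should now print 3.9.0 or later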
I've been learning the ropes with AWS SAM and have successfully deployed a number of lambdas together with dependencies and other AWS services. However, I seem to have run into a problem when trying to deploy a lambda which relies on some specific dependencies.
Here is my requirements.txt file:
paramiko==2.4.2
cryptography==2.6.1
bcrypt==3.1.6
pynacl==1.3.0
This file is found in "packageRoot/myCodeUri/requirements.txt"
When I run sam build I get the following error:
2019-08-27 11:18:18 Running PythonPipBuilder:ResolveDependencies
Build Failed
Error: PythonPipBuilder:ResolveDependencies - {pynacl==1.3.0(wheel), cryptography==2.6.1(wheel), bcrypt==3.1.6(wheel)}
This (or at least a similar) error was reported here over 8 months ago but is currently unanswered.
P.S. I originally tried this with just paramiko, since that is the only library my script uses; as I understood it, the dependencies should be pulled in automatically during the build. However, that didn't work either.
Any help would be great.
I was getting the same error with another dependency while running sam build. I was able to resolve it by installing wheel in my Python (or venv) environment.
pip install wheel
This approach did not require the --use-container flag when running sam build.
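Roughly, assuming a virtualenv in a venv directory at the project root:
source venv/bin/activate
pip install wheel
sam build            # re-run the build once wheel is available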
Installing wheel didn't work for me; however, upgrading pip did.
python -m pip install --upgrade pip
I've managed to work around this and build and deploy lambdas that need the paramiko library by using a Docker container in interactive mode. Anyone having the same problem can have a look here.
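If you prefer to stay within sam itself, the --use-container flag mentioned above builds the dependencies inside a Lambda-like Docker container, which can avoid local wheel/build mismatches (a sketch, assuming Docker is installed and running):
sam build --use-container
sam deploy --guided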
I had this issue when trying to use the simplejson library, which had been added to solve serialization issues. Installing wheel and upgrading pip didn't help, so I just removed the library and handled the serialization issues within the db query instead.
On our staging machine, running any airflow command gives this error:
[2018-09-01 16:12:55,938] {__init__.py:37} CRITICAL - Cannot import api_auth.deny_all for API authentication due to: No module named api_auth.deny_all
api_auth seems to come along with airflow, as I tried pip install api_auth and could not find such a library.
On the same machine, I tried to reinstall a fresh, clean airflow in a virtualenv with pip install airflow, and still got this error.
I tried again on my own laptop and airflow works fine. So I suspect it is probably due to the historical ~/airflow/airflow.cfg on the staging machine.
I am not familiar with the airflow.cfg settings, and cannot find any clue on Google.
Does anyone know what may cause the issue and how to resolve it?
You are installing the wrong package for Apache Airflow.
Please install Airflow using the following:
pip install apache-airflow
instead of
pip install airflow
The Airflow package has been renamed to apache-airflow since version 1.8.0.
Check the following link for documentation:
https://airflow.apache.org/installation.html#getting-airflow
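A minimal cleanup sketch, assuming you are working inside the virtualenv mentioned in the question:
pip uninstall -y airflow        # remove the old, renamed package if it is present
pip install apache-airflow
airflow version                 # confirm the install now works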
I am trying to connect to the Watson Developer Cloud API, and am having issues installing Watson Developer Cloud. I am using
pip install --upgrade "watson-developer-cloud>=1.2.1"
The install gets hung up on installing Twisted. I have tried pip install twisted and installing from "Twisted-17.9.0.tar.bz2" downloaded from the Twisted site.
https://twistedmatrix.com/trac/wiki/Downloads
https://www.ibm.com/watson/developercloud/natural-language-classifier/api/v1/python.html?python
I have exhausted all my resources. Any ideas how to resolve the following error?
Error Code 1
Are you by chance running in a VM? If you are, try installing it on a standalone (non-VM) machine.
I had a similar issue trying to get Watson Developer Cloud going on a VM, and it also got stuck on the Twisted component, but I was able to take the (almost) same environment to a standalone machine (not a VM) and it installed.
I assumed that I had somehow not configured the interface correctly (which is probably true), but it installed on the standalone machine without issue.
Try installing it as Administrator.
Update your pip.
Before installing Watson, run this:
xcode-select --install
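Putting the suggestions above together, a sketch for macOS (assuming Python and pip are already on your PATH):
xcode-select --install                                 # command line build tools needed to compile Twisted
python -m pip install --upgrade pip setuptools wheel
pip install --upgrade "watson-developer-cloud>=1.2.1"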
I have spent the entire day trying to install the EB CLI on Windows in order to connect to AWS Elastic Beanstalk, but I keep getting the same error:
Running setup.py install for docker-py
Could not find .egg-info directory in install record for docker-py>=1.1.0 <=1.7.2 (from awsebcli)
I started out with the latest version of Python, but after reading about other users' issues on Stack Overflow I decided to downgrade my Python version to 3.4.0. However, I still get the same error, meaning that I cannot run eb init to connect to my Elastic Beanstalk instance, since the command is not recognised.
I also tried uninstalling docker-py and reinstalling it, but it still doesn't work.
Any ideas to what I am doing wrong?
It looks as if you may have version conflicts. See a similar issue here.
Try installing awsebcli in a virtual environment, as suggested by the AWS docs.
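For example, on Windows (a sketch; the ebenv directory name is just an illustration and assumes Python 3 is on your PATH):
python -m venv ebenv
ebenv\Scripts\activate
pip install --upgrade pip
pip install awsebcli
eb --version                    # confirm the eb command is now recognised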