I installed Theano and Keras on Pydroid 3 (Android) successfully, but when running Keras, Theano wasn't being used as the backend. So I installed Ubuntu 20 on Termux and installed Keras and Theano with the following command:
apt install python3-keras --no-install-recommends && apt install python3-theano --no-install-recommends
The installation succeeded. I then wanted to set the backend to Theano, so I looked for ~/.keras/keras.json, but it wasn't there. When I ran my test script anyway, it gave me the following error:
root@localhost:~# python3 testkeras.py
[localhost:21091] opal_ifinit: ioctl(SIOCGIFHWADDR) failed with errno=13
[localhost:21092] opal_ifinit: ioctl(SIOCGIFHWADDR) failed with errno=13
[localhost:21092] pmix_ifinit: ioctl(SIOCGIFHWADDR) failed with errno=13
[localhost:21092] oob_tcp: problems getting address for index 88256 (kernel index -1)
--------------------------------------------------------------------------
No network interfaces were found for out-of-band communications. We require
at least one available network for out-of-band messaging.
--------------------------------------------------------------------------
[localhost:21091] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 716
[localhost:21091] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 172
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
orte_ess_init failed
--> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[localhost:21091] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
1. I want to know what the problem was.
2. Suggestions are welcome.
Here is the code that I ran, in case anyone wants to see it:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
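For anyone reproducing this, the failure happens at import time, so the three imports above are enough. A slightly fuller smoke test could look like the sketch below; the layer sizes and random data are an arbitrary illustration, not part of the original script:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Arbitrary tiny model on random data, purely to confirm the backend works end to end
model = Sequential([
    Dense(8, activation="relu", input_shape=(4,)),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")
model.fit(np.random.rand(16, 4), np.random.randint(0, 2, size=(16, 1)), epochs=1, verbose=0)
print("smoke test passed")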
And here are the modules I have installed:
Package             Version
------------------- -------
decorator           4.4.2
h5py                2.10.0
Keras               2.2.4
Keras-Applications  1.0.6
Keras-Preprocessing 1.0.5
mpi4py              3.0.3
numpy               1.17.4
pip                 20.0.2
PyYAML              5.3.1
scipy               1.3.3
setuptools          45.2.0
six                 1.14.0
Theano              1.0.4
wheel               0.34.2
And I'm new to this machine learning field.
Some other information on the system:
root@localhost
--------------
OS: Ubuntu 20.04 LTS focal aarch64
Kernel: 4.4.147+
Uptime: 18805 days, 10 hours, 9 min
Packages: 202 (dpkg)
Shell: bash 5.0.16
Terminal: proot
CPU: Unisoc SC9863a (8) @ 1.200GHz
Memory: 957MiB / 1819MiB
Thank you Dr. Snoopy, I finally got it working correctly. I ended up deleting the OS and reinstalling it (using apt install proot-distro from Termux), although that probably wasn't strictly necessary. I think the real problem was the command apt install python3-keras --no-install-recommends: as you said, there were some unsatisfied dependencies or platform inconsistencies.
Finally, here are the steps to get it working:
1. In Termux, enter the following command: apt install proot-distro && proot-distro install ubuntu-18.04 && apt install python3-keras
2. Then use your favourite text editor to edit the config file; I use vim, so in this case: vim .keras/keras.json
The file will look like the following:
{
"floatx": "float32",
"epsilon": 1e-07,
"backend": "tensorflow",
"image_data_format": "channels_last"
}
Change the value of "backend" to theano (in my case I added theano). The file should then look like the following:
{
"floatx": "float32",
"epsilon": 1e-07,
"backend": "theano",
"image_data_format": "channels_last"
}
Then save the file and test it by starting the Python interactive interpreter and entering import keras.
The output should be: Using Theano backend.
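If you prefer an explicit check, you can also confirm the backend from Python itself. Note that the KERAS_BACKEND environment variable, if set, overrides keras.json; a minimal sketch:

import os
# Optional: the KERAS_BACKEND environment variable takes precedence over keras.json
os.environ.setdefault("KERAS_BACKEND", "theano")

import keras
print(keras.backend.backend())  # should print "theano"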
I think this should help someone out there.
Related
I am using Elastic Beanstalk to deploy my Django application. Today it suddenly stopped working without any breaking changes from the application side (I've changed some templates, nothing more).
The deployment times out after 10 minutes of trying to deploy the app, and nothing happens.
The only more or less useful hint I can see in the log is this:
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject/Command 01_migrate] : Activity execution failed, because: SystemCheckError: System check identified some issues:
ERRORS:
education.Author.photo: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
education.Course.cover_image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
education.CourseCategory.icon_image: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "pip install Pillow".
Using staging settings
App receivers connected
(ElasticBeanstalk::ExternalInvocationError)
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject/Command 01_migrate] : Activity failed.
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update .../postbuild_0_myproject] : Activity failed.
[2020-02-20T15:00:20.437Z] INFO [19057] - [Application update ...] : Activity failed.
[2020-02-20T15:00:20.507Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142/AppDeployStage0/EbExtensionPostBuild] : Activity failed.
[2020-02-20T15:00:20.507Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142/AppDeployStage0] : Activity failed.
[2020-02-20T15:00:20.508Z] INFO [19057] - [Application update app-9a24-200220_145942-stage-200220_145942#142] : Completed activity. Result:
Application update - Command CMD-AppDeploy failed
But I already have Pillow in requirements.txt and the log above says:
Requirement already satisfied: Pillow==6.2.1 in /opt/python/run/venv/lib64/python3.6/site-packages (from -r /opt/python/ondeck/app/requirements.txt (line 51))
How can I troubleshoot and fix this? And how can I avoid similar issues in the future? I am really frightened that the same problem may randomly pop up on the production environment.
Here's some more info about the configuration:
Here's what I have in .ebextensions:
01_packages.config:
packages:
  yum:
    git: []
    postgresql93-devel: []
db-migrate.config:
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myproject.settings
django.config:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: myproject/wsgi.py
wsgi_custom.config:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
This one is a pain and a known issue with Django when using the ImageField model/form field. Due to Python's dynamic import system it can appear out of nowhere, and it annoyed the hell out of me when I first came across it.
The way I normally fix this is by using conda and its equivalent of a virtualenv to ensure the right interpreter (the one with my packages) is used.
If you are not using a virtualenv or equivalent, set one up now. If you are already using one, check that you are installing Pillow with pip3 install pillow - the pip3 being important here, as on Debian (and many other) systems plain pip will only install for Python 2.x.
Using conda will ensure this doesn't happen in production, but I would also add it to your checklist of things to test when deploying - check that the correct version of Pillow is installed and working.
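A quick way to see whether the interpreter Django actually runs under can import Pillow is a minimal diagnostic like the sketch below (run it with the same interpreter/virtualenv that serves the app):

import sys

try:
    import PIL
    print("Pillow", PIL.__version__, "is importable from", sys.executable)
except ImportError:
    print("Pillow is NOT importable from", sys.executable)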
I had two Elastic Beanstalk environments with the same issue (one web tier env and a worker env).
On one of them the issue was resolved by restarting the environment.
The other one failed to restart and timed out every time on any operation. This one I managed to fix by going to Configuration > Capacity and changing the minimum and maximum number of instances to 0. I applied the changes, waited for them to take effect, and then restored the previous values for the minimum and maximum instance numbers.
That fixed the issue.
I still have no idea what caused the issue in the first place and would love to receive some comment on that.
I'm trying to integrate ElasticSearch with my Django project using the package django-elasticsearch-dsl, and I am getting this error:
>> $ curl -X GET http://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
I downloaded django-elasticsearch-dsl using the commands:
pip install https://github.com/sabricot/django-elasticsearch-dsl/archive/6.4.0.tar.gz
and
pip install django-elasticsearch-dsl, but both produced the same result.
I don't believe this is a duplicate question, because every other question I have read pertaining to this error has dealt only with the ElasticSearch library, not the django-elasticsearch-dsl library. The latter is built on top of the former, but I can't seem to find an elasticsearch.yml file as detailed in all other posts.
Here is what is installed in my virtual environment:
>> pip freeze
Django==2.2.2
django-elasticsearch-dsl==6.4.0
elasticsearch==7.0.2
elasticsearch-dsl==7.0.0
lazy-object-proxy==1.4.1
mccabe==0.6.1
pylint==2.3.1
python-dateutil==2.8.0
pytz==2019.1
requests==2.22.0
typed-ast==1.4.0
urllib3==1.25.3
According to this tutorial, a request to http://127.0.0.1:9200 should return what looks like a JSON response, but instead I get the error:
curl: (7) Failed to connect to localhost port 9200: Connection refused
Have you made a documents.py for each app? You need it to push the data to the Elasticsearch database. But first you need to install Elasticsearch properly (in your case it is not installed properly).
Try this tutorial for installing Elasticsearch; I used it recently and it worked like a charm:
Link
The path for elasticsearch.yml is /etc/elasticsearch/elasticsearch.yml.
And don't forget to start it using sudo systemctl start elasticsearch (check its status using sudo systemctl status elasticsearch).
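For reference, a documents.py typically looks something like the sketch below. This assumes the 6.x DocType API of django-elasticsearch-dsl; the Author model and its field names are just placeholders for your own:

from django_elasticsearch_dsl import DocType, Index
from .models import Author  # placeholder model

# Name of the Elasticsearch index this document type lives in
authors = Index('authors')

@authors.doc_type
class AuthorDocument(DocType):
    class Meta:
        model = Author            # the Django model to index
        fields = ['name', 'bio']  # model fields to mirror into Elasticsearch

After that, python manage.py search_index --rebuild pushes the data, but it will keep failing until the Elasticsearch service itself is up and listening on port 9200.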
I'm running JupyterLab from Anaconda, and installed a JupyterLab plotly extension using:
conda install -c conda-forge jupyterlab-plotly-extension
Apparently, the installation was successful, but something is still wrong.
When launching JupyterLab, I'm getting this prompt:
Clicking BUILD gives me this:
And clicking RELOAD reloads JupyterLab, BUT I'm getting this again:
And on and on it spins. Does anyone know why?
Clicking CANCEL does not help either because plotly won't produce any plots, only blank spaces:
Solution:
Deactivate the firewall and run the following command in a Windows command prompt:
jupyter lab build
The details:
This turned out to be a firewall problem, and I'm not sure why it was not reported as such in the JupyterLab interface. The following commands in a Windows command prompt returned the error message below:
Command:
jupyter lab build
Output:
C:>jupyter labextension list
JupyterLab v0.34.9
Known labextensions:
   app dir: C:\Users*******\AppData\Local\Continuum\anaconda3\share\jupyter\lab
        @jupyterlab/plotly-extension v0.18.2 enabled ok
Build recommended, please run jupyter lab build:
    @jupyterlab/plotly-extension needs to be included in build

C:>jupyter lab build
[LabBuildApp] JupyterLab 0.34.9
[LabBuildApp] Building in C:\Users*******\AppData\Local\Continuum\anaconda3\share\jupyter\lab
[LabBuildApp] > node C:\Users*******\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyterlab\staging\yarn.js install
yarn install v1.9.4
info No lockfile found.
[1/4] Resolving packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/@jupyterlab%2fapplication: self signed certificate in certificate chain".
info If you think this is a bug, please open a bug report with the information provided in "C:\Users\*******\AppData\Local\Continuum\anaconda3\share\jupyter\lab\staging\yarn-error.log".
What pointed me towards suspecting a firewall problem was this part:
self signed certificate in certificate chain
Running the same command with less rigid firewall settings triggers this output (shortened):
WARNING in d3-array Multiple versions of d3-array found:
1.2.4 ./~/d3-scale/~/d3-array from ./~/d3-scale/~/d3-array\src\index.js
2.2.0 ./~/d3-array from ./~/d3-array\src\index.js
Check how you can resolve duplicate packages:
https://github.com/darrenscerri/duplicate-package-checker-webpack-plugin#resolving-duplicate-packages-in-your-bundle
Child html-webpack-plugin for "index.html":
1 asset
Entrypoint undefined = index.html
[KTNU] ./node_modules/html-loader!./templates/partial.html 567 bytes {0} [built]
[YuTi] (webpack)/buildin/module.js 497 bytes {0} [built]
[aS2v] ./node_modules/html-webpack-plugin/lib/loader.js!./templates/template.html
1.22 KiB {0} [built]
[yLpj] (webpack)/buildin/global.js 489 bytes {0} [built]
+ 1 hidden module
And despite some warning messages, JupyterLab now produces plotly figures without any problems:
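As a final sanity check (a minimal sketch, assuming the plotly 3.x offline API that jupyterlab-plotly-extension targets), a simple figure in a notebook cell should now render:

import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot

init_notebook_mode(connected=False)  # bundle plotly.js locally instead of loading it from a CDN
iplot(go.Figure(data=[go.Scatter(x=[1, 2, 3], y=[4, 1, 2])]))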
I've been beating on this for over a week, have been through all sorts of forum issues and posts, and cannot resolve it. I'm trying to package numpy in a function, building requirements individually (I have multiple functions with multiple requirements that I'd like to keep separate).
Environment:
Windows 10 Home
Docker Toolbox for Windows:
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24302
Built: Fri Mar 23 08:31:36 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm
Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:52:55 2018
OS/Arch: linux/amd64
Experimental: false
Serverless Version:
serverless version 6.4.1
serverless-python-requirements version 6.4.1
Directory Structure:
|-test
  |-env.yml
  |-serverless.yml
  |-Dockerfile
  |-functions
    |-f1
      |-index.py
      |-requirements.txt
      |-sub_function_1.py
      |-sub_function_2.py
    |-f2
      |-index.py
      |-requirements.txt
      |-sub_function_3.py
      |-sub_function_4.py
serverless.yml
service: test
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    zip: true
    dockerFile: Dockerfile
    dockerizePip: non-linux
provider:
  name: aws
  runtime: python3.6
  stage: dev
  environment: ${file(./env.yml):${opt:stage, self:provider.stage}.env}
  region: ${file(./env.yml):${opt:stage, self:provider.stage}.aws.region}
  profile: ${file(./env.yml):${opt:stage, self:provider.stage}.aws.profile}
package:
  individually: true
functions:
  f1:
    handler: index.handler
    module: functions/f1
  f2:
    handler: index.handler
    module: functions/f2
I have my project files in C:\Serverless\test. I run npm init, followed by npm i --save serverless-python-requirements, accepting all defaults. I get the following on sls deploy -v, even though I've added C:\ to Shared Folders on the running default VM in VirtualBox and selected auto-mount and permanent.
If I comment out both dockerizePip and dockerFile I get the following as expected based on here and other SO posts:
Serverless: Invoke invoke
{
"errorMessage": "Unable to import module 'index'"
}
If I comment out only dockerFile I get:
Serverless: Docker Image: lambci/lambda:build-python3.6
Error --------------------------------------------------
error during connect: Get https://XXXXXX/v1.37/version: dial tcp
XXXXXXXXXX: connectex: A connection attempt failed because the
connected party did not properly respond after a period of time, or
established connection failed because connected host has failed to
respond.
at dockerCommand (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:20:11)
at getBindPath (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:100:3)
With Dockerfile
# AWS Lambda execution environment is based on Amazon Linux 1
FROM amazonlinux:1
# Install Python 3.6
RUN yum -y install python36 python36-pip
# Install your dependencies
RUN curl -s https://bootstrap.pypa.io/get-pip.py | python3
RUN yum -y install python3-devel mysql-devel gcc
# Set the same WORKDIR as default image
RUN mkdir /var/task
WORKDIR /var/task
Serverless: Building custom docker image from Dockerfile...
Serverless: Docker Image: sls-py-reqs-custom
Error --------------------------------------------------
Unable to find good bind path format
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Stack Trace --------------------------------------------
Error: Unable to find good bind path format
at getBindPath (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:142:9)
at installRequirements (C:\Serverless\test\node_modules\serverless-python-requirements\lib\pip.js:152:7)
at installRequirementsIfNeeded (C:\Serverless\test\node_modules\serverless-python-requirements\lib\pip.js:451:3)
If I move my project to C:\Users\, I get this instead:
Serverless: Docker Image: sls-py-reqs-custom
Serverless: Trying bindPath /c/Users/Serverless/test/.serverless/requirements (run,--rm,-v,/c/Users/Serverless/test/.serverless/req
uirements:/test,alpine,ls,/test/requirements.txt)
Serverless: /test/requirements.txt
Error --------------------------------------------------
docker: Error response from daemon: create "/c/Users/Serverless/test/.serverless/requirements": "\"/c/Users/Serverless/test/.serv
erless/requirements\"" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you in
tended to pass a host directory, use absolute path.
See 'docker run --help'.
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Stack Trace --------------------------------------------
Error: docker: Error response from daemon: create "/c/Users/Serverless/test/.serverless/requirements": "\"/c/Users/Serverless/test/
.serverless/requirements\"" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If y
ou intended to pass a host directory, use absolute path.
See 'docker run --help'.
at dockerCommand (C:\Users\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:20:11)
at getDockerUid (C:\Users\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:162:14)
I've seen the Makefile style recommendation from @brianz here, but I'm not sure how to adapt that to this (Makefiles are not my strong suit). I'm a bit at a loss as to what to do next, and advice would be greatly appreciated. TIA.
I was unable to make the plugin work but I found a better solution anyhow - Lambda Layers. This is a bonus because it reduces the size of the lambda and allows code/file reuse. There is a pre-built lambda layer for numpy and scipy that you can use, but I built my own to show myself how it all works. Here's how I made it work:
Create a layer package:
Open an EC2 instance, or Ubuntu, or any other Linux box - this is needed so we can compile the runtime binaries correctly.
Make a dependencies package zip - it must use the directory structure python/lib/python3.6/site-packages so Python can find the packages at runtime:
mkdir -p tmpdir/python/lib/python3.6/site-packages
pip install -r requirements.txt --no-deps -t tmpdir/python/lib/python3.6/site-packages
cd tmpdir
zip -r ../py_dependencies.zip .
cd ..
rm -r tmpdir
Push layer zip to AWS - requires latest awscli
sudo pip install awscli --upgrade --user
sudo aws lambda publish-layer-version \
--layer-name py_dependencies \
--description "Python 3.6 dependencies [numpy=0.15.4]" \
--license-info "MIT" \
--compatible-runtimes python3.6 \
--zip-file fileb://py_dependencies.zip \
--profile python_dev_serverless
To use it in any function that requires numpy, just reference the ARN that is shown in the console or in the output of the upload above:
f1:
  handler: index.handler_f_use_numpy
  include:
    - functions/f_use_numpy.py
  layers:
    - arn:aws:lambda:us-west-2:XXXXX:layer:py_dependencies:1
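For completeness, a handler that exercises numpy from the layer might look like the sketch below. The handler name matches the config above; the event shape and the statistic computed are just an illustration, not part of the original setup:

import json
import numpy as np  # resolved from the layer's site-packages at runtime

def handler_f_use_numpy(event, context):
    # Hypothetical payload: average whatever numbers the caller sends
    values = np.array(event.get("values", [1, 2, 3]), dtype=float)
    return {
        "statusCode": 200,
        "body": json.dumps({"mean": float(values.mean())}),
    }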
As an added bonus, you can push common files like constants to a layer as well. Here's how I did it so the same code works when testing on Windows and when running on the Lambda:
import platform
import json

# Set the common path: relative when testing locally, /opt/common/ inside the Lambda runtime
COMMON_PATH = "../../layers/common/"
if platform.system() == "Linux":
    COMMON_PATH = "/opt/common/"

def handler_common(event, context):
    # Read from a constants.json file shipped in the layer
    with open(COMMON_PATH + 'constants.json') as f:
        return json.load(f)
When I got the same issue, I opened Docker, went to Settings > Shared Drives, opted to reset credentials, then applied my changes, and this cleared the error.
I fixed this issue by temporarily disabling Windows Firewall.
I have set up mpi4py on a new server, and it isn't quite working. When I import mpi4py.MPI, it crashes. However, if I do the same thing under mpiexec, it works. On my other server and on my workstation, both techniques work fine. What am I missing on the new server?
Here's what happens on the new server:
$ python -c 'from mpi4py import MPI; print("OK")'
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
PMI2_Job_GetId failed failed
--> Returned value (null) (14) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
orte_ess_init failed
--> Returned value (null) (14) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "(null)" (14) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Octomore:45430] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
If I run it with mpiexec, it's fine.
$ mpiexec -np 1 python -c 'from mpi4py import MPI; print("OK")'
OK
I'm running on CentOS 6.7. I've installed Python 2.7 as a software collection, and I've loaded the openmpi/gnu/1.10.2 module. MPICH and MPICH2 are also installed, so they may be conflicting with OpenMPI. I haven't loaded the MPICH modules, though. I'm running Python in a virtualenv:
$ pip list
mpi4py (2.0.0)
pip (8.1.2)
setuptools (18.0.1)
wheel (0.24.0)
It turned out that mpi4py is not compatible with version 1.10.2 of OpenMPI. It works fine with version 1.6.5.
$ module load openmpi/gnu/1.6.5
$ python -c 'from mpi4py import MPI; print("OK")'
OK
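For anyone diagnosing a similar setup, here is a slightly fuller sanity script than the one-liner above (nothing in it is specific to this server; it just reports rank and size so you can compare the direct run with the mpiexec run):

from mpi4py import MPI

# Run both as `python check_mpi.py` and as `mpiexec -np 2 python check_mpi.py`
comm = MPI.COMM_WORLD
print("rank %d of %d on %s" % (comm.Get_rank(), comm.Get_size(), MPI.Get_processor_name()))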