OS: Windows 10.
I am using Docker Engine version 18.09.2; the API version is 1.39.
The page explaining the steps to run CAT is: https://libraries.io/pypi/medcat
I am building the medcat image locally. The output looks good until the end of the build process:
Step 10/11 : ENTRYPOINT ["python"]
---> Using cache
---> 66b414e2093d
Step 11/11 : CMD ["api.py"]
---> Using cache
---> db2acf6c4649
Successfully built db2acf6c4649
Successfully tagged cat:latest
SECURITY WARNING: You are building a Docker image from Windows against
a non-Windows Docker host. All files and directories added to build
context will have '-rwxr-xr-x' permissions. It is recommended to
double check and reset permissions for sensitive files and
directories.
When I try to start the container I just built, I get:
IT IS UMLS
* Serving Flask app "api" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a
production deployment.
Use a production WSGI server instead.
* Debug mode: on
Traceback (most recent call last):
  File "api.py", line 66, in <module>
    app.run(debug=True, host='0.0.0.0', port=5000)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 944, in run
    run_simple(host, port, self, **options)
  File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 1007, in run_simple
    run_with_reloader(inner, extra_files, reloader_interval, reloader_type)
  File "/usr/local/lib/python3.7/site-packages/werkzeug/_reloader.py", line 332, in run_with_reloader
    sys.exit(reloader.restart_with_reloader())
  File "/usr/local/lib/python3.7/site-packages/werkzeug/_reloader.py", line 176, in restart_with_reloader
    exit_code = subprocess.call(args, env=new_environ, close_fds=False)
  File "/usr/local/lib/python3.7/subprocess.py", line 323, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/usr/local/lib/python3.7/subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "/usr/local/lib/python3.7/subprocess.py", line 1522, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 8] Exec format error: '/cat/api/api.py'
Does anyone have experience building on Windows? Does the security warning have anything to do with this?
Update:
I added the execute permission for Linux as suggested in the answer I received below. Then I rebuilt the image locally using docker build --network=host -t cat -f Dockerfile.MedMen ., and the end of the build process gives me the same security warning.
Then I checked docker run --env-file=./envs/env_medann ubuntu:18.04 env, which gave me:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=3d5fd66fadbe
TYPE=UMLS
DEBUG=False
CNTX_SPAN=6
CNTX_SPAN_SHORT=2
MIN_CUI_COUNT=100
MIN_CUI_COUNT_STRICT=1
MIN_ACC=0.01
MIN_CONCEPT_LENGTH=1
NEG_PROB=0.2
LBL_STYLE=def
SPACY_MODEL=en_core_sci_md
UMLS_MODEL=/cat/models/med_ann_norm.dat
VOCAB_MODEL=/cat/models/med_ann_norm_dict.dat
MKL_NUM_THREAD=1
NUMEXPR_NUM_THREADS=1
OMP_NUM_THREADS=1
HOME=/root
This is because Windows and Linux use different line endings (CRLF vs. LF); in addition, execute permission needs to be added for Linux executables.
In your case, as you have the source code, you probably have Git installed on your Windows machine. Open Git Bash, change to your source code directory, and run:
find . -type f | xargs dos2unix
chmod -R 777 *
Finally, rebuild it.
Update:
I tried your code in full; it seems the issue is in cat/api/api.py: it is missing a shebang. So, in your source code, edit cat/api/api.py and add the following line at the very beginning:
#!/usr/bin/env python
Then rebuild with the Dockerfile and run it again; you can see the effect in the browser:
Related
I have a python-flask hangman game that I tried to wrap into a single executable file using PyInstaller with the following command line:
pyinstaller -w -F --add-data "templates:templates" --add-data "static:static" hangman.py
It seemed to work fine and created build, dist, and the .spec file. However, when I try to run the executable, I get the following error:
flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.
* Serving Flask app "hangman" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Traceback (most recent call last):
File "hangman.py", line 101, in <module>
File "flask/app.py", line 990, in run
File "werkzeug/serving.py", line 1012, in run_simple
File "werkzeug/serving.py", line 956, in inner
File "werkzeug/serving.py", line 807, in make_server
File "werkzeug/serving.py", line 701, in __init__
File "socketserver.py", line 452, in __init__
File "http/server.py", line 138, in server_bind
File "socketserver.py", line 466, in server_bind
OSError: [Errno 98] Address already in use
[46425] Failed to execute script 'hangman' due to unhandled exception!
I am relatively new to programming, so please don't mind any wrong use of technical terms.
Try turning off the reloader like this:
app.run(debug=True, use_reloader=False)
[Edit]
It seems some other application is using the same port. On Linux, check with:
netstat -tulpn
On Windows, you can get more information with:
tasklist
Once you have the PID, I'd suggest stopping the process manually.
You can also kill it from the command line:
Taskkill /PID THE_PID /F
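If you would rather have the app fail fast with a clear message, you can check the port before calling app.run. A minimal sketch (the helper name is mine; 5000 is only Flask's default port):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is accepting connections on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when something is already listening there
        return s.connect_ex((host, port)) != 0
```

Call port_is_free(5000) before app.run(...) and, if it returns False, print a hint to run netstat/tasklist instead of crashing with the raw traceback.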
I tried to set up "airflow worker" to run after system start via rc.local (CentOS 7).
I have installed Python and Airflow as root; the paths are /root/airflow and /root/anaconda3.
I added this to rc.local:
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
exec 2> /home/centos/rc.local.log # send stderr from rc.local to a log file
exec 1>&2 # send stdout to the same log file
set -x # tell sh to display commands before execution
export C_FORCE_ROOT="true"
/root/anaconda3/bin/python /root/anaconda3/bin/airflow worker
exit 0
When I run it manually, it works (sh /etc/rc.local).
But when it runs after boot, it crashes with the error below in the log file.
It seems like it can't find the path to airflow, even though I have written it out in full.
+ export C_FORCE_ROOT=true
+ C_FORCE_ROOT=true
+ /root/anaconda3/bin/python /root/anaconda3/bin/airflow worker
Traceback (most recent call last):
File "/root/anaconda3/bin/airflow", line 37, in <module>
args.func(args)
File "/root/anaconda3/lib/python3.7/site-packages/airflow/utils/cli.py", line 75, in wrapper
return f(*args, **kwargs)
File "/root/anaconda3/lib/python3.7/site-packages/airflow/bin/cli.py", line 1129, in worker
sp = _serve_logs(env, skip_serve_logs)
File "/root/anaconda3/lib/python3.7/site-packages/airflow/bin/cli.py", line 1065, in _serve_logs
sub_proc = subprocess.Popen(['airflow', 'serve_logs'], env=env, close_fds=True)
File "/root/anaconda3/lib/python3.7/subprocess.py", line 800, in __init__
restore_signals, start_new_session)
File "/root/anaconda3/lib/python3.7/subprocess.py", line 1551, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'airflow': 'airflow'
A place to start is to change this line...
/root/anaconda3/bin/python /root/anaconda3/bin/airflow worker
to
/root/anaconda3/bin/airflow worker
You only need to invoke the airflow binary and pass it the service to run. Bear in mind you can pass more arguments, but explicitly calling a Python interpreter shouldn't be necessary.
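Separately, the traceback shows Airflow itself spawning subprocess.Popen(['airflow', 'serve_logs'], ...) by bare name, so airflow also has to be resolvable via PATH at boot time. A sketch of rc.local lines that would cover that (this is an assumption about the boot environment, not a verified fix):

```shell
# Assumption: at boot, PATH does not include /root/anaconda3/bin, so
# Airflow's own subprocess call to the bare name "airflow" cannot resolve.
export PATH=/root/anaconda3/bin:$PATH
export C_FORCE_ROOT="true"
airflow worker
```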
I deployed a Flask application and bound it to an SSL certificate to run over "https:" with the following code:
if __name__ == '__main__':
path = "/usr/local/nginx/ssl/"
context = (path + 'abc.crt' , path + 'abc.key')
app.run_server(debug=True,host='0.0.0.0',ssl_context=context)
Now, when I run this script directly with python (python scriptname.py), it works fine.
However, when I run it inside the Docker container, I get the following error:
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.6/site-packages/werkzeug/serving.py", line 1005, in inner
fd=fd,
File "/usr/local/lib/python3.6/site-packages/werkzeug/serving.py", line 848, in make_server
host, port, app, request_handler, passthrough_errors, ssl_context, fd=fd
File "/usr/local/lib/python3.6/site-packages/werkzeug/serving.py", line 766, in __init__
self.socket = ssl_context.wrap_socket(sock, server_side=True)
File "/usr/local/lib/python3.6/site-packages/werkzeug/serving.py", line 661, in wrap_socket
**kwargs
File "/usr/local/lib/python3.6/ssl.py", line 1158, in wrap_socket
ciphers=ciphers)
File "/usr/local/lib/python3.6/ssl.py", line 750, in __init__
self._context.load_cert_chain(certfile, keyfile)
FileNotFoundError: [Errno 2] No such file or directory
I guess the container is searching for the files elsewhere; this is my docker run command:
docker run -it --network="host" -p 8050:8050 -v /home/a/b/c:/app abc:1.1
What am I missing here?
Edit : Dockerfile
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
The Docker container will only be able to access what you copied into it or what you mapped into it when running it.
So you have two options. The first option is to add a COPY statement to copy the certs; but looking at the current Dockerfile, your certs are outside the app folder and hence were not copied.
The other option is to use the -v option to map the certs when running the container:
docker run -it --network="host" -p 8050:8050 -v /home/certs/path:/home/certs/path -v /home/a/b/c:/app abc:1.1
But in a production-like environment, I would suggest you don't do this. You should use nginx and uWSGI, and make sure you terminate the SSL at nginx.
See the repo below for such an option:
https://github.com/tiangolo/uwsgi-nginx-flask-docker
We have established Pipelines scripts that work very well. Lately, we decided to deploy to Elastic Beanstalk automatically with Bitbucket Pipelines, following the tutorial that uses the eb deploy command. Apparently, this command fails on Pipelines. The config files seem fine, because the deploy runs locally; it also works from inside a container of the same image that we have specified in the pipelines file, and when using docker exec from the local machine to run the command inside a container of that image. Below are the pipelines file and the error we get with eb deploy --verbose. I am obviously missing something here. Any help or direction would be appreciated. Thank you in advance.
feature/KKLT-1065-deploy-via-pipelines:
- step:
deployment: staging
caches:
- composer
script:
- php -r "file_exists('.env') || copy('.env.example', '.env');"
- cat .env
- composer install
- php artisan cache:clear
- php artisan migrate
- php artisan db:seed
- eb init KMLT-staging-ttl -r eu-central-1 -p "64bit Amazon Linux 2017.09 v2.6.4 running PHP 7.1"
- eb deploy --verbose
services:
- postgres
+ eb deploy --verbose
INFO: Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ebcli/core/ebrun.py", line 41, in run_app
app.run()
File "/usr/lib/python2.7/site-packages/cement/core/foundation.py", line 797, in run
return_val = self.controller._dispatch()
File "/usr/lib/python2.7/site-packages/cement/core/controller.py", line 472, in _dispatch
return func()
File "/usr/lib/python2.7/site-packages/cement/core/controller.py", line 475, in _dispatch
self._parse_args()
File "/usr/lib/python2.7/site-packages/cement/core/controller.py", line 452, in _parse_args
self.app._parse_args()
File "/usr/lib/python2.7/site-packages/cement/core/foundation.py", line 1076, in _parse_args
for res in self.hook.run('post_argument_parsing', self):
File "/usr/lib/python2.7/site-packages/cement/core/hook.py", line 150, in run
res = hook[2](*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ebcli/core/hooks.py", line 35, in pre_run_hook
set_profile(app.pargs.profile)
File "/usr/lib/python2.7/site-packages/ebcli/core/hooks.py", line 47, in set_profile
profile = commonops.get_default_profile()
File "/usr/lib/python2.7/site-packages/ebcli/operations/commonops.py", line 973, in get_default_profile
profile = get_config_setting_from_branch_or_default('profile')
File "/usr/lib/python2.7/site-packages/ebcli/operations/commonops.py", line 1008, in get_config_setting_from_branch_or_default
setting = get_setting_from_current_branch(key_name)
File "/usr/lib/python2.7/site-packages/ebcli/operations/commonops.py", line 991, in get_setting_from_current_branch
branch_name = source_control.get_current_branch()
File "/usr/lib/python2.7/site-packages/ebcli/objects/sourcecontrol.py", line 184, in get_current_branch
stdout, stderr, exitcode = self._run_cmd(revparse_command, handle_exitcode=False)
File "/usr/lib/python2.7/site-packages/ebcli/objects/sourcecontrol.py", line 480, in _run_cmd
stdout, stderr, exitcode = exec_cmd(cmd)
File "/usr/lib/python2.7/site-packages/cement/utils/shell.py", line 40, in exec_cmd
proc = Popen(cmd_args, *args, **kw)
File "/usr/lib/python2.7/subprocess.py", line 390, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1024, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
INFO: OSError - [Errno 2] No such file or directory
Try the python3 version of the EB CLI instead of the python2.7 one. You might have more success.
I am trying to build TensorFlow from source. After configuring the installation, when I try to build the pip package with the following command,
$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
I get the following error message:
ERROR: /workspace/tensorflow/core/BUILD:1312:1: Executing genrule //tensorflow/core:version_info_gen failed: bash failed: error executing command
(cd /root/.cache/bazel/_bazel_root/eab0d61a99b6696edb3d2aff87b585e8/execroot/workspace && \
exec env - \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
/bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; tensorflow/tools/git/gen_git_source.py --generate tensorflow/tools/git/gen/spec.json tensorflow/tools/git/gen/head tensorflow/tools/git/gen/branch_ref "bazel-out/host/genfiles/tensorflow/core/util/version_info.cc"'): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
Traceback (most recent call last):
File "tensorflow/tools/git/gen_git_source.py", line 260, in <module>
generate(args.generate)
File "tensorflow/tools/git/gen_git_source.py", line 212, in generate
git_version = get_git_version(data["path"])
File "tensorflow/tools/git/gen_git_source.py", line 152, in get_git_version
str("--work-tree=" + git_base_path), "describe", "--long", "--tags"
File "/usr/lib/python2.7/subprocess.py", line 566, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 8.567s, Critical Path: 7.90s
What's going wrong?
(Ubuntu 14.04, CPU only)
Your build appears to be encountering an error in
tensorflow/tools/git/gen_git_source.py
at line 152. At this stage of the build, the script is trying to get the git version number of your TensorFlow repo. Have you used git to check out your TensorFlow repo? Are the .git files present in the tensorflow root dir? Maybe you need to update your version of git?
Looks similar to this question: Build Error Tensorflow
I encountered this error even though git was in my PATH variable. I got a hint from https://stackoverflow.com/a/5659249/212076 that the launched subprocess was not receiving the PATH variable.
The solution was to hard-code the git command in <ROOT>\tensorflow\tensorflow\tools\git\gen_git_source.py by replacing
val = bytes(subprocess.check_output([
"git", str("--git-dir=%s/.git" % git_base_path),
str("--work-tree=" + git_base_path), "describe", "--long", "--tags"
]).strip())
with
val = bytes(subprocess.check_output([
r"C:\Program Files (x86)\Git\cmd\git.cmd", str("--git-dir=%s/.git" % git_base_path),
str("--work-tree=" + git_base_path), "describe", "--long", "--tags"
]).strip())
(note the raw string prefix, so the backslashes in the Windows path are not treated as escape sequences)
Once this was fixed, I got another error: fatal: Not a git repository: './.git'.
I figured the tensorflow root folder was the one that should have been referenced, so I edited <ROOT>\tensorflow\tensorflow\tools\git\gen_git_source.py to replace
git_version = get_git_version(".")
with
git_version = get_git_version("../../../../")
After that the build was successful.
NOTE: Unlike the OP, my build platform was Windows 7 64-bit.