/bin/bash: ./darknet: Permission denied - python

I've created an object detection model using Roboflow's tutorial and have all the saved weights. The one problem I have is deploying it in another Google Colaboratory notebook. I've changed some of the code, but it does not seem to work. In short, the model is trained.
How do I use the model in another Google Colaboratory notebook? I downloaded the whole darknet folder into the environment with a direct download, defined some plotting functions, and then ran:
!./darknet detect cfg/custom-yolov4-detector.cfg backup/custom-yolov4-detector_last.weights {img} #-dont-show
Only to get:
/bin/bash: ./darknet: Permission denied
Any suggestions?

Just add this before your command:
!chmod +x ./darknet

In step 4 of this tutorial you will find the command !chmod +x ./darknet. Depending on your folder structure, you may instead need to run !chmod +x ./darknet/darknet. Worked for me.
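If you are not sure which path the compiled binary actually lives at, a quick check along these lines (the /content/darknet clone location is an assumption; adjust to your setup) shows where darknet is and whether its execute bit is set:
%cd /content/darknet   # assumed clone location
!ls -l ./darknet       # look for an 'x' in the permission bits, e.g. -rwxr-xr-x
!chmod +x ./darknet    # add the execute bit if it is missing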

You need to re-run darknet's make build:
%cd /your_path/
!sed -i 's/OPENCV=0/OPENCV=1/g' Makefile
!sed -i 's/GPU=0/GPU=1/g' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/g' Makefile
!sed -i "s/ARCH= -gencode arch=compute_60,code=sm_60/ARCH= ${ARCH_VALUE}/g" Makefile
!make
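Once the rebuild finishes, the binary produced by make should already carry the execute bit; a quick sanity check before re-running the detection command from the question (config, weights and image names taken from there):
!ls -l ./darknet
!./darknet detect cfg/custom-yolov4-detector.cfg backup/custom-yolov4-detector_last.weights {img}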

You are lacking execute permission on that binary; you need to run chmod +x darknet.

If your files are already compiled with the !make command, then run !chmod +x ./darknet/darknet; otherwise compile first and then run !chmod +x ./darknet/darknet.
If it still doesn't work, delete the entire darknet folder and clone it again.
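A minimal re-clone-and-rebuild sketch for a Colab cell, assuming the AlexeyAB darknet fork (substitute whichever fork the tutorial notebook cloned) and a /content working directory:
%cd /content
!rm -rf darknet
!git clone https://github.com/AlexeyAB/darknet
%cd darknet
# re-apply the Makefile edits shown above (OPENCV/GPU/CUDNN/ARCH) here if you need them
!make
!chmod +x ./darknet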

Compile darknet using make:
!make
and don't forget to set these flags in the Makefile first:
GPU=1
CUDNN=1
OPENCV=1

Related

Dockerfile WORKDIR distracts running program from layer?

I made a Dockerfile to build a Docker image that is runnable from AWS Batch. It contains multiple layers and copies files to '/opt', which I set as WORKDIR.
I have to run a program called 'BLAST', a single executable that requires several parameters, including the location of the DB.
When I run the image, I get an error saying it cannot find the mounted DB location. The full error message is b'BLAST Database error: No alias or index file found for nucleotide database [/mnt/fsx/ntdb/nt] in search path [/opt:/fsx/ntdb:]\n'], where /mnt/fsx/ntdb/nt is the DB path.
My only guess is that because I set WORKDIR in my Dockerfile, the default workspace is set to '/opt:'.
How should I fix this issue? By removing WORKDIR, or something else?
My Dockerfile looks like this:
# Set Work dir
ARG FUNCTION_DIR="/opt"
# Get layers
FROM (aws-account).dkr.ecr.(aws-region).amazonaws.com/uclust AS layer_1
FROM (aws-account).dkr.ecr.(aws-region).amazonaws.com/blast AS layer_2
FROM public.ecr.aws/lambda/python:3.9
# Copy arg and set work dir
ARG FUNCTION_DIR
COPY . ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
# Copy from layers
COPY --from=layer_1 /opt/ .
RUN true
COPY --from=layer_2 /opt/ .
RUN true
COPY . ${FUNCTION_DIR}/
RUN true
# Copy and Install required libraries
COPY requirements.txt .
RUN true
RUN pip3 install -r requirements.txt
# To run lambda handler
RUN pip install \
--target "${FUNCTION_DIR}" \
awslambdaric
# To run blast
RUN yum -y install libgomp
# See files inside image
RUN dir -s
# Get permissions for files
RUN chmod +x /opt/main.py
RUN chmod +x /opt/mode/submit/main.py
# Set Entrypoint and CMD
ENTRYPOINT [ "python3" ]
CMD [ "-m", "awslambdaric", "main.lambda_handler" ]
Edit: further info I found. Looking at the error, the BLAST program tries to search the DB at the path /opt:/fsx/ntdb:, which is the combination of the path set as WORKDIR in the Dockerfile and the BLASTDB path set by os.environ['BLASTDB'] (see the os.environ['BLASTDB'] description).
Figured out the problem after many debugging attempts. The problem was neither WORKDIR nor os.environ['BLASTDB']. The paths were correctly defined, and the BLAST program searching [/opt:/fsx/ntdb:] was the correct behaviour according to what it says here:
Current working directory (*)
User's HOME directory (*)
Directory specified by the NCBI environment variable
The standard system directory (“/etc” on Unix-like systems, and given by the environment variable SYSTEMROOT on Windows)
The actual solution was to check whether the file system was correctly mounted and whether the files inside it had the right permissions. Initially I thought the file system was mounted correctly, since I had already tested it from other Batch submit jobs many times, but only the mount folder had been created; the files did not exist. So even though the program tried to find the index file, it could not find any, which is why the error came out.
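For anyone hitting the same error, a couple of shell checks along these lines (mount path taken from the question's search path; the nt.* file names are what a standard nucleotide database ships with) make it obvious whether the mount actually contains the database files:
ls -ld /fsx/ntdb         # the mount point should exist and not be empty
ls -l /fsx/ntdb | head   # expect index files such as nt.nal, nt.nin, nt.nhr, nt.nsq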

Error in google colab /bin/sh: Permission denied

I downloaded this code from GitHub and it runs fine on my local computer, but I need to use it in Google Colab, where it throws errors. More precisely, the error output I get is:
/bin/sh: 1: ./rbox: Permission denied
/bin/sh: 1: ./qhull: Permission denied
...
ModuleNotFoundError: No module named 'python_tools.fastmodules'
The full error message can be seen here, as well as in the notebook here. I tried different suggestions for giving execute permission to the files ./rbox and ./qhull - I inserted a new cell before the code snippet containing
!chmod 755 -R /content/gdrive/MyDrive/PhD/revolver/qhull/src/ #pointing to the file-directory
or
!chmod 755 ./rbox
or
!chmod +x /content/gdrive/MyDrive/PhD/revolver/qhull/src/rbox
but none of them solves the error. How can I resolve it? It would also be great if someone could download the GitHub code from the link into Google Drive and confirm that they can run it without errors. Installation instructions are provided on the GitHub page; you can use my params.py file, and within it you can set tracer_file to point at the file that I used, or at any dummy CSV file with 3 columns. Thanks! Your contribution would help me run an astrophysics simulation for my PhD.
I made a mistake by confusing the locations of my files. There were different copies of the rbox and qhull files in different folders, so I should have run
!chmod +x /content/gdrive/MyDrive/PhD/revolver/qhull/rbox
!chmod +x /content/gdrive/MyDrive/PhD/revolver/qhull/qhull
instead of
!chmod +x /content/gdrive/MyDrive/PhD/revolver/qhull/src/rbox
!chmod +x /content/gdrive/MyDrive/PhD/revolver/qhull/src/qhull
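If you are not sure which copy of the binaries your notebook actually calls, a find over the project folder (Drive path taken from the question) lists every candidate before you chmod the right one:
!find /content/gdrive/MyDrive/PhD/revolver -type f \( -name rbox -o -name qhull \) -exec ls -l {} \;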

How to run a docker image in IBM Cloud functions?

I have a simple Python program that I want to run in IBM Cloud Functions. Alas, it needs two libraries (O365 and PySnow), so I have to Dockerize it, and it needs to be able to accept a JSON feed from STDIN. I succeeded in doing this:
FROM python:3
ADD requirements.txt ./
RUN pip install -r requirements.txt
ADD ./main ./main
WORKDIR /main
CMD ["python", "main.py"]
This runs with: cat env_var.json | docker run -i f9bf70b8fc89
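For completeness, the matching local build step might look like this (the e2t-bridge tag is only an assumption mirroring the action name below; the run line repeats the one above):
docker build -t e2t-bridge .
cat env_var.json | docker run -i e2t-bridge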
I've added the Docker container to IBM Cloud Functions like this:
ibmcloud fn action create e2t-bridge --docker [username]/e2t-bridge
However when I run it, it times out.
Now I did see a possible solution route, where I Dockerize it as an OpenWhisk application. But for that I would need to create a binary from my Python application and then load it into a rather complicated OpenWhisk skeleton, I think?
But having a file you can simply run is the whole point of my Docker image, so creating a binary of an interpreted language and then adding it into an OpenWhisk Docker image just feels awfully clunky.
What would be the best way to approach this?
It turns out you don't need to create a binary, you just need to edit the OpenWhisk skeleton like so:
# Dockerfile for example whisk docker action
FROM openwhisk/dockerskeleton
ENV FLASK_PROXY_PORT 8080
### Add source file(s)
ADD requirements.txt /action/requirements.txt
RUN cd /action; pip install -r requirements.txt
# Move the source files into the action directory
ADD ./main /action
# Rename our executable Python action
ADD /main/main.py /action/exec
CMD ["/bin/bash", "-c", "cd actionProxy && python -u actionproxy.py"]
And make sure that your Python code accepts the JSON input, which the action proxy passes to your script as its first command-line argument:
import json, sys
json_input = json.loads(sys.argv[1])
The whole explanation is here: https://github.com/iainhouston/dockerPython
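If you want to smoke-test the skeleton-based image locally before pushing it to IBM Cloud, the dockerskeleton's action proxy listens on port 8080 and exposes a /run endpoint; something like the following should work (the image tag is an assumption, and parameters go inside a "value" object as the proxy expects):
docker build -t my-openwhisk-action .
docker run -d -p 8080:8080 my-openwhisk-action
curl -XPOST http://localhost:8080/run -H 'Content-Type: application/json' -d '{"value": {"name": "test"}}'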

Execute python program with bash script

I have the following bash script:
export MYCONFIG_CONFIG=TEST
/home/bmwant/projects/test/venv/bin/python3 /home/bmwant/projects/test/script.py
My Python script needs an environment variable, which I have set, but running this script as
bash run_python.sh
shows an error
/home/bmwant/projects/test/venv/bin/python3: can't open file 'home/bmwant/projects/tes': [Errno 2] No such file or directory
What is wrong? I have set the script as executable with chmod u+x run_python.sh.
Have you made script.py executable?
chmod u+x /home/bmwant/projects/test/script.py
Try this script:
#!/bin/bash
export MYCONFIG_CONFIG=TEST
pushd /home/bmwant/projects/test/venv/bin/
./python3 /home/bmwant/projects/test/script.py
popd
You can run it via:
./run_python.sh
So, the problem was the file format. I had created the script on Windows and then copied it via FileZilla to the remote server where I was trying to run it. So just run
apt-get install dos2unix
dos2unix run_python.sh
bash run_python.sh
and all will work well. Thanks @Horst for the hint.
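For reference, the Windows line endings are easy to spot before reaching for dos2unix; either of these, run on the server, shows whether the script was saved with CRLF endings (and the sed line is an in-place alternative if dos2unix is not installed):
file run_python.sh            # reports "... with CRLF line terminators" for a Windows-saved file
cat -A run_python.sh | head   # CRLF shows up as ^M$ at the end of each line
sed -i 's/\r$//' run_python.sh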

Django on Heroku, issue with Yuglify and CollectStatic

I'm using django-pipeline to minify my JavaScript. When I push my project to Heroku and collectstatic runs, it gives me the error
pipeline.exceptions.CompressorError: /usr/bin/env: yuglify: No such file or directory
But when I run collectstatic manually, yuglify runs without issue. I'm unable to figure out the problem. What code should I even show you in this situation?
My solution to this was to add a "yuglify" step to the buildpack scripts from here: https://github.com/nigma/heroku-django-cookbook
Here's my code:
bin/install_yuglify
#!/usr/bin/env bash
set -eo pipefail
npm install -g yuglify
Then add the following to bin/post_compile (around line 23...)
if [ -f bin/install_yuglify ]; then
    echo "-----> Running install_yuglify"
    chmod +x bin/install_yuglify
    bin/install_yuglify
fi
And you should be good to go :)
You can see my code here, for reference: https://github.com/GK-12/rpi_csdt_community/tree/master/bin
Good luck!
I managed to solve the problem in a less painful way. Heroku provides you with buildpacks, which are essentially the environment your application is built upon. By default you have the Python buildpack; that is why the system is able to run commands like python manage.py .... My solution is the following:
1) Install the Node.js buildpack as the first buildpack:
heroku buildpacks:add --index 1 heroku/nodejs
2) Add a package.json in the same path as requirements.txt.
3) In the package.json, add yuglify as a dependency (see the sketch below).
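A minimal package.json sketch for step 3, written as a shell heredoc so it can be dropped next to requirements.txt (the package name is a placeholder and the "*" version spec is an assumption; pin whichever yuglify release you actually want):
cat > package.json <<'EOF'
{
  "name": "my-heroku-app",
  "version": "1.0.0",
  "dependencies": {
    "yuglify": "*"
  }
}
EOF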
Another solution is to change your compressors and use Python-written ones:
PIPELINE['CSS_COMPRESSOR'] = 'pipeline.compressors.cssmin.CSSMinCompressor'
PIPELINE['JS_COMPRESSOR'] = 'pipeline.compressors.jsmin.JSMinCompressor'
pip install cssmin jsmin
I don't have a clear opinion on which is better, jsmin or yuglify.
