To use TensorFlow Serving, I had to use Docker.
I downloaded the TensorFlow Serving image using
docker pull tensorflow/serving
After that, I had to start TF Serving and map my directories:
$ docker run -it -v D:\softwares\software saved file\GITHUB\potato-disease\Plant-disease-classification-using-Convolution-Neural-Networks:/tf_serving -p 8605:8605 --entrypoint /bin/bash tensorflow/serving
As a result, I get the following error:
Unable to find image 'saved:latest' locally
docker: Error response from daemon: pull access denied for saved, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
The volume path contains spaces; putting double quotes ("") around the path should solve the error.
In this case, I renamed the directory instead.
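For example, the original command with quotes added around the volume path (a sketch of the suggested fix; everything else is unchanged):
docker run -it -v "D:\softwares\software saved file\GITHUB\potato-disease\Plant-disease-classification-using-Convolution-Neural-Networks:/tf_serving" -p 8605:8605 --entrypoint /bin/bash tensorflow/serving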
I'm trying to train on Azure ML using a custom Docker container with azure-cli, using the command below:
az ml job create -f train.yaml --resource-group DefaultResourceGroup-EUS2 --workspace-name test1234
and the train.yaml is:
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
type: command
environment:
  image: user2001.azurecr.io/test/train:latest
command: >-
  python test_local.py
compute: azureml:test1234
Upon running the above command, I get this error in the Azure ML job:
Error: python: can't open file 'test_local.py': [Errno 2] No such file or directory
I have checked in my Docker image that test_local.py is present. I have also tried "./test_local.py" and "/test_local.py".
The error still continues; I can't figure out where I'm going wrong.
Edit: running
docker run -it user2001.azurecr.io/test/train:latest python test_local.py
the container executes fine, but the same thing doesn't work on Azure ML.
Adding the /app prefix solved the issue, so the final statement is:
command: >-
  python /app/test_local.py
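You can sanity-check the path locally the same way as the edit above, by adding the prefix to the docker run command (a sketch using the image name from the question):
docker run -it user2001.azurecr.io/test/train:latest python /app/test_local.py
If that runs, the prefixed command in train.yaml should find the script on Azure ML as well.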
I am building an encryption service that changes the key every time a user visits the server. The problem is that when I run the Python file on its own, it works (screenshot of the working output), but not when I dockerize it with the Dockerfile below:
FROM python:3
RUN mkdir -p "C:\Users\joel\Desktop\mcast-freshers-week-devops-main\mcast-freshers-week-devops-main\encryption-service"
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD [ "python", "app.py" ]
The build and run are successful, the container is created, and even the output from the Python code is shown (screenshot of the container output).
But when I go to the server, it shows an error (screenshot of the failing page).
I tried everything but I don't know what to do. I tried changing the code many times but still can't solve it; I narrowed it down by trying another Python application, which worked.
When you run your container with docker run, you need to publish the container's internal port to a port on your host network. Use this command to run your container instead:
docker run -it --rm -p 8080:8080 --name my-running-app my-python-app
You can then access your server by visiting localhost:8080
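For completeness, a sketch of the full build-and-run sequence, assuming the Dockerfile above is in the current directory and the app listens on port 8080 inside the container (the image and container names are just examples):
docker build -t my-python-app .
docker run -it --rm -p 8080:8080 --name my-running-app my-python-app
curl http://localhost:8080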
I am trying to mount a directory from the host into a container and, at the same time, run Jupyter from that directory. What am I doing wrong here that makes Docker complain about a file not found?
docker run -it --rm -p 8888:8888 tensorflow/tensorflow:nightly-jupyter -v $HOME/mytensor:/tensor --name TensorFlow python:3.9 bash
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
I tried removing the Python version but still have the same problem. I searched extensively online and couldn't find an answer.
Basically, I want to mount that directory, which is a git clone where I have my tensor files. At the same time, I want to run Jupyter Notebook so I can see the files and run them. With so many issues between the Apple M1 processor and TensorFlow, I thought going the Docker route would be better, but am I not surprised :)
Appreciate the help
The docker run command syntax is
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The image name tensorflow/tensorflow:nightly-jupyter should come after the options (-v, -p, --name, etc.) and before the command:
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter bash
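If the goal is the notebook server rather than a shell, a variant worth trying is to drop bash so the image's default command runs instead (a sketch; the -jupyter tags of this image are meant to start Jupyter on port 8888 by default):
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter
Then open localhost:8888 and use the token printed in the container logs.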
I have been struggling to add environment variables to my container for the past 3 hours :( I have looked through the docker run docs but haven't managed to get it to work.
I have built my image using docker build -t sellers_json_analysis . which works fine.
I then go to run it with: docker run -d --env-file ./env sellers_json_analysis
As per the docs, the usage is $ docker run --env-file ./env.list ubuntu bash, but I get the following error:
docker: open ./env: no such file or directory.
The .env file is in my root directory
When running docker run --help I am unable to find anything about env variables, but it does provide the following:
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So I'm not sure if I am placing things incorrectly. I could add my variables into the Dockerfile, but I want to keep it as a public repo, as it's a project I would like to display.
Your problem is a wrong path: use either .env or ./.env. When you use ./env it means a file named env in the current directory.
docker run -d --env-file .env sellers_json_analysis
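For reference, a minimal sketch of the env file format, one KEY=value pair per line (the variable names here are made up):
API_KEY=abc123
DEBUG=false
You can then check that the variables arrive by overriding the image's command with env, assuming env is on the image's PATH:
docker run --rm --env-file .env sellers_json_analysis env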
I have Python code that periodically takes pictures (using OpenCV). I created an image to execute this code in a container. On Linux, I can use it perfectly by executing the following command:
docker run -it --device=/dev/video0:/dev/video0 myImage
After a little searching, the equivalent command on Windows (10 Pro) would be
docker run --privileged -v /dev/bus/usb:/dev/bus/usb myImage
But I am getting an Error response from daemon: Mount denied error.
I already tried enabling the shared drives option in the Docker app, but the same error continued.
I also tried some similar commands, but the same error continued.
The one command that generated a different error was:
docker run -it --device /dev/video0 myImage
generating the error:
Error response from daemon: error gathering device information while adding custom device "C": not a device node.