I'm using the following commands in my Dockerfile to install Miniconda. After installing it, I want to use the binaries in ~/miniconda3/bin, like python and conda. I tried exporting PATH with this new path prepended, but the subsequent pip command fails (pip is located in ~/miniconda3/bin).
Curiously, if I run the container in interactive terminal mode, the path is set correctly and I can call the binaries as expected. It seems the issue only occurs while building the image itself.
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y python3.7
RUN apt-get install -y curl
RUN curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh --output miniconda.sh
RUN bash miniconda.sh -b
RUN export PATH="~/miniconda3/bin:$PATH"
RUN pip install pydub # errors out when building
Here's the result of echo $PATH
~/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Here's the error I get
/bin/sh: 1: pip: not found
export won't work: every RUN step runs in its own shell, so the exported variable is gone by the time the next step starts. Try ENV, which persists across build steps and into the final image.
Replace
RUN export PATH="~/miniconda3/bin:$PATH"
with
ENV PATH="~/miniconda3/bin:$PATH"
Even though Miniconda appears under ~, it installs to /root/miniconda3 by default when the Docker build runs as root (the installer's -b flag puts it in $HOME/miniconda3 unless you pass a prefix with -p).
Here's the right instruction:
ENV PATH="/root/miniconda3/bin:$PATH"
It looks like your export PATH ... command is putting a literal ~ into the path; the tilde isn't expanded inside double quotes, and PATH entries are never tilde-expanded when commands are looked up. Try this:
ENV PATH="$HOME/miniconda3/bin:$PATH"
I've tried everything I've seen on SO to get this to work, but so far everything fails. Using macOS Big Sur 11.6, bash in Terminal (not zsh).
I'm trying to create a setup file, executed with sh setup.sh, that will set up the env, install Python, and then activate it. Nothing fancy. Doing it manually works fine, but once I put it in a shell script, it won't work. I'm running this script from inside an empty project folder.
Current script:
conda create -n MASTER python=3.8.5 -y
conda activate MASTER
Yeah, it's that simple to start with. I commented out the other pip installs until this works properly.
I tried running bash -i setup.sh, but it still does not activate; I get no errors, but I'm still stuck in (base).
I tried using source: source /opt/anaconda3/etc/profile.d/conda.sh at the beginning of the script and/or before the activate. Still doesn't work; no errors again, but stuck in (base).
I tried using eval $(conda shell.bash hook) at the start of the script and before I try to activate the env, but it fails. This time I get the error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
But if I run conda init bash (in Terminal or in the script itself), it outputs:
no change /opt/anaconda3/condabin/conda
no change /opt/anaconda3/bin/conda
no change /opt/anaconda3/bin/conda-env
no change /opt/anaconda3/bin/activate
no change /opt/anaconda3/bin/deactivate
no change /opt/anaconda3/etc/profile.d/conda.sh
no change /opt/anaconda3/etc/fish/conf.d/conda.fish
no change /opt/anaconda3/shell/condabin/Conda.psm1
no change /opt/anaconda3/shell/condabin/conda-hook.ps1
no change /opt/anaconda3/lib/python3.8/site-packages/xontrib/conda.xsh
no change /opt/anaconda3/etc/profile.d/conda.csh
no change /Users/liquidRock/.bash_profile
No action taken.
I tried doing /opt/anaconda3/bin/conda activate MASTER, which also prompts me to do conda init bash.
I even tried adding #!/bin/bash to the top of the file just in case, but no dice.
Thanks to @fravadona for the simplest of solutions.
Simply executing the script with source instead of sh. 🤦🏻‍♂️
Final setup.sh script (with my preliminary pip installs):
# env & python
conda create -n MASTER python=3.8.5 -y
conda activate MASTER
# pip installs
pip install cmake
pip install --upgrade pip setuptools wheel
pip install opencv-python==4.2.0.32
pip install argparse
pip install datetime
pip install colorama
pip install python-dotenv
pip install python-dotenv[cli]
Executed thusly:
$ source setup.sh
Anaconda creates the env, installs python and dependencies, activates the env, then pip installs the additional dependencies.
Still not sure why it didn't work when I added those other things to the shell script, but this is still a great, simple solution. And yes, I am a novice with this stuff.
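Why source works where sh does not, in a nutshell: sh setup.sh runs the script in a child shell, and environment changes made in a child process never propagate back to the parent, so everything conda activate sets up is discarded the moment the script exits. source runs the same commands in the current shell instead. A minimal hypothetical demo (demo.sh is a made-up file):
# demo.sh
export MARKER=set-in-script
$ sh demo.sh; echo "$MARKER"      # prints an empty line: the variable died with the child shell
$ source demo.sh; echo "$MARKER"  # prints set-in-script: the script ran in the current shell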
I'm attempting to run a Python file from a Docker container but receive the error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./models/PriceNotifications.py\": stat ./models/PriceNotifications.py: no such file or directory": unknown.
I build and run using commands:
docker build -t pythonstuff .
docker tag pythonstuff adr/test
docker run -t adr/test
Dockerfile:
FROM ubuntu:16.04
COPY . /models
# add the bash script
ADD install.sh /
# change rights for the script
RUN chmod u+x /install.sh
# run the bash script
RUN /install.sh
# prepend the new path
ENV PATH /root/miniconda3/bin:$PATH
CMD ["./models/PriceNotifications.py"]
install.sh:
apt-get update # updates the package index cache
apt-get upgrade -y # updates packages
# installs system tools
apt-get install -y bzip2 gcc git # system tools
apt-get install -y htop screen vim wget # system tools
apt-get upgrade -y bash # upgrades bash if necessary
apt-get clean # cleans up the package index cache
# INSTALL MINICONDA
# downloads Miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O Miniconda.sh
bash Miniconda.sh -b # installs it
rm -rf Miniconda.sh # removes the installer
export PATH="/root/miniconda3/bin:$PATH" # prepends the new path
# INSTALL PYTHON LIBRARIES
conda install -y pandas # installs pandas
conda install -y ipython # installs IPython shell
# CUSTOMIZATION
cd /root/
wget http://hilpisch.com/.vimrc # Vim configuration
I've tried modifying the CMD within the Dockerfile to:
CMD ["/models/PriceNotifications.py"]
but the same error occurs.
The file structure is as follows:
How should I modify the Dockerfile or dir structure so that models/PriceNotifications.py is found and executed?
Thanks to earlier comments, using the path:
CMD ["/models/models/PriceNotifications.py"]
instead of
CMD ["./models/PriceNotifications.py"]
allows the Python script to run.
I would have thought CMD ["python /models/models/PriceNotifications.py"] should be used instead of CMD ["/models/models/PriceNotifications.py"] to invoke the Python interpreter, but the script runs anyway, presumably because it is executable and its shebang line points at an interpreter already present in the image.
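One caveat on the exec (JSON) form of CMD: each argument must be its own array element, so a single string containing a space, like "python /models/models/PriceNotifications.py", would be looked up as one program name and fail. Invoking the interpreter explicitly would look like this instead:
CMD ["python", "/models/models/PriceNotifications.py"]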
I just came across a pip package I want to use. However, I'm new to Python and pip, so I'm not sure whether it's possible to run a package directly from the terminal/command line. If so, I can't seem to find the syntax for running it.
So I installed pip using:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
then
python get-pip.py
I then checked the installation with the command python -m pip
I then installed the package like:
python -m pip install openapi-cli-tool
Anyhow, as per the docs of that package, I thought I could just do:
openapi-cli-tool bundle -t html file1.json file2.yaml > ./specification.html
That didn't work, and neither did this:
python -p pip openapi-cli-tool bundle -t html file1.json file2.yaml > ./specification.html
Any help in explaining how this works would be appreciated.
You have to install it with pip; run the command below:
pip install openapi-cli-tool
Then openapi-cli-tool will be available in your terminal, and you should be able to run it. The command below has the parameter passing corrected (make sure file1.json and file2.yaml exist in your file system):
openapi-cli-tool bundle -t html file1.json file2.yaml > ./specification.html
python -m pip install
installs the package to the user's local directory (IIRC; typically when you pass --user or lack write access to the system site-packages).
So you can access it from ~/.local/bin/, like this:
~/.local/bin/openapi-cli-tool bundle -t html file1.json file2.yaml > ./specification.html
You can add ~/.local/bin to your PATH with
export PATH=$PATH:$HOME/.local/bin
and probably add that line to your .bashrc or its equivalent.
Then, you can access it by just openapi-cli-tool bundle
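To confirm where pip actually put the console script, a couple of quick checks (a sketch, assuming a Unix-like system; the grep pattern is only a heuristic):
python -m pip show -f openapi-cli-tool | grep bin/   # list installed files and look for the entry point
command -v openapi-cli-tool                          # check whether it is already on PATH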
I'm trying to automate the deployment of my Python/Flask app on Ubuntu 18.04 using Bash, by going through the motions of preparing all the necessary files/directories, cloning the source code from GitHub, creating the virtual environment, installing the prerequisite modules, etc.
Now, because I have to execute my Bash script using sudo, the entire script runs as root except where I specify otherwise using sudo -u myuser. When it comes to activating my virtual environment, I get the following output: sudo: source: command not found, and my subsequent pip installs all end up outside the virtual environment. Excerpts of my code below:
#!/bin/bash
...
sudo -u "$user" python3 -m venv .env
sudo -u $SUDO_USER source /srv/www/www.mydomain.com/.env/bin/activate
sudo -u "$user" pip install wheel
sudo -u "$user" pip install uwsgi
sudo -u "$user" pip install -r requirements.txt
...
Now, for the life of me, I can't figure out how to activate the virtual environment as the non-root user, if that makes any sense.
I've scoured the web, and most of the questions/answers I found revolve around how to activate a virtual environment in a Bash script, but not how to activate it as a separate user within a Bash script that was executed with sudo.
That's because source is not an executable file but a built-in bash command. It won't work with sudo, since sudo expects a program name (i.e. an executable file) as its argument.
P.S. It's not clear why you have to execute the whole script as root. If you need to execute only a number of commands as root (e.g. for starting/stopping a service) and run a remaining majority as a regular user, you can use sudo only for these commands. E.g. the following script
#!/bin/bash
# The `whoami` command outputs the current username. Unlike `source`, this is
# a full-fledged executable file, not a built-in command
whoami
sudo whoami
sudo -u postgres whoami
on my machine outputs
trolley813
root
postgres
P.P.S. You probably don't need to activate an environment as root.
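A common pattern that sidesteps activation entirely is to call the virtual environment's binaries by their absolute paths; activation mostly just prepends the venv's bin directory to PATH. A sketch using the paths from the question:
#!/bin/bash
VENV=/srv/www/www.mydomain.com/.env
sudo -u "$user" python3 -m venv "$VENV"
# No activation needed: the venv's own pip always installs into the venv
sudo -u "$user" "$VENV/bin/pip" install wheel uwsgi
sudo -u "$user" "$VENV/bin/pip" install -r requirements.txt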
In my bitbucket-pipelines.yml file, I have this:
- step:
    image: python:3.7.2-stretch
    name: upload to s3
    script:
      - export S3_BUCKET="elasticbeanstalk-us-east-1-133233433288"
      - export VERSION_LABEL=$(cat VERSION_LABEL)
      - sudo apt-get install -y zip # required for packaging up the application
      - pip install boto3==1.3.0 # required for upload_to_s3.py
      - zip --exclude=*.git* -r /tmp/artifact.zip . # package up the application for deployment
      - python upload_to_s3.py # run the deployment script
But when I run this pipeline in Bitbucket, I get an error, which the output:
+ sudo apt-get install -y zip
bash: sudo: command not found
Why would it not know what sudo means? Isn't this common to all Linux machines?
The "command not found" error is printed in stderr when it does not find the binary in the folders configured in env $PATH
first you need to found out if it exists with :
find /usr/bin -name "sudo"
if you find the binary try to set the PATH variable with :
export PATH=$PATH:/usr/bin/
then try to run sudo again.
No, sudo is not available everywhere.
But you don’t have to bother with it, anyway. When running the image, you are root, so you can simply run apt-get without spending a thought on permissions.
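So the step can simply drop sudo. A sketch of the adjusted script (with an apt-get update added first, which a fresh image usually needs before install):
- step:
    image: python:3.7.2-stretch
    name: upload to s3
    script:
      - export S3_BUCKET="elasticbeanstalk-us-east-1-133233433288"
      - export VERSION_LABEL=$(cat VERSION_LABEL)
      - apt-get update && apt-get install -y zip
      - pip install boto3==1.3.0
      - zip --exclude=*.git* -r /tmp/artifact.zip .
      - python upload_to_s3.py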