Getting "sudo: command not found" error in bitbucket-pipelines - python

In my bitbucket-pipelines.yml file, I have this:
- step:
    image: python:3.7.2-stretch
    name: upload to s3
    script:
      - export S3_BUCKET="elasticbeanstalk-us-east-1-133233433288"
      - export VERSION_LABEL=$(cat VERSION_LABEL)
      - sudo apt-get install -y zip # required for packaging up the application
      - pip install boto3==1.3.0 # required for upload_to_s3.py
      - zip --exclude=*.git* -r /tmp/artifact.zip . # package up the application for deployment
      - python upload_to_s3.py # run the deployment script
But when I run this pipeline in Bitbucket, I get an error with the following output:
+ sudo apt-get install -y zip
bash: sudo: command not found
Why would it not know what sudo means? Isn't this common to all Linux machines?

The "command not found" error is printed in stderr when it does not find the binary in the folders configured in env $PATH
first you need to found out if it exists with :
find /usr/bin -name "sudo"
if you find the binary try to set the PATH variable with :
export PATH=$PATH:/usr/bin/
then try to run sudo again.

No, sudo is not available everywhere.
But you don’t have to bother with it, anyway. When running the image, you are root, so you can simply run apt-get without spending a thought on permissions.
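For reference, a minimal fix (a sketch based on the script in the question; only the sudo line changes, plus an apt-get update so the package index exists) looks like this:
- step:
    image: python:3.7.2-stretch
    name: upload to s3
    script:
      - export S3_BUCKET="elasticbeanstalk-us-east-1-133233433288"
      - export VERSION_LABEL=$(cat VERSION_LABEL)
      - apt-get update && apt-get install -y zip # no sudo needed, the pipeline container runs as root
      - pip install boto3==1.3.0 # required for upload_to_s3.py
      - zip --exclude=*.git* -r /tmp/artifact.zip . # package up the application for deployment
      - python upload_to_s3.py # run the deployment script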

Related

Error during Docker installation

I have a problem while trying to install Docker on my computer.
I am following this tutorial: https://docs.docker.com/desktop/install/ubuntu/
But when I get to step 3 and try the command sudo apt-get install ./docker-desktop-<version>-<arch>.deb with version 4.16.2 and arch amd64, I get this problem:
"E: Unsupported file ./docker-desktop-4.16.2-amd64.deb given on commandline".
I checked on Google whether someone had had the same problem, but I didn't find anything.
Thank you for your help.
Move your downloaded file to your home directory, then run the command again in the terminal.
Copy the file to your home directory and run sudo dpkg -i docker-desktop-4.16.2-amd64.deb && sudo apt install -f. In my case this worked after installing docker-ce-cli and uidmap first.
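Putting the two suggestions together, a possible sequence (a sketch; it assumes the .deb was downloaded to ~/Downloads and that docker-ce-cli and uidmap are already installed):
cd ~/Downloads
# Install directly from the local .deb; the leading ./ tells apt-get it is a file, not a package name
sudo apt-get install ./docker-desktop-4.16.2-amd64.deb
# Or use dpkg and then let apt pull in any missing dependencies
sudo dpkg -i docker-desktop-4.16.2-amd64.deb
sudo apt-get install -f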

How to install a package with apt-get in a GitLab CI pipeline using the micromamba image

I'm using the micromamba image in a Gitlab CI pipeline. I need to install an additional package with apt-get (libgl1-mesa-glx).
With the miniconda image this was working:
image: continuumio/miniconda3:latest
before_script:
- apt-get update && apt-get install -y libgl1-mesa-glx
With micromamba, it does not work anymore:
image: mambaorg/micromamba:1.1.0-bullseye
before_script:
- apt-get update && apt-get install -y libgl1-mesa-glx
results in
Reading package lists...
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
Is this possible at all? Or do I need to generate a custom docker image?
This is because of the user the command runs as. In the first image (continuumio/miniconda3), the default user is root, which is why commands that normally require sudo work there. In the micromamba image the default user is not root, and sudo is not available, so apt-get cannot write to /var/lib/apt and fails with the permission error.
This is explained on the official Dockerhub pages:
Changing the user id or name: The default username is stored in the environment variable MAMBA_USER, and is currently mambauser. (Before 2022-01-13 it was micromamba, and before 2021-06-30 it was root.) Micromamba-docker can be run with any UID/GID by passing the docker run ... command the --user=UID:GID parameters. Running with --user=root is supported....
Please look at the "Changing the user id or name" section of this page.
This problem has several possible solutions; this issue thread might also help:
https://gitlab.com/gitlab-org/gitlab-runner/-/issues/248/designs
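If neither of those fits your runner setup, a common workaround is a small custom image: switch to root just for the apt-get step, then switch back to the default user. A sketch (the package and base tag are taken from the question; adjust as needed):
FROM mambaorg/micromamba:1.1.0-bullseye
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends libgl1-mesa-glx && \
    rm -rf /var/lib/apt/lists/*
USER $MAMBA_USER
Then point the image: key of your .gitlab-ci.yml at this custom image instead of mambaorg/micromamba:1.1.0-bullseye and drop the apt-get lines from before_script.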

Problem in Fabric when running sudo apt update

I'm trying to write a script that installs Python on my VPSs automatically using the Fabric library, but it gives me an error when I run this sudo command:
server.sudo("apt -y update")
It fails with bash: sudo: command not found.
I hope I made the problem clear.
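No answer was posted here, but the same reasoning as the Bitbucket answer above applies: sudo is simply not installed on that VPS. A sketch of a workaround (assuming you can SSH in as root and are using Fabric 2's Connection API; the host address is a placeholder):
from fabric import Connection

# Connect as root, so no privilege escalation is needed
server = Connection(host="203.0.113.10", user="root")

# Run apt directly instead of server.sudo(...), since sudo does not exist on the box
server.run("apt-get update -y")
server.run("apt-get install -y python3")
Alternatively, install sudo once as root (apt-get install -y sudo) and server.sudo() should work afterwards.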

Add Miniconda binaries to path in Docker container

I'm using the following commands in my Dockerfile to install Miniconda. After I install it, I want to use the binaries in ~/miniconda3/bin like python and conda. I tried exporting the PATH with this new path prepended to it, but the subsequent pip command fails (pip is located in ~/miniconda3/bin).
Curiously, if I run the container in interactive terminal mode, the path is set correctly and I'm able to call the binaries as expected. It seems as though the issue is only when building the container itself.
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y python3.7
RUN apt-get install -y curl
RUN curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh --output miniconda.sh
RUN bash miniconda.sh -b
RUN export PATH="~/miniconda3/bin:$PATH"
RUN pip install pydub # errors out when building
Here's the result of echo $PATH
~/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Here's the error I get
/bin/sh: 1: pip: not found
export won't work. Try ENV
Replace
RUN export PATH="~/miniconda3/bin:$PATH"
with
ENV PATH="~/miniconda3/bin:$PATH"
Even though you would expect Miniconda to live under ~, it installs into the root user's home by default when the build runs as root, so the actual location is /root/miniconda3. Combined with the ENV fix above, the line becomes:
ENV PATH="/root/miniconda3/bin:$PATH"
It looks like your export PATH ... command is putting the literal symbol ~ into the path. Try this:
ENV PATH="$HOME/miniconda3/bin:$PATH"

How to use multiple Dockerfiles for a single application

I am using a CentOS base image and installing Python 3 with the following Dockerfile:
FROM centos:7
ENV container docker
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
RUN yum clean all \
&& yum update -q -y -t \
&& yum install file -q -y
RUN useradd -s /bin/bash -d ${HOMEDIR} ${USER}
RUN export LC_ALL=en_US.UTF-8
# install Development Tools to get gcc
RUN yum groupinstall -y "Development Tools"
# install python development so that pip can compile packages
RUN yum -y install epel-release && yum clean all \
&& yum install -y python34-setuptools \
&& yum install -y python34-devel
# install pip
RUN easy_install-3.4 pip
# install virtualenv or virtualenvwrapper
RUN pip3 install virtualenv \
&& pip3 install virtualenvwrapper \
&& pip3 install pandas
# # install django
# RUN pip3 install django
USER ${USER}
WORKDIR ${HOMEDIR}
I build and tag the above as follows:
docker build . --label validation --tag validation
I then need to add a .tar.gz file to the home directory. This file contains all the python scripts I maintain. This file will change frequently. If I add it to the dockerfile above, python is installed every time I change the .gz file. This adds a lot of time to development. As a workaround, I tried creating a second dockerfile file that uses the above image as the base and then just adds the .tar.gz file on it.
FROM validation:latest
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
USER ${USER}
WORKDIR ${HOMEDIR}
After that, if I run the docker image and do an ls, all the files in the folder are owned by games.
-rw-r--r-- 1 501 games 35785 Nov 2 21:24 Validation_utility.py
To fix the above, I added the following lines to the second docker file:
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
RUN chown -R ${USER}:${USER} ${HOMEDIR} \
&& chmod +x ${HOMEDIR}/Validation_utility.py
but I get the error:
chown: changing ownership of '/home/dsadmin/Validation_utility.py': Operation not permitted
The goal is to have two docker files. The users will run the first docker file to install centos and python dependencies. The second dockerfile will install the custom python scripts. If the scripts change, they will just run the second docker file again. Is that the right way to think about docker? Thank you.
Is that the right way to think about docker?
This is the easy part of your question. Yes. You're thinking about the proper way to structure your Dockerfiles, reuse them, and keep your image builds efficient. Good job.
As for the error you're receiving, I'm less confident in answering why the ADD command is un-tarballing your tar.gz as the games user. I'm not nearly as familiar with CentOS. That's the start of the problem. dsadmin, as a regular non-privileged user, can't change ownership of files he doesn't own. Since this un-tarballed script is owned by games, the chown command fails.
I used your Dockerfiles and got the same issue on MacOS.
You can get around this by, well, not using ADD. Which is funny because local tarball extraction is the one use case where Docker thinks you should prefer ADD over COPY.
COPY code/validation_utility.tar.gz ${HOMEDIR}/.
RUN tar -xvf validation_utility.tar.gz
Properly extracts the tarball and, since dsadmin was the user at the time, the contents come out properly owned by dsadmin.
(An uglier route might be to switch the USER to root to set permissions, then set it back to dsadmin. I think this is icky, but it's an option.)
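For completeness, a sketch of the second Dockerfile with that change applied (same ARGs as before; it assumes Validation_utility.py sits at the top level of the archive):
FROM validation:latest
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
# COPY instead of ADD, then extract manually; the RUN executes as dsadmin (inherited from the base image),
# so the extracted files end up owned by dsadmin
COPY code/validation_utility.tar.gz ${HOMEDIR}/.
RUN tar -xvf ${HOMEDIR}/validation_utility.tar.gz -C ${HOMEDIR} \
    && chmod +x ${HOMEDIR}/Validation_utility.py
USER ${USER}
WORKDIR ${HOMEDIR}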
