Building cassandra-website with docker yields a python3 permissions error - python

I am trying to build https://github.com/apache/cassandra-website
Python 3 is installed and the Docker daemon is running. I did a git pull and then ran ./run.sh website preview, but it fails with the following permissions error even though I am running as root.
[root@localhost cassandra-website]# ./run.sh website preview
Server Docker Engine version: 1.13.1
Executing docker command:
docker run --rm --name website_content -p 5151:5151/tcp -v /root/cassandra-website:/home/build/cassandra-website -v /root/cassandra-website/site-ui/build/ui-bundle.zip:/home/build/ui-bundle.zip -e ANTORA_CONTENT_SOURCES_CASSANDRA_WEBSITE_URL=/home/build/cassandra-website -e ANTORA_UI_BUNDLE_URL=/home/build/ui-bundle.zip apache/cassandra-website:latest preview
container: INFO: Entering preview mode!
container: INFO: Building site.yaml
python3: can't open file './bin/site_yaml_generator.py': [Errno 13] Permission denied

[SOLVED]
Update to the latest Docker from the official Docker repo
Update Python 3: yum install -y python36*
Make sure Apache Ant is installed (which explains the yaml issue)
Run the docs build target
Run the preview build target
Access 127.0.0.1:port/path/to/docs and voilà
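The steps above as a shell sketch. The docker-ce repo commands are the standard CentOS procedure; the run.sh target names are assumptions based on the steps listed, so check ./run.sh --help for the exact targets:

```shell
# Install current Docker from the official repo (standard CentOS steps)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce python36* ant
sudo systemctl restart docker

# Build the docs first, then preview (target names assumed)
./run.sh website build
./run.sh website preview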

Related

Docker mounting image error executable file not found in $PATH: unknown

I have a directory containing code files and subdirectories. I want to mount these files into the Docker container and run index.py.
My Dockerfile looks like this:
# Selected base python version
FROM python:3.9.6
COPY requirements.txt ./
# Install all packages - see readme to create the requirements.txt
RUN pip install -r requirements.txt
# Port the container listens on
EXPOSE 5000
CMD ["python3", "index.py"]
My build process is like this:
docker build -t demo .
docker run -it -p 127.0.0.1:5000:5000 demo -v "$(pwd)":/.
However, the following errors occurs:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
What is wrong?
I tried different paths, but they all lead to the same error.
Googling the error didn't lead to any solution.
The solution is to change the Dockerfile:
# Selected base python version
FROM python:3.9.6
COPY requirements.txt ./
# Install all packages - see readme to create the requirements.txt
RUN pip install -r requirements.txt
# Port the container listens on
EXPOSE 5000
CMD ["python3", "app/index.py"]
and run:
docker run -it -p 127.0.0.1:5000:5000 -v "$(pwd)"/:/app demo
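The underlying rule, per docker run's usage syntax: everything after the image name is passed into the container as its command, so all options (including -v) must come before the image:

```
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
```

In the failing invocation, -v "$(pwd)":/. landed after demo, so Docker tried to execute "-v" as the container's command, hence "executable file not found in $PATH".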

Unable to run Flask App using Docker in Windows-10

I've installed and configured Docker (as per the documentation) and I am trying to build a Flask application using tiangolo/uwsgi-nginx-flask:python3.8. I've built a hello-world application and have tested it locally by running python manage.py; the application runs successfully. Link to full Code-File.
My docker version and installation is as below:
Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.8
ENV INSTALL_PATH /usr/src/helloworld
RUN mkdir -p $INSTALL_PATH
# install net-tools
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set working directory
WORKDIR $INSTALL_PATH
# setup flask environment
# install all requirements
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# copy all files and folder to docker
COPY . .
# run the application in docker environment
CMD [ "python", "./manage.py" ]
I built the application with docker build --tag hello-world:test . and ran it successfully with docker run -d -p 5000:5000 hello-world:test.
However, I'm unable to open the application at localhost:5000 or 0.0.0.0:5000 or any other port. The application is running, as I can see from the CLI, but from the browser the page is not reachable.
A related question suggests checking the container's IP address:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" hungry_engelbart
>> <no value>
Found another solution at this link, but docker-machine is currently deprecated.
I'm new to docker, but I have tried to run the same thing following this tutorial, but faced similar issues.
Finally, I am able to solve this. I had to configure a new inbound rule under Windows Firewall > Advanced Settings > Inbound Rules > New Inbound Rule, allowing a range of local IP addresses, which in my case was 198.168.0.1:198.168.0.100. You also need to run the application at 0.0.0.0, as pointed out by @tentative in the comments. :)
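The 0.0.0.0 point generalizes beyond Flask: a server that binds 127.0.0.1 inside a container is unreachable through Docker's port mapping, because the loopback interface is local to the container. The difference can be sketched with plain sockets (a minimal illustration, not the asker's app):

```python
import socket

# A socket bound to 127.0.0.1 only accepts connections from inside
# the same network namespace (i.e. the container itself).
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))

# A socket bound to 0.0.0.0 accepts connections on every interface,
# which is what Docker's -p port mapping needs on the container side.
all_ifaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_ifaces.bind(("0.0.0.0", 0))

print(loopback.getsockname()[0])    # 127.0.0.1
print(all_ifaces.getsockname()[0])  # 0.0.0.0

loopback.close()
all_ifaces.close()
```

For a Flask app this translates to app.run(host="0.0.0.0", port=5000) instead of the default host of 127.0.0.1.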

CKAN docker compose with custom extensions

I have been trying to customize the default CKAN Docker image to include my preferred extensions. The problem is that the correct order of installing pip packages, configuring the ini files, and building and running the main CKAN image is not clear to me.
I have tried adding a layer to the Dockerfile (see below), but the extensions are not available after running docker-compose up. The add-extensions.sh file contains a pip install command for each extension.
RUN sh $CKAN_VENV/src/ckan/contrib/docker/add-extensions.sh
I have also tried to include my commands in the docker-compose file itself, inside the ckan service, as follows.
command: >
sh -c "
pip install ckanext-geoview &&
pip install ckanext-datarequests &&
ckan config-tool "/etc/ckan/production.ini" -f "/etc/ckan/custom-config.ini" &&
ckan config-tool "/etc/ckan/production.ini" -s app:main -e ckan.plugins='stats text_view image_view recline_view datastore datapusher resource_proxy geo_view datarequests' &&
ckan -c /etc/ckan/production.ini run --host 0.0.0.0"
But this is not working either. So what is the recommended way of including custom plugins in the default CKAN image?
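One common approach is to bake the extensions into a derived image at build time instead of installing them in the compose command, so they survive container recreation. A sketch, not an official recommendation; the base image tag is an assumption and should match your CKAN version:

```dockerfile
# Derived image: extensions installed at build time (base tag assumed)
FROM ckan/ckan-base:2.9
RUN pip install ckanext-geoview ckanext-datarequests
```

The plugins are then enabled via ckan.plugins in the ini file, as in the config-tool line above, and the compose file points at this derived image rather than the stock one.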

Create an AWS Lambda layer using Docker

I am trying to follow the instructions on this page:
How do I create a Lambda layer using a simulated Lambda environment with Docker?
on a Windows 7 environment.
I followed all of these steps:
installed Docker Toolbox
created a local folder 'C:/Users/Myname/mylayer' containing requirements.txt and a Python 3.8 folder structure
ran the following commands in Docker Toolbox:
cd c:/users/myname/mylayer
docker run -v "$PWD":/var/task "lambci/lambda:build-python3.8" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit"
It returns the following error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
I don't understand what I am doing wrong. Maybe it's something obvious (I'm a beginner), but I spent the whole day trying to figure it out and it's getting quite frustrating. I appreciate the help!
I ran the following in Windows 10 PowerShell and it worked:
docker run -v ${pwd}:/var/task "amazon/aws-sam-cli-build-image-python3.8" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages; exit"
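Whatever the cause of the Toolbox failure (often the MSYS-based shell rewriting "$PWD" into a path the Toolbox VM has not mounted, so the bind mount ends up empty), the usual next step once pip succeeds is to zip the python/ folder and publish it as a layer. The file and layer names here are assumptions:

```shell
# Package the installed dependencies and publish them as a layer
zip -r mylayer.zip python
aws lambda publish-layer-version \
    --layer-name mylayer \
    --zip-file fileb://mylayer.zip \
    --compatible-runtimes python3.8
```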

No EXPOSE in aws docker fails deployment

I have a Scrapy project run continuously by cron, hosted inside a Docker image.
When I run and deploy this locally everything works fine, but if I try to deploy the same to AWS I get the following error in the logs:
No EXPOSE directive found in Dockerfile, abort deployment (ElasticBeanstalk::ExternalInvocationError)
The console shows that my container was built correctly, but I cannot use it without an EXPOSE directive.
INFO: Successfully pulled python:2.7
WARN: Failed to build Docker image aws_beanstalk/staging-app, retrying...
INFO: Successfully built aws_beanstalk/staging-app
ERROR: No EXPOSE directive found in Dockerfile, abort deployment
ERROR: [Instance: i-6eebaeaf] Command failed on instance. Return code: 1 Output: No EXPOSE directive found in Dockerfile, abort deployment.
Hook /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
But why is it not possible?
My Dockerfile looks like the following:
FROM python:2.7
MAINTAINER XDF
ENV DIRECTORY /opt/the-flat
# System
##########
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y ntp vim apt-utils
WORKDIR $DIRECTORY
# GIT
##########
# http://stackoverflow.com/questions/23391839/clone-private-git-repo-with-dockerfile
RUN apt-get install -y git
RUN mkdir /root/.ssh/
ADD deploy/git-deply-key /root/.ssh/id_rsa
RUN chmod 0600 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan -t rsa bitbucket.org >> /root/.ssh/known_hosts
RUN ssh -T -o 'ConnectionAttempts=1' git@bitbucket.org
RUN git clone --verbose git@bitbucket.org:XDF/the-flat.git .
# Install
##########
RUN pip install scrapy
RUN pip install MySQL-python
# not working
# apt-get install -y wkhtmltopdf && pip install pdfkit
# else
# https://pypi.python.org/pypi/pdfkit
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y openssl build-essential xorg libssl-dev
RUN wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN tar xvjf wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN chown root:root wkhtmltopdf-amd64
RUN mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf
RUN pip install pdfkit
# Cron
##########
# http://www.ekito.fr/people/run-a-cron-job-with-docker/
# http://www.corntab.com/pages/crontab-gui
RUN apt-get install -y cron
RUN crontab "${DIRECTORY}/deploy/crontab"
CMD ["cron", "-f"]
It's by design. You need an EXPOSE directive in your Dockerfile to tell Beanstalk which port your app will be listening on. Do you have a use case where you cannot or do not want to have EXPOSE in your Dockerfile?
Elastic Beanstalk is designed for web applications, hence the EXPOSE requirement. The use case you demonstrated is that of a jobs (worker) server, which Elastic Beanstalk doesn't handle well.
For your case, either expose a dummy port number or launch an EC2 instance yourself to bypass Elastic Beanstalk entirely.
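A dummy-port version of the Dockerfile's tail could look like this (the port number is arbitrary; nothing actually listens on it):

```dockerfile
# Dummy EXPOSE purely to satisfy Elastic Beanstalk's deployment check;
# the cron worker never binds this port.
EXPOSE 8080
CMD ["cron", "-f"]
```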
