Python application in a container using Podman

I'd like to build a container using Podman that would contain the following:
a Python application
the Python modules I developed, which are not stored in the same place as the Python application
the Python environment (made with miniconda/mambaforge)
a mounted folder for input data
a mounted folder for output data
To do that, I've added a Dockerfile in my home directory. Below is the content of this Dockerfile:
FROM python:3
# Add the Python application
ADD /path/to/my_python_app /my_python_app
# Add the Python modules used by the Python application
ADD /path/to/my_modules /my_modules
# Add the whole mambaforge folder (contains the virtual envs) with the exact same path as the local one
ADD /path/to/mambaforge /exact/same/path/to/mambaforge
# Create a customized .bashrc (contains 'export PATH' to add mamba path and 'export PYTHONPATH' to add my_modules path)
ADD Dockerfile_bashrc /root/.bashrc
Then, I build the container with:
podman build -t python_app .
And run it with:
podman run -i -t -v /path/to/input/data:/mnt/input -v /path/to/output/data:/mnt/output python_app /bin/bash
In the Dockerfile, note that I add the whole mambaforge folder (it is like miniconda). Is it possible to add only the virtual environment? I found I needed to add the whole mambaforge because I need to activate the virtual environment with mamba/conda activate my_env, which I do in a .bashrc (with the conda initialization) that I put in /root/.bashrc. In this file, I also do export PYTHONPATH="/my_modules:$PYTHONPATH".
I'd also like to add the following line to my Dockerfile to automatically execute the Python application when running the container.
CMD ["python", "/path/to/my_python_app/my_python_app.py"]
However, this doesn't work because it seems the container needs to be run interactively in order to load the .bashrc first.
All of this is a kludge, and I'd like to know if there is a simpler and better way to do it.
Many thanks for your help!

Related

How to run a ROS2 node inside docker image?

I have a ros2 package and have successfully created a docker image of it. Then, when I'm inside the container, I would like to run only one single node of the ros2 package. So first I create the environment with PATH=$PATH:/home/user/.local/bin, then vcs import . < system_integration/ros.repos, then docker pull ghcr.io/test-inc/base_images:foxy. I'm running and executing the docker container with
docker run --name test -d --rm -v $(pwd):/home/ros2/foxy/src ghcr.io/company-inc/robot1_vnc_ros2:foxy
docker exec -it test /bin/bash
Then, when I'm inside the docker container, I build the package with
colcon build --symlink-install --event-handlers console_cohesion+ --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to system_integration
So now I'm inside the docker container at root@1942eef8d977:~/ros2/foxy and would like to run one python node. But ros2 run package_name node_name would not work, right? I'm not very familiar with docker, so I'm not sure how to run the node. Any help?
Thanks
Before you can use ros2 run to run packages, you have to source the correct workspace. Otherwise you cannot use tab auto-completion to find any packages, and thus no package can be run.
To do so:
cd into the root path of your workspace
colcon build your workspace (your packages should be under the src directory)
run this line to source it: source install/setup.bash
check it with echo $COLCON_PREFIX_PATH to see that the path is correctly sourced
try rerunning the ros2 command to start your node
Have you sourced the setup file within the container?
Wherever the package source is located, you need to run source ./install/setup.bash
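Putting both answers together, a minimal sequence inside the container might look like this (a sketch, assuming the workspace root is ~/ros2/foxy; the node executable name is a placeholder):
cd ~/ros2/foxy
colcon build --symlink-install --packages-up-to system_integration
source install/setup.bash
# node_executable is a placeholder for whatever `ros2 pkg executables system_integration` lists
ros2 run system_integration node_executable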

ADD multiple Files to a docker but just RUN one of them

It's just a theoretical question. Is there a way to run the docker container but have it run only one specific script, without changing the Dockerfile? Maybe with the docker run [container] command?
Dockerfile:
FROM python:3.8
ADD main1.py .
ADD main2.py .
ADD main3.py .
ADD main4.py .
ADD main5.py .
Theoretical Command:
docker run docker-test main2.py
There is nothing "theoretical" about this. Docker copies the files into place, and if they are working executables, you can execute them with docker run image executable ..., but:
it requires the files to be properly executable (or you will need to explicitly say docker run image python executable to run a Python script which is not executable)
it requires the files to be in your PATH for you to be able to specify their names without a path; or you will need to specify the full path within the container, or perhaps a relative path (./executable if they are in the container's default working directory)
docker run image /path/to/executable
you obviously need the container to contain python in its PATH in order for it to find python; or you will similarly need to specify the full path to the interpreter
docker run image /usr/bin/python3 /path/to/executable
In summary, probably make sure you have run chmod +x on the script files and that they contain (the moral equivalent of) #!/usr/bin/env python3 (or python if that's what the binary is called) on their first line.
(And obviously, don't use DOS line endings in files you want to be able to execute in a Linux container. Python can cope, but the Linux kernel will look for /usr/bin/env python3^M if that's exactly what it says on the shebang line.)
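Putting those rules together, a minimal sketch for the Dockerfile above (assuming the scripts are copied into the image's default working directory and carry a proper shebang line):
# on the host, before building: make the scripts executable (ADD/COPY preserve the permission bits)
chmod +x main*.py
docker build -t docker-test .
# run one specific script via its (relative) path
docker run --rm docker-test ./main2.py
# or spell out the interpreter and path explicitly if the file is not executable
docker run --rm docker-test python /main2.py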
In the specific case of a Python application, the standard Python setuptools package has some functionality that can simplify this.
In your application's setup.cfg file you can declare entry points (different from the similarly-named Docker concept) which provide simple scripts to launch a specific part of your application.
[options.entry_points]
console_scripts =
    main1 = app.main1:main
    main2 = app.main2:main
where a script like app/main1.py looks like a normal top-level Python script:
#!/usr/bin/env python3

# the console_scripts call this directly
def main():
    ...

# for interactive use
if __name__ == '__main__':
    main()
Now in your Dockerfile, you can use a generic Python application recipe and install this; all of the console_scripts will be automatically visible in the standard $PATH.
FROM python:3.8
WORKDIR /app
COPY . .
RUN pip install .
CMD ["main1"]
docker run --rm my-image main2
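One assumption baked into that recipe: for pip install . to pick up the setup.cfg, the project also needs a minimal pyproject.toml (or a stub setup.py) so pip recognizes the directory as installable. A sketch, with the package name app taken from the entry points above:
# pyproject.toml
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

# setup.cfg (alongside the entry_points section shown earlier)
[metadata]
name = app
[options]
packages = find: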
It's worth noting that, up until the last part, we've been using generic Python utilities, and you can do the same thing without Docker:
# directly on the host, without Docker
python3 -m venv ./virtual_environment
. ./virtual_environment/bin/activate
pip install .
# then run any of the scripts directly
main3
# technically activating the virtual environment is optional
deactivate
./virtual_environment/bin/main4
The fundamental point here is that the same rules apply for running a command on the host, in a Dockerfile CMD, or in a docker run command override (it must be on $PATH, executable, have the correct interpreter, etc.). See @tripleee's answer for a more generic, non-Python-specific approach.

GitHub Action I wrote doesn't have access to the files of the repo that is calling the action

A sample repo with the directory structure of what I'm working on is on GitHub here. To run the GitHub Action, you just need to go to the Action tab of the repo and run the Action manually.
I have a custom GitHub Action I've written, with python as the base image in the Docker container, but I want the python version to be an input for the GitHub Action. In order to do so, I am creating a second, intermediate Docker container to run with the python version input argument.
The problem I'm running into is I don't have access to the original repo's files that is calling the GitHub Action. For example, say the repo is called python-sample-project and has folder structure:
python-sample-project
│ main.py
│ file1.py
│
└───folder1
│ │ file2.py
I see main.py, file1.py, and folder1/file2.py in the outer entrypoint.sh. However, in docker-action/entrypoint.sh I only see the linux folder structure and the entrypoint.sh file copied over by docker-action/Dockerfile.
In the Alpine example I'm using, the action entrypoint.sh script looks like this:
#!/bin/sh -l
ALPINE_VERSION=$1
cd /docker-action
docker build -t docker-action --build-arg alpine_version="$ALPINE_VERSION" . && docker run docker-action
In docker-action/ I have a Dockerfile and entrypoint.sh script that should run for the inner container with the dynamic version of Alpine (or Python)
The docker-action/Dockerfile is as follows:
# Container image that runs your code
ARG alpine_version
FROM alpine:${alpine_version}
# Copies your code file from your action repository to the filesystem path `/` of the container
COPY entrypoint.sh /entrypoint.sh
RUN ["chmod", "+x", "/entrypoint.sh"]
# Code file to execute when the docker container starts up (`entrypoint.sh`)
ENTRYPOINT ["/entrypoint.sh"]
In docker-action/entrypoint.sh I run ls, but I do not see the repository files.
Is it possible to access the main.py, file1.py, and folder1/file2.py in entrypoint.sh in the docker-action/entrypoint.sh?
There are generally two ways to get files from your repository available to a docker container you build and run. You either (1) add the files to the image when you build it or (2) mount the files into the container when you run it. There are some other ways, like specifying volumes, but that's probably out of scope for this case.
The Dockerfile docker-action/Dockerfile does not copy any files except for the entrypoint.sh script. Your entrypoint.sh also does not provide any mount points when running the container. Hence, the outcome you observe is the expected outcome based on these facts.
In order to resolve this, you must either (1) add COPY/ADD statements to your Dockerfile to copy files into the image (and set appropriate build context) OR (2) mount the files into the container when it runs by adding -v /source-path:/container-path to the docker run command in your entrypoint.sh.
See references:
COPY reference
Docker run reference
Though, this approach of building another container just to get a user-provided python version is a highly questionable practice for GitHub Actions and should probably be avoided. Consider leaning on the setup-python action instead.
The docker-in-docker problem
Nevertheless, if you continue this route and want to go about mounting the directory, you'll have to keep in mind that, when invoking docker from within a docker action on GitHub, the filesystem in the mount specification refers to the filesystem of the docker host, not the filesystem of the container.
It works on my machine?!
Counter to what you might experience running docker on a local system for example, this does not work in GitHub -- the working directory is not mounted:
docker run -v $(pwd):/opt/workspace \
--workdir /opt/workspace \
--entrypoint /bin/ls \
my-container "-R"
This doesn't work either:
docker run -v $GITHUB_WORKSPACE:$GITHUB_WORKSPACE \
--workdir $GITHUB_WORKSPACE \
--entrypoint /bin/ls \
my-container "-R"
This kind of thing would work perfectly fine if you tried it on a system running docker locally. What gives?
Dealing with the devil (daemon)
In Actions, the starting working directory where files are checked out is $GITHUB_WORKSPACE. In docker actions, that's /github/workspace. The workspace files appear there when your action runs because the Actions runner mounts the workspace from the host where the docker daemon is running.
You can see that in the command run when your action starts:
/usr/bin/docker run --name f884202608aa2bfab75b6b7e1f87b3cd153444_f687df --label f88420 --workdir /github/workspace --rm -e INPUT_ALPINE-VERSION -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/my-repo/my-repo":"/github/workspace" f88420:2608aa2bfab75b6b7e1f87b3cd153444 "3.9.5"
The important bits are this:
-v "/home/runner/work/my-repo/my-repo":"/github/workspace"
-v "/var/run/docker.sock":"/var/run/docker.sock"
/home/runner/work/my-repo/my-repo is the path on the host, where the repository files are. As mentioned, that first line is what gets it mounted into /github/workspace in your action container when it gets run.
The second line is mounting the docker socket from the host to the action container. This means any time you call docker within your action, you're actually talking to the docker daemon outside of your container. This is important because that means when you use the -v argument inside your action, the arguments need to reflect directories that exist outside of the container.
So, what you would actually need to do instead is this:
docker run -v /home/runner/work/my-repo/my-repo:/opt/workspace \
--workdir /opt/workspace \
--entrypoint /bin/ls \
my-container "-R"
Becoming useful to others
And that works, if you only use it for the project itself. However, you have (among others) a remaining problem if you want this action to be consumable by other projects. How do you know where the workspace is on the host? This path will change for each repository, after all. GitHub does not guarantee these paths, either. They may be different on different platforms, or your action may be running on a self-hosted runner.
So how do you contend with that problem? There is no built-in environment variable that points to this specific directory, unfortunately. However, by relying on an implementation detail, you might be able to get away with using the $RUNNER_WORKSPACE variable, which will point, in this case, to /home/runner/work/your-project. This is not the same place as the origin of $GITHUB_WORKSPACE, but it's close. You can use the GITHUB_REPOSITORY variable to build the path, though this isn't guaranteed to always be the case afaik:
PROJECT_NAME="$(basename ${GITHUB_REPOSITORY})"
WORKSPACE="${RUNNER_WORKSPACE}/${PROJECT_NAME}"
You also have some other things to fix, like the working directory from which you build.
TL;DR
You need to mount files in the container when you run it. In GitHub, you're running docker-in-docker, so the paths you need to use to mount files work differently, and you need to find the correct paths to pass to docker when it is called from within your action container.
A minimally working solution for the example project you linked is the following entrypoint.sh in the root of the repo:
#!/usr/bin/env sh
ALPINE_VERSION=$1
docker build -t docker-action \
-f ./docker-action/Dockerfile \
--build-arg alpine_version="$ALPINE_VERSION" \
./docker-action
PROJECT_NAME="$(basename ${GITHUB_REPOSITORY})"
WORKSPACE="${RUNNER_WORKSPACE}/${PROJECT_NAME}"
docker run --workdir=$GITHUB_WORKSPACE \
-v $WORKSPACE:$GITHUB_WORKSPACE \
docker-action "$@"
There are probably further concerns with your action, depending on what it does, like making all the default and user-defined environment variables available to the 'inner' container, if that's important.
So, is this possible? Sure. Is it reasonable just to get a dynamic version of alpine/python? I don't think so. There's probably better ways of accomplishing what you want to do, like using setup-python, but that sounds like a different question.

How can I install common local python libraries into docker container? [duplicate]

How can I include files from outside of Docker's build context using the "ADD" command in the Docker file?
From the Docker documentation:
The path must be inside the context of the build; you cannot ADD
../something/something, because the first step of a docker build is to
send the context directory (and subdirectories) to the docker daemon.
I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.
Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.
The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround than that?
The best way to work around this is to specify the Dockerfile independently of the build context, using -f.
For instance, this command will give the ADD command access to anything in your current directory.
docker build -f docker-files/Dockerfile .
Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like
docker build -f ../Dockerfile .
I often find myself utilizing the --build-arg option for this purpose. For example after putting the following in the Dockerfile:
ARG SSH_KEY
RUN echo "$SSH_KEY" > /root/.ssh/id_rsa
You can just do:
docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
But note the following warning from the Docker documentation:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
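If the value really is a secret such as an SSH key, a less leaky alternative is a BuildKit secret mount, which exposes the file only during the RUN step and keeps it out of the image layers and docker history. A sketch (the repository URL and key path are placeholders):
# syntax=docker/dockerfile:1
FROM python:3.8
# the secret is mounted at /run/secrets/ssh_key for this RUN step only
RUN --mount=type=secret,id=ssh_key \
    GIT_SSH_COMMAND="ssh -i /run/secrets/ssh_key -o StrictHostKeyChecking=no" \
    git clone git@github.com:example/private-repo.git /src
Then build with:
DOCKER_BUILDKIT=1 docker build -t some-app --secret id=ssh_key,src=$HOME/.ssh/id_rsa .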
I spent a good amount of time trying to figure out a good pattern and how to better explain what's going on with this feature support. I realized that the best way to explain it was as follows...
Dockerfile: Will only see files under its own relative path
Context: a place in "space" where the files you want to share and your Dockerfile will be copied to
So, with that said, here's an example of the Dockerfile that needs to reuse a file called start.sh
Dockerfile
It will always load from its relative path, using the context directory as the local reference for the paths you specify.
COPY start.sh /runtime/start.sh
Files
Considering this idea, we can think of having multiple Dockerfiles building specific things, but they all need access to start.sh.
./all-services/
/start.sh
/service-X/Dockerfile
/service-Y/Dockerfile
/service-Z/Dockerfile
./docker-compose.yaml
Considering this structure and the files above, here's a docker-compose.yml
docker-compose.yaml
In this example, your shared context directory is the runtime directory.
Same mental model here: think of all the files under this directory as being moved over to the so-called context.
Similarly, just specify the Dockerfile that you want copied to that same directory; you can specify that using the dockerfile key.
The directory where your main content is located is the actual context to be set.
The docker-compose.yml is as follows
version: "3.3"
services:
service-A
build:
context: ./all-service
dockerfile: ./service-A/Dockerfile
service-B
build:
context: ./all-service
dockerfile: ./service-B/Dockerfile
service-C
build:
context: ./all-service
dockerfile: ./service-C/Dockerfile
all-service is set as the context; the shared file start.sh is copied there, as is the Dockerfile specified by each dockerfile key.
Each service gets built its own way, sharing the start file!
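For completeness, a sketch of what one of those per-service Dockerfiles could look like when built with the shared directory as the context (the base image and target paths are assumptions):
# service-X/Dockerfile, built with the shared all-service directory as the context
FROM alpine
# start.sh sits at the root of the shared context, not next to this Dockerfile
COPY start.sh /runtime/start.sh
COPY service-X/ /app/
CMD ["sh", "/runtime/start.sh"]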
On Linux you can mount other directories instead of symlinking them
mount --bind olddir newdir
See https://superuser.com/questions/842642 for more details.
I don't know if something similar is available for other OSes.
I also tried using Samba to share a folder and remount it into the Docker context which worked as well.
If you read the discussion in issue 2745, not only may docker never support symlinks, it may never support adding files outside your context. It seems to be a design philosophy that files that go into a docker build should explicitly be part of its context or come from a URL where it is presumably deployed, too, with a fixed version, so that the build is repeatable with well-known URLs or files shipped with the docker container.
I prefer to build from a version controlled source - ie docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.
fundamentally, no.... -- SvenDowideit, Docker Inc
Just my opinion, but I think you should restructure to separate the code and docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.
Alternatively, use docker as your fundamental code deployment artifact and put the dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent docker container for more general system-level details and a child container for setup specific to your code.
I believe the simpler workaround would be to change the 'context' itself.
So, for example, instead of giving:
docker build -t hello-demo-app .
which sets the current directory as the context, let's say you wanted the parent directory as the context, just use:
docker build -t hello-demo-app ..
You can also create a tarball of what the image needs first and use that as your context.
https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
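A hedged sketch of the tarball approach (the staging paths are placeholders): stage the Dockerfile and the outside files in one directory, tar it, and feed the tarball to docker build as the context:
mkdir -p /tmp/build-context
cp Dockerfile /tmp/build-context/
cp /path/outside/context/needed-file /tmp/build-context/
tar -C /tmp/build-context -czf /tmp/context.tar.gz .
docker build -t my-image - < /tmp/context.tar.gz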
This behavior is determined by the context directory that docker or podman uses to present the files to the build process.
A nice trick here is to change the context dir during the build instruction to the full path of the directory that you want to expose to the daemon.
e.g:
docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path
By using /mysrc/path instead of . (the current directory), you'll be using that directory as the context, so any files under it can be seen by the build process.
In this example you'll be exposing the entire /mysrc/path tree to the docker daemon.
When using this with docker, the user ID that triggered the build must have recursive read permissions to every single directory or file in the context dir.
This can be useful in cases where you have /home/user/myCoolProject/Dockerfile but want to bring files that aren't in the same directory into this container build context.
Here is an example of building using a context dir, but this time using podman instead of docker.
Let's take as an example a Dockerfile with a COPY or ADD instruction that copies files from a directory outside of your project, like:
FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...
In order to build this, with a container file located at /home/user/myCoolProject/Dockerfile, just do something like:
cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /
Some known use cases for changing the context dir are when using a container as a toolchain for building your source code.
e.g:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc
or it can be a relative path, like:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../
Another example, this time without the global paths in the Dockerfile:
FROM myImage:tag
...
...
COPY externalFile ./
ADD AnotherProject ./
...
Notice that now the full global path for the COPY and ADD is omitted in the Dockerfile command layers.
In this case the context dir must be relative to where the files are; if both externalFile and AnotherProject are in the /opt directory, then the context dir for the build must be:
podman build -t imageName:tag -f ./Dockerfile /opt
A note when using COPY or ADD with a context dir in docker:
The docker daemon will try to "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and it requires the user to have recursive read permission over the context dir.
This behavior can be more costly, especially when using the build through the API. However, with podman the build happens immediately, without needing recursive permissions, because podman does not enumerate the entire context dir and does not use a client/server architecture.
For such cases it can be much more interesting to use podman instead of docker when you face these issues with a different context dir.
Some references:
https://docs.docker.com/engine/reference/commandline/build/
https://docs.podman.io/en/latest/markdown/podman-build.1.html
As described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.
It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.
Using docker-compose, I accomplished this by creating a service that mounts the volumes that I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.
You don't have to use docker-compose to accomplish this, but it makes life a bit easier
# docker-compose.yml
version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    command: bash -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh
# Start "stage" service
docker-compose up stage
# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage
# Start setup service off of stage image
docker-compose up setup
Create a wrapper docker build shell script that grabs the file, then calls docker build, then removes the file.
A simple solution not mentioned anywhere here from my quick skim:
have a wrapper script called docker_build.sh
have it create tarballs, copy large files to the current working directory
call docker build
clean up the tarballs, large files, etc
This solution is good because (1) it doesn't have the security hole of copying in your SSH private key, and (2) another solution uses sudo bind, which has its own security hole because it requires root permission to do the bind.
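A minimal docker_build.sh along those lines (the copied path and image name are placeholders):
#!/bin/bash
set -e
# bring the outside file into the build context temporarily
cp ../outside-context/large-file.bin ./large-file.bin
# clean up even if the build fails
trap 'rm -f ./large-file.bin' EXIT
docker build -t my-image .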
I think a feature was added to buildx earlier this year to do just this.
If you have Dockerfile syntax 1.4+ and buildx 0.8+, you can do something like this:
docker buildx build --build-context othersource=../something/something .
Then in your Dockerfile you can use the --from flag to reference that named context:
COPY --from=othersource . /stuff
See this related post https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
Workaround with links:
ln path/to/file/outside/context/file_to_copy ./file_to_copy
Then, in the Dockerfile, simply:
COPY file_to_copy /path/to/file
I was personally confused by some of the answers, so I decided to explain it simply.
You should pass the context that you have specified in the Dockerfile to docker when you want to create the image.
I always select the root of the project as the context in the Dockerfile.
So, for example, if you use a COPY command like COPY . ., the first dot (.) refers to the context and the second dot (.) to the container working directory.
Assuming the context is the project root, dot (.), and the code structure is like this:
sample-project/
docker/
Dockerfile
If you want to build the image
and your path (the path from which you run the docker build command) is /full-path/sample-project/,
you should do this:
docker build -f docker/Dockerfile .
and if your path is /full-path/sample-project/docker/,
you should do this
docker build -f Dockerfile ../
An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.
example:
docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name
for more see: https://docs.docker.com/storage/volumes/
I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using 2 Dockerfiles. One builds the main application without the stuff I needed outside the container and publishes that to an internal repo. Then a second Dockerfile pulls that image, adds the data, and creates a new image, which is then deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.
In my case, my Dockerfile is written like a template containing placeholders which I replace with real values using my configuration file.
So I couldn't specify this file directly but piped it into docker build like this:
sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest . -f -;
Because of the pipe alone (docker build -), the COPY command didn't work, since no build context is sent. The invocation above solves that with . -f -: the current directory is still used as the context while the Dockerfile is read from stdin (-f - explicitly says the Dockerfile is not provided as a file). Using only - without the -f flag, no build context is sent along with the Dockerfile, which is the caveat.
How to share typescript code between two Dockerfiles
I had this same problem, but for sharing files between two typescript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:
api/
Dockerfile
src/
models/
index.ts
frontend/
Dockerfile
src/
models/
index.ts
shared/
model1.ts
model2.ts
index.ts
.dockerignore
Note: After extracting the shared code into that top folder, I avoided needing to update the import paths, because I updated api/models/index.ts and frontend/models/index.ts to export from shared (e.g. export * from '../../../shared').
Since the build context is now one directory higher, I had to make a few additional changes:
Update the build command to use the new context:
docker build -f Dockerfile .. (two dots instead of one)
Use a single .dockerignore at the top level to exclude all node_modules. (eg **/node_modules/**)
Prefix the Dockerfile COPY commands with api/ or frontend/
Copy shared (in addition to api/src or frontend/src)
WORKDIR /usr/src/app
COPY api/package*.json ./ <---- Prefix with api/
RUN npm ci
COPY api/src api/ts*.json ./ <---- Prefix with api/
COPY shared usr/src/shared <---- ADDED
RUN npm run build
This was the easiest way I could send everything to docker, while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes/consequences caused by the build context being up one directory.
One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences.
If you're working in a microservices architecture that looks like this:
./Code/Repo1
./Code/Repo2
...
You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.
An example situation could be that another team maintains a database schema in Repo1 and your team's code in Repo2 depends on this. You want to dockerise this dependency with some of your own seed data without worrying about schema changes or polluting the other team's repository (depending on what the changes are you may still have to change your seed data scripts of course)
The second approach is hacky but gets around the issue of long builds:
Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:
#!/bin/bash
rm -r ./db/schema
mkdir ./db/schema
cp -r ../Repo1/db/schema/. ./db/schema
docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build
In the docker-compose file, simply set the context as Repo2 root and use the content of the ./db/schema directory in your dockerfile without worrying about the path.
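A sketch of the Dockerfile side of that, with Repo2's root as the build context (the base image and target directory are assumptions):
# Repo2/Dockerfile
FROM postgres:14
# ./db/schema was populated by the wrapper script above
COPY db/schema/ /docker-entrypoint-initdb.d/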
Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.

How to run a docker image in IBM Cloud functions?

I have a simple Python program that I want to run in IBM Cloud Functions. Alas, it needs two libraries (O365 and PySnow), so I have to Dockerize it, and it needs to be able to accept a JSON feed from STDIN. I succeeded in doing this:
FROM python:3
ADD requirements.txt ./
RUN pip install -r requirements.txt
ADD ./main ./main
WORKDIR /main
CMD ["python", "main.py"]
This runs with: cat env_var.json | docker run -i f9bf70b8fc89
I've added the Docker container to IBM Cloud Functions like this:
ibmcloud fn action create e2t-bridge --docker [username]/e2t-bridge
However when I run it, it times out.
Now I did see a possible solution route, where I dockerize it as an OpenWhisk application. But for that I need to create a binary from my Python application and then load it into a rather complicated OpenWhisk skeleton, I think?
But having a file you can simply run is the whole point of my Docker setup, so creating a binary of an interpreted language and then adding it into an OpenWhisk docker just feels awfully clunky.
What would be the best way to approach this?
It turns out you don't need to create a binary; you just need to edit the OpenWhisk skeleton like so:
# Dockerfile for example whisk docker action
FROM openwhisk/dockerskeleton
ENV FLASK_PROXY_PORT 8080
### Add source file(s)
ADD requirements.txt /action/requirements.txt
RUN cd /action; pip install -r requirements.txt
# Move the file to
ADD ./main /action
# Rename our executable Python action
ADD /main/main.py /action/exec
CMD ["/bin/bash", "-c", "cd actionProxy && python -u actionproxy.py"]
And make sure that your Python code accepts a Json feed from stdin:
json_input = json.loads(sys.argv[1])
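A hedged sketch of what /action/exec could contain, assuming the dockerskeleton's action proxy passes the invocation parameters as a JSON string in the first argument and reads the result as a single JSON line on stdout:
#!/usr/bin/env python3
import json
import sys

def main():
    # the action proxy passes the parameters as a JSON string in argv[1]
    params = json.loads(sys.argv[1]) if len(sys.argv) > 1 else {}
    result = {"received": params}  # placeholder logic
    # the last line written to stdout must be the JSON result
    print(json.dumps(result))

if __name__ == '__main__':
    main()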
The whole explanation is here: https://github.com/iainhouston/dockerPython
