How to run a ROS2 node inside a Docker image? - python

I have a ROS2 package and have successfully created a Docker image of it. When I'm inside the container I would like to run only a single node of the ROS2 package. So first I create the environment with PATH=$PATH:/home/user/.local/bin, then vcs import . < system_integration/ros.repos, then docker pull ghcr.io/test-inc/base_images:foxy. I'm running and entering the container with
docker run --name test -d --rm -v $(pwd):/home/ros2/foxy/src ghcr.io/company-inc/robot1_vnc_ros2:foxy
docker exec -it test /bin/bash
Then, when I'm inside the container, I build the package with
colcon build --symlink-install --event-handlers console_cohesion+ --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-up-to system_integration
So now I'm inside the container at root@1942eef8d977:~/ros2/foxy and would like to run one Python node. But ros2 run package_name node_name would not work right away, would it? I'm not very familiar with Docker, so I'm not sure how to run the node. Any help is appreciated.
Thanks

Before you can use ros2 run to run packages, you have to source the correct workspace. Otherwise you cannot use tab auto-completion to find any packages, and thus no package can be run.
To do so:
cd into the root path of your workspace
colcon build your workspace (your packages should be under the src directory)
run this line to source it: source install/setup.bash
you can check with echo $COLCON_PREFIX_PATH that the path is correctly sourced
rerun the ros2 run command to start your node
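Put together, the steps look like this inside the container (a sketch; it assumes the workspace root is ~/ros2/foxy as in the question, and package_name/node_name are placeholders):

```shell
# From the workspace root (the directory containing src/)
cd ~/ros2/foxy

# Build the workspace (packages live under src/)
colcon build --symlink-install --packages-up-to system_integration

# Overlay the freshly built workspace onto the current shell
source install/setup.bash

# Sanity check: the workspace's install path should appear here
echo $COLCON_PREFIX_PATH

# Now the node can be started
ros2 run package_name node_name
```

Note that sourcing only affects the current shell, so it has to be repeated in every new docker exec session (or put into the image's entrypoint).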

Have you sourced the setup file within the container?
Wherever the package source is located, you need to run source ./install/setup.bash


Run python script using a docker image

I downloaded a python script and a docker image containing commands to install all the dependencies. How can I run the python script using the docker image?
Copy the Python file into the Docker image, then execute it:
docker run image-name PATH-OF-SCRIPT-IN-IMAGE/script.py
Or you can build it into the Dockerfile by adding RUN python PATH-OF-SCRIPT-IN-IMAGE/script.py to the Dockerfile.
How to copy from the container to the host:
docker cp <containerId>:/file/path/within/container /host/path/target
How to copy from the host to the container:
docker cp /host/local/path/file <containerId>:/file/path/in/container/file
Run in interactive mode:
docker run -it image_name python filename.py
or, if you also want to mount the script and publish a port (bind mounts need absolute paths, hence $(pwd)):
docker run -it -v $(pwd)/filename.py:/filename.py -p 8888:8888 image_name python /filename.py
Answer
First, copy your Python script and other required files into your Docker container.
docker cp /path_to_file <containerId>:/path_where_you_want_to_save
Second, open the container CLI using Docker Desktop and run your Python script.
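The same two steps can be done from a plain terminal instead of the Docker Desktop UI (a sketch; my_container and the paths are placeholders):

```shell
# Copy the script from the host into the running container
docker cp ./script.py my_container:/tmp/script.py

# Run it inside the container without opening an interactive shell
docker exec my_container python /tmp/script.py
```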
The best way, I think, is to make your own image that contains the dependencies and the script.
When you say you've been given an image, I'm guessing that you've been given a Dockerfile, since you talk about it containing commands.
Place the Dockerfile and the script in the same directory. Add the following lines to the bottom of the Dockerfile.
# Existing part of Dockerfile goes here
COPY my-script.py .
CMD ["python", "my-script.py"]
Replace my-script.py with the name of the script.
Then build and run it with these commands
docker build -t my-image .
docker run my-image

Python application in a container using Podman

I'd like to build a container using Podman which would contains the following:
a Python application
the Python modules I developed, which are not stored in the same place as the Python application
the Python environment (made with miniconda/mambaforge)
a mounted folder for input data
a mounted folder for output data
To do that, I've added a Dockerfile in my home directory. Below is the content of this Dockerfile:
FROM python:3
# Add the Python application
ADD /path/to/my_python_app /my_python_app
# Add the Python modules used by the Python application
ADD /path/to/my_modules /my_modules
# Add the whole mambaforge folder (contains the virtual envs) at the exact same path as the local one
ADD /path/to/mambaforge /exact/same/path/to/mambaforge
# Create a customized .bashrc (contains 'export PATH' to add mamba path and 'export PYTHONPATH' to add my_modules path)
ADD Dockerfile_bashrc /root/.bashrc
Then, I build the container with:
podman build -t python_app .
And run it with:
podman run -i -t -v /path/to/input/data:/mnt/input -v /path/to/output/data:/mnt/output python_app /bin/bash
In the Dockerfile, note that I add the whole mambaforge folder (it is like miniconda). Is it possible to add only the virtual environment? I found I needed the whole mambaforge tree because I have to activate the virtual environment with mamba/conda activate my_env, which I do in a .bashrc (with the conda initialization) that I put in /root/.bashrc. In this file I also do export PYTHONPATH="/my_modules:$PYTHONPATH".
I'd also like to add the following line to my Dockerfile to automatically execute the Python application when running the container.
CMD ["python", "/path/to/my_python_app/my_python_app.py"]
However, this doesn't work because it seems the container needs to be run interactively in order to load the .bashrc first.
All of this is a kludge, and I'd like to know if there is a simpler and better way to do it.
Many thanks for your help!
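One way to avoid copying the whole mambaforge tree is to let the base image provide Python and install only the environment's packages with pip. A minimal sketch, assuming the dependencies can be exported to a requirements.txt (all paths and names below are placeholders, not the asker's actual layout):

```dockerfile
FROM python:3

# Application and in-house modules
COPY my_python_app /opt/my_python_app
COPY my_modules /opt/my_modules

# Make the modules importable without relying on a .bashrc
ENV PYTHONPATH=/opt/my_modules

# Install the dependencies instead of shipping a conda installation
RUN pip install --no-cache-dir -r /opt/my_python_app/requirements.txt

# CMD works here because nothing depends on an interactive shell
CMD ["python", "/opt/my_python_app/my_python_app.py"]
```

The mounted input/output folders stay exactly as in the podman run command above; only the interactive /bin/bash and the .bashrc trick become unnecessary.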

Facing the following error while trying to create a Docker Container using a DockerFile -> "error from sender: open .Trash: operation not permitted"

I'm completely new to Docker and I'm trying to create and run a very simple example using instructions defined in a DockerFile.
DockerFile->
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y python3 pip
COPY ./ .
RUN python3 test.py
contents of test.py ->
import pandas as pd
import numpy as np
print('test code')
command being used to create a Docker Container ->
docker build --no-cache . -t intro_to_docker -f abs/path/to/DockerFile
folder structure -> (both files are present at abs/path/to)
abs/path/to:
-DockerFile
-test.py
Error message ->
error from sender: open .Trash: operation not permitted
(using sudo su did not resolve the issue, which I believe is linked to the COPY command)
I'm using a Mac.
any help in solving this will be much appreciated!
The Dockerfile should be inside a folder. Navigate to that folder and then run the docker build command. I was also facing the same issue, and it got resolved when I moved the Dockerfile inside a folder.
Usually the error would look like:
error: failed to solve: failed to read dockerfile: error from sender: open .Trash: operation not permitted
And in my case, it's clearly saying that it is unable to find the Dockerfile.
Also, in your command I see a . after --no-cache; since you also pass -f, I think that's not required.
So better, try navigating to the specified path and then running the build command with a . in place of the -f option; the . tells the build command to use the current folder as its build context.
In your case
cd abs/path/to/
docker build --no-cache -t intro_to_docker .
It seems the system policies are not allowing the application to execute this command. The application "Terminal" might not have approval to access the entire file system.
Enable Full Disk Access for Terminal under "System Preferences > Security & Privacy > Privacy > Full Disk Access".
I had the same error message; my Dockerfile was located in the HOME directory. I moved the Dockerfile to a different location and executed docker build from that new location, and it succeeded.

Running .env files within a docker container

I have been struggling to add env variables into my container for the past 3 hrs :( I have looked through the docker run docs but haven't managed to get it to work.
I have built my image using docker build -t sellers_json_analysis . which works fine.
I then go to run it with: docker run -d --env-file ./env sellers_json_analysis
As per the docs: $ docker run --env-file ./env.list ubuntu bash but I get the following error:
docker: open ./env: no such file or directory.
The .env file is in my root directory.
When running docker run --help I am unable to find anything about env variables; it only provides the following:
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So not sure I am placing things incorrectly. I could add my variables into the dockerfile but I want to keep it as a public repo as it's a project I would like to display.
Your problem is a wrong path: use either .env or ./.env. When you use ./env it means a file named env in the current directory.
docker run -d --env-file .env sellers_json_analysis
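For reference, the file passed to --env-file is plain KEY=VALUE lines; blank lines and # comments are ignored, and docker does not strip quotes around values. A minimal sketch of that parsing in Python (parse_env_file is a hypothetical helper for illustration, not part of docker):

```python
import tempfile

def parse_env_file(path):
    """Parse a docker --env-file style file: one KEY=VALUE per line;
    blank lines and lines starting with '#' are ignored."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            env[key] = value
    return env

# Example: a file equivalent to the .env passed via `docker run --env-file .env`
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# comment\nAPI_KEY=abc123\nDEBUG=1\n")
    path = f.name

print(parse_env_file(path))  # {'API_KEY': 'abc123', 'DEBUG': '1'}
```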

Run ansible-playbook from code/command line connecting to docker container

I am using this:
Specify docker containers in /etc/ansible/hosts file
to run my ansible playbooks against a docker container.
But is there any way to avoid having a physical /etc/ansible/hosts file with the information about the container? E.g. run it from code where this information can be configured?
I looked at:
Running ansible-playbook using Python API
but when looking at the answers I see variables pointing to physical files, e.g.:
inventory = Inventory(loader=loader, sources='/home/slotlocker/hosts2')
playbook_path = '/home/slotlocker/ls.yml'
So I'm not really sure why that is better than simply calling it from the command line without using the Python Ansible API.
Maybe install Ansible in the Docker container and then run it locally inside the container. For example, in the Dockerfile include:
# Install Ansible
RUN pip install ansible
COPY ansible /tmp/ansible
# Run Ansible to configure the machine
RUN cd /tmp/ansible && ansible-playbook -i inventory/docker example_playbook.yml
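Alternatively, to address the original question directly: ansible-playbook accepts an inline inventory, so no /etc/ansible/hosts file is needed at all. Combined with the docker connection plugin, the target can be a running container (a sketch; container_name and the playbook name are placeholders):

```shell
# The trailing comma makes Ansible treat the -i argument as an inline
# host list rather than an inventory file path; -c docker uses the
# docker connection plugin (docker exec) instead of SSH.
ansible-playbook -i 'container_name,' -c docker example_playbook.yml
```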
