I have made a simple Python file called index.py with this relevant piece of code:
def main():
    server = HTTPServer(('127.0.0.1', PORT), HomeHandler)
    print("Server online, listening for requests.\n")
    server.serve_forever()

if __name__ == "__main__":
    main()
I then installed the Docker extension and chose to add Docker files. I now have these files:
Dockerfile
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 8000 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "index.py"]
And docker-compose.debug.yml:
version: '3.4'
services:
  serverlessproject:
    image: serverlessproject
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --wait-for-client --listen localhost:8000 index.py"]
    ports:
      - 8000:8000
As well as two files in the .vscode folder:
launch.json:
{
    "configurations": [
        {
            "name": "Docker: Python - General",
            "type": "docker",
            "request": "launch",
            "preLaunchTask": "docker-run: debug",
            "python": {
                "pathMappings": [
                    {
                        "localRoot": "${workspaceFolder}",
                        "remoteRoot": "/app"
                    }
                ],
                "projectType": "general"
            }
        }
    ]
}
And tasks.json:
{
    "version": "2.0.0",
    "tasks": [
        {
            "type": "docker-build",
            "label": "docker-build",
            "platform": "python",
            "dockerBuild": {
                "tag": "serverlessproject:latest",
                "dockerfile": "${workspaceFolder}/Dockerfile",
                "context": "${workspaceFolder}",
                "pull": true
            }
        },
        {
            "type": "docker-run",
            "label": "docker-run: debug",
            "dependsOn": [
                "docker-build"
            ],
            "python": {
                "file": "index.py"
            }
        }
    ]
}
When debugging index.py directly, it prints and starts serving as expected. However, the Docker container is not reachable at 127.0.0.1:8000, and inspecting the terminal shows that the last command run looks like:
docker exec -d projectname-dev python3 /debugpy/launcher 172.17.0.1:43423 -- index.py
I can't find where this IP and port are actually defined - it's definitely not using the port I exposed in the Dockerfile or the port ranges defined in docker-compose.yml or docker-compose.debug.yml. I'm certain this is just a simple line of configuration I can add to one of the above files, but I'm having trouble finding documentation for this specific case - most docs assume I'm using Django, for example.
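My best guess so far is that the docker-run task never publishes port 8000 explicitly. The Docker extension's tasks reference describes a dockerRun.ports option, so I suspect the docker-run: debug task needs something like this (untested, and the dockerRun section is my guess rather than part of the generated file):
{
    "type": "docker-run",
    "label": "docker-run: debug",
    "dependsOn": [
        "docker-build"
    ],
    "dockerRun": {
        "ports": [
            {
                "containerPort": 8000,
                "hostPort": 8000
            }
        ]
    },
    "python": {
        "file": "index.py"
    }
}
I also realise that HTTPServer binds to 127.0.0.1 inside the container, which would keep the mapped port unreachable from the host even once the binding is right, so I may need to bind 0.0.0.0 there as well.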
Related
I am trying to add two tasks to my launch configuration to automatically build, run, and remove a Docker environment for my debug session.
So far I have always been able to debug my code, but I previously had to start the Docker environment manually with a separate script.
Here is my launch.json:
{
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "connect": {
                "host": "127.0.0.1",
                "port": 8889
            },
            "preLaunchTask": "docker-compose up",
            "postDebugTask": "docker-compose down",
            "pathMappings": [
                {
                    "localRoot": "<host_path_to_app>",
                    "remoteRoot": "/app/"
                }
            ]
        }
    ]
}
my tasks.json:
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "docker-compose up",
            "type": "docker-compose",
            "dockerCompose": {
                "up": {
                    "detached": true,
                    "build": true
                },
                "files": [
                    "./docker-compose.yaml"
                ]
            }
        },
        {
            "label": "docker-compose down",
            "type": "docker-compose",
            "dockerCompose": {
                "down": {},
                "files": [
                    "./docker-compose.yaml"
                ]
            }
        }
    ]
}
my Dockerfile:
FROM python:3.10.7-slim-bullseye as base
RUN pip install debugpy
COPY ./app /app
WORKDIR /app
FROM base as debug
# CMD ["python", "-m", "debugpy", "--listen", "0.0.0.0:8889", "--wait-for-client", "main.py"]
CMD ["python", "-m", "debugpy", "--listen", "0.0.0.0:8889", "main.py"]
FROM base as prod
CMD ["python", "main.py"]
and finally my docker-compose file:
version: '3'
services:
  scipy-notebook:
    ports:
      - "8889:8889"
    volumes:
      - "<local_path_to_app>:/app/"
      - "<path_to_local_storage>:/mnt/permanent_storage/"
    environment:
      - 'PMT_STG_PATH=/mnt/permanent_storage/'
      - PYTHONUNBUFFERED=1
    build:
      context: .
      dockerfile: Dockerfile
      target: debug
    image: test_image:beta
The Python application itself is irrelevant, but I have verified the following:
1. The Docker image can be built and run normally.
2. I can debug the application from Visual Studio Code if I first run the debug image manually, using the --wait-for-client flag for debugpy within the Docker CMD; i.e. I can set breakpoints in my local mapping and debug normally.
3. The docker-compose down and docker-compose up tasks seem to work properly.
4. If I remove the --wait-for-client flag from the debugpy CMD, then the Docker image is built, exposes the correct port, runs with any command within my app (e.g. writing a timestamped file to my local storage), and is torn down after the application is done. No breakpoint is hit at any point before the container is torn down.
5. With the procedure from point 4, but with the --wait-for-client flag on, the image is built, but the up process stops before the debugger manages to attach.
What can I do to make it work? Is there anything conceptually wrong? In the documentation I could mostly find procedures for debugging frameworks like Flask or Django, which are not relevant to my case.
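One workaround I am considering, though I am unsure whether it is conceptually right: move the wait into the application itself, gated behind an environment variable, so the container starts immediately and only blocks when debugging is requested. A rough sketch (the DEBUG variable is my own convention, and it assumes debugpy's listen/wait_for_client API):
import os

# Hypothetical gate: only block for the debugger when DEBUG=1 is set via docker-compose
if os.environ.get("DEBUG") == "1":
    import debugpy
    debugpy.listen(("0.0.0.0", 8889))  # same port the compose file publishes
    debugpy.wait_for_client()          # block here until VS Code attaches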
I am unfamiliar with VS Code.
While trying to run a task (Terminal --> Run Task... --> Dev All --> Continue without scanning the task output) associated with a given code workspace (module.code-workspace), I ran into the following error, even though uvicorn was installed on my computer:
/usr/bin/python: No module named uvicorn
I fixed this issue by setting the PATH directly in the code workspace. For instance, I went from:
"command": "cd ../myapi/ python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx"
to:
"command": "cd ../myapi/ && export PATH='/home/sheldon/anaconda3/bin:$PATH' && python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx"
While this worked, it seems like a clunky fix.
Is it a good practice to specify the path in the code workspace?
My understanding is that the launch.json file is meant to help debug the code. I tried creating such a file instead of fiddling with the workspace but still came across the same error.
In any case, any input on how to set the path in VS code will be appreciated!
You can modify the tasks like this:
"tasks": [
{
"label": "Project Name",
"type": "shell",
"command": "appc ti project id -o text --no-banner",
"options": {
"env": {
"PATH": "<mypath>:${env:PATH}"
}
}
}
]
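Applied to the uvicorn task from the question, that would look roughly like this (untested; the label, paths, and host/port placeholders are taken from the question as posted):
{
    "label": "Dev All",
    "type": "shell",
    "command": "cd ../myapi && python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx",
    "options": {
        "env": {
            "PATH": "/home/sheldon/anaconda3/bin:${env:PATH}"
        }
    }
}
This keeps the PATH change scoped to the one task instead of exporting it inside the shell command.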
I'm using this workflow to debug a generic Python application inside a Docker container with VS Code.
If I introduce an error into my main.py, such as importing a package that doesn't exist, and press F5 to debug in VS Code, then after building the Docker image it fails silently and doesn't provide any useful error message.
e.g.:
Executing task: docker-run: debug <
docker run -dt -P --name "projectname-dev" --label "com.microsoft.created-by=visual-studio-code" -v "c:\Users\tim\.vscode\extensions\ms-python.python-2021.10.1317843341\pythonFiles\lib\python\debugpy:/debugpy:ro" --entrypoint "python3" "projectname:latest" <
add88efdff111ae904a38f3d52a52d8191a95f1d53c931a032326ed2958218b3
Terminal will be reused by tasks, press any key to close it.
If I remove the error, I have working code, and by running it manually
docker run projectname
I can see that the code works.
However, it still fails to debug in VS Code, failing silently. A breakpoint set on the first line is never reached. There is no message produced at all by VS Code.
How can I see the error message in this case?
docker-compose.yml:
version: '3.4'
services:
  projectname:
    image: projectname
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --wait-for-client --listen 0.0.0.0:5678 main.py"]
    ports:
      - 5678:5678
docker-compose.debug.yml:
version: '3.4'
services:
  projectname:
    image: projectname
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --wait-for-client --listen 0.0.0.0:5678 main.py"]
    ports:
      - 5678:5678
main.py (Minimum viable):
import time
import requests
import json
# import package_that_doesnt_exist

time_look_back = 60 * 15  # last 15 min
time_to_fetch = int(time.time() - time_look_back)
print(time_to_fetch)
Dockerfile:
FROM python:3.9.7-alpine3.14
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
RUN apk add postgresql-dev
RUN apk add musl-dev
RUN apk add gcc
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "main.py"]
I tried this approach and hope it's suitable for you; if it's not, I will be happy to fix it.
launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach (Remote Debug)",
            "type": "python",
            "request": "attach",
            "port": 5678,
            "host": "localhost",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}",
                    "remoteRoot": "/app"
                }
            ],
            "justMyCode": false,
            "subProcess": true,
            "redirectOutput": true
        }
    ]
}
docker-compose.yml
(I usually use ptvsd for debugging inside containers)
version: '3.4'
services:
  projectname:
    environment:
      - DEBUG=1
    image: projectname
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install ptvsd && python main.py"]
    ports:
      - 5678:5678
In the Dockerfile, I exposed port 5678 before the CMD (only the relevant part is shown):
USER appuser
EXPOSE 5678
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "main.py"]
main.py
import time
import requests
import json
import os

if bool(int(os.environ.get('DEBUG', 0))):
    import ptvsd
    ptvsd.enable_attach(address=('0.0.0.0', 5678))
    print("--- PTVS ATTACHED, press F5 on VSCode to start Debugging")
    ptvsd.wait_for_attach()

# import package_that_doesnt_exist

time_look_back = 60 * 15  # last 15 min
time_to_fetch = int(time.time() - time_look_back)
print(time_to_fetch)
Now, launching the compose up in three scenarios (the output screenshots from the original post are omitted here):
- with the error package uncommented and env DEBUG=0
- with the error package commented out and env DEBUG=0
- with the error package uncommented and env DEBUG=1, then using F5 to start debugging
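As a side note: if you only need to surface the original traceback, the detached container that the debug task starts still captures stdout, so you can read it with docker logs, using the container name from the docker run command in your output:
docker logs projectname-dev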
I'm trying to mount a volume into a Docker container, but I'm having some problems with it. I have a simple Python script in a Docker container that creates a file called links.json, and I would like to access this file from the host filesystem.
This is my Dockerfile:
FROM python:3.6-slim
COPY . /srv/app
WORKDIR /srv/app
RUN pip install -r requirements.txt --src /usr/local/src
CMD [ "python", "main.py" ]
I created the volume with:
docker volume create my-data
And I'm running the container with this command:
docker run --mount source=my-data,target=/srv/app 3743b8d3b043
I've tried this on macOS.
When I ran docker volume inspect my-data, I got this result:
[
    {
        "CreatedAt": "2019-08-15T08:30:48Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data/_data",
        "Name": "my-data",
        "Options": {},
        "Scope": "local"
    }
]
But all the directories like /var/lib/docker/volumes, as well as the directories holding this code, are empty. Do you have any idea where the problem is?
Thanks a lot!
You are overwriting all the data in /srv/app that you added during the build process. You may want to change your mount to use a target other than /srv/app. (Note also that on macOS, the Mountpoint reported by docker volume inspect lives inside the Docker Desktop VM, not on your Mac's filesystem, which is why /var/lib/docker/volumes looks empty to you.)
Update
Start your container using:
docker run -v /full/path/folder:/srv/app/data IMAGE
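As a minimal sketch of the write side (assuming main.py can target the mounted directory; the JSON payload here is only illustrative), have the script write links.json under /srv/app/data so it lands in the bind mount:
import json
import os

out_dir = "/srv/app/data"  # the mount target from the docker run command above
os.makedirs(out_dir, exist_ok=True)

with open(os.path.join(out_dir, "links.json"), "w") as f:
    json.dump({"links": []}, f)  # placeholder payload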
Trying to run collectstatic on deployment, but running into the following error:
pipeline.exceptions.CompressorError: /usr/bin/env: yuglify: No such file or directory
When I run collectstatic manually, everything works as expected:
Post-processed 'stylesheets/omnibase-v1.css' as 'stylesheets/omnibase-v1.css'
Post-processed 'js/omnijs-v1.js' as 'js/omnijs-v1.js'
I've installed Yuglify globally. If I run 'heroku run yuglify', the interface pops up and runs as expected. I'm only running into an issue with deployment. I'm using the multibuildpack, with NodeJS and Python. Any help?
My package.json, just in case:
{
    "author": "One Who Sighs",
    "name": "sadasd",
    "description": "sadasd Dependencies",
    "version": "0.0.0",
    "homepage": "http://sxaxsaca.herokuapp.com/",
    "repository": {
        "url": "https://github.com/heroku/heroku-buildpack-nodejs"
    },
    "dependencies": {
        "yuglify": "~0.1.4"
    },
    "engines": {
        "node": "0.10.x"
    }
}
I should maybe mention that yuglify is not in my requirements.txt, just in my package.json.
I ran into the same problem and ended up using a custom buildpack such as this one, writing bash scripts to install node and yuglify:
https://github.com/heroku/heroku-buildpack-python
After setting the buildpack, I created a few bash scripts to install node and yuglify. The buildpack has hooks to call these post-compile scripts. Here's a good example of how to do this, which I followed:
https://github.com/nigma/heroku-django-cookbook
These scripts are placed under bin in your root folder.
In the post_compile script, I added a script to install yuglify.
post_compile script
if [ -f bin/install_nodejs ]; then
    echo "-----> Running install_nodejs"
    chmod +x bin/install_nodejs
    bin/install_nodejs

    if [ -f bin/install_yuglify ]; then
        echo "-----> Running install_yuglify"
        chmod +x bin/install_yuglify
        bin/install_yuglify
    fi
fi
install_yuglify script
#!/usr/bin/env bash
set -eo pipefail
npm install -g yuglify
If that doesn't work, you can have a look at this post:
Yuglify compressor can't find binary from package installed through npm