Python multiprocessing: AttributeError: Can't pickle local object

I wrote a ChatOps bot for the collaboration tool Mattermost using this framework. Now I'm trying to write and run integration tests, and I used their examples. If you clone the git repository, you can run the tests yourself. Their docker-compose.yml file will only work on a Linux machine; if you want to reproduce it on a Mac, you'll have to edit the docker-compose.yml to:
version: "3.7"
services:
app:
container_name: "mattermost-bot-test"
build: .
command: ./mm/docker-entry.sh
ports:
- "8065:8065"
extra_hosts:
- "dockerhost:127.0.0.1"
After running docker-compose up -d, Mattermost is available at localhost:8065. I took one simple test from their project and copied it into base-test.py. You can see my source code here. When I start the test by running pytest --capture=no --log-cli-level=DEBUG ., it returns the following error: AttributeError: Can't pickle local object 'start_bot.<locals>.run_bot'. This error also shows up on the same test case in their project. The error happens at line 92 in the utils.py file.
What am I doing wrong here?

I don't know if you've already gone down this road, but I think you might get past the pickling error by making run_bot take the bot it calls bot.run() on as an argument and then passing that to the process.
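For example, here is a minimal sketch (not the actual mmpy_bot code; the run_bot and start_bot names just mirror the traceback) of defining run_bot at module level and passing the bot in through args so that no local function has to be pickled:
import multiprocessing

def run_bot(bot):
    # Module-level functions can be pickled by multiprocessing;
    # a function defined inside start_bot() cannot.
    bot.run()

def start_bot(bot):
    # Pass the bot as an argument instead of capturing it in a closure.
    # Note: with the "spawn" start method (the default on macOS), the bot
    # object itself must also be picklable.
    process = multiprocessing.Process(target=run_bot, args=(bot,))
    process.start()
    return process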

Take a look at the Actions tab on that GitHub repository. Pytest seems to execute correctly there (ignoring the exceptions on the webhook test).
Here is a recent run you can use to compare your environment set-up: https://github.com/attzonko/mmpy_bot/runs/4289644769?check_suite_focus=true

Related

Terraform not found in Bitbucket

So I am trying to create a pipeline on Bitbucket. On my local computer, I navigate to the folder with cd terraform/environments/dev and run terraform init without an issue. However, when I run the test pipeline on Bitbucket, it stops on the second step because
bash: terraform: command not found
How can I fix this? I believe I need to install Terraform on Bitbucket somehow, but I am not sure how to do so. Do I use Python pip commands? If so, how and why?
image: atlassian/default-image:2

pipelines:
  branches:
    test:
      - step:
          name: 'Navigate to Dev'
          script:
            - cd terraform/environments/dev
          condition:
            changesets:
              includePaths:
                - "terraform/modules"
                - "terraform/environments/dev"
      - step:
          name: 'Initialize Terraform'
          script:
            - terraform init
You need the correct image for your build agent. In this situation, the agent basically only needs terraform installed and accessible:
image: hashicorp/terraform
This will fix your issue. You can also, of course, set the image tag to your specific version of Terraform.
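For example, a minimal sketch of the adjusted bitbucket-pipelines.yml (the 1.1.0 tag is only an example; pin whichever version you actually use):
image: hashicorp/terraform:1.1.0

pipelines:
  branches:
    test:
      - step:
          name: 'Initialize Terraform'
          script:
            - cd terraform/environments/dev
            - terraform init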

How to make a Docker container run continuously?

I have a Docker image that is actually a server for a device. It is started from a Python script, and I made a .sh script to run it. However, whenever I run it, it reports that it executed and then it ends (server exited with code 0). The only way I have made it work is via docker-compose: I run it as a detached container, enter the container via /bin/bash, execute the run script (the aforementioned .sh) manually, and then exit the container.
After that everything works as intended, but the issue arises when the server is rebooted. I have to do it manually all over again.
Did anyone else experience anything similar? If so, how can I fix this?
File that starts the server (start.sh):
#!/bin/sh
python source/server/main.pyc &
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
python source/server/main_events_server.pyc &
Dockerfile:
FROM ubuntu:trusty
RUN mkdir -p /home/server
COPY server /home/server/
EXPOSE 8854
CMD [ /home/server/start.sh ]
Docker Compose:
version: "3.9"
services:
server:
tty: yes
image: deviceserver:latest
container_name: server
restart: always
ports:
- "8854:8854"
deploy:
resources:
limits:
memory: 3072M
It's not a problem with docker-compose. Your Docker container should not return (i.e. it should block) even when launched with a simple docker run.
For that, your CMD should run in the foreground.
I think the issue is that your start.sh returns instead of blocking. Have you tried removing the last '&' from your script (I'm not familiar with Python or with what these different processes are)?
In general you should run only one process per container. If you have five separate processes you need to run, you would typically run five separate containers.
The corollaries to this are that the main container command should be a foreground process; but also that you can run multiple containers off of the same image with different commands. In Compose you can override the command: separately for each container. So, for example, you can specify:
version: '3.8'
services:
  main:
    image: deviceserver:latest
    command: ./main.py
  socket:
    image: deviceserver:latest
    command: ./main_socket.py
  et: cetera
If you're trying to copy-and-paste this exact docker-compose.yml file, make sure to set a WORKDIR in the Dockerfile so that the scripts are in the current directory, make sure the scripts are executable (chmod +x in your source repository), and make sure they start with a "shebang" line #!/usr/bin/env python3. You shouldn't need to explicitly say python anywhere.
# use a Python base image, not a bare Ubuntu image
FROM python:3.9
# WORKDIR creates the directory too
WORKDIR /home/server
# no need to duplicate the directory name in the destination
COPY server ./
RUN pip install -r requirements.txt
# optional, does almost nothing
EXPOSE 8854
# valid JSON-array syntax; can be overridden
CMD ["./main.py"]
There are two major issues in the setup you show. The CMD is not a syntactically valid JSON array (the command itself is not "quoted") and so Docker will run it as a shell command; [ is an alias for test(1) and will exit immediately. If you do successfully run the script, the script launches a bunch of background processes and then exits, but since the script is the main container command, that will cause the container to exit as well. Running a set of single-process containers is generally easier to manage and scale than trying to squeeze multiple processes into a single container.
You can add a sleep loop at the end of your start.sh.
#!/bin/sh
python source/server/main.pyc &
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
python source/server/main_events_server.pyc &
while true
do
sleep 1;
done

How to set up GitLab CI for a .NET HTTP server and test it with Python?

I'm working on a project where I'm trying to set up an HTTP server in C#. The responses from the server are tested using the pytest module.
This is what I've done so far:
Define the API using the Swagger editor
Generate base code using the Swagger generator
Write some Python tests that send requests to the server and check whether or not the responses fulfill certain requirements
I now want to set up CI on GitLab before I actually start writing the functions that correspond to the routes I defined earlier. I set up a runner on my local machine (it will later be on a dedicated server) using Docker.
As I am new to CI, there are a few questions I'm struggling with:
As I need both Python and .NET for testing, should I choose .NET as the base image and then install Python, or Python as the base image and then install .NET? Which would be easier? I tried the latter, but it doesn't seem very elegant...
Should I build before I push to the remote repository and include the /bin folder in my repository to execute those files, or should I rather build during CI and therefore not have to push anything but source code?
I know those questions are a little bit wild, but as I am new to CI and also to Docker, I'm looking for advice on how to follow best practices (if there are any).
The base image for a runner is just the default if you don't specify one in your .gitlab-ci.yml file. You can override the runner's default image by using a "pipeline default" image at the top of your .gitlab-ci.yml file (outside of any jobs), or you can specify the image for each job individually.
Using a "pipeline default" image:
image: python:latest

stages:
  - build
...
In this example, all jobs will use the python:latest image unless the job specifies its own image, as in this example:
stages:
  - build
  - test

Build Job:
  stage: build
  image: python:latest
  script:
    - ...
Here, this job overrides the runner's default.
image: python:latest

stages:
  - build
  - db_setup

Build Job:
  stage: build
  script:
    - # run some build steps

Database Setup Job:
  stage: db_setup
  image: mysql:latest
  script:
    - mysql -h my-host.example.com -u my-user -pmy-password -e "create database my-database;"
In this example, we have a "pipeline default" image that the "Build Job" uses since it doesn't specify its own image, but the "Database Setup Job" uses the mysql:latest image.
Here's an example where the runner's default image is ruby:latest
stages:
  - build
  - test

Build Job:
  stage: build
  script:
    - # run some build steps

Test Job:
  stage: test
  image: golang:latest
  script:
    - # run some tests
In this last example, the "Build Job" uses the runner's base image, ruby:latest, but the "Test Job" uses golang:latest.
For your second question, it's up to you, but the convention is to commit only source code and not dependencies or compiled resources. Then again, it's just a convention.
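For example, if you build during CI, a sketch along these lines (the image names, artifact path, and the dotnet/pytest commands are assumptions, not taken from your project) combines the per-job images shown above with a build-then-test pipeline:
stages:
  - build
  - test

Build Job:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - dotnet build -c Release
  artifacts:
    paths:
      - bin/

Test Job:
  stage: test
  image: python:3.10
  script:
    - pip install pytest
    - # start the built server from the bin/ artifacts here, then:
    - pytest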

GitLab Runner: pytest fails but the job shows success

I have searched for this all over the internet and couldn't find an answer.
The output of the job is something like this:
test/test_something.py:25: AssertionError
========================= 1 failed, 64 passed in 2.10s =========================
Job succeeded
my .gitlab-ci.yml file for the test:
run_tests:
  stage: test
  tags:
    - tests
  script:
    - echo "Running tests"
    - ./venv/bin/python -m pytest
I'm using the shell executor.
Has anyone faced this problem before? As I understand it, GitLab CI depends on the exit code of pytest and the job should fail if the exit code is not zero, but in this case pytest should have exit code 1 since a test failed.
It's not something about your gitlab-ci script but rather your pytest script (the script or module you are using to run your tests).
Below I've included an example for you, assuming that you use something like the Flask CLI to manage your tests.
You can use SystemExit to propagate the exit code. If anything other than 0 is returned, it will fail the process. In a nutshell, GitLab jobs succeed only if the exit code that is returned is 0.
pytest.main() only runs the tests and returns the exit code as a value; it doesn't exit the process with it. You can implement this in your code:
Your manage.py (assuming you are using the Flask CLI) will look something like the following:
import pytest
import click
from flask import Flask

app = Flask(__name__)

@app.cli.command("tests")
@click.argument("option", required=False)
def run_test_with_option(option: str = None):
    if option is None:
        # pytest.main() returns the exit code; SystemExit makes the
        # "flask tests" command exit with it.
        raise SystemExit(pytest.main())
Note how the above code raises SystemExit and defines a Flask CLI command named tests. To run your code, you can simply add the following to your gitlab-ci script:
run_tests:
  stage: test
  tags:
    - tests
  variables:
    FLASK_APP: manage.py
  script:
    - echo "Running tests"
    - source ./venv/bin/activate
    - flask tests
The command that runs your tests is flask tests, which raises SystemExit as shown.
FYI: you may not be using the Flask CLI to manage your test script and may simply want to run a plain test script. In that case, this answer might also help.
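For instance, a bare-bones runner script (a sketch; the run_tests.py name is an assumption, while test/ matches the directory shown in your job output) would be:
# run_tests.py
import sys

import pytest

if __name__ == "__main__":
    # pytest.main() returns the exit code (0 = all passed, non-zero = failures);
    # passing it to sys.exit() makes the GitLab job fail when tests fail.
    sys.exit(pytest.main(["test/"]))
The job script would then call ./venv/bin/python run_tests.py instead of invoking pytest directly.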

How to run Python code on Linux via a Docker container with a specific Python version

I have a Linux server on which I want to be able to run some Python scripts. To do so, I created a Docker image of Python (3.6.8) with some specific dependencies for my code.
I am new to the Linux command line and need help writing a command that would run a given Python script based on my Docker image (Python 3.6.8).
My server's directory structure looks like this:
My Docker image is named geomatique_python and it is located in docker_image.
As for the structure of the code itself, I am starting from scratch and am looking for some hints and advice.
Thanks
I'm very much for an "all the things in Docker" approach. Regarding your mention of having specific versions set in stone, the declarative nature of Docker is great for that. You can extend an official Python Docker image with your libraries, then bind-mount your folders into the container at run time. A minimal project might look like:
.
├── app.py
└── Dockerfile
My app.py is a simple requests script:
#!/usr/bin/env python3
import requests

r = requests.get('https://api.github.com')
if r.status_code == 200:
    print("HTTP {}".format(r.status_code))
My Dockerfile contains the runtime dependencies for my app:
FROM python:3.6-slim
RUN python3 -m pip install requests
Note: I'm extending the official python image in this example.
After building the Docker image (e.g. docker build --rm -t so:57697538 .), you can run a container from the image, bind-mounting the directory that contains the scripts into the container, and execute them: docker run --rm -it -v ${PWD}:/src --entrypoint python3 so:57697538 /src/app.py
Admittedly, for Python, virtualenv / virtualenvwrapper can be convenient; however, it is very much Python-only, whereas Docker is language-agnostic.
