python locust pass custom arguments to workers

I need to run Locust in distributed mode. I'd like to use custom arguments, and I need to pass a specific value of the argument to each worker.
Here is sample python code:
"""
Locust test runner
"""
from locust import HttpUser, task, events, constant_pacing, tag


@events.init_command_line_parser.add_listener
def add_custom_parameters(parser):
    """Set arguments which can be passed also via web ui"""
    parser.add_argument(
        "--property",
        type=str,
        env_var="PROPERTY",
        default="",
        help="set name or id",
    )


class AwesomeUser(HttpUser):
    """
    One AwesomeUser class to rule them all...
    """
    host = "EMPTY"
    wait_time = constant_pacing(1)

    def on_start(self):
        """
        On start procedure.
        """
        print(f"HERE: {self.environment.parsed_options.property}")

    @task(10)
    @tag("test_it")
    def test_it(self):
        """
        Test if custom parameters can be used in that way.
        """
        print(f"property: {self.environment.parsed_options.property}")


if __name__ == "__main__":
    AwesomeUser.tasks = [AwesomeUser.test_it]
I'd like to use docker-compose.yaml; I have made many attempts, but it looks like I cannot manage it. Sample code that is not working:
version: '3'
services:
  master:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    ports:
      - "8089:8089"
    command: -f /home/locust/tests/load_test.py --master -u 3 -r 1
  worker:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    command: -f /home/locust/tests/load_test.py --worker --master-host master --property "SOSN_1"
  worker2:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    command: -f /home/locust/tests/load_test.py --worker --master-host master --property "SOSN_2"
  worker3:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    command: -f /home/locust/tests/load_test.py --worker --master-host master --property "SOSN_3"
There is a workaround: run each worker in its own screen session, each worker as a master (in that case I'm able to run many locust scripts in parallel):
killall screen
source venv/bin/activate
for i in {1..3}; do
    sleep 2
    echo create worker screen worker_$i
    screen -dmS "worker_$i" locust -f tests/load_test.py --property "SOSN_$i" -t 10m --headless
done
However, I hope that it can be done via docker-compose. Ideally I would like to use a command like docker-compose up --scale worker=3 and, as a result, Locust would run in distributed mode with 3 workers, each using a different value of my custom argument.
Is it possible?
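One direction that might work, although it is not from the original post and is untested here: since the custom option is registered with env_var="PROPERTY", each worker service could set that environment variable instead of passing --property on the command line, and the locustfile could additionally fall back to os.environ.get("PROPERTY") in on_start to sidestep any option forwarding from the master. A minimal sketch of the compose services under those assumptions:

services:
  master:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    ports:
      - "8089:8089"
    command: -f /home/locust/tests/load_test.py --master -u 3 -r 1
  worker1:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    environment:
      - PROPERTY=SOSN_1   # picked up via env_var="PROPERTY" (or read directly with os.environ)
    command: -f /home/locust/tests/load_test.py --worker --master-host master
  worker2:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    environment:
      - PROPERTY=SOSN_2
    command: -f /home/locust/tests/load_test.py --worker --master-host master

Note that docker-compose up --scale worker=3 would give every replica the same environment, so distinct per-worker values still need either separate services as above or logic in the locustfile that derives the value from something unique to the container (its hostname, for example).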

Related

Need to execute a curl statement in OCP Cron

I am very new to working on OCP. I have a task to schedule a curl statement via crontab, but I'm unable to figure out where to pass the curl statement.
I'm not sure how to even start. I looked up some examples but do not see anything that matches my requirement.
OCP is based on Kubernetes. In Kubernetes you have the CronJob resource, which seems to be what you're looking for and allows you to run a job on a specific schedule.
As you need to use curl, you can use the curlimages/curl image, which has the curl binary included:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-curl-job
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: curl-job
              image: curlimages/curl
              imagePullPolicy: IfNotPresent
              args:
                - "http://your.url.you.want.to.curl"
          restartPolicy: Never
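To try it out, the manifest can be saved to a file and applied with the standard CLI; oc is shown here since this is OCP, but kubectl takes the same arguments (the file name is just an example):

oc apply -f curl-cronjob.yaml    # create the CronJob
oc get cronjobs                  # confirm the schedule is registered
oc get jobs                      # with this schedule, a new job appears every minute
oc logs job/<job-name>           # inspect the curl output of a finished job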

Failed to execute script docker-compose error

I am having trouble getting my docker-compose.yml file to run. I am running the command docker-compose -f docker-compose.yml and am getting the error "TypeError: You must specify a directory to build in path
[7264] Failed to execute script docker-compose"
The folder I have this in is called Labsetup, and inside it I have docker-compose.yml, DockerClient.yml, DockerServer.yml, and
a folder called volumes that has Server.py and Client.py.
Up to this point I have edited my docker-compose.yml file and tried running "docker-compose -f docker-compose.yml up". Below is my docker-compose.yml:
version: "3"
services:
# new code to execute python scripts in /volumes
server:
image: 85345aa61801 # from DockerServer.yml
container_name: server
build: DockerServer.yml
command: python3 ./volumes/Server.py
ports:
- 8080:8080
client:
image: 700c22b0d9d6 # from DockerClient.yml
container_name: client
build: DockerClient.yml
command: python3 ./volumes/Client.py
network_mode: host
depends_on:
- server
# below is original code for containers
malicious-router:
image: handsonsecurity/seed-ubuntu:large
container_name: malicious-router-10.9.0.111
tty: true
cap_add:
- ALL
sysctls:
- net.ipv4.ip_forward=1
- net.ipv4.conf.all.send_redirects=0
- net.ipv4.conf.default.send_redirects=0
- net.ipv4.conf.eth0.send_redirects=0
privileged: true
volumes:
- ./volumes:/volumes
networks:
net-10.9.0.0:
ipv4_address: 10.9.0.111
command: bash -c "
ip route add 192.168.60.0/24 via 10.9.0.11 &&
tail -f /dev/null "
HostB1:
image: handsonsecurity/seed-ubuntu:large
container_name: host-192.168.60.5
tty: true
cap_add:
- ALL
volumes:
- ./volumes:/volumes
networks:
net-192.168.60.0:
ipv4_address: 192.168.60.5
command: bash -c "python3 Server.py"
HostB2:
image: handsonsecurity/seed-ubuntu:large
container_name: host-192.168.60.6
tty: true
cap_add:
- ALL
volumes:
- ./volumes:/volumes
networks:
net-192.168.60.0:
ipv4_address: 192.168.60.6
command: bash -c "python3 Client.py"
HostB3:
image: handsonsecurity/seed-ubuntu:large
container_name: host-192.168.60.7
tty: true
cap_add:
- ALL
volumes:
- ./volumes:/volumes
networks:
net-192.168.60.0:
ipv4_address: 192.168.60.7
command: bash -c "python3 Client.py 8080"
Router:
image: handsonsecurity/seed-ubuntu:large
container_name: router
tty: true
cap_add:
- ALL
sysctls:
- net.ipv4.ip_forward=1
volumes:
- ./volumes:/volumes
networks:
net-10.9.0.0:
ipv4_address: 10.9.0.11
net-192.168.60.0:
ipv4_address: 192.168.60.11
command: bash -c "
ip route del default &&
ip route add default via 10.9.0.1 &&
tail -f /dev/null "
networks:
net-192.168.60.0:
name: net-192.168.60.0
ipam:
config:
- subnet: 192.168.60.0/24
net-10.9.0.0:
name: net-10.9.0.0
ipam:
config:
- subnet: 10.9.0.0/24
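For what it's worth, the "You must specify a directory to build in path" error points at the build: keys above: Compose expects build: to name a directory (the build context), while DockerServer.yml and DockerClient.yml are files. A sketch of the two services using the current directory as context and naming the Dockerfile explicitly (paths assumed to match the layout described above):

  server:
    build:
      context: .                     # directory containing DockerServer.yml and volumes/
      dockerfile: DockerServer.yml
    container_name: server
    command: python3 ./volumes/Server.py
    ports:
      - 8080:8080
  client:
    build:
      context: .
      dockerfile: DockerClient.yml
    container_name: client
    command: python3 ./volumes/Client.py
    depends_on:
      - server

With build: present, the image: lines pointing at raw image IDs can be dropped; Compose builds and uses the images itself.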
My DockerServer.yml
# Dockerfile.
# FROM ubuntu:20.04 # FROM python:latest
# ... most recent "image" created since DockerServer build = f48ea80eae5a <-- needed in FROM
# after running command: docker build -f DockerServer.yml .
# ... it output this upon completion ...
# ---> Running in 533f2cbd3135
# Removing intermediate container 533f2cbd3135 ---> 85345aa61801 Successfully built 85345aa61801
# FROM requires this syntax:
# FROM [--platform=<platorm>] <image> [AS <name>]
FROM python3:3.8.5 85345aa61801 AS serverImg
ADD Server.py /volumes/
WORKDIR /volumes/
My DockerClient.yml
# Dockerfile.
# FROM ubuntu:20.04 server # python:latest #FROM ubuntu:latest ?
# step1/3 image --> f48ea80eae5a ... Successfully built 700c22b0d9d6
# image needed for 2nd arg
# after running command: docker build -f DockerClient.yml
# ... it output this upon completion ...
# Removing intermediate container 29037f3d23fb ---> 700c22b0d9d6 Successfully built 700c22b0d9d6
# FROM requires this syntax:
# FROM [--platform=<platorm>] <image> [AS <name>]
FROM python3:3.8.5 700c22b0d9d6 AS clientImg
ADD Client.py /volumes/
WORKDIR /volumes/
I am creating a project for a networking class that lets users chat in a terminal. I have two Python scripts: one called Server.py that starts the server and tells who is in the chat room, and another called Client.py that asks the user to put in a name so they can chat. I am running this all on a VirtualBox Ubuntu 20.04.
Code for Server.py
import socket
import _thread
import threading
from datetime import datetime


class Server():
    def __init__(self):
        # For remembering users
        self.users_table = {}
        # Create a TCP/IP socket and bind the socket to the port
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server_address = ('localhost', 8080)
        self.socket.bind(self.server_address)
        self.socket.setblocking(1)
        self.socket.listen(10)
        print('Starting up on {} port {}'.format(*self.server_address))
        self._wait_for_new_connections()

    def _wait_for_new_connections(self):
        while True:
            connection, _ = self.socket.accept()
            _thread.start_new_thread(self._on_new_client, (connection,))

    def _on_new_client(self, connection):
        try:
            # Declare the client's name
            client_name = connection.recv(64).decode('utf-8')
            self.users_table[connection] = client_name
            print(f'{self._get_current_time()} {client_name} joined the room !!')
            while True:
                data = connection.recv(64).decode('utf-8')
                if data != '':
                    self.multicast(data, owner=connection)
                else:
                    return
        except:
            print(f'{self._get_current_time()} {client_name} left the room !!')
            self.users_table.pop(connection)
            connection.close()

    def _get_current_time(self):
        return datetime.now().strftime("%H:%M:%S")

    def multicast(self, message, owner=None):
        for conn in self.users_table:
            data = f'{self._get_current_time()} {self.users_table[owner]}: {message}'
            conn.sendall(bytes(data, encoding='utf-8'))


if __name__ == "__main__":
    Server()
Code for Client.py
import sys
import socket
import threading


class Client():
    def __init__(self, client_name):
        client_name = input("Choose Your Nickname:")
        # Create a TCP/IP socket and connect the socket to the port
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server_address = ('localhost', 8080)
        self.socket.connect(self.server_address)
        self.socket.setblocking(1)
        print("{} is connected to the server, you may now chat.".format(client_name))
        self.client_name = client_name
        send = threading.Thread(target=self._client_send)
        send.start()
        receive = threading.Thread(target=self._client_receive)
        receive.start()

    def _client_send(self):
        self.socket.send(bytes(self.client_name, encoding='utf-8'))
        while True:
            try:
                c = input()
                sys.stdout.write("\x1b[1A\x1b[2K")  # Delete previous line
                self.socket.send(bytes(c, encoding='utf-8'))
            except:
                self.socket.close()
                return

    def _client_receive(self):
        while True:
            try:
                print(self.socket.recv(1024).decode("utf-8"))
            except:
                self.socket.close()
                return


if __name__ == "__main__":
    client_name = sys.argv[1]
    Client(client_name)
Edit: I found an example online, followed it, and changed a few things:
[11/26/21]seed#VM:~/.../Labsetup$ docker build -f DockerServer.yml .
Sending build context to Docker daemon 33.28kB
Step 1/3 : FROM python:latest
latest: Pulling from library/python
647acf3d48c2: Pull complete
b02967ef0034: Pull complete
e1ad2231829e: Pull complete
5576ce26bf1d: Pull complete
a66b7f31b095: Pull complete
05189b5b2762: Pull complete
af08e8fda0d6: Pull complete
287d56f7527b: Pull complete
dc0580965fb6: Pull complete
Digest: sha256:f44726de10d15558e465238b02966a8f83971fd85a4c4b95c263704e6a6012e9
Status: Downloaded newer image for python:latest
---> f48ea80eae5a
Step 2/3 : ADD Server.py /volumes/
---> df60dab46969
Step 3/3 : WORKDIR /volumes/
---> Running in 533f2cbd3135
Removing intermediate container 533f2cbd3135
---> 85345aa61801
Successfully built 85345aa61801
[11/26/21]seed#VM:~/.../Labsetup$ docker build -f DockerClient.yml .
Sending build context to Docker daemon 33.28kB
Step 1/3 : FROM python:latest
---> f48ea80eae5a
Step 2/3 : ADD Client.py /volumes/
---> 76af351c1c7a
Step 3/3 : WORKDIR /volumes/
---> Running in 29037f3d23fb
Removing intermediate container 29037f3d23fb
---> 700c22b0d9d6
Successfully built 700c22b0d9d6

Celery task is always PENDING inside Docker container (Flask + Celery + RabbitMQ + Docker)

I'm creating a basic project to test Flask + Celery + RabbitMQ + Docker.
For some reason that I do not know, when I call the Celery task, it seems to reach RabbitMQ, but it always stays in the PENDING state; it never changes to another state. I tried to use task.get(), but the code freezes. Example:
The celery worker (e.g. worker_a.py) is something like this:
from celery import Celery

# Initialize Celery
celery = Celery('worker_a',
                broker='amqp://guest:guest@tfcd_rabbit:5672//',
                backend='rpc://')

[...]

@celery.task()
def add_nums(a, b):
    return a + b
While docker-compose.yml is something like this:
[...]
  tfcd_rabbit:
    container_name: tfcd_rabbit
    hostname: tfcd_rabbit
    image: rabbitmq:3.8.11-management
    environment:
      - RABBITMQ_ERLANG_COOKIE=test
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - tfcd
  tfcd_worker_a:
    container_name: tfcd_worker_a
    hostname: tfcd_worker_1
    image: test_flask_celery_docker
    entrypoint: celery
    command: -A worker_a worker -l INFO -Q worker_a
    volumes:
      - .:/app
    links:
      - tfcd_rabbit
    depends_on:
      - tfcd_rabbit
    networks:
      - tfcd
[...]
The repository with all the files and instructions to run it can be found here.
Would anyone know what might be going on?
Thank you in advance.
After a while, a friend of mine discovered the problem:
The queue name was missing when creating the task, so Celery was using the default queue name "celery" instead of the correct one.
The final code is this:
[...]

@celery.task(queue='worker_a')
def add_nums(a, b):
    return a + b
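For completeness, the caller has to end up on the same queue the worker consumes from (the compose file already starts the worker with -Q worker_a). With queue='worker_a' declared on the task, a plain call is routed there by default; a hypothetical call site might look like this:

from worker_a import add_nums

# Routed to the worker_a queue via the task's queue option; an explicit
# queue= passed to apply_async would override it.
result = add_nums.apply_async(args=(2, 3))
print(result.get(timeout=10))   # returns 5 once the worker has processed the task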

How to get celery to prioritize tasks?

I am trying to prioritize certain tasks using celery (v5.0.0) but it seems I am missing something fundamental. According to the documentation, task priority should be available for RabbitMQ. However, whenever I try to add the relevant lines to the configuration file, task execution stops working. To illustrate my problem, I will first outline my setup without the prioritization.
My setup consists of a configuration file celeryconfig.py
from kombu import Exchange, Queue

broker_url = 'amqp://user:password@some-rabbit//'
result_backend = 'rpc://'
Then I have a file for task definition, prio.py, that contains three tasks intended for execution timing:
import time
from celery import Celery

app = Celery('tasks')
app.config_from_object('celeryconfig')


@app.task
def get_time():
    t0 = time.time()
    return t0


@app.task
def wait(t0):
    time.sleep(0.2)
    return t0


@app.task
def time_delta(t0):
    time.sleep(0.2)
    t1 = time.time()
    return t1 - t0
And finally, the code to start the tasks (in start_tasks.py). Here, I have created two identical paths (getting time t0, wait two times, get time t1, and calculate t1-t0) that I later want to prioritize in order to let the first one finish before the second starts.
from celery import chain, group, Celery
from prio import get_time, time_delta, wait

tasks = chain(get_time.s(),
              group([
                  chain(wait.s(), wait.s(), time_delta.s()),
                  chain(wait.s(), wait.s(), time_delta.s())
              ]))

res = tasks.apply_async()
print(res.get())
This will output
>>> [1.0178837776184082, 1.2237505912780762]
I interpret the results such that the first and the second path tasks run alternately, which results in the small difference of execution times. In the end, I would like to achieve a result like [0.6, 1.2].
Now, to introduce prioritization, I have modified the celeryconfig.py according to the documentation:
from kombu import Exchange, Queue

broker_url = 'amqp://user:password@some-rabbit//'
result_backend = 'rpc://'

task_queues = [
    Queue('tasks', Exchange('tasks'), routing_key='tasks', queue_arguments={'x-max-priority': 10}),
]
task_queue_max_priority = 10
task_default_priority = 5
With this change, however, the tasks do not seem to be executed at all, and I have no clue what to change in order to make it work. I have already tried to pass the queue name to apply_async (res = tasks.apply_async(queue='tasks')) but that did not solve the problem. Any hints are welcome!
Edit:
Since I still cannot get it to work, I tried to make the setup cleaner and clearer using docker containers.
I created a docker image via the following minimalistic Dockerfile:
FROM python:latest
RUN pip install --quiet celery
The image is then built via docker build --tag minimalcelery ..
I start RabbitMQ with:
docker run --network celery_network --interactive --tty --hostname my-rabbit --name some-rabbit --env RABBITMQ_DEFAULT_USER=user --env RABBITMQ_DEFAULT_PASS=password --rm rabbitmq
The client is started with:
docker run --name "client" --interactive --tty --rm --volume %cd%:/home/work --hostname localhost --network celery_network --env PYTHONPATH=/home/work/ minimalcelery /bin/bash -c "cd /home/work/ && python start_tasks.py"
The worker is started with:
docker run --name "worker" --interactive --tty --rm --volume %cd%:/home/work --hostname localhost --network celery_network --env PYTHONPATH=/home/work/ minimalcelery /bin/bash -c "cd /home/work/ && celery --app=prio worker"
prio.py was modified according to @ItayB's answer:
import time
from celery import Celery

app = Celery('tasks', backend='rpc://', broker='amqp://user:password@some-rabbit//')
app.config_from_object('celeryconfig')


@app.task
def get_time(**kwargs):
    t0 = time.time()
    return t0


@app.task
def wait(t0, **kwargs):
    time.sleep(0.2)
    return t0


@app.task
def time_delta(t0, **kwargs):
    time.sleep(0.2)
    t1 = time.time()
    return t1 - t0
With this setup and the "unprioritized" config, I get a result similar to the one I previously obtained: [0.6443462371826172, 0.6746957302093506].
However, as before, if I use the additional config for queueing/prioritization, I don't get any reply. These are the files I used:
celeryconfig.py:
task_queues = [
    Queue('tasks', Exchange('tasks'), routing_key='tasks', queue_arguments={'x-max-priority': 10}),
]
task_queue_max_priority = 10
task_default_priority = 5
start_tasks.py:
from celery import Celery, chain, group
from prio import get_time, time_delta, wait
tasks = chain(get_time.s(), group([chain(wait.s(), wait.s(), time_delta.s()), chain(wait.s(), wait.s(), time_delta.s())]))
print("about to start the tasks.")
res = tasks.apply_async(queue='tasks', routing_key='tasks', priority=5)
print("waiting...")
print(res.get())
Also, I tried to modify the command by which the worker is started:
docker run --name "worker" --interactive --tty --rm --volume %cd%:/home/work --hostname localhost --network celery_network --env PYTHONPATH=/home/work/
minimalcelery /bin/bash -c "cd /home/work/ && celery --app=prio worker --hostname=worker.tasks#%h --queues=tasks"
I am aware that this setup does not prioritize the "paths" differently. Currently, I just want to get an answer from the worker.
It seems like you're almost there. It's been a while since I did this, but I'll try (based on some snippets I have):
change your celery tasks to support kwargs:
def get_time(**kwargs):
pass the priority when you set the signature:
get_time.s(kwargs={"priority": 3}) # set value below x-max-priority
I'm not sure if it's a must but I've also defined the task as immutable:
get_time.s(immutable=True, kwargs={"priority": 3})
set the worker_prefetch_multiplier to 1. The default is 4, which means that 4 tasks are prefetched together so I think there won't be prioritization between them (AFAIR).
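Putting those points together, a prioritization-friendly config might look roughly like the sketch below. It merges the queue definition from the question with the prefetch setting from the last point; the task_default_queue line is an extra assumption so that plain signatures land on the priority queue without passing queue= everywhere:

from kombu import Exchange, Queue

broker_url = 'amqp://user:password@some-rabbit//'
result_backend = 'rpc://'

task_queues = [
    Queue('tasks', Exchange('tasks'), routing_key='tasks',
          queue_arguments={'x-max-priority': 10}),
]
task_default_queue = 'tasks'        # assumption: route everything to the priority queue
task_queue_max_priority = 10
task_default_priority = 5
worker_prefetch_multiplier = 1      # prefetch one task at a time so priorities can take effect

The worker then needs to consume from that queue, e.g. celery --app=prio worker --queues=tasks, which matches the modified worker command in the question.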
Good luck!

How can I run a docker image in Kubernetes initiated from another and pass arguments

I have two dockerized applications which need to run in Kubernetes.
Here is the scenario which needs to be achieved.
Docker-1 is a Flask application.
Docker-2 is a Python script which will take input from Docker-1, execute, and needs to write some file to a volume shared with the Docker-1 container.
Here is the flask web-app code.
from flask import Flask, request, Response, jsonify

app = Flask(__name__)


@app.route('/')
def root():
    return "The API is working fine"


@app.route('/run-docker')
def run_docker_2():
    args = "input_combo"
    query = <sql query>
    <initiate the docker run and pass params>
    exit
    # No return message, needs to run as async


if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=8080, threaded=True)
Docker file
FROM ubuntu:latest
MAINTAINER Abhilash KK "abhilash.kk@searshc.com"
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential python-tk
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]
requirements.txt
flask
Python script for the second docker. start_docker.py
import sys

input_combo = sys.argv[1]
query = sys.argv[2]


def function_to_run(input_combination, query):
    # starting the model, finally creating the file
    ...


function_to_run(input_combo, query)
Docker file 2
FROM python
COPY . /script
WORKDIR /script
CMD ["python", "start_docker.py"]
Please help me to connect the docker images, or let me know any other way to achieve this. The basic requirement is to add a message to some queue; that queue is listened to at a time interval and the processes are started in FIFO manner.
Any other approach in a GCP service to initiate an async job that takes input from the client and creates a file accessible from the web-app Python would also help.
First, create a Pod running the "Docker-1" application. Then use the Kubernetes Python client to spawn a second pod with "Docker-2".
You can share a volume between your pods in order to return the data to Docker-1. In my code sample I'm using a hostPath volume, but you need to ensure that both pods are on the same node. I added that code for readability.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: docker1
  labels:
    app: docker1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker1
  template:
    metadata:
      labels:
        app: docker1
    spec:
      containers:
        - name: docker1
          image: abhilash/docker1
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /shared
              name: shared-volume
      volumes:
        - name: shared-volume
          hostPath:
            path: /shared
The code of run_docker_2 handler:
from kubernetes import client, config
...
args = "input_combo"
config.load_incluster_config()
pod = client.V1Pod()
pod.metadata = client.V1ObjectMeta(name="docker2")
container = client.V1Container(name="docker2")
container.image = "abhilash/docker2"
container.args = [args]
volumeMount = client.V1VolumeMount(name="shared", mount_path="/shared")
container.volume_mounts = [volumeMount]
hostpath = client.V1HostPathVolumeSource(path = "/shared")
volume = client.V1Volume(name="shared")
volume.host_path = hostpath
spec = client.V1PodSpec(containers = [container])
spec.volumes = [volume]
pod.spec = spec
v1.create_namespaced_pod(namespace="default", body=pod)
return "OK"
A handler to read the returned results:
@app.route('/read-results')
def run_read():
    with open("/shared/results.data") as file:
        return file.read()
Note that it could be useful to add a watcher to wait for the pod to finish the job and then do some cleanup (delete the pod for instance)
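A rough sketch of such a watcher, using the same kubernetes Python client (the pod name and namespace are assumed to match the example above):

from kubernetes import client, config, watch

config.load_incluster_config()
v1 = client.CoreV1Api()

# Block until the docker2 pod reaches a terminal phase, then clean it up.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default",
                      field_selector="metadata.name=docker2"):
    phase = event["object"].status.phase
    if phase in ("Succeeded", "Failed"):
        w.stop()
        v1.delete_namespaced_pod(name="docker2", namespace="default")
        break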
From what I can understand you'd want the so-called "sidecar pattern": you can run multiple containers in one pod and share a volume, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /srv/www
          name: www-data
          readOnly: true
    - name: git-monitor
      image: kubernetes/git-monitor
      env:
        - name: GIT_REPO
          value: http://github.com/some/repo.git
      volumeMounts:
        - mountPath: /data
          name: www-data
  volumes:
    - name: www-data
      emptyDir: {}
You could also benefit from getting to know the basics of how Kubernetes work: Kubernetes Basics
