Need to execute a curl statement in OCP Cron - python

I am very new to working with OCP. I have a task to schedule a curl statement via crontab, but I'm unable to figure out where to pass the curl statement.
I'm not sure how to even start. I looked up some examples but don't see anything that matches my requirement.

OCP is based on Kubernetes. In Kubernetes you have the CronJob resource, which seems to be what you're looking for: it lets you run a job on a specific schedule.
As you need to use curl, you can use the curlimages/curl image, which has the curl binary included:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-curl-job
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: curl-job
            image: curlimages/curl
            imagePullPolicy: IfNotPresent
            args:
            - "http://your.url.you.want.to.curl"
          restartPolicy: Never
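Since the question mentions Python, the same CronJob could also be created programmatically with the Kubernetes Python client instead of applying the YAML. This is only a sketch: it assumes a recent kubernetes client where CronJob is served from batch/v1 and a reachable kubeconfig (or in-cluster config).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

# Container that runs the curl call, same as in the YAML above
container = client.V1Container(
    name="curl-job",
    image="curlimages/curl",
    args=["http://your.url.you.want.to.curl"],
)

cron_job = client.V1CronJob(
    metadata=client.V1ObjectMeta(name="my-curl-job"),
    spec=client.V1CronJobSpec(
        schedule="* * * * *",
        job_template=client.V1JobTemplateSpec(
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(containers=[container], restart_policy="Never")
                )
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_cron_job(namespace="default", body=cron_job)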

Related

python locust pass custom arguments to workers

I need to run Locust in distributed mode. I'd like to use custom arguments, and each worker needs its own specific value of that argument.
Here is a sample Python script:
"""
Locust test runner
"""
from locust import HttpUser, task, events, constant_pacing, tag


@events.init_command_line_parser.add_listener
def add_custom_parameters(parser):
    """Set arguments which can be passed also via web ui"""
    parser.add_argument(
        "--property",
        type=str,
        env_var="PROPERTY",
        default="",
        help="set name or id",
    )


class AwesomeUser(HttpUser):
    """
    One AwesomeUser class to rule them all...
    """
    host = "EMPTY"
    wait_time = constant_pacing(1)

    def on_start(self):
        """
        On start procedure.
        """
        print(f"HERE: {self.environment.parsed_options.property}")

    @task(10)
    @tag("test_it")
    def test_it(self):
        """
        Test if custom parameters can be used in that way.
        """
        print(f"property: {self.environment.parsed_options.property}")


if __name__ == "__main__":
    AwesomeUser.tasks = [AwesomeUser.test_it]
I'd like to use docker-compose.yaml; I made many attempts, but it looks like I cannot manage it. Sample code that is not working:
version: '3'
services:
  master:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    ports:
      - "8089:8089"
    command: -f /home/locust/tests/load_test.py --master -u 3 -r 1
  worker:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    command: -f /home/locust/tests/load_test.py --worker --master-host master --property "SOSN_1"
  worker2:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    command: -f /home/locust/tests/load_test.py --worker --master-host master --property "SOSN_2"
  worker3:
    build:
      context: .
    volumes:
      - type: bind
        source: "./tests"
        target: "/home/locust/tests"
    command: -f /home/locust/tests/load_test.py --worker --master-host master --property "SOSN_3"
There is a workaround: run each worker in its own screen session, each as its own master (in that case I'm able to run many Locust scripts in parallel):
killall screen
source venv/bin/activate
for i in {1..3}; do
  sleep 2
  echo create worker screen worker_$i
  screen -dmS "worker_$i" locust -f tests/load_test.py --property "SOSN_$i" -t 10m --headless
done
However, I hope it can be done via docker-compose. Ideally I would run a command like docker-compose up --scale worker=3 and, as a result, Locust would run in distributed mode with 3 workers, each using a different value of my custom argument.
Is it possible?
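One possible direction (not from the original thread, just a sketch): keep a single worker service, scale it with docker-compose up --scale worker=3, and let each replica derive its own value inside the locustfile instead of receiving it on the command line. This assumes the container hostname is an acceptable distinguishing suffix; Locust's init event fires on every runner once parsed_options is available.
import socket
from locust import events


@events.init.add_listener
def assign_property(environment, **kwargs):
    # If no --property was passed explicitly, derive a per-replica value from the
    # container hostname, which is unique for every scaled docker-compose replica.
    if environment.parsed_options and not environment.parsed_options.property:
        environment.parsed_options.property = f"SOSN_{socket.gethostname()}"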

Celery+SQS don't receive messages

I'm trying to build a queue system using Celery + SQS.
In my local environment with localstack, the worker doesn't receive any messages; it just doesn't show anything. There is a similar question from some time ago, but my config matches theirs.
Other SQS/SNS operations work fine from other functions, but it isn't working from Celery.
My current setup is like this:
Docker config:
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - SERVICES=sqs,sns
      - HOSTNAME=localstack
      - HOSTNAME_EXTERNAL=localstack
    ports:
      - '4566:4566'
    networks:
      - platform_default
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
And the Celery instantiation is below; I used get_queue_url directly to be sure of its link.
return Celery(
    "server",
    task_default_queue=config.sqs_celery.queue_name,
    broker="sqs://",
    broker_url=f"sqs://{config.sqs_celery.aws_access_key_id}:{config.sqs_celery.aws_secret_access_key}@{config.sqs_celery.broker_site_port}",
    # in my case: sqs://localstack:localstack@localhost:4566
    broker_transport_options={
        'region': config.sqs_celery.region,
        "predefined_queues": {
            config.sqs_celery.queue_name: {
                "url": get_queue_url(config.sqs_celery.queue_name),
                # in my case: http://localstack:4566/000000000000/tasks
                'region': config.sqs_celery.region,
            }
        }
    }
)
Maybe you have some ideas on where to start, because I lost half of the day trying to figure out what is wrong.
SOLVED:
The queue URL couldn't be resolved; you have to check everything. I set the host in the queue URL to "localhost" and it worked.
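For reference, a minimal sketch of the working setup described above. The queue name "tasks" is taken from the URL in the snippet; the region and credentials are placeholders. The key change is that the predefined queue URL points at localhost (the port published by the localstack container) rather than the internal "localstack" hostname.
from celery import Celery

app = Celery(
    "server",
    task_default_queue="tasks",
    broker_url="sqs://localstack:localstack@localhost:4566",
    broker_transport_options={
        "region": "us-east-1",  # placeholder region
        "predefined_queues": {
            "tasks": {
                "url": "http://localhost:4566/000000000000/tasks",
                "region": "us-east-1",
            }
        },
    },
)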

Celery task is always PENDING inside Docker container (Flask + Celery + RabbitMQ + Docker)

I'm creating a basic project to test Flask + Celery + RabbitMQ + Docker.
For some reason I don't know, when I call the Celery task, it seems to reach RabbitMQ, but it always stays in the PENDING state and never changes to another state. If I try to use task.get(), the code freezes. Example:
The celery worker (e.g. worker_a.py) is something like this:
from celery import Celery

# Initialize Celery
celery = Celery('worker_a',
                broker='amqp://guest:guest@tfcd_rabbit:5672//',
                backend='rpc://')

[...]

@celery.task()
def add_nums(a, b):
    return a + b
While docker-compose.yml is something like this:
[...]
  tfcd_rabbit:
    container_name: tfcd_rabbit
    hostname: tfcd_rabbit
    image: rabbitmq:3.8.11-management
    environment:
      - RABBITMQ_ERLANG_COOKIE=test
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - tfcd
  tfcd_worker_a:
    container_name: tfcd_worker_a
    hostname: tfcd_worker_1
    image: test_flask_celery_docker
    entrypoint: celery
    command: -A worker_a worker -l INFO -Q worker_a
    volumes:
      - .:/app
    links:
      - tfcd_rabbit
    depends_on:
      - tfcd_rabbit
    networks:
      - tfcd
[...]
The repository with all the files and instructions to run it can be found here.
Would anyone know what might be going on?
Thank you in advance.
After a while, a friend of mine discovered the problem:
The correct queue name was missing when creating the task, because Celery was using the default name "celery" instead of the correct queue name.
The final code is this:
[...]
@celery.task(queue='worker_a')
def add_nums(a, b):
    return a + b
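For completeness, a sketch of the calling side (the import path is an assumption based on the worker module name). Once the task is bound to the worker_a queue, or routed there explicitly with apply_async, the result no longer hangs in PENDING:
from worker_a import add_nums

# Route the message to the queue the worker consumes with "-Q worker_a".
result = add_nums.apply_async(args=(4, 5), queue="worker_a")
print(result.state)            # PENDING until a worker picks the task up
print(result.get(timeout=10))  # 9 once the worker on queue worker_a has run it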

Python Flask-Restful application with Kubernetes - Connection refused

I first SSH into the master node.
When I run kubectl get svc
I get the following output (NAME, TYPE, CLUSTER-IP, EXTERNAL-IP, PORT(S), AGE):
python-app-service LoadBalancer 10.110.157.42 <pending> 5000:30008/TCP 68m
I then run curl 10.110.157.42:5000
and I get the following message:
curl: (7) Failed connect to 10.110.157.42:5000; Connection refused
Below I posted my Dockerfile, deployment file, service file, and python application file. When I run the docker image, it works fine. However when I try to apply a Kubernetes service to the pod, I am unable to make calls. What am I doing wrong? Also please let me know if I left out any necessary information. Thank you!
The Kubernetes cluster was created with kubeadm using the Flannel CNI.
Deployment yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-api
  labels:
    app: my-python-app
    type: back-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-python-app
      type: backend
  template:
    metadata:
      name: python-api-pod
      labels:
        app: my-python-app
        type: backend
    spec:
      containers:
      - name: restful-python-example
        image: mydockerhub/restful-python-example
        ports:
        - containerPort: 5000
Service yaml file:
apiVersion: v1
kind: Service
metadata:
  name: python-app-service
spec:
  type: LoadBalancer
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30008
  selector:
    app: my-python-app
    type: backend
Python application source - restful.py:
#!/usr/bin/python3
from flask import Flask, jsonify, request, abort
from flask_restful import Api, Resource
import jsonpickle

app = Flask(__name__)
api = Api(app)

# Creating an empty dictionary and initializing user id to 0.. will increment every time a person makes a POST request.
# This is bad practice but only using it for the example. Most likely you will be pulling this information from a
# database.
user_dict = {}
user_id = 0


# Define a class and pass it a Resource. These methods require an ID
class User(Resource):
    @staticmethod
    def get(path_user_id):
        if path_user_id not in user_dict:
            abort(400)
        return jsonify(jsonpickle.encode(user_dict.get(path_user_id, "This user does not exist")))

    @staticmethod
    def put(path_user_id):
        update_and_add_user_helper(path_user_id, request.get_json())

    @staticmethod
    def delete(path_user_id):
        user_dict.pop(path_user_id, None)


# Get all users and add new users
class UserList(Resource):
    @staticmethod
    def get():
        return jsonify(jsonpickle.encode(user_dict))

    @staticmethod
    def post():
        global user_id
        user_id = user_id + 1
        update_and_add_user_helper(user_id, request.get_json())


# Since post and put are doing pretty much the same thing, I extracted the logic from both and put it in a separate
# method to follow DRY principles.
def update_and_add_user_helper(u_id, request_payload):
    name = request_payload["name"]
    age = request_payload["age"]
    address = request_payload["address"]
    city = request_payload["city"]
    state = request_payload["state"]
    zip_code = request_payload["zip"]
    user_dict[u_id] = Person(name, age, address, city, state, zip_code)


# Represents a user's information
class Person:
    def __init__(self, name, age, address, city, state, zip_code):
        self.name = name
        self.age = age
        self.address = address
        self.city = city
        self.state = state
        self.zip_code = zip_code


# Add a resource to the api. You need to give the class name and the URI.
api.add_resource(User, "/users/<int:path_user_id>")
api.add_resource(UserList, "/users")

if __name__ == "__main__":
    app.run()
Dockerfile:
FROM python:3
WORKDIR /usr/src/app
RUN pip install flask
RUN pip install flask_restful
RUN pip install jsonpickle
COPY . .
CMD python restful.py
kubectl describe svc python-app-service
Name: python-app-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=my-python-app,type=backend
Type: LoadBalancer
IP: 10.110.157.42
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30008/TCP
Endpoints: 10.244.3.24:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So the reason I was unable to connect was that I never exposed the port in my Dockerfile.
My Dockerfile should have been:
FROM python:3
WORKDIR /usr/src/app
RUN pip install flask
RUN pip install flask_restful
RUN pip install jsonpickle
COPY . .
EXPOSE 5000
CMD python restful.py
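A quick way to check from the master node (the same place the curl above was run), assuming the requests library is installed there; from outside the cluster, any node IP on NodePort 30008 would be the entry point, since the EXTERNAL-IP of a LoadBalancer service stays <pending> without a cloud load balancer:
import requests

# Hits the UserList resource exposed at /users through the service's ClusterIP.
resp = requests.get("http://10.110.157.42:5000/users")
print(resp.status_code, resp.json())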

How to run a docker image in Kubernetes, initiated from another, and pass arguments

I have two dockerized applications which need to run in Kubernetes.
Here is the scenario that needs to be achieved.
Docker-1 is a Flask application.
Docker-2 is a Python script that takes input from Docker-1, executes, and needs to write some files to a volume shared with the Docker-1 container.
Here is the flask web-app code.
from flask import Flask, request, Response, jsonify

app = Flask(__name__)


@app.route('/')
def root():
    return "The API is working fine"


@app.route('/run-docker')
def run_docker_2():
    args = "input_combo"
    query = <sql query>
    <initiate the docker run and pass params>
    exit
    # No return message, needs to run async


if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=8080, threaded=True)
Docker file
FROM ubuntu:latest
MAINTAINER Abhilash KK "abhilash.kk@searshc.com"
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential python-tk
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]
requirements.txt
flask
Python script for the second docker, start_docker.py:
import sys

input_combo = sys.argv[1]
query = sys.argv[2]


def function_to_run(input_combination, query):
    # starting the model, finally creating the file
    pass


function_to_run(input_combo, query)
Docker file 2
FROM python
COPY . /script
WORKDIR /script
CMD ["python", "start_docker.py"]
Please help me connect the Docker images, or let me know any other way to solve this problem. The basic requirement is to add a message to some queue, have that queue be polled at an interval, and start the process in FIFO order.
Any other approach using a GCP service to initiate an async job that takes input from the client and creates a file accessible from the web-app Python would also work.
First, create a Pod running the "Docker-1" application. Then use the Kubernetes Python client to spawn a second pod with "Docker-2".
You can share a volume between your pods in order to return the data to Docker-1. In my code sample I'm using a hostPath volume, but then you need to ensure that both pods run on the same node. I added that code for readability.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: docker1
  labels:
    app: docker1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker1
  template:
    metadata:
      labels:
        app: docker1
    spec:
      containers:
      - name: docker1
        image: abhilash/docker1
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /shared
          name: shared-volume
      volumes:
      - name: shared-volume
        hostPath:
          path: /shared
The code of run_docker_2 handler:
from kubernetes import client, config
...
args = "input_combo"
config.load_incluster_config()
v1 = client.CoreV1Api()  # API client handle used below
pod = client.V1Pod()
pod.metadata = client.V1ObjectMeta(name="docker2")
container = client.V1Container(name="docker2")
container.image = "abhilash/docker2"
container.args = [args]
volumeMount = client.V1VolumeMount(name="shared", mount_path="/shared")
container.volume_mounts = [volumeMount]
hostpath = client.V1HostPathVolumeSource(path="/shared")
volume = client.V1Volume(name="shared")
volume.host_path = hostpath
spec = client.V1PodSpec(containers=[container])
spec.volumes = [volume]
pod.spec = spec
v1.create_namespaced_pod(namespace="default", body=pod)
return "OK"
A handler to read the returned results:
@app.route('/read-results')
def run_read():
    with open("/shared/results.data") as file:
        return file.read()
Note that it could be useful to add a watcher to wait for the pod to finish the job and then do some cleanup (delete the pod, for instance); a minimal sketch of that idea follows.
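A possible sketch of that watcher, assuming the spawned pod is named docker2 as above and the in-cluster config has already been loaded:
from kubernetes import client, config, watch

config.load_incluster_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod,
                      namespace="default",
                      field_selector="metadata.name=docker2"):
    phase = event["object"].status.phase
    if phase in ("Succeeded", "Failed"):
        # The job is done: stop watching and clean up the pod.
        w.stop()
        v1.delete_namespaced_pod(name="docker2", namespace="default")
        break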
From what I can understand, you'd want the so-called "sidecar pattern": you can run multiple containers in one pod and share a volume, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
You could also benefit from getting to know the basics of how Kubernetes work: Kubernetes Basics
