Camunda /External Task/ Connecting Python with BPM - python

I would like to create a simple Python Script and use it to perform a service task in my BPMN process. Does anyone know how I can use a Python script in a service task?

I'm attaching references you can use, with the specific configuration.
Note: from the BPMN perspective you just need to give the service task a task type, which you will then use in your script to identify the task.
For Camunda 7 with a local setup, you can follow this guide to implement the worker that executes the service task: https://medium.com/@klauke.peter/implementing-an-external-task-worker-for-camunda-in-python-566b5ebff488
For Camunda 8 with a Zeebe setup, you have to make a slight change when creating the channel: use "from pyzeebe import create_camunda_cloud_channel". The functional implementation can be found in the reference URL:
https://pyzeebe.readthedocs.io/en/latest/channels.html#camunda-cloud
Once you have created the channel and started the process, you can also refer to https://forum.camunda.io/t/boundary-event-error-handler/37272
That thread has code for handling the service task and also the boundary (error) event.
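For illustration, here is a minimal pyzeebe worker sketch for the Camunda 8 / Camunda Cloud case; the credentials and the task type "my-service-task" are placeholders, not values from the original answer:

import asyncio
from pyzeebe import ZeebeWorker, create_camunda_cloud_channel

# Channel against a Camunda 8 SaaS cluster (see the pyzeebe channels docs above)
channel = create_camunda_cloud_channel(
    client_id="<client_id>",
    client_secret="<client_secret>",
    cluster_id="<cluster_id>",
)
worker = ZeebeWorker(channel)

# task_type must match the task type configured on the service task in the BPMN model
@worker.task(task_type="my-service-task")
async def my_service_task(some_input: str) -> dict:
    return {"result": some_input.upper()}

asyncio.run(worker.work())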

You can't use a Python script in a service task, but you can use a Python script as an external task worker; this repo will probably be a good starting point for you.
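If you prefer not to depend on a client library at all, a bare-bones Camunda 7 external task worker can also be written directly against the engine's REST API; the engine URL, worker id, and topic name below are placeholders:

import time
import requests

ENGINE = "http://localhost:8080/engine-rest"
WORKER_ID = "python-worker-1"

while True:
    # long-poll for external tasks on a topic (the task type set on the service task)
    tasks = requests.post(ENGINE + "/external-task/fetchAndLock", json={
        "workerId": WORKER_ID,
        "maxTasks": 1,
        "asyncResponseTimeout": 10000,
        "topics": [{"topicName": "my-service-task", "lockDuration": 30000}],
    }).json()
    for task in tasks:
        # ... do the actual work here ...
        requests.post(ENGINE + "/external-task/" + task["id"] + "/complete", json={
            "workerId": WORKER_ID,
            "variables": {"result": {"value": "done", "type": "String"}},
        })
    time.sleep(1)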

Related

Deliver a message from google cloud functions to virtual machine

currently I automatically start a VM after running a cloud function via this code:
import googleapiclient.discovery

def start_vm(event, context):
    compute = googleapiclient.discovery.build('compute', 'v1')
    result = compute.instances().start(project='PROJECT', zone='ZONE', instance='NAME').execute()
Now I am looking for a way to deliver a message or a parameter at the same time, so that after the VM starts, a different piece of code runs based on the added message/parameter. Does anyone know how to achieve this?
I'd appreciate any help.
Thank you.
You can use guest attributes: the Cloud Function adds the attribute and then starts the VM.
In the startup script, you read the data from the guest attributes and then use it to perform your work.
The other solution is to start a webserver in the VM and then POST a request to it.
That solution is better if you have several tasks to perform on the VM. But take care of security if you expose a webserver: expose it only internally and use a VPC connector on your Cloud Function to reach your VM.
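As a rough sketch of the first approach, here is how a startup script could read a value through the metadata server from inside the VM; the key name "startup-task" is purely illustrative (guest attributes are read the same way, under the guest-attributes/ path instead of attributes/):

import requests

METADATA = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"

def read_metadata(key):
    # the Metadata-Flavor header is required by the GCE metadata server
    resp = requests.get(METADATA + key, headers={"Metadata-Flavor": "Google"})
    resp.raise_for_status()
    return resp.text

task = read_metadata("startup-task")   # value attached by the Cloud Function before starting the VM
# ... branch on `task` to decide which code to run ...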

Google App Engine - run task on publish

I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it automatically reach out to a server I am working with. This just needs to be a simple POST. I have everything working fine and am currently solving this problem with a cron job, but that is not quite sufficient: I would like the job to execute automatically once the app has been published, not after a minute (or whatever interval it is set to).
In concept I am trying to have my app register itself with my server, and to do this I'd like it to run once on publish and never be run again.
Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for.
Any help will be greatly appreciated.
Thank you.
Personally, this makes more sense to me as a responsibility of your deploy process, rather than of the app itself. If you have your own deploy script, add the post request there (after a successful deploy). If you use google's command line tools, you could wrap that in a script. If you use a 3rd party tool for something like continuous integration, they probably have deploy hooks you could use for this purpose.
The main question will be how to ensure it only runs once for a particular version.
Here is an outline on how you might approach it.
You create a HasRun model, which you use to store the version of each deployed app; its presence indicates whether the one-time code has been run for that version.
Then make sure you increment your version whenever you deploy new code.
In your warmup handler or appengine_config.py, grab the deployed version,
then in a transaction try to fetch the HasRun entity by Key (the version number).
If you get the entity, don't run the one-time code.
If you can't find it, create it and run the one-time code, either in a task (make sure the process is idempotent, as tasks can be retried) or in the warmup/front-facing request.
You will probably want to wrap all of that in a memcache CAS operation to provide a lock of some sort, to prevent another instance from trying to do the same thing.
Alternatively, if you want to use the task queue, consider naming the task after the version number; you can only submit a task with a particular name once.
It still needs to be idempotent (again, it could be scheduled to retry), but there will only ever be one task scheduled for that version - at least for a few weeks.
Or a combination/variation of all of the above.
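For illustration, a minimal sketch of that outline on the legacy Python 2 App Engine runtime; the model name HasRun comes from the answer, while the task URL and helper names are illustrative:

import os
from google.appengine.ext import ndb
from google.appengine.api import taskqueue

class HasRun(ndb.Model):
    pass   # the key name is the deployed version; no other fields are needed

@ndb.transactional
def register_once(version):
    key = ndb.Key(HasRun, version)
    if key.get() is not None:
        return False                       # already handled for this version
    HasRun(key=key).put()
    # enqueue the one-time work so a failure can be retried; the handler must be idempotent
    taskqueue.add(url='/tasks/register', transactional=True)
    return True

# e.g. in a warmup handler or appengine_config.py
version = os.environ.get('CURRENT_VERSION_ID', '').split('.')[0]
register_once(version)

# Named-task variant from the last paragraph: task names stay unique for a while
# even after the task has run, so this alone also guards the one-time code.
try:
    taskqueue.add(url='/tasks/register', name='register-' + version)
except (taskqueue.TaskAlreadyExistsError, taskqueue.TombstonedTaskError):
    pass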

python multiprocess - launching jobs on availble slots

I'm launching jobs on a server.
The server can process only one job at a time
So, I use the trick of using several user accounts on the server: userA, userB, userC, userD.
For the moment I launch a job with a function:
run_job_on_server(some_args, user_name)
My question is quite simple: using multiprocessing (or another module), how can I launch many jobs across the different available users, and when a job finishes, make that user available again and immediately launch a new job with it?
Thanks for your help !
I think your question goes into the library selection (multiprocessing) too quickly. The first thing to do is to establish the design pattern. As a start, I think you could look at the dispatcher or mailbox pattern, and the active object pattern.
As for libraries, you're not stuck with the python standard lib. pip has many nice options too. I personally love ZeroMQ for distributed systems, but that's step two. Maybe the standard lib like Queue and multiprocessing will do.
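If the standard library turns out to be enough, a minimal sketch of the "pool of users" idea could look like this; run_job_on_server is the function from the question, everything else is illustrative:

import multiprocessing

USERS = ["userA", "userB", "userC", "userD"]

def worker(user_name, job_queue):
    # each worker owns one account; the account is free again as soon as
    # run_job_on_server returns, and the next job is pulled immediately
    while True:
        job = job_queue.get()
        if job is None:                    # sentinel: no more work
            break
        run_job_on_server(job, user_name)

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(u, queue)) for u in USERS]
    for p in procs:
        p.start()
    for job in jobs:                       # `jobs` is whatever iterable of work you have
        queue.put(job)
    for _ in USERS:
        queue.put(None)                    # one sentinel per worker
    for p in procs:
        p.join()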

Simple approach to launching background task in Django

I have a Django website, and one page has a button (or link) that when clicked will launch a somewhat long running task. Obviously I want to launch this task as a background task and immediately return a result to the user. I want to implement this using a simple approach that will not require me to install and learn a whole new messaging architecture like Celery for example. I do not want to use Celery! I just want to use a simple approach that I can set up and get running over the next half hour or so. Isn't there a simple way to do this in Django without having to add (yet another) 3rd party package?
Just use a thread.
import threading

t = threading.Thread(target=long_process, args=args, kwargs=kwargs)
t.setDaemon(True)
t.start()
return HttpResponse()
See this question for more details:
Can Django do multi-thread works?
Have a look at django-background-tasks - it does exactly what you need and doesn't need any additional services to be running like RabbitMQ or Redis. It manages a task queue in the database and has a Django management command which you can run once or as a cron job.
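Roughly, usage looks like this (the function name and schedule below are illustrative, not from the original answer):

from background_task import background

@background(schedule=10)                   # run ~10 seconds after being queued
def long_process(some_id):
    ...                                    # the long-running work

# in the view: this only stores a task in the database and returns immediately
long_process(obj.id)

# in a shell or cron job: actually execute queued tasks
#   python manage.py process_tasks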
If you're willing to install a 3rd party library but want something a whole lot simpler than Celery, check out Redis Queue (RQ). It does require Redis, which is pretty easy to set up in itself and can provide a lot of other benefits as well.
RQ itself has almost zero configuration. It's startlingly simple.
References:
http://python-rq.org/
http://nvie.com/posts/introducing-rq/
https://devcenter.heroku.com/articles/python-rq (RQ on Heroku)
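For reference, enqueueing with RQ looks roughly like this; long_process stands for the long-running function from the question:

from redis import Redis
from rq import Queue

q = Queue(connection=Redis())

# in the view: returns immediately, the job runs in a separate worker process
job = q.enqueue(long_process, arg1, arg2)

# start a worker in another shell:
#   rq worker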

Cluster job scheduler: tools

we are trying to solve a problem related to cluster job scheduler.
The problem is the following: we have a set of Python scripts which are executed on a cluster. The launching process is currently done through human interaction; to start a test we have a bash script which interacts with the cluster to request the resources needed for the execution. What we intend to do is build an automatic launching process (which should be sound in the sense that it tracks the job status and, based on that, waits for the job to end, restarts the execution, etc.). Basically we have to implement a layer between the user workstation and the cluster.
An additional difficulty is that our layer must be clever enough to interact with different cluster job schedulers. We wonder if there is a tool or framework which would help us interact with the cluster without having to deal with each scheduler's details. We have searched the web but did not find anything suitable for our needs.
By the way the programming language we use is Python.
Thanks in advance!
Use supervisor (http://supervisord.org/) and Celery (http://www.celeryproject.org/) together.
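As a rough sketch of the Celery side (assuming a Redis broker; supervisor would then keep the worker process alive), with launch_on_cluster standing in for whatever actually submits the script to your scheduler:

from celery import Celery

app = Celery('cluster_jobs', broker='redis://localhost:6379/0')

@app.task(bind=True, max_retries=3)
def run_script(self, script, args):
    try:
        launch_on_cluster(script, args)    # hypothetical helper that talks to the scheduler
    except Exception as exc:
        # retry the job later if the submission or execution failed
        raise self.retry(exc=exc, countdown=60)

# worker process, kept running by supervisor:
#   celery -A cluster_jobs worker --loglevel=info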
Take a look at the ipcluster_tools. The documentation is sparse but it is easy to use.
