Can a buildbot-step run python code on the worker? - python

I am using buildbot. Is it possible to write my own build-step class that executes Python code on the worker?
The build-step will consist of:
1. find all files of a certain type in the source
2. start a 3rd-party application that is installed on the worker via PythonCOM
3. command the started app to do some checks for the files found in step 1
4. close the app
Unfortunately the app does not support command line parameters for performing the required operation.
I know I could write my own shell script and have it run on the worker via the RemoteCommand class. But I'd prefer to have all the code in one place (in the new build-step) rather than having to place such a script on each worker.
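One hedged way to keep everything in the master configuration is to push the worker-side script to the worker as a string and then run it there. A minimal sketch, assuming Buildbot 0.9+; the script body, the ProgID, and the factory setup are placeholders, not the asker's actual code:

from buildbot.plugins import steps, util

# The worker-side logic lives in the master config as a string.
CHECK_SCRIPT = r"""
import glob
import win32com.client  # PythonCOM

files = glob.glob('**/*.ext', recursive=True)   # 1. find files of a certain type
app = win32com.client.Dispatch('Vendor.App')    # 2. start the 3rd-party app (hypothetical ProgID)
# 3. command the app to check each file found above
# 4. close the app
"""

factory = util.BuildFactory()
factory.addStep(steps.StringDownload(CHECK_SCRIPT, workerdest="run_checks.py"))
factory.addStep(steps.ShellCommand(command=["python", "run_checks.py"]))

This keeps the script in one place (the master config) while still executing it on the worker, at the cost of the script being opaque to Buildbot's step reporting.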

Related

Scheduled Python script in task scheduler not working

I have a python script that I am trying to schedule to run in the Task Scheduler in my VM, but it doesn't seem to be running; it returns (0x2) for the last run result. I am able to run the script manually and it works. I even created a batch file to execute the script, which works, and tried scheduling that in Task Scheduler, but it also gave the same error. My only guess is that it's not working because it uses the Google Sheets API and reads the credentials from a JSON file in the project folder, but I'm still unsure as to why it wouldn't run when scheduled. If you have any ideas I would greatly appreciate it.
In the Task Scheduler, I am using the path Z:\Python\PythonGSAPI\executePy.bat to execute the batch file. The content of the batch file is:
@echo off
"C:\Python27\python.exe" "Z:\Python\PythonGSAPI\TF_Invoice.py"
pause
This can occur due to the PATH environment variable. For example, if you use Anaconda Python, you need to select the option that adds it to PATH (the first option) during installation, or configure this afterwards.
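A second hedged thing to check: scheduled tasks start in a different working directory than an interactive shell (and mapped drives like Z:\ may not exist in the task's session), so a credentials file referenced by a relative path will fail with exactly this 0x2 "file not found" result. A sketch of resolving the JSON relative to the script instead (the filename is hypothetical):

import os

# Resolve the credentials file relative to this script, not to whatever
# working directory Task Scheduler happens to use.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
CREDS_PATH = os.path.join(SCRIPT_DIR, "credentials.json")  # hypothetical filename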

Pyinstaller executable appears as two processes

I have an executable (main.exe) that I've packaged with PyInstaller, and it appears to be functioning as expected. I execute main.exe from a Node.js server as a child_process, and in Task Manager I can see two main.exe processes running.
It looks like this is a result of the bootloader: https://pyinstaller.readthedocs.io/en/stable/advanced-topics.html#the-bootstrap-process-in-detail
" It begins the setup and then returns itself in another process. This approach of using two processes allows a lot of flexibility and is used in all bundles except one-folder mode in Windows. So do not be surprised if you will see your bundled app as two processes in your system task manager."
My issue is: how can I cleanly access and terminate this second process from within Node.js? Currently I terminate the original child process but am left with a single main.exe process still running.
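The parent here is Node.js, but the usual fix translates directly: terminate the child's whole process tree instead of just the PID you spawned. A sketch of that idea in Python using psutil (psutil and the helper name are assumptions, not part of the question):

import psutil

def kill_tree(pid):
    """Terminate a process and all of its descendants (i.e. both main.exe processes)."""
    parent = psutil.Process(pid)
    procs = parent.children(recursive=True) + [parent]
    for p in procs:
        p.terminate()
    gone, alive = psutil.wait_procs(procs, timeout=5)
    for p in alive:
        p.kill()  # force-kill anything that ignored terminate()

From Node.js on Windows the equivalent is to spawn `taskkill /pid <pid> /T /F`, which kills the given process together with all of its descendants.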

How to schedule a shell command to run in VM instance on GCP?

I want to schedule a shell command within a VM instance to run on a weekly basis.
How it would work:
1. Once a week, Cloud Scheduler invokes a Pub/Sub trigger
2. Pub/Sub then pushes a message to the VM instance's HTTP endpoint
3. This in turn causes the shell command to run
I have no problem with steps one and two but I am struggling with how to get the shell command to execute.
One thing I have considered is installing Python on the VM instance and then creating a Python script that runs an OS system command.
import os
cmd = "some command"
os.system(cmd)
But again, my problem is: how do I get the HTTP POST request to cause the Python script to run?
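For step 3, the VM needs something listening for the push. A minimal sketch, assuming Flask (the route, port, and the asker's placeholder command are hypothetical, and a real handler should verify the Pub/Sub push payload before acting on it):

import subprocess
from flask import Flask

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_weekly_command():
    # In production, authenticate the Pub/Sub push request before running anything.
    subprocess.run("some command", shell=True, check=True)
    return "ok", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)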
I would do it differently:
1. Cloud Scheduler calls a Cloud Function (or Cloud Run)
2. The Cloud Function starts an instance whose startup script runs the batch process and then shuts the instance down (see the sketch below)
If you need to pass arguments to the script, you can do it using instance metadata when you create the instance (or while it is already running).
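A minimal sketch of that flow, assuming a Pub/Sub-triggered Cloud Function and the google-api-python-client library (project, zone, instance, and function names are placeholders):

from googleapiclient import discovery

def start_batch_vm(event, context):
    """Triggered by Cloud Scheduler via Pub/Sub; boots the batch VM."""
    compute = discovery.build("compute", "v1")
    compute.instances().start(
        project="my-project", zone="us-central1-a", instance="batch-vm"
    ).execute()

The VM's startup script then runs the batch job and shuts the machine down when it finishes, as described above.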

How to run a command (python file) on boot on an AWS EC2 server

I'm having some problems making a python file run every time the AWS server boots.
I am trying to run a python file to start a web server on an Amazon Web Services EC2 server.
But I am limited in my ability to edit the systemd folder and other folders such as init.d.
Is there anything wrong?
Sorry, I don't really understand EC2's OS; it seems a lot of methods don't work on it.
What I usually do via ssh to start my server is:
python hello.py
Can anyone tell me how to run this file automatically every time system reboots?
It depends on your Linux OS, but you are on the right track (init.d). This is exactly where you'd want to run arbitrary shell scripts on startup.
Here is a great HOWTO and explanation:
https://www.tldp.org/HOWTO/HighQuality-Apps-HOWTO/boot.html
and another stack overflow specific to running a python script:
Run Python script at startup in Ubuntu
If you share your Linux OS I can be more specific.
EDIT: This may help; it looks like they have some sort of launch wizard:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
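For example, a user-data script along these lines (the paths are hypothetical) would start the server at launch:

#!/bin/bash
cd /home/ec2-user/myapp          # hypothetical app directory
python hello.py >> /var/log/hello.log 2>&1 &

Note that plain user-data shell scripts run only on the first boot by default; to run the command on every reboot you'd need something like a cron @reboot entry, a systemd unit, or a cloud-init per-boot script.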

jython killing parent process that spawns subprocess breaks subprocess stdout to file?

Let me start with what I'm really trying to do. We want a platform independent startup script for invoking a JVM with some system properties and a dynamically generated classpath. We picked Jython in particular because we only need to depend on the standalone jython.jar in our startup script. We decided we could write a jython script that uses subprocess.Popen to launch our application's jvm and then terminates.
One more thing. Our application uses a lot of legacy debug code that prints to standard out. So the startup script typically has been redirecting stdout/stderr to a log file. I attempted to reproduce that with our jython script like this:
subprocess.Popen(args, stdout=logFile, stderr=logFile)
After this line, the launcher script and the JVM hosting Jython terminate. The problem is nothing shows up in the logFile. If I instead do this:
subprocess.Popen(args, stdout=logFile, stderr=logFile).wait()
then we get logs. So does the parent process need to run in parallel with the application process launched via subprocess? I want to avoid having two JVMs running.
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates? Is there a better way to launch the application jvm from jython? Is Jython a bad solution anyway?
We want a platform independent startup script for invoking a JVM with some system properties and a dynamically generated classpath.
You could use a platform-independent script to generate a platform-specific startup script, either at installation time or before each invocation. In the latter case you additionally need a simple, static, platform-specific script that invokes your platform-independent startup-script-generating script and then runs the generated script itself. In either case you start your application by calling a static, platform-specific script.
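A minimal sketch of that generate-then-invoke idea (the jar names, system property, main class, and log file are all placeholders):

import os

classpath = os.pathsep.join(["lib/a.jar", "lib/b.jar"])  # computed dynamically in practice
cmd = "java -Dsome.property=value -cp %s com.example.Main" % classpath  # hypothetical names

if os.name == "nt":
    # Windows: emit a batch launcher that redirects stdout/stderr itself.
    with open("start.bat", "w") as f:
        f.write("@echo off\r\n%s > app.log 2>&1\r\n" % cmd)
else:
    # POSIX: emit a shell launcher and make it executable.
    with open("start.sh", "w") as f:
        f.write("#!/bin/sh\nexec %s > app.log 2>&1\n" % cmd)
    os.chmod("start.sh", 0o755)

Because the generated script owns the redirection, no second JVM has to stay alive to pipe the logs.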
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates?
You could open the file / set up the redirection in the child process, e.g., by using the shell:
from subprocess import Popen

# The child's shell performs the redirection, so it survives the parent exiting.
Popen(' '.join(args + ['>', 'logFile', '2>&1']),  # shell-specific cmdline
      shell=True)  # on Windows see _cmdline2list to understand what is going on
