Trigger Python script on EC2 instance via Lambda function?

I currently have an EC2 instance set up that runs a script when booted up (I do this by calling the script in the user data field for the instance). Then I just send a command via Lambda to start the EC2 instance, and that causes the script to run. I would now like to run multiple scripts: ideally one Lambda would start the EC2 instance, then notify a second Lambda once it is up and running, which would run various scripts on the instance before shutting it back down. How can I trigger a Python script on a running EC2 instance via Lambda?
Thanks
EDIT:
I believe I've found a quick solution: in the user data I point to a single script, e.g. "startup.py".
In this script I can just import whatever series of scripts I want to execute. I just have to figure out the paths, as the user data script is executed in a different directory from /home/ec2-user/.
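For reference, a minimal sketch of what such a startup.py dispatcher could look like; the directory and the task_one/task_two module names here are assumptions:
# startup.py - dispatcher pointed to from the EC2 user data.
# User data runs from a different working directory (and as root),
# so resolve paths explicitly before importing the other scripts.
import os
import sys

SCRIPTS_DIR = "/home/ec2-user"  # assumed location of the scripts

os.chdir(SCRIPTS_DIR)            # so relative file I/O keeps working
sys.path.insert(0, SCRIPTS_DIR)  # so the imports below can be found

import task_one  # hypothetical script modules to run in sequence
import task_two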

To run commands on EC2 instances from outside of them, you should consider using AWS Systems Manager Run Command. This allows you to run multiple, arbitrary commands across a number of instances that you choose (by instance Id, by resource group, or by tags). You can do this from the AWS console, CLI, or SDK.
Note that to use this, your instances need to be configured correctly: the SSM Agent must be running on them, and they need an instance profile that grants the necessary Systems Manager permissions.
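As a rough sketch (assuming boto3, with a placeholder instance ID and script path), sending a command might look like:
import time
import boto3

ssm = boto3.client('ssm')
INSTANCE_ID = 'i-0123456789abcdef0'  # placeholder instance ID

# Run a shell command on the instance via SSM Run Command.
response = ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName='AWS-RunShellScript',  # built-in document for shell commands
    Parameters={'commands': ['python3 /home/ec2-user/startup.py']},
)
command_id = response['Command']['CommandId']

# Give the invocation a moment to register, then poll its status.
time.sleep(2)
result = ssm.get_command_invocation(CommandId=command_id, InstanceId=INSTANCE_ID)
print(result['Status'])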

Scheduling SSH jobs is another way to trigger a Python script on an EC2 instance.
This is the link that can help you.
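In that spirit, a minimal sketch of triggering the script over SSH from Python, using a library like paramiko (the host, user, key path, and script path are all placeholders):
import paramiko

# Connect to the instance and run the script over SSH.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('ec2-host.example.com', username='ec2-user',
               key_filename='/path/to/key.pem')
stdin, stdout, stderr = client.exec_command('python3 /home/ec2-user/startup.py')
print(stdout.read().decode())
client.close()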

Related

How can I keep an AWS EC2 VM alive until a long-term script ends?

I have a Selenium Python (3.9) renderer script that performs a bunch of fetching tasks, and I'm trying to use an AWS EC2 virtual machine as a runtime on the cloud. I am running it by using SSH to access the VM, adding the scripts and dependencies, and then running python3 <script-name>.py.
But I need help preserving the runtime instance until my script completes (or indefinitely, until I manually delete the instance). Currently, the script seems tied to my local CLI, and when I leave it be for a while or shut my laptop lid, the AWS VM runtime quits with the error:
client_loop: send disconnect: Broken pipe
How can I preserve the runtime indefinitely, or until the end of the script, and untie it from any local runtimes? Apologies for any idiosyncrasy; I'm new to DevOps and deploying stuff outside of local runtimes.
I used ClientAliveInterval=30 and ServerAliveInterval=30, with arbitrarily high values for ClientAliveCountMax and ServerAliveCountMax. I tried to use nohup inside my VM, but it did not prevent the process from ending. I have observed that the script runs while ps -a shows my SSH session, and stops when it does not.
I am using an M1 Mac on Ventura. The AMI I am using to create the VM is ami-08e9419448399d936 (Selenium-Webdriver-on-Headless-Ubuntu, Ubuntu 20.04).

Is there a way to stop and restart a self-hosted Python script using GitHub Actions?

I have a project in which one of the tests consists of running a process indefinitely in order to collect data on the program's execution.
It's a Python script that runs locally on a Linux machine, but I'd like other people on my team to have access to it as well, because there are specific moments when the process needs to be restarted.
Is there a way to set up a workflow on this machine that, when dispatched, stops and restarts the process?
You can execute commands on your Linux host via GH Actions and SSH. Take a look at this action.

How to schedule a shell command to run in VM instance on GCP?

I want to schedule a shell command within a VM instance to run on a weekly basis.
How it would work:
Once a week, Cloud Scheduler invokes a Pub/Sub trigger
Pub/Sub then pushes a message to the VM instance's HTTP endpoint
This in turn causes the shell command to run
I have no problem with steps one and two but I am struggling with how to get the shell command to execute.
One thing I have considered is installing Python on the VM instance and then creating a Python script that runs an OS system command:
import os
cmd = "some command"  # the shell command to schedule
os.system(cmd)
But again, my problem is: how do I get the HTTP POST request to cause the Python script to run?
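For what it's worth, a minimal sketch of such an HTTP endpoint on the VM (Flask is an assumption here, and Pub/Sub push authentication and message parsing are omitted):
import os
from flask import Flask

app = Flask(__name__)

@app.route('/run', methods=['POST'])
def run_command():
    # Pub/Sub push delivers a JSON envelope; here we only use it as a trigger.
    os.system("some command")  # placeholder shell command
    return ('', 204)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)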
I would do it differently:
Cloud Scheduler calls a Cloud Function (or Cloud Run)
The Cloud Function starts an instance with a startup script that runs the batch process and then shuts the instance down.
If you need to pass arguments to the script, you can do it using instance metadata when you create the instance (or while it is already running).
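A minimal sketch of that Cloud Function using the google-cloud-compute client (the project, zone, and instance name are placeholders, and the instance is assumed to already exist with its startup script configured):
from google.cloud import compute_v1

def start_batch_instance(request):
    # HTTP-triggered Cloud Function entry point.
    client = compute_v1.InstancesClient()
    operation = client.start(
        project='my-project',     # placeholder project ID
        zone='us-central1-a',     # placeholder zone
        instance='batch-worker',  # placeholder instance name
    )
    operation.result()  # block until the start operation completes
    return 'instance started'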

.py file in EC2 instance to execute on event from S3 Bucket

I have a .py file that's on an EC2 instance. I'm trying to have the .py file run when an event (a file uploaded to an S3 bucket) occurs.
I currently have an event notification that is sent to an AWS Lambda function that starts the EC2 instance. Here is that code from the AWS console:
import boto3
ids = ['<ec2-ID>']  # placeholder for the instance ID
def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    ec2.start_instances(InstanceIds=ids)
I can manually go into PuTTY and type python test.py to run my program, and it works, but I want to get rid of the "having to do it manually" part and have it just run itself whenever there is an event.
I am stumped as to how to progress.
I thought that by "starting" my EC2 instance it would run that .py file and get to work processing what's in the S3 bucket.
There are no error messages; it just doesn't do anything at all. It's supposed to work like this: once a file is uploaded to the S3 bucket, a notification is sent to the Lambda, which should have the EC2 instance start processing the file with the .py file that is on it.
Kind regards
This is a nice trick you can try: https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
This overrides the fact that user data is normally executed only on first instance creation: the method allows you to execute user data scripts on every boot. Just update the bash from:
/bin/echo "Hello World" >> /tmp/testfile.txt
to:
python /file_path/python_file.py &
Take a look at AWS Systems Manager Run Command as a way to run arbitrary scripts on EC2. You can do that from your boto3 client, but you'll probably have to use a boto3 waiter to wait for the EC2 instance to finish starting.
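A rough sketch of that flow in the Lambda handler (the instance ID and script path are placeholders; the instance needs the SSM Agent and a suitable instance profile, and the waiter can take a few minutes, so the Lambda timeout must allow for it):
import boto3

INSTANCE_ID = '<ec2-ID>'  # placeholder

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    ssm = boto3.client('ssm')
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    # Wait until the instance passes its status checks.
    ec2.get_waiter('instance_status_ok').wait(InstanceIds=[INSTANCE_ID])
    # Then run the script on it via Run Command.
    ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['python /home/ec2-user/test.py']},
    )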
Note that if you're only starting the EC2 instance and running this script infrequently, it might be more cost-effective to simply launch a new EC2 instance, run your script, and then terminate the instance. While an EC2 instance is stopped, you are still charged for the EBS storage associated with it and for any unused Elastic IP addresses.
Use cron:
$ sudo apt-get install cron
$ crontab -e
# choose option 3 (vim), type "i" to insert, add the line below,
# then type ":wq" to save and exit
@reboot python /path_directory/python_test.py &
To find the .py file, run:
sudo find / -type f -iname "python_test.py"
Then add that path to the cron entry.
If all you need is to run some Python code and the main limitation is running time, it might be a better idea to use Lambda to listen to the S3 event and Fargate to execute the task. The main advantage is that you don't have to worry about starting/stopping your instance, and scaling out is easier.
There is a nice write-up of a working use case on the Serverless blog.
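As a rough sketch, the Lambda triggered by the S3 event could kick off a Fargate task like this (the cluster, task definition, and subnet values are placeholders):
import boto3

def lambda_handler(event, context):
    ecs = boto3.client('ecs')
    # Launch a one-off Fargate task in response to the S3 event.
    ecs.run_task(
        cluster='my-cluster',        # placeholder cluster name
        launchType='FARGATE',
        taskDefinition='my-task:1',  # placeholder task definition
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': ['subnet-0123456789abcdef0'],  # placeholder
                'assignPublicIp': 'ENABLED',
            }
        },
    )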

How to run a command (python file) on boot on an AWS EC2 server

I'm having some problems making a Python file run every time the AWS server boots.
I am trying to run a Python file to start a web server on an Amazon Web Services EC2 server.
But I am limited in what I can edit in the systemd folder and other folders such as init.d.
Is there anything wrong?
Sorry, I don't really understand EC2's OS; it seems a lot of methods don't work on it.
What I usually do via SSH to start my server is:
python hello.py
Can anyone tell me how to run this file automatically every time the system reboots?
It depends on your Linux OS, but you are on the right track (init.d). This is exactly where you'd want to run arbitrary shell scripts on startup.
Here is a great HOWTO and explanation:
https://www.tldp.org/HOWTO/HighQuality-Apps-HOWTO/boot.html
and another Stack Overflow answer specific to running a Python script:
Run Python script at startup in Ubuntu
If you want to share your Linux OS, I can be more specific.
EDIT: This may help; it looks like they have some sort of launch wizard:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
