I have a .py file on an EC2 instance. I'm trying to have the .py file run whenever an event occurs (a file is uploaded to an S3 bucket).
I currently have an event notification that is sent to an AWS Lambda function that starts the EC2 instance. Here is that code from the AWS console:
import boto3

instance_ids = ['<ec2-ID>']  # placeholder; avoid naming this "id", which shadows a Python builtin

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    ec2.start_instances(InstanceIds=instance_ids)
I can manually go into PuTTY and type "python test.py" to run my program, and it works, but I want to get rid of the "having to do it manually" part and have it just run itself whenever there is an event.
I am stumped as to how to progress.
I thought that by "starting" my EC2 instance it would run that .py file and get to work processing what's in the S3 bucket.
There are no error messages; it just doesn't do anything at all. It's supposed to work like this: once a file is uploaded to the S3 bucket, a notification is sent to the Lambda, which starts the EC2 instance so the .py file on it can process the file.
Kind regards
This is a nice trick you can try: https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
By default, user data is executed only on an instance's first boot; this method overrides that and lets you execute user data scripts on every boot. Just update the shell command from:
/bin/echo "Hello World" >> /tmp/testfile.txt
to:
python /file_path/python_file.py &
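For reference, the full user data from that article looks roughly like this: a multipart document whose cloud-config part tells cloud-init to run user scripts on every boot (adjust the script path to yours):

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
python /file_path/python_file.py &
--//--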
Take a look at AWS Systems Manager Run Command as a way to run arbitrary scripts on EC2. You can do that from your boto3 client, but you'll probably have to use a boto3 waiter to wait for the EC2 instance to restart.
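A minimal sketch of that flow in a Lambda handler (assuming the instance already has the SSM Agent running and an instance profile that allows Systems Manager; the instance ID and script path are placeholders):

import boto3

INSTANCE_ID = '<instance-id>'  # placeholder

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    ssm = boto3.client('ssm')

    # Start the instance and wait until it passes status checks. This can take
    # a few minutes, so give the Lambda a generous timeout; the SSM Agent may
    # also need a short while after boot before it accepts commands.
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter('instance_status_ok').wait(InstanceIds=[INSTANCE_ID])

    # Run the script on the instance via Run Command.
    ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['python /home/ec2-user/test.py']},
    )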
Note that if you're only starting the EC2 instance and running this script infrequently, it might be more cost-effective to simply launch a new EC2 instance, run your script, then terminate the instance. While an EC2 instance is stopped, you are still charged for the EBS storage associated with it and for any unused Elastic IP addresses.
Use cron:
$ sudo apt-get install cron
$ crontab -e
In the editor (type "i" to insert text if it opens in vim), add the line below, then type ":wq" to save and exit:
@reboot python /path_directory/python_test.py &
To find the .py file, run:
sudo find / -type f -iname "python_test.py"
Then add that path to the cron entry.
If all you need is to run some Python code and the main limitation is running time, it might be a better idea to use Lambda to listen for the S3 event and Fargate to execute the task. The main advantage is that you don't have to worry about starting/stopping your instance, and scaling out is easier.
There is a nice write-up of a working use case on the Serverless blog.
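A rough sketch of the Lambda side of that approach (the cluster, task definition, subnet, and container names below are all hypothetical; substitute your own):

import boto3

ecs = boto3.client('ecs')

def lambda_handler(event, context):
    # Pull the uploaded object's bucket and key out of the S3 event notification.
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    # Launch a one-off Fargate task to process the file.
    ecs.run_task(
        cluster='my-cluster',                 # hypothetical
        launchType='FARGATE',
        taskDefinition='my-processing-task',  # hypothetical
        networkConfiguration={'awsvpcConfiguration': {
            'subnets': ['subnet-12345678'],   # hypothetical
            'assignPublicIp': 'ENABLED',
        }},
        overrides={'containerOverrides': [{
            'name': 'processor',              # hypothetical container name
            'environment': [
                {'name': 'S3_BUCKET', 'value': bucket},
                {'name': 'S3_KEY', 'value': key},
            ],
        }]},
    )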
Related
I have 3 systemd services I created that run a Python script and pass a key argument, because the scripts download datasets that require that key. When I enable the service it works just fine, but if I reboot the EC2 instance it's on, it fails on launch with:
(code=exited, status=255)
I want it to run on launch because I have a workflow that uses EventBridge to trigger a Lambda function that turns on the instance at a specific time to download the dataset to S3 and begin the ETL process. Why would the service run as intended with sudo systemctl start service-name.service but fail on startup?
Hmmm, this will depend on how you're running the EC2 instance. Basically, there are two ways.
Via Cloud-init / EC2 userdata
You can specify if the script will be executed on the first boot (when the EC2 is created), every (re)boot, etc.
You can check the official docs for that:
Cloud-init: Event and Updates
AWS - Run commands on your Linux instance at launch
How can I utilize user data to automatically run a script with every restart of my Amazon EC2 Linux instance?
Via Linux systemd
You can use the example below (just remove the comments, and adjust or add the Requires= and After= lines if needed).
## doc here: https://man7.org/linux/man-pages/man5/systemd.unit.5.html#[UNIT]_SECTION_OPTIONS
[Unit]
Description=Startup script
# Requires=my-other-service.target
# After=network.target my-other-service.target
## doc here: https://man7.org/linux/man-pages/man5/systemd.service.5.html#OPTIONS
[Service]
Type=oneshot
ExecStart=/opt/myscripts/startup.sh
## doc here: https://man7.org/linux/man-pages/man5/systemd.unit.5.html#[INSTALL]_SECTION_OPTIONS
[Install]
WantedBy=multi-user.target
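Once the unit file is saved (e.g. as /etc/systemd/system/startup.service, the name being up to you), enable it with sudo systemctl enable startup.service so systemd runs it on every boot.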
I want to schedule a shell command within a VM instance to run on a weekly basis.
How it would work:
Once a week, Cloud Scheduler invokes a Pub/Sub trigger
Pub/Sub then pushes a message to the VM instance's HTTP endpoint
This in turn causes the shell command to run
I have no problem with steps one and two but I am struggling with how to get the shell command to execute.
One thing I have considered is installing Python on the VM instance and then creating a Python script that runs an os.system command.
import os
cmd = "some command"
os.system(cmd)
But again, my problem is: how do I get the HTTP POST request to cause the Python script to run?
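For what it's worth, one way I could imagine wiring up step 3 is a tiny HTTP server on the VM that runs the command when the Pub/Sub push subscription POSTs to it. A sketch assuming Flask (the route and port are arbitrary):

import subprocess
from flask import Flask

app = Flask(__name__)

@app.route('/run', methods=['POST'])
def run_command():
    # The Pub/Sub push message is only used as a trigger here.
    subprocess.run("some command", shell=True, check=True)
    return ('', 204)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)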
I would do it differently:
Cloud Scheduler calls a Cloud Function (or Cloud Run)
The Cloud Function starts an instance with a startup script that runs the batch process and then shuts the instance down.
If you need to pass arguments to the script, you can do it using instance metadata when you create the instance (or while it is already running).
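A minimal sketch of such a Cloud Function (assuming the google-api-python-client package; the project, zone, and instance name are placeholders):

import googleapiclient.discovery

def start_batch_vm(event, context):
    # Triggered by Cloud Scheduler via Pub/Sub; starts the batch VM, whose
    # startup script runs the job and shuts the instance down when done.
    compute = googleapiclient.discovery.build('compute', 'v1')
    compute.instances().start(
        project='<project-id>',
        zone='<zone>',
        instance='<instance-name>',
    ).execute()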
I currently have an EC2 instance set up that runs a script when booted up (I do this by calling the script in the user data field for the EC2 instance). Then I just send a command via Lambda to start the EC2 instance, which causes the script to run. I would now like to run multiple scripts: ideally I'd have something like a Lambda that starts the EC2 instance, then sends a notification to a second Lambda when it is up and running to run various scripts on there before shutting it back down. How can I trigger a Python script on a running EC2 instance via Lambda?
Thanks
EDIT:
I believe I've found a quick solution. In the user data I point to a script like "startup.py".
In this script I can just import whatever series of scripts I want to execute. I just have to figure out the paths, as the user data script executes in a different directory from /home/ec2-user/.
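A sketch of what that startup.py could look like (assuming the scripts live under /home/ec2-user; user data runs as root from a different working directory, hence the explicit paths, and the module names are hypothetical):

import os
import sys

SCRIPT_DIR = '/home/ec2-user'   # assumption: where the scripts live
sys.path.insert(0, SCRIPT_DIR)  # make the scripts importable
os.chdir(SCRIPT_DIR)            # so their relative paths resolve

import script_one  # hypothetical script names; importing runs them
import script_two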
To run commands on EC2 instances from outside of them, you should consider using AWS Systems Manager Run Command. This allows you to run multiple, arbitrary commands across a number of instances that you choose (by instance ID, by resource group, or by tags). You can do this from the AWS console, CLI, or SDK.
Note that to use this, your instances need to be configured correctly: the SSM Agent must be running on them, and their instance profile must grant Systems Manager permissions (e.g. the AmazonSSMManagedInstanceCore managed policy).
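A short boto3 sketch of sending a command to a running instance and fetching its output (the instance ID and script path are placeholders):

import time
import boto3

ssm = boto3.client('ssm')

resp = ssm.send_command(
    InstanceIds=['<instance-id>'],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['python /home/ec2-user/startup.py']},
)
command_id = resp['Command']['CommandId']

# Wait briefly for the invocation to register and finish, then fetch the result
# (a real script should poll until the status is no longer InProgress).
time.sleep(5)
result = ssm.get_command_invocation(CommandId=command_id, InstanceId='<instance-id>')
print(result['Status'], result['StandardOutputContent'])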
See Scheduling SSH jobs and Trigger python script on ec2 instance.
These are the links that can help you.
I'm having some problems making a Python file run every time the AWS server boots.
I am trying to run a Python file to start a web server on an Amazon EC2 instance.
But I am limited in my ability to edit the systemd folder and other folders such as init.d.
Is there anything wrong?
Sorry, I don't really understand EC2's OS; it seems a lot of methods are not working on it.
What I usually do via ssh to start my server is:
python hello.py
Can anyone tell me how to run this file automatically every time the system reboots?
It depends on your Linux OS, but you are on the right track (init.d). This is exactly where you'd want to run arbitrary shell scripts on startup.
Here is a great HOWTO and explanation:
https://www.tldp.org/HOWTO/HighQuality-Apps-HOWTO/boot.html
and another Stack Overflow question specific to running a Python script:
Run Python script at startup in Ubuntu
If you want to share your Linux OS, I can be more specific.
EDIT: This may help; it looks like they have some sort of launch wizard:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
I am a newbie to EC2 and boto. I have a running EC2 instance and I want to execute a shell command like e.g. apt-get update through boto.
I searched a lot and found a solution using user_data in the run_instances command, but what if the instance is already launched?
I don't even know if it is possible. Any clue in this regard would be a great help.
The boto.manage.cmdshell module can be used to do this. To use it, you must have the paramiko package installed. A simple example of its use:
import boto.ec2
from boto.manage.cmdshell import sshclient_from_instance

# Connect to your region of choice
conn = boto.ec2.connect_to_region('us-west-2')

# Find the instance object related to my instanceId
instance = conn.get_all_instances(['i-12345678'])[0].instances[0]

# Create an SSH client for our instance
#   key_path is the path to the SSH private key associated with instance
#   user_name is the user to login as on the instance (e.g. ubuntu, ec2-user, etc.)
ssh_client = sshclient_from_instance(instance,
                                     '<path to SSH keyfile>',
                                     user_name='ec2-user')

# Run the command. Returns a tuple consisting of:
#   the integer status of the command,
#   a string containing the output of the command, and
#   a string containing the stderr output of the command
status, stdout, stderr = ssh_client.run('ls -al')
That was typed from memory but I think it's correct.
You could also check out Fabric (http://docs.fabfile.org/) which has similar functionality but also has much more sophisticated features and capabilities.
I think you can use Fabric for your requirements. Just check out the Fabric wrapper: you can execute commands in a remote server's shell through the Fabric library.
It is very easy to use, and you can integrate both boto and Fabric. Together they work brilliantly.
Plus, the same command can be executed on any number of nodes, which I believe matches your requirements.
Just check it out.
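As a rough illustration (Fabric 1.x style; the host, user, and key path are placeholders):

from fabric.api import env, sudo

env.host_string = 'ubuntu@<ec2-public-dns>'  # placeholder host
env.key_filename = '<path to SSH keyfile>'   # placeholder key

def update():
    # Runs on the remote EC2 instance over SSH.
    sudo('apt-get update')

# Run with: fab update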
Yes, you can do this with AWS Systems Manager. AWS Systems Manager Run Command allows you to remotely and securely run a set of commands on EC2 instances as well as on-premises servers. Below are the high-level steps to achieve this.
Attach an instance IAM role:
The EC2 instance must have an IAM role with a policy granting Systems Manager permissions (e.g. the AmazonSSMManagedInstanceCore managed policy). This role enables the instance to communicate with the Systems Manager API.
Install the SSM Agent:
The EC2 instance must have the SSM Agent installed on it. The SSM Agent processes the Run Command requests and configures the instance as per the command.
Execute the command:
Example usage via the AWS CLI:
Execute the following command to retrieve the services running on the instance. Replace Instance-ID with your EC2 instance ID.
aws ssm send-command --document-name "AWS-RunShellScript" --comment "listing services" --instance-ids "Instance-ID" --parameters commands="service --status-all" --region us-west-2 --output text
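To check the command's result afterwards, you can list its invocation using the CommandId returned by send-command:
aws ssm list-command-invocations --command-id "<command-id>" --details --region us-west-2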
More detailed information: https://www.justdocloud.com/2018/04/01/run-commands-remotely-ec2-instances/