I am a newbie to EC2 and boto. I have a running EC2 instance and I want to execute a shell command on it (e.g. apt-get update) through boto.
I searched a lot and found a solution using user_data in the run_instances command, but what if the instance is already launched?
I don't even know if it is possible. Any pointers would be a great help.
The boto.manage.cmdshell module can be used to do this. To use it, you must have the paramiko package installed. A simple example of its use:
import boto.ec2
from boto.manage.cmdshell import sshclient_from_instance
# Connect to your region of choice
conn = boto.ec2.connect_to_region('us-west-2')
# Find the instance object related to my instanceId
instance = conn.get_all_instances(['i-12345678'])[0].instances[0]
# Create an SSH client for our instance
# key_path is the path to the SSH private key associated with instance
# user_name is the user to login as on the instance (e.g. ubuntu, ec2-user, etc.)
ssh_client = sshclient_from_instance(instance,
                                     '<path to SSH keyfile>',
                                     user_name='ec2-user')
# Run the command. Returns a tuple consisting of:
# The integer status of the command
# A string containing the output of the command
# A string containing the stderr output of the command
status, stdout, stderr = ssh_client.run('ls -al')
That was typed from memory but I think it's correct.
You could also check out Fabric (http://docs.fabfile.org/), which offers similar functionality along with much more sophisticated features and capabilities.
I think you can use Fabric for your requirements. Just check out the Fabric wrapper; you can execute commands in a remote server's shell through the Fabric library.
It is very easy to use, and you can integrate both boto and Fabric; together they work brilliantly.
Plus, the same command can be executed on any number of nodes, which I believe could be part of your requirements.
Just check it out.
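For example, a minimal sketch assuming Fabric 2.x (the host, user, and key path below are placeholders):
from fabric import Connection, SerialGroup

# Open an SSH connection to the instance's public DNS name
conn = Connection(
    host='ec2-xx-xx-xx-xx.compute-1.amazonaws.com',
    user='ubuntu',
    connect_kwargs={'key_filename': '/path/to/key.pem'},
)
result = conn.run('sudo apt-get update', hide=True)
print(result.exited, result.stdout)

# The same command can be run across several hosts in one call
SerialGroup('host1', 'host2', user='ubuntu',
            connect_kwargs={'key_filename': '/path/to/key.pem'}).run('uptime')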
Yes, you can do this with AWS Systems Manager. AWS Systems Manager Run Command allows you to remotely and securely run a set of commands on EC2 instances as well as on-premises servers. Below are the high-level steps to achieve this.
Attach an instance IAM role:
The EC2 instance must have an IAM role with the AmazonSSMFullAccess policy. This role enables the instance to communicate with the Systems Manager API.
Install the SSM Agent:
The EC2 instance must have the SSM Agent installed on it. The SSM Agent processes the run command requests and configures the instance as per the command.
Execute the command:
Example usage via the AWS CLI:
Execute the following command to list the services running on the instance. Replace Instance-ID with the EC2 instance id.
aws ssm send-command --document-name "AWS-RunShellScript" --comment "listing services" --instance-ids "Instance-ID" --parameters commands="service --status-all" --region us-west-2 --output text
More detailed information: https://www.justdocloud.com/2018/04/01/run-commands-remotely-ec2-instances/
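Since the rest of the thread is Python-based, here is roughly the boto3 equivalent of the CLI call above (a sketch; the instance id is a placeholder):
import boto3

ssm = boto3.client('ssm', region_name='us-west-2')

response = ssm.send_command(
    InstanceIds=['i-12345678'],
    DocumentName='AWS-RunShellScript',
    Comment='listing services',
    Parameters={'commands': ['service --status-all']},
)
command_id = response['Command']['CommandId']

# Fetch the result for this instance (the invocation may take a few
# seconds to become available after send_command returns)
invocation = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-12345678',
)
print(invocation['Status'])
print(invocation['StandardOutputContent'])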
Related
I have 3 systemd services I created that run a Python script and pass a key argument, because the scripts download datasets that require that key. When I enable the service it works just fine, but if I reboot the EC2 instance it's on, it fails on launch with:
(code=exited, status=255)
I want it to run on launch because I have a workflow that uses EventBridge to trigger a Lambda function that turns on the instance at a specific time to download the dataset to S3 and begin the ETL process. Why would the service run as intended with $sudo systemctl start service-name.service but fail on startup?
Hmmm, this will depend on how you're running things on the EC2 instance. Basically, there are two ways.
Via Cloud-init / EC2 userdata
You can specify if the script will be executed on the first boot (when the EC2 is created), every (re)boot, etc.
You can check the official docs for that:
Cloud-init: Event and Updates
AWS - Run commands on your Linux instance at launch
How can I utilize user data to automatically run a script with every restart of my Amazon EC2 Linux instance?
Via Linux systemd
You can use the example below (just remove the comments, and add or remove the Requires and After lines as needed).
## doc here: https://man7.org/linux/man-pages/man5/systemd.unit.5.html#[UNIT]_SECTION_OPTIONS
[Unit]
Description=Startup script
# Requires=my-other-service.target
# After=network.target my-other-service.target
## doc here: https://man7.org/linux/man-pages/man5/systemd.service.5.html#OPTIONS
[Service]
Type=oneshot
ExecStart=/opt/myscripts/startup.sh
## doc here: https://man7.org/linux/man-pages/man5/systemd.unit.5.html#[INSTALL]_SECTION_OPTIONS
[Install]
WantedBy=multi-user.target
I am trying to use azcopy from Python. I have already used it from the CLI and it works!
I have successfully executed the following commands:
For upload:
set AZCOPY_SPA_CLIENT_SECRET=<my client secret>
azcopy login --service-principal --application-id=<removed> --tenant-id=<removed>
azcopy copy "D:\azure\content" "https://dummyvalue.blob.core.windows.net/container1/result4" --overwrite=prompt --follow-symlinks --recursive --from-to=LocalBlob --blob-type=Detect
Similarly, for download:
azcopy copy "https://dummyvalue.blob.core.windows.net/container1/result4" "D:\azure\azcopy_windows_amd64_10.4.3\temp\result2" --recursive
Now I want to automate these commands using Python. I know that azcopy can also be used with SAS keys, but that is out of scope for my use case.
First attempt:
from subprocess import call
call(["azcopy", "login", "--service-principal", "--application-id=<removed>", "--tenant-id=<removed>"])
Second attempt:
import os
os.system("azcopy login --service-principal --application-id=<removed> --tenant-id=<removed>")
I have already set AZCOPY_SPA_CLIENT_SECRET in my environment.
I am using Python 3 on Windows.
Every time I get this error:
Failed to perform login command: service principal auth requires an
application ID, and client secret/certificate
NOTE: If your credential was created in the last 5 minutes, please
wait a few minutes and try again.
I don't want to use an Azure VM to do this job.
Could anyone please help me fix this problem?
This is because the set command does not set a permanent environment variable; it only takes effect in the current Windows cmd prompt.
You should set the environment variable manually via the UI, or try the setx command.
I did a test using your code and manually set the AZCOPY_SPA_CLIENT_SECRET environment variable via the UI; the code then ran without issues (it may take a few minutes to take effect).
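Alternatively, instead of setting a machine-wide variable, you can pass the secret to the azcopy child process from Python itself; a minimal sketch (the secret and IDs are placeholders, load the real secret from a secure store):
import os
import subprocess

env = os.environ.copy()
env['AZCOPY_SPA_CLIENT_SECRET'] = '<my client secret>'  # placeholder

# The child process inherits this environment, so azcopy can see the secret
subprocess.run(
    ['azcopy', 'login', '--service-principal',
     '--application-id=<removed>', '--tenant-id=<removed>'],
    env=env,
    check=True,
)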
To give you some background, I have a bash script being launched from Asterisk via a Python AGI that runs against Amazon Polly and generates a .sln file. I have this working on a CentOS server but am attempting to migrate it to a Debian server.
This is the line of code that is giving me problems:
aws polly synthesize-speech --output-format pcm --debug --region us-east-2 --profile asterisk --voice-id $voice --text "$1" --sample-rate 8000 $filename.sln >/dev/null
I keep getting this error:
ProfileNotFound: The config profile (foo) could not be found
This is an example of my /root/.aws/config
[default]
region = us-east-2
output = json
[profile asterisk]
region = us-east-2
output = json
[asterisk]
region = us-east-2
output = json
The /root/.aws/credentials looks similar but with the keys in them.
I've even tried storing all of this in environment variables and using the default profile to get past this, but then it throws "unable to locate credentials" or "must define region" (I got past the latter by defining the region inline). It's almost like Asterisk is running this in some isolated session that the config/credentials files can't reach. From my research, and the way I set it up, it is currently running as root, so that should not be an issue.
Any help is much appreciated, thanks!
Asterisk should be run under the asterisk user for security.
Likely on your previous install it was running under root, so everything worked.
Please ensure you have set up AWS Polly (and its config/credentials) for the asterisk user, or create a sudo entry and use sudo.
If you use the System() command, it also has no shell (bash), so you have to start it via a bash script and set up PATH and the other required variables yourself.
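One way to work around the missing per-user AWS configuration is to point the AWS CLI at explicit config/credentials files from the Python AGI before it launches the bash wrapper; a rough sketch (all paths here are assumptions, adjust them to your layout):
import os
import subprocess

env = os.environ.copy()
# Tell the AWS CLI exactly where its config and credentials live, since the
# asterisk user has no ~/.aws of its own (example paths; make them readable
# by the asterisk user)
env['AWS_CONFIG_FILE'] = '/etc/asterisk/aws/config'
env['AWS_SHARED_CREDENTIALS_FILE'] = '/etc/asterisk/aws/credentials'
env['PATH'] = env.get('PATH', '/usr/bin:/bin') + ':/usr/local/bin'

# polly_tts.sh is a stand-in for your existing bash wrapper
subprocess.run(['/opt/scripts/polly_tts.sh', 'Hello world'], env=env, check=True)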
I currently have an EC2 instance set up that runs a script when booted up (I do this by calling the script in the user data field for the EC2 instance). Then I just send a command via Lambda to start the EC2 instance, and that causes the script to run. I would now like to run multiple scripts; ideally I'd have something like a Lambda that starts the EC2 instance, then sends a notification to a second Lambda when it is up and running to run various scripts on there before shutting it back down. How can I trigger a Python script on a running EC2 instance via Lambda?
Thanks
EDIT:
I believe I've found a quick solution: in the user data I point to a script like "startup.py".
In this script I can just import whatever series of scripts I want to execute. I just have to figure out the paths, as the user data script executes in a different directory from /home/ec2-user/.
To run commands on EC2 instances from outside of them, you should consider using AWS Systems Manager Run Command. This allows you to run multiple, arbitrary commands across a number of instances that you choose (by instance Id, by resource group, or by tags). You can do this from the AWS console, CLI, or SDK.
Note that to use this, your instances need to be configured correctly.
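If you go this route, you can verify from Python that an instance has actually registered with Systems Manager before sending commands to it; a small boto3 sketch:
import boto3

ssm = boto3.client('ssm')  # uses your default region/credentials

# Instances with a working SSM Agent and instance profile show up here;
# your instance id should appear (with PingStatus 'Online') before
# Run Command will work against it.
info = ssm.describe_instance_information()
for item in info['InstanceInformationList']:
    print(item['InstanceId'], item['PingStatus'])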
Using scheduled SSH jobs, you can trigger the Python script on the EC2 instance.
This is the link that can help you.
I have a .py file that's on an EC2 instance. I'm trying to have the .py file run when an event (a file uploaded to an S3 bucket) occurs.
I currently have an event notification that is sent to an AWS Lambda function that starts the EC2 instance. Here is that code from the AWS console:
import boto3

id = [ec2-ID]

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    ec2.start_instances(InstanceIds=id)
I can manually go into PuTTY and type "python test.py" to run my program and it works, but I want to get rid of the "having to do it manually" part and have it just run itself whenever there is an event.
I am stumped as to how to progress.
I thought that by "starting" my EC2 instance it would run that .py file and get to work processing what's in the S3 bucket.
There are no error messages... it just doesn't do anything at all. It's supposed to work like this: once a file is uploaded to the S3 bucket, a notification is sent to the Lambda, which starts the EC2 instance so it can process the file with the .py file that is on it.
Kind regards
This is a nice trick you can try - https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
This overrides the fact that user data is normally executed only on first instance creation; the method allows you to execute user data scripts on every boot. Just update the bash from:
/bin/echo "Hello World" >> /tmp/testfile.txt
to:
python /file_path/python_file.py &
Take a look at AWS Systems Manager Run Command as a way to run arbitrary scripts on EC2. You can do that from your boto3 client, but you'll probably have to use a boto3 waiter to wait for the EC2 instance to restart; see the sketch after this answer.
Note that if you're only starting the EC2 instance and running this script infrequently then it might be more cost-effective to simply launch a new EC2 instance, run your script, then terminate EC2. While the EC2 instance is stopped, you are charged for EBS storage associated with the instance and any unused Elastic IP addresses.
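A rough boto3 sketch of the start-then-wait part described above (the instance id is a placeholder, and the SSM Agent may need a little extra time after the instance reports as running):
import boto3

ec2 = boto3.client('ec2')
instance_id = 'i-12345678'  # placeholder

ec2.start_instances(InstanceIds=[instance_id])

# Block until the instance is in the 'running' state before handing off
# to Systems Manager Run Command (see the earlier answer)
waiter = ec2.get_waiter('instance_running')
waiter.wait(InstanceIds=[instance_id])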
Use Cron:
$ sudo apt-get install cron
$ crontab -e
# option 3 vim
#Type "i" to insert text below
@reboot python /path_directory/python_test.py &
#Type ":wq" to save and exit
To find the .py file, run:
sudo find / -type f -iname "python_test.py"
Then add the path to Cron.
If all you need is to run some Python code and the main limitation is running time, it might be a better idea to use Lambda to listen to the S3 event and Fargate to execute the task. The main advantage is that you don't have to worry about starting/stopping your instance, and scaling out would be easier.
There is a nice write-up of a working use case on the Serverless blog.
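As a rough illustration of that pattern, the S3-triggered Lambda could start a one-off Fargate task along these lines (the cluster, task definition, and subnet names are placeholders):
import boto3

ecs = boto3.client('ecs')

def lambda_handler(event, context):
    # Launch a one-shot Fargate task for each S3 event received
    ecs.run_task(
        cluster='my-cluster',                    # placeholder
        launchType='FARGATE',
        taskDefinition='my-etl-task',            # placeholder
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': ['subnet-12345678'],  # placeholder
                'assignPublicIp': 'ENABLED',
            }
        },
    )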