Use Python to shut down the instance a script runs on

I am running machine learning scripts that take a long time to finish. I want to run them on AWS on a faster processor and stop the instance when it finishes.
Can boto be used within the running script to stop its own instance? Is there a simpler way?

If your EC2 instance is running Linux, you can simply issue a halt or shutdown command to stop your EC2 instance. This lets you shut down the instance without requiring any IAM permissions.
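For example, a minimal sketch of doing this from within the script itself, assuming it runs on a Linux instance with root (or sudo) rights:

import subprocess

# ... your long-running training code runs here ...

# Halt the machine once the work is done. On an EBS-backed instance
# whose shutdown behavior is "stop" (the default), this stops rather
# than terminates the instance.
subprocess.run(["sudo", "shutdown", "-h", "now"], check=True)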

See Creating a Connection for how to create a connection. I have never tried this one before, so use caution. Also make sure the instance is EBS-backed; otherwise the instance will be terminated when you stop it.
import boto.ec2
import boto.utils
conn = boto.ec2.connect_to_region("us-east-1") # or your region
# Get the current instance's id
my_id = boto.utils.get_instance_metadata()['instance-id']
conn.stop_instances(instance_ids=[my_id])

Related

Running a background task continuously in Django

I am running a server in Django which receives values continuously. The function contains a forever loop, so once I call it, it never returns.
My problem: I want to keep taking values from the server continuously and then use them afterwards wherever I want.
I tried threading. My idea was to create a background task that keeps feeding the database, so that when I need the values I can read them from there. But I don't know how to do this.
from pythonosc import osc_server, dispatcher

ip = "192.168.1.15"
port = 5005
a = []  # collected samples; this was left undefined in the original snippet

def eeg_handler(unused_addr, args, ch1, ch2, ch3, ch4, ch5):
    a.append(ch1)
    print(a)

dispatcher = dispatcher.Dispatcher()
dispatcher.map("/muse/eeg", eeg_handler, "EEG")

server = osc_server.ThreadingOSCUDPServer((ip, port), dispatcher)
# print("Serving on {}".format(server.server_address))
server.serve_forever()
You can create a management command.
With a management command you can access your database in the same way you access it through Django.
You can then schedule this command from cron, or let it run forever: it will not block your application.
There is also a separate guide to writing management commands.
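A minimal sketch of such a command, assuming it lives at yourapp/management/commands/feed_database.py and that a Reading model exists (both names, and the read_value helper, are hypothetical):

from django.core.management.base import BaseCommand
from yourapp.models import Reading  # hypothetical model

class Command(BaseCommand):
    help = "Continuously read values and store them in the database"

    def handle(self, *args, **options):
        while True:  # runs forever, outside the web process
            value = read_value()  # hypothetical: however you obtain a sample
            Reading.objects.create(value=value)

You would start it with python manage.py feed_database (from cron or a process supervisor) and read the stored rows from your views as usual.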
You can use django-background-tasks, a database-backed work queue for Django. You can follow their installation instructions from here.
A sample background task for your case would be:
from background_task import background

@background(schedule=60)
def feed_database(some_parameter):
    # feed your database here
    # you can also use the parameter passed to this function
    pass
All you need to do is call feed_database from regular code to activate your background task; this creates a Task object, stores it in the database, and runs the function after 60 seconds.
In your case you want to run this function infinitely, so you can do something like this:
feed_database(some_parameter, repeat=60, repeat_until=None)
This will run your function every 60 seconds, indefinitely.
They also provide a Django management command for running your queued tasks (if you don't want to start your task from your code): python manage.py process_tasks.

How to update EC2 instance state with boto3 resource

I'm writing a python function with boto3 that starts an EC2 instance and then needs to wait until the instance is running. I understand how this works with a client, but I'd like to do it with a resource.
I tried using a for loop checking instance.state, but the state never updates. So I guess I'm looking for some sort of refresh method.
I see there is a wait_until_running() waiter, but it is locked to a 15-second delay. I want to poll more often than that.
Apparently the WaiterConfig setting also works for a resource waiter, even though it is documented only for the client:
instance.wait_until_running(WaiterConfig={'Delay': 2})
After the waiter returns, you still have to call instance.reload() to refresh the cached state.
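Putting it together, a short sketch of the whole flow with a boto3 resource (the region and instance id are placeholders; per the answer above, WaiterConfig is passed through even though it is only documented for the client waiter):

import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instance = ec2.Instance("i-0123456789abcdef0")  # placeholder id

instance.start()
# Poll every 2 seconds instead of the default 15
instance.wait_until_running(WaiterConfig={"Delay": 2, "MaxAttempts": 60})
instance.reload()  # refresh the cached attributes, including state
print(instance.state["Name"])  # 'running'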

Google Compute Engine - Restart at end of start up script

Is there a good way to automatically restart an instance if it reaches the end of a start up script?
I have a Python script that I want to run continuously on Compute Engine; it checks the pub/sub from a GAE instance that's running a CRON job. I haven't figured out a good way to catch every possible error, and there are many edge cases that are hard to test (e.g. the instance running out of memory). It would be better if I could just restart the instance every time the script finishes (because it should never finish). The auto-restart option won't work because the instance doesn't shut down; it just stops running the script.
A simple shutdown -r now may be enough.
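For instance, a small sketch of a wrapper that reboots whenever the worker exits, for any reason (the worker path is a placeholder, and the wrapper must run as root):

import subprocess

try:
    # Run the worker; under normal operation this should never return
    subprocess.run(["python3", "/opt/worker/main.py"])  # placeholder path
finally:
    # Whatever happened, reboot so the startup script runs again
    subprocess.run(["shutdown", "-r", "now"])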
Or if you prefer gcloud:
gcloud compute instances reset $(hostname)
Mind that reset is a real reset, without a proper OS shutdown.
You might also want to check the documentation on resetting or restarting an instance before performing either operation.

Restart python script if not running/stopped/error with simple cron job

Summary: I have a Python script that collects tweets using the Twitter API, and a PostgreSQL database in the backend that stores all the streamed tweets. I have custom code that works around the rate-limit issue, and I have kept the script running 24/7 for months.
Issue: Sometimes the stream breaks and the script sleeps for the given number of seconds, but that does not help. I do not want to check on it manually.
def on_error(self, status):  # tweepy callback
    self.mailMeIfError(['me <me@localhost>'],
                       'listen.py <root@localhost>',
                       'Error occurred in on_error method',
                       str(status))
    time.sleep(300)
    return True
Assume mailMeIfError is a method which takes care of sending me a mail.
I want a simple cron script that always checks the process and restarts the Python script if it is not running, errors out, or breaks. I have gone through some Stack Overflow answers that use the process ID, but in my case the process ID still exists when things go wrong, because the script just sleeps on error.
Thanks in advance.
Using the process ID is much easier and safer. Try using a watchdog.
This can all be done in your one script. Cron needs to be configured to start your script periodically, say every minute. The start of your script then just determines whether it is the only copy of itself running on the machine: if it spots another copy running, it silently terminates; otherwise it continues to run.
This behaviour is called a singleton pattern. There are a number of ways to achieve it; see, for example, Python: single instance of program. A minimal sketch follows.
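Here is one way to do the singleton check, using an advisory file lock (Linux-specific; the lock path is an arbitrary choice):

import fcntl
import sys

# Take an exclusive, non-blocking lock; if another copy of the script
# already holds it, flock() raises and we exit silently.
lock_file = open("/tmp/listen.py.lock", "w")
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(0)

# ... rest of the streaming script; the lock is released automatically
# when the process exits or is killed

Run this from cron every minute: if the previous copy has crashed or been killed, the lock is free and the new copy takes over.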

EC2 run_instances: Many instances, slightly different startup scripts?

I'm doing an embarrassingly parallel operation on Amazon Web Services, in which I'm spinning up a large number of EC2 instances that all have slightly different scripts to run on startup. Currently, I'm starting up each instance individually within a for loop like so (I'm using the Python boto package to talk to AWS):
for parameters in parameter_list:
    # Create this instance's startup script
    user_data = startup_script % parameters
    # Run this instance
    reservation = ec2.run_instances(ami,
                                    key_name=key_name,
                                    security_groups=group_name,
                                    instance_type=instance_type,
                                    user_data=user_data)
However, this takes too long. ec2.run_instances allows one to start many instances at once using the max_count keyword. I would like to create many instances simultaneously, passing each its own unique startup script (user_data). Is there any way to do this? One cannot simply pass a list of scripts to user_data.
One option would be to pass the same startup script to every instance, but have the script reference another piece of data associated with that instance. EC2's tag system could work, but I don't know of a way to assign tags in a similarly parallel fashion. Is there any kind of instance-specific data I can assign to a set of instances in parallel?
AFAIK, there is no simple solution. How about using the Simple Queue Service (SQS)?
1. Add the start-up scripts (aka user-data) to SQS.
2. Set every instance's user-data to: read a start-up script from SQS and run it.
If a script is larger than 256 KB, you cannot add it to SQS directly, so try this procedure instead:
1. Upload the start-up scripts to S3.
2. Add the S3 URL of each script to SQS.
3. Set every instance's user-data to: read a URL from SQS, download the script from S3, and run it.
Sorry, it's rather complicated. Hope this helps. A sketch of the consumer side follows.
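Here is a rough sketch of that consumer side using boto3, assuming a queue named startup-scripts already exists and each message body is a shell script (the queue name and region are placeholders):

import subprocess
import boto3

sqs = boto3.resource("sqs", region_name="us-east-1")
queue = sqs.get_queue_by_name(QueueName="startup-scripts")

messages = queue.receive_messages(MaxNumberOfMessages=1, WaitTimeSeconds=20)
if messages:
    script = messages[0].body
    messages[0].delete()  # so no other instance picks up the same script
    subprocess.run(["bash", "-c", script])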
Simple. Fork just before you initialize each node.
import os
import logging

newPid = os.fork()
if newPid == 0:
    # Child process: create and configure this instance
    is_master = False
    # Create the instance
    # ...blah blah blah...
else:
    # Parent process: log it and move on to fork the next one
    logging.info('Launched host %s ...' % hostname)
