I am trying to launch instances on an OpenStack setup that has multiple networks configured, using python-boto.
But I get the following error:
EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0"?>
<Response><Errors><Error><Code>NetworkAmbiguous</Code><Message>Multiple possible networks found, use a Network ID to be more specific.</Message></Error></Errors><RequestID>req-28b5a4e8-3838-4111-95db-337c5048716d</RequestID></Response>
My code looks like this:
from boto import ec2
ostack = ec2.connection.EC2Connection(
    ec2_access_key, ec2_secret_key, is_secure=False, port=8773, region='nova',
    path='/services/Cloud'
)
ostack.run_instances('ami-xxxxx', key_name='BotoTest')
The above works fine when only a single network is configured in OpenStack.
Note: run_instances doesn't have a keyword argument for a network ID.
Where did I make a mistake, and how can I fix it? Or is it a bug in python-boto?
Thanks in advance.
I believe this isn't a bug in boto, which was built to communicate with the AWS API. While most of the EC2 AWS functionality works well against the OpenStack EC2 API, some features are not implemented and are answered with an HTTP error 500 or 400.
AWS uses a VPC (Virtual Private Cloud) as the network and an Availability Zone as the subnet. Both have defaults, which are used if nothing else is specified when creating a new instance. In OpenStack, however, I can't see a way to mark a network and a subnet as the default.
In my attempts, neither private_ip_address nor subnet_id worked to specify a network/subnet in run_instances() when there is more than one in OpenStack.
Edit: if you only have one network/subnet, the following code works fine with boto at trystack.org:
import boto
conn = boto.connect_ec2_endpoint("http://8.21.28.222:8773/services/Cloud",aws_access_key_id='...',aws_secret_access_key='...')
new_instance = conn.run_instances("ami-00000020", key_name="trystack", security_groups=["default"], instance_type="m1.small")
Have you tried:
from boto import ec2
ostack = ec2.connection.EC2Connection(
    ec2_access_key, ec2_secret_key, is_secure=False, port=8773, region='nova',
    path='/services/Cloud', debug=1
)
then
ostack.run_instances('ami-xxxxx', subnet_id='your network id', key_name='BotoTest')
Amazon uses this for VPC networks? Are you running a VPC?
1) Instantiate an AWS Linux, micro instance using the AWS python API (include authentication to AWS)
2) Update the instance with tags: customer=ACME, environment=PROD
3) Assign a security group to the instance
To program in Python on AWS, you should use the boto3 library.
You will need to do the following:
supply credentials to the library (link)
create an EC2 client (link)
use the EC2 client to launch EC2 instances using run_instances (link)
You can specify both tags and security groups in the run_instances call. Additionally, the boto3 documentation provides some Amazon EC2 examples that will help.
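For illustration, a minimal boto3 sketch along these lines might look as follows (the AMI ID, key pair name, and security group ID are placeholders, not real values):
import boto3

# Assumes credentials are already configured (environment variables,
# ~/.aws/credentials, or an instance role)
ec2 = boto3.client('ec2', region_name='us-east-1')

response = ec2.run_instances(
    ImageId='ami-xxxxxxxx',            # hypothetical Amazon Linux AMI
    InstanceType='t2.micro',
    KeyName='my-key-pair',             # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=['sg-xxxxxxxx'],  # hypothetical security group
    TagSpecifications=[{
        'ResourceType': 'instance',
        'Tags': [
            {'Key': 'customer', 'Value': 'ACME'},
            {'Key': 'environment', 'Value': 'PROD'},
        ],
    }],
)
print(response['Instances'][0]['InstanceId'])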
You may want to take a look at this project:
https://github.com/nchammas/flintrock
It is a Hadoop and Apache Spark clustering project, but it can serve as inspiration.
It actually has many of the features you want, such as security groups and filtering by tag name. Just look around the code.
I have a problem with AWS Lambda functions which depend on DynamoDB and SQS to work properly. When I try to run the Lambda stack, the functions time out when trying to connect to the SQS service. The Lambda functions sit inside a VPC with the following setup:
A VPC with four subnets
Two subnets are public, routing their 0.0.0.0/16 traffic to an internet gateway
A MySQL server sits in a public subnet
The other two contain the lambdas and route their 0.0.0.0/16 traffic to a NAT which lives in one of the public subnets.
All route tables have a 10.0.0.0/16 to local rule (is this the problem, because Lambdas use private IPs inside a VPC?)
The main route table is the one with the NAT, but I explicitly associated the public subnets with the internet gateway route table
The lambdas and the MySQL server share a security group which allows inbound internal access (10.x/16) as well as unrestricted outbound traffic (0.0.0.0/16).
Traffic between the lambdas and the MySQL instance is no problem (except if I put the lambdas outside the VPC, then they can't access the server even if I open up all ports). Assume the code for the lambdas is also correct, as it worked before I tried to move them into a private subnet. Also, the lambda execution roles have been set accordingly (or do they need adjustments after moving them to a private subnet?).
Adding a DynamoDB endpoint solved the problems with the database, but there are no VPC endpoints available for some of the other services. Following some answers I found here, here, here and the announcements / tutorials here and here, I am pretty sure I followed all the recommended steps.
I would be very thankful and glad for any hints where to check next, as I have currently no idea what could be the problem here.
EDIT: The functions don't seem to have any internet access at all, since a toy example I checked also timed out:
import urllib.request

def lambda_handler(event, context):
    test = urllib.request.urlopen(url="http://www.google.de")
    return test.status
Of course, the problem was sitting in front of the monitor again. Instead of routing 0.0.0.0/0 (any traffic) to the internet gateway, I had only specified 0.0.0.0/16 (traffic from machines with a 0.0.x.x IP) to the gateway. Since no machines with such an IP exist, all traffic was effectively blocked from leaving the VPC.
@John Rotenstein: Thanks, though, for the hint about lambdash. It seems like a very helpful tool.
Your configuration sounds correct.
You should test the configuration to see whether you can access any public Internet sites, then test connecting to AWS.
You could either write a Lambda function that attempts such connections, or you could use lambdash, which effectively gives you a remote shell running on Lambda. That way, you can easily test connectivity from the command line, for example with curl.
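For example, a throwaway test function along these lines could be used (this is only a sketch; the test URL and timeouts are arbitrary choices, not part of the original setup):
import urllib.request
import boto3
from botocore.config import Config

def lambda_handler(event, context):
    results = {}
    # Check general internet access with a short timeout so the function
    # fails fast instead of running into the Lambda timeout
    try:
        results['internet'] = urllib.request.urlopen(
            "https://www.example.com", timeout=5).status
    except Exception as exc:
        results['internet'] = 'failed: {}'.format(exc)
    # Check access to the SQS API endpoint
    try:
        sqs = boto3.client('sqs', config=Config(
            connect_timeout=5, read_timeout=5, retries={'max_attempts': 1}))
        results['sqs'] = len(sqs.list_queues().get('QueueUrls', []))
    except Exception as exc:
        results['sqs'] = 'failed: {}'.format(exc)
    return results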
Problem: Given N instances launched as part of a VMSS, I would like my application code on each Azure instance to discover the IP addresses of its peer instances. How do I do this?
The overall intent is to cluster the instances so as to provide active-passive HA or to keep their configuration in sync.
It seems there is some support for REST-API-based querying: https://learn.microsoft.com/en-us/rest/api/virtualmachinescalesets/
I would like to know about other ways to do it, i.e. either the Python SDK or the instance metadata URL, etc.
The REST API you mentioned has a Python SDK, the azure-mgmt-compute client:
https://learn.microsoft.com/python/api/azure.mgmt.compute.compute.computemanagementclient
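As a rough sketch (the resource group and scale set names below are placeholders, the companion azure-mgmt-network client is used here to read the NIC IPs, and exact class names can vary between SDK versions):
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

# Placeholder credentials and subscription ID
credentials = ServicePrincipalCredentials(client_id='...', secret='...', tenant='...')
network_client = NetworkManagementClient(credentials, '<subscription-id>')

# List every NIC attached to the scale set and print its private IP
nics = network_client.network_interfaces.list_virtual_machine_scale_set_network_interfaces(
    'my-resource-group', 'my-vmss')  # hypothetical names
for nic in nics:
    for ip_config in nic.ip_configurations:
        print(ip_config.private_ip_address)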
One way to do this would be to use instance metadata. Right now instance metadata only shows information about the VM it's running on, e.g.
curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute?api-version=2017-03-01"
{"compute":
{"location":"westcentralus","name":"imdsvmss_0","offer":"UbuntuServer","osType":"Linux","platformFaultDomain":"0","platformUpdateDomain":"0",
"publisher":"Canonical","sku":"16.04-LTS","version":"16.04.201703300","vmId":"e850e4fa-0fcf-423b-9aed-6095228c0bfc","vmSize":"Standard_D1_V2"},
"network":{"interface":[{"ipv4":{"ipaddress":[{"ipaddress":"10.0.0.4","publicip":"52.161.25.104"}],"subnet":[{"address":"10.0.0.0","dnsservers":[],"prefix":"24"}]},
"ipv6":{"ipaddress":[]},"mac":"000D3AF8BECE"}]}}
You could do something like have each VM send the info to a listener on VM#0, or to an external service, or you could combine this with Azure Files and have each VM output to a common share. There's an Azure template proof of concept here which outputs information from each VM to an Azure File share: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-azure-files-linux - every VM has a mountpoint which contains info written by every VM.
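As a sketch of the common-share idea (not taken from that template; the mount path is a placeholder), each VM could query the metadata service and append its private IP to a shared file:
import json
import socket
import urllib.request

# Query the instance metadata service for this VM's network details
req = urllib.request.Request(
    "http://169.254.169.254/metadata/instance?api-version=2017-03-01",
    headers={"Metadata": "true"})
metadata = json.loads(urllib.request.urlopen(req).read().decode())

private_ip = metadata["network"]["interface"][0]["ipv4"]["ipaddress"][0]["ipaddress"]

# Append this VM's hostname and private IP to a file on the mounted share
with open("/mnt/azurefiles/peers.txt", "a") as f:
    f.write("{} {}\n".format(socket.gethostname(), private_ip))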
I am using Django-Channels to try to get real-time features such as chat/messaging, notifications, etc. Right now, I have gotten everything to work fine on my laptop using the settings described in the docs here: http://channels.readthedocs.io/en/latest/. I use a local Redis server for testing purposes.
However, when I deploy to my Amazon EC2 Elastic Beanstalk server (using an AWS ElastiCache Redis instance), the WebSocket functionality fails. From what I've read, I think this is because the ELB's HTTPS listener does not support WebSockets, so I need to switch to Secure TCP.
I tried doing that with:
https://blog.jverkamp.com/2015/07/20/configuring-websockets-behind-an-aws-elb/
and
https://medium.com/@Philmod/load-balancing-websockets-on-ec2-1da94584a5e9#.ak2jh5h0q
but to no avail.
Has anyone had any success implementing WebSockets with CentOS/Apache and Django on AWS EB? The Django-Channels package is fairly new, so I was wondering if anyone has experienced and/or overcome this hurdle.
Thanks in advance
AWS has launched the new Application Load Balancer, which supports WebSockets. Change your ELB to an Application Load Balancer and that will fix your issue.
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
As described here, it's possible to run Django Channels on Elastic Beanstalk using an Application Load Balancer.
In a simplified form, it's basically:
Create an ALB
Add 2 target groups: one that points to port 80, and one that points to the Daphne port, e.g. 8080.
Create 2 path rules. Let the default rule point to target group 1 (port 80), and set the second to use a relative path, e.g. /ws/, and point it to target group 2.
Add Daphne and workers to supervisord (or another init system)
Done! Access Daphne/WebSockets through the relative URL ws://example.com/ws/.
I suppose an ALB is the only way. The reason is that with the SSL protocol listener on the classic LB, session stickiness and the X-Forwarded headers are not forwarded, which results in a proxy server redirect loop. The doc is here:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
I'll update the answer if I find out a way with the existing CLB.
I've searched on this site and Google but have not been able to get an answer for this.
I have code running on an EC2 instance which creates and manages EMR clusters using boto.
I can use this framework to get the flow_id (or cluster_id, not sure which is the right name for it); it starts with "j-" and has a fixed number of characters identifying the cluster.
Using the framework I can establish an EMR or EC2 connection, but for the life of me I cannot do the following using boto:
aws emr --list-clusters --cluster-id=j-ASDFGHJKL | json '["instances"].[0].["privateipaddress"]
**The above is a little fudged; I cannot remember the JSON format, what the json command is, or what arguments it wants, but it is a CLI approach nonetheless.
I've pprint.pprint()'ed and inspected the connections with inspect.getmembers(), getting the connection for the specific cluster_id, but I have yet to see this field/variable/attribute, with or without method calls.
I've been up and down Amazon and boto; how do they do it, like here?
In the
def test_list_instances(self): #line 317
...
self.assertEqual(response.instances[0].privateipaddress, '10.0.0.60')
...
P.S. I've tried this, but Python complains that the "instances" property is not iterable or index-accessible (I forget the "var[0]" naming), plus some other things I tried, including inspecting.
BTW, I can access the publicDNSaddress from here, and many other things, just not the private IP...
Please tell me if I messed up somewhere and where I can find the answer; I'm currently using an ugly workaround with subprocess!
If you are asking how to get the master IP of the EMR cluster, the commands below will work:
import boto3

list_instance_resp = boto3.client('emr', region_name='us-east-1').list_instances(ClusterId='j-XXXXXXX')
print(list_instance_resp['Instances'][-1]['PrivateIpAddress'])
Check your version of boto using
pip show boto
My guess is you're using version 2.24 or earlier, as that version did not parse instance information; see https://github.com/boto/boto/blob/2.24.0/tests/unit/emr/test_connection.py#L117
compared to
https://github.com/boto/boto/blob/2.25.0/tests/unit/emr/test_connection.py#L313
If you upgrade your version of boto to 2.25 or newer, you'll be able to do the following:
from boto.emr.connection import EmrConnection

conn = EmrConnection(<your aws key>, <your aws secret key>)
jobid = 'j-XXXXXXXXXXXXXX'  # your job id
response = conn.list_instances(jobid)
for instance in response.instances:
    print(instance.privateipaddress)
You just need to query the master instances from the master instance group using the EMR cluster ID. If you have more than one master, you can parse the boto3 output and take the IP of the first listed master.
Your boto3 execution environment should have permission to describe an EMR cluster and its instance groups. Here is the relevant snippet:
import boto3

def get_master_private_ip(cluster_id, region_name='us-east-1'):
    boto_client_emr = boto3.client('emr', region_name=region_name)
    emr_list_instance_rep = boto_client_emr.list_instances(
        ClusterId=cluster_id,
        InstanceGroupTypes=[
            'MASTER',
        ],
        InstanceStates=[
            'RUNNING',
        ]
    )
    # Return the private IP of the first running master instance
    return emr_list_instance_rep["Instances"][0]["PrivateIpAddress"]
You can find the full boto3 script and its explanation here: https://scriptcrunch.com/script-retrieve-aws-emr-master-ip/