Troubleshooting WebSockets with EC2 on AWS using Django - python

I am using Django-Channels to get real-time features such as chat/messaging, notifications, etc. Right now, everything works fine on my laptop using the settings described in the docs here: http://channels.readthedocs.io/en/latest/. I use a local Redis server for testing purposes.
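For context, a minimal sketch of the Channels 1.x channel-layer settings I'm using (the asgi_redis backend is the one from the docs; the project name myproject is a placeholder, and in production the Redis host would be the ElastiCache endpoint):

CHANNEL_LAYERS = {
    "default": {
        # asgi_redis backend from the Channels docs; replace the host with
        # the ElastiCache primary endpoint when deploying (placeholder here)
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
        "ROUTING": "myproject.routing.channel_routing",
    },
}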
However, when I deploy to my Amazon EC2 Elastic Beanstalk server (using an AWS ElastiCache Redis), the WebSocket functionality fails. From what I have read, I think this is because Amazon's HTTPS listener does not support WebSockets, so I need to switch to Secure TCP.
I tried doing that with:
https://blog.jverkamp.com/2015/07/20/configuring-websockets-behind-an-aws-elb/
and
https://medium.com/@Philmod/load-balancing-websockets-on-ec2-1da94584a5e9#.ak2jh5h0q
but to no avail.
Has anyone had any success implementing WebSockets with CentOS/Apache and Django on AWS EB? The Django-Channels package is fairly new, so I was wondering if anyone has experienced and/or overcome this hurdle.
Thanks in advance

AWS has launched the new Application Load Balancer, which supports WebSockets. Change your ELB to an Application Load Balancer and that will fix your issue.
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/

As described here it's possible to run Django Channels on Elastic Beanstalk using an Application Load Balancer.
In a simplified form, it's basically:
Create an ALB
Add 2 target groups, one that points to port 80, and one that points to the Daphne port, e.g. 8080.
Create 2 path rules. Let the default rule point to target group 1 (port 80), and set the second to use a relative path, e.g. /ws/, and point it to target group 2.
Add Daphne and workers to supervisord (or another init system); see the sketch after this list.
DONE! Access Daphne/websockets through the relative URL ws://example.com/ws/.
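For step 4, a minimal supervisord sketch (the program names, the /opt/app path, the myproject module, and port 8080 are assumptions; Channels 1.x serves WebSockets with daphne plus separate runworker processes):

[program:daphne]
command=daphne -b 0.0.0.0 -p 8080 myproject.asgi:channel_layer
directory=/opt/app
autostart=true
autorestart=true

[program:worker]
command=python manage.py runworker
directory=/opt/app
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true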

I suppose the ALB is the only way. The reason is that with the SSL protocol listener on the Classic Load Balancer, session stickiness and the X-Forwarded headers are not passed through, which results in a redirect loop at the proxy server. The doc is here:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
I'll update the answer if I find a way with the existing CLB.

Related

How can I connect my EC2 instance to a registered domain?

Hi, I'm trying to make a test server to get Facebook authentication working.
I am using Python Flask; the current app itself is a copy of this OAuth sample, nothing else.
So, I have set up everything I could think of:
an ACM certificate to get HTTPS working,
a load balancer (Classic), set up with the ACM cert made above and the instance I want to connect to,
a domain registered in Route 53, with an alias target set to the ELB,
etc.
After setting this up, I went into my EC2 instance using PuTTY, set up a virtualenv, and ran the app using the venv. And the app itself is working fine, except that the ELB health check can't pick it up, nor can I reach it when I try to access it by typing in the domain address.
It only works when typing in the EC2 instance's public IP. And now I'm stuck here, not knowing how to 'integrate' the instance with my registered domain...
The instance status in the load balancer's instance tab shows OutOfService.
EDIT: the instance tab works now; it was due to the port number. But the domain still doesn't respond...
Is there anything else I could check to get this working? Any help is greatly appreciated... I've been stuck on this for over a month now :(
EDIT: currently the ELB gets health checks from the EC2 instances properly, but I get infinite loading when I try to access the app by typing in the domain name.
It's hard to troubleshoot on here, but I would bet that you need to set up the security group so that the load balancer can hit the correct port on your EC2 instance (that is the most common cause of this).
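If it helps, here is a hedged boto3 sketch of that rule; the two group IDs and port 5000 are placeholders for your instance's security group, the ELB's security group, and the Flask port:

import boto3

ec2 = boto3.client("ec2")

# Allow the load balancer's security group to reach the app port on the
# instance. sg-aaaa1111, sg-bbbb2222 and port 5000 are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-aaaa1111",  # the EC2 instance's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5000,
        "ToPort": 5000,
        "UserIdGroupPairs": [{"GroupId": "sg-bbbb2222"}],  # the ELB's group
    }],
)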

AWS Lambda Function cannot access other services

I have a problem with an AWS Lambda function which depends on DynamoDB and SQS to work properly. When I try to run the Lambda stack, the functions time out when trying to connect to the SQS service. The Lambda functions live inside a VPC with the following setup:
A VPC with four subnets
Two subnets are public, routing their 0.0.0.0/16 traffic to an internet gateway
A MySQL server sits in a public subnet
The other two contain the lambdas and route their 0.0.0.0/16 traffic to a NAT which lives in one of the public subnets.
All route tables have a 10.0.0.0/16 to local rule (is this the problem, because Lambdas use private IPs inside a VPC?)
The main route table is the one with the NAT, but I explicitly associated the public subnets with the internet gateway routing table
The lambdas and the MySQL server share a security group which allows inbound internal access (10.x/16) as well as unrestricted outbound traffic (0.0.0.0/16).
Traffic between the lambdas and the MySQL instance is no problem (except if I put the lambdas outside the VPC; then they can't access the server even if I open up all ports). Assume the code for the lambdas is also correct, as it worked before I tried to move it into a private subnet. Also, the Lambda execution roles have been set accordingly (or do they need adjustments after moving them to a private subnet?).
Adding a DynamoDB endpoint solved the problems with the database, but there are no VPC endpoints available for some of the other services. Following some answers I found here, here, here and in the announcements/tutorials here and here, I am pretty sure I followed all the recommended steps.
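For reference, a hedged boto3 sketch of creating such a gateway endpoint (the VPC ID, route table IDs and region are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint so that Lambdas in private subnets can reach DynamoDB
# without internet access; all IDs and the region are placeholders.
ec2.create_vpc_endpoint(
    VpcId="vpc-12345678",
    ServiceName="com.amazonaws.eu-central-1.dynamodb",
    RouteTableIds=["rtb-11111111", "rtb-22222222"],
)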
I would be very thankful and glad for any hints where to check next, as I have currently no idea what could be the problem here.
EDIT: The functions don't seem to have any internet access at all, since a toy example I checked also timed out:
import urllib.request

def lambda_handler(event, context):
    test = urllib.request.urlopen(url="http://www.google.de")
    return test.status
Of course, the problem was sitting in front of the monitor again. Instead of routing 0.0.0.0/0 (any traffic) to the internet gateway, I had specified 0.0.0.0/16 (traffic from machines with a 0.0.x.x IP) to the gateway. Since no machines with such an IP exist, all traffic was blocked from leaving the VPC.
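In boto3 terms, the fix amounts to something like this (a sketch; the route table and gateway IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# 0.0.0.0/0 matches any destination address; 0.0.0.0/16 only matched
# 0.0.x.x, so nothing was ever routed to the internet gateway.
ec2.create_route(
    RouteTableId="rtb-12345678",       # placeholder public route table
    DestinationCidrBlock="0.0.0.0/0",  # any traffic, not 0.0.0.0/16
    GatewayId="igw-12345678",          # placeholder internet gateway
)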
@John Rotenstein: Thanks, though, for the hint about lambdash. It seems like a very helpful tool.
Your configuration sounds correct.
You should test the configuration to see whether you can access any public Internet sites, then test connecting to AWS.
You could either write a Lambda function that attempts such connections, or you could use lambdash, which effectively gives you a remote shell running on Lambda. That way you can easily test connectivity from the command line, e.g. with curl.

Discovering peer instances in Azure Virtual Machine Scale Set

Problem: Given N instances launched as part of a VMSS, I would like my application code on each Azure instance to discover the IP addresses of the other peer instances. How do I do this?
The overall intent is to cluster the instances so as to provide active-passive HA or to keep the configuration in sync.
It seems like there is some support for REST-API-based querying: https://learn.microsoft.com/en-us/rest/api/virtualmachinescalesets/
I would like to know any other way to do it, i.e. either a Python SDK or an instance metadata URL, etc.
The REST API you mentioned has a Python SDK, the "azure-mgmt-compute" client:
https://learn.microsoft.com/python/api/azure.mgmt.compute.compute.computemanagementclient
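A hedged sketch of listing the peers' private IPs with those SDKs (the resource group, scale set name and credential values are placeholders, and it assumes the azure-mgmt-network package of the same era alongside azure-mgmt-compute):

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

credentials = ServicePrincipalCredentials(
    client_id="...", secret="...", tenant="...")
network_client = NetworkManagementClient(credentials, "subscription-id")

# Enumerate every NIC attached to the scale set and print its private IP
nics = network_client.network_interfaces.list_virtual_machine_scale_set_network_interfaces(
    "my-resource-group", "my-vmss")
for nic in nics:
    for ip_config in nic.ip_configurations:
        print(ip_config.private_ip_address)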
One way to do this would be to use instance metadata. Right now instance metadata only shows information about the VM it's running on, e.g.
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-03-01"
{"compute":
{"location":"westcentralus","name":"imdsvmss_0","offer":"UbuntuServer","osType":"Linux","platformFaultDomain":"0","platformUpdateDomain":"0",
"publisher":"Canonical","sku":"16.04-LTS","version":"16.04.201703300","vmId":"e850e4fa-0fcf-423b-9aed-6095228c0bfc","vmSize":"Standard_D1_V2"},
"network":{"interface":[{"ipv4":{"ipaddress":[{"ipaddress":"10.0.0.4","publicip":"52.161.25.104"}],"subnet":[{"address":"10.0.0.0","dnsservers":[],"prefix":"24"}]},
"ipv6":{"ipaddress":[]},"mac":"000D3AF8BECE"}]}}
You could do something like have each VM send its info to a listener on VM #0, or to an external service, or you could combine this with Azure Files and have each VM write to a common share. There's an Azure template proof of concept here which outputs information from each VM to an Azure Files share: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-azure-files-linux - every VM has a mountpoint which contains info written by every VM.
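A minimal sketch of that pattern, assuming the requests package is available and an Azure Files share is already mounted at /mnt/share on every VM:

import json
import requests

META_URL = "http://169.254.169.254/metadata/instance?api-version=2017-03-01"

# Each VM queries its own metadata and drops it on the common share;
# peers then discover each other by reading the other files there.
info = requests.get(META_URL, headers={"Metadata": "true"}).json()
with open("/mnt/share/%s.json" % info["compute"]["name"], "w") as f:
    json.dump(info, f)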

WLST edit mode issue for managed instance

When I executed the edit() command while connected to a managed instance, I ended up with the following error. What do I have to do to get out of this problem?
wls:/offline> connect('Admin60000','sun1rise','t3://my-comm-app-serv:60001')
Connecting to t3://my-comm-app-serv:60001 with userid Admin60000 ...
Successfully connected to managed Server "MiCommApp" that belongs to domain "MiBeaDir".
Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.
wls:/MiBeaDir/serverConfig>cd('/Servers/MiCommApp/SSL/MiCommApp')
wls:/MiBeaDir/serverConfig/Servers/MiCommApp/SSL/MiCommApp> edit()
Edit MBeanServer is not enabled on a Managed Server.
60001 is the port of the managed instance, which is one of the managed instances that run under the admin server. The admin server runs on port 60000.
That is because for managed servers, WLST functionality is limited to browsing the configuration bean hierarchy. Read the excerpt below from the official WebLogic documentation.
To edit configuration beans, you must be connected to an
Administration Server, and you must navigate to the edit tree and
start an edit session, as described in edit and startEdit,
respectively.
If you connect to a Managed Server, WLST
functionality is limited to browsing the configuration bean hierarchy.
While you cannot use WLST to change the values of MBeans on Managed
Servers, it is possible to use the Management APIs to do so. BEA
Systems recommends that you change only the values of configuration
MBeans on the Administration Server. Changing the values of MBeans on
Managed Servers can lead to an inconsistent domain configuration.
So, basically you need to connect to your admin server (currently you are connecting to your managed server, as per the logs you have provided: Successfully connected to managed Server "MiCommApp" that belongs to domain "MiBeaDir".) and then edit the configuration using the edit() and startEdit() WLST commands.
BTW, I connect to my server using the following commands:
If HTTPS - connect(url='t3s://abc.xyz.com:37001',adminServerName='AdminServer')
If HTTP - connect(url='t3://abc.xyz.com:37001',adminServerName='AdminServer')
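Putting it together for your setup, a hedged WLST sketch (the URL and credentials come from your transcript; the set() call is only a placeholder for whatever attribute you actually want to change):

# Connect to the ADMIN server on port 60000, not the managed server
connect('Admin60000', 'sun1rise', 't3://my-comm-app-serv:60000')
edit()        # move to the edit tree (only works on the admin server)
startEdit()   # start an edit session
cd('/Servers/MiCommApp/SSL/MiCommApp')
set('Enabled', 'true')   # placeholder change
save()
activate()
disconnect()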

Launch Openstack Instances using python-boto

I am trying to launch instances on an OpenStack setup with multiple networks configured, using python-boto.
But I got the following error:
EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0"?>
<Response><Errors><Error><Code>NetworkAmbiguous</Code><Message>Multiple possible networks found, use a Network ID to be more specific.</Message></Error></Errors><RequestID>req-28b5a4e8-3838-4111-95db-337c5048716d</RequestID></Response>
My code is as follows:
from boto import ec2

ostack = ec2.connection.EC2Connection(
    ec2_access_key, ec2_secret_key, is_secure=False, port=8773, region='nova',
    path='/services/Cloud'
)
ostack.run_instances('ami-xxxxx', key_name='BotoTest')
The above works fine for a single network configured in OpenStack.
Note: run_instances doesn't have a keyword argument for a network ID.
Where did I make a mistake, how can I fix it, or is this a bug in python-boto?
Thanks in advance.
I believe that this isn't a bug in boto, which was built to communicate with the AWS API. While most of the EC2 AWS functionality works well with the EC2 OpenStack API, some features are not implemented and are answered with an HTTP error 500 or 400.
AWS uses the VPC (Virtual Private Cloud) as the network and an Availability Zone as the subnet. Both have a default setting, which is used if there is no further specification when creating a new instance. But in OpenStack I can't see a possibility to mark a network and a subnet as default.
In my attempts, neither private_ip_address nor subnet_id worked to specify a network/subnet in run_instances() if there is more than one in OpenStack.
Edit: if you only have one network/subnet, the following code works fine with boto at trystack.org:
import boto

conn = boto.connect_ec2_endpoint(
    "http://8.21.28.222:8773/services/Cloud",
    aws_access_key_id='...', aws_secret_access_key='...')
new_instance = conn.run_instances(
    "ami-00000020", key_name="trystack",
    security_groups=["default"], instance_type="m1.small")
Have you tried this?
from boto import ec2

ostack = ec2.connection.EC2Connection(
    ec2_access_key, ec2_secret_key, is_secure=False, port=8773, region='nova',
    path='/services/Cloud', debug=1
)
then
ostack.run_instances('ami-xxxxx', subnet_id='your network id', key_name='BotoTest')
Amazon uses this for VPC networks. Are you running a VPC?
