I'm working on a project that is almost finished except for one thing, and I'm in desperate need of help.
The backend of my project, written in Django/Python, is deployed on AWS Lambda using a library called Zappa, because my knowledge of AWS is really limited and this seemed (for the first version) the best and fastest way to get my project up and running. It uses an AWS RDS PostgreSQL instance as its database.
The frontend of my project is written in JavaScript/ReactJS and has been put in an S3 bucket and made publicly available through CloudFront by following this tutorial.
Now, all of the above already works perfectly; it's just that, because we have a lot of data that needs to be sent from our backend/database to our frontend through APIs, I want to make use of Elasticsearch.
So, I started using the AWS OpenSearch Service (with Elasticsearch version 7.10), which generated an endpoint for me to connect to.
This all works great on my localhost: I can connect using the code below and create my indexes using the Elasticsearch DSL library.
Also, because the endpoint is publicly available, I can just make a GET request to the endpoint AWS OpenSearch gives me, and I can use the Kibana URL.
import boto3
import os

from requests_aws4auth import AWS4Auth
from elasticsearch import RequestsHttpConnection

service = "es"
region = "eu-central-1"

# Sign requests with the credentials of the current AWS session
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    region,
    service,
    session_token=credentials.token,
)

# django-elasticsearch-dsl connection settings
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'my-opensearch-domain.com/',
        'http_auth': awsauth,
        'use_ssl': True,
        'verify_certs': True,
        'connection_class': RequestsHttpConnection,
    }
}
The trouble only comes when deploying this to my Lambda project, where it just gives me a timeout when trying to save (index) an object manually.
So, with some reading and research, I found out that my Lambda function runs in a VPC.
I tried putting my domain in the same VPC and security group.
Now, when the project is deployed, I can connect to the endpoint and manually index an object when I save it, but I can no longer connect from my localhost, use the Kibana URL, or make use of the API endpoints.
Any opinions on what to do? Am I seeing this wrong?
Thanks in advance
EDIT: I found this guide on how to connect to ES dashboards outside a VPC.
I intend to use Google Cloud Storage through my own domain name, supereye.co.uk.
However, when I try to associate a CNAME record on my DNS for supereye.co.uk with the Google Cloud Storage bucket production-supereye-co-uk, I get the following message when I try to access
production-supereye-co-uk.supereye.co.uk/static/default-coverpng:
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist.</Message>
</Error>
What shall I do?
Important note: this is not a static site. It is a Django application that runs on Google Cloud Engine, and Django has its own URL routing mechanism, which means Django handles everything after supereye.co.uk/. I wonder how this should work.
So you cannot simply add a CNAME record to redirect some traffic to a given URL.
You are going to have to do one of the following to get your desired result:
Serve traffic from a new subdomain, data.supereye.co.uk, which will host your content.
Proxy the data through your Django app (see the sketch below); this is not ideal, but it would allow you to easily protect your data with authentication or authorization.
Proxy the content through nginx: nginx forwards the request through to your Cloud Storage bucket. Lightweight and fairly simple to implement.
Use a GCP load balancer to split the traffic: you can set up an LB to split requests between the backend group and the bucket using host/path rules.
I would go for either the LB or the nginx proxy, as these will be the easiest to implement (depending on your setup). If you want any form of access control, go for proxying the request through your Django app.
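For the Django proxy option, a minimal sketch could look something like the following; the bucket name, view name, and URL pattern are assumptions for illustration, not a drop-in implementation:
# views.py -- hypothetical proxy view; bucket name is an assumption
from django.http import Http404, HttpResponse
from google.cloud import storage

_client = storage.Client()  # uses the credentials available on the instance
_bucket = _client.bucket('production-supereye-co-uk')

def serve_bucket_file(request, path):
    # Fetch the object from the bucket and return it to the client.
    blob = _bucket.blob(f'static/{path}')
    if not blob.exists():
        raise Http404(path)
    data = blob.download_as_bytes()
    return HttpResponse(data, content_type=blob.content_type or 'application/octet-stream')

# urls.py -- assumed routing
# path('static/<path:path>', serve_bucket_file)
This keeps everything behind Django's own URL routing, at the cost of pushing every byte through your app servers.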
I'm trying to allow an App Service (Python) to get secrets from Azure Key Vault without using hardcoded client IDs/secrets, therefore I'm trying to use ManagedIdentity.
I have enabled system- and user-assigned identities in my App Service
I have created an access policy in the vault where the App Service is granted access to the secrets
code:
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credentials_object = ManagedIdentityCredential()
client = SecretClient(vault_url=VAULT_URL, credential=credentials_object)
value = client.get_secret('MYKEY').value
error (when app is deployed and when running locally):
azure.identity._exceptions.CredentialUnavailableError: ManagedIdentityCredential authentication unavailable, no managed identity endpoint found.
What am I missing?
Thank you!
It's important to understand that the Managed Identity feature in Azure is ONLY relevant when, in this case, the App Service is deployed. This means you would probably want to use DefaultAzureCredential() from the azure-identity library, which works both when running locally and for the deployed web app.
This class runs down a hierarchy of possible authentication methods, and when running locally I prefer to use a service principal, which can be created by running the following in the Azure CLI: az ad sp create-for-rbac --name localtest-sp-rbac --skip-assignment. You then add the service principal localtest-sp-rbac in IAM for the required Azure services.
I recommend reading this article for more information and how to configure your local environment: https://learn.microsoft.com/en-us/azure/developer/python/configure-local-development-environment
You can see the list of credential types that DefaultAzureCredential() goes through in the Azure docs.
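To make that concrete, a rough sketch of the DefaultAzureCredential approach (VAULT_URL and the secret name are placeholders):
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Locally this can pick up the service principal from the AZURE_CLIENT_ID,
# AZURE_TENANT_ID and AZURE_CLIENT_SECRET environment variables; on the
# deployed App Service it falls back to the managed identity.
credential = DefaultAzureCredential()
client = SecretClient(vault_url=VAULT_URL, credential=credential)
value = client.get_secret('MYKEY').value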
In my case, the issue was having multiple managed identities attached to my VMs. I was trying to access an Azure Storage Account from AKS using ManagedIdentityCredential. When I specified the client_id of the MI as:
credentials_object = ManagedIdentityCredential(client_id='XXXXXXXXXXXX')
it started to work! It's also mentioned here that we need to specify the client_id of the MI if the VM or VMSS has multiple identities attached to it.
Right now, to create a Google Cloud Storage client I'm using:
from google.cloud import storage

client = storage.Client.from_service_account_json('creds.json')
But I need to change the client dynamically and would prefer not to deal with storing auth files on the local filesystem.
So, is there another way to connect by passing credentials as a variable?
Something like for AWS and boto3:
import boto3

iam_client = boto3.client(
    'iam',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY
)
I guess I'm missing something in the docs and would be happy if someone could point me to where I can find this.
If you want to use built-in methods, an option could be to construct the Cloud Storage Client yourself with explicit credentials. These two links can be helpful in order to do that.
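For example, a sketch of building the client from credentials held in memory (say, a dict loaded from a secret manager) rather than a file on disk; the helper name is just for illustration:
from google.cloud import storage
from google.oauth2 import service_account

def make_client(service_account_info: dict) -> storage.Client:
    # Build credentials from an in-memory dict instead of a JSON file path.
    credentials = service_account.Credentials.from_service_account_info(
        service_account_info
    )
    return storage.Client(
        project=service_account_info['project_id'],
        credentials=credentials,
    )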
Another possible option, in order to avoid storing auth files locally, is to use an environment variable pointing to credentials kept outside of your application code, for example in Cloud Key Management Service. To have more context about this, you can take a look at this article.
I'm a complete noob with Python and boto and am trying to establish a basic connection to EC2 services.
I'm running the following code:
import boto

ec2Conn = boto.connect_ec2('username', 'password')
group_name = 'python_central'
description = 'Python Central: Test Security Group.'
group = ec2Conn.create_security_group(group_name, description)
group.authorize('tcp', 8888, 8888, '0.0.0.0/0')
and getting the following error:
AWS was not able to validate the provided access credentials
I've read some posts saying that this might be due to a time difference between my machine and the EC2 server, but according to the logs they are the same:
host:ec2.us-east-1.amazonaws.com x-amz-date:20161213T192005Z
host;x-amz-date
515db222f793e7f96aa93818abf3891c7fd858f6b1b9596f20551dcddd5ca1be
2016-12-13 19:20:05,132 boto [DEBUG]:StringToSign:
Any idea how to get this connection running?
Thanks!
Calls made to the AWS API require authentication via an Access Key and Secret Key. These can be obtained from the Identity and Access Management (IAM) console, under the Security Credentials tab for a user.
See: Getting Your Access Key ID and Secret Access Key
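Once you have a key pair, a rough sketch of passing it to boto explicitly (the key values below are placeholders; in practice keep them in ~/.aws/credentials or environment variables rather than in code):
import boto.ec2

# Connect with explicit IAM credentials instead of a username/password.
ec2_conn = boto.ec2.connect_to_region(
    'us-east-1',
    aws_access_key_id='AKIAXXXXXXXXXXXXXXXX',
    aws_secret_access_key='your-secret-access-key',
)
group = ec2_conn.create_security_group('python_central', 'Python Central: Test Security Group.')
group.authorize('tcp', 8888, 8888, '0.0.0.0/0')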
If you are unfamiliar with Python, you might find it easier to call AWS services by using the AWS Command-Line Interface (CLI). For example, this single-line command can launch an Amazon EC2 instance:
aws ec2 run-instances --image-id ami-c2d687ad --key-name joe --security-group-id sg-23cb34f6 --instance-type t1.micro
See: AWS CLI run-instances documentation
I am using Django Channels to try to get real-time features such as chat/messaging, notifications, etc. Right now, I have gotten everything to work fine on my laptop using the settings described in the docs here: http://channels.readthedocs.io/en/latest/. I use a local Redis server for testing purposes.
However, when I deploy to my Amazon EC2 Elastic Beanstalk server (using an AWS ElastiCache Redis), the WebSocket functionality fails. From what I've read, I think it is because Amazon's HTTPS does not support WebSockets, so I need to switch to Secure TCP.
I tried doing that with:
https://blog.jverkamp.com/2015/07/20/configuring-websockets-behind-an-aws-elb/
and
https://medium.com/#Philmod/load-balancing-websockets-on-ec2-1da94584a5e9#.ak2jh5h0q
but to no avail.
Has anyone had any success implementing WebSockets with CentOS/Apache and Django on AWS EB? The Django Channels package is fairly new, so I was wondering if anyone has experienced and/or overcome this hurdle.
Thanks in advance
AWS has launched the new Application Load Balancer, which supports WebSockets. Change your ELB to an Application Load Balancer and that will fix your issue.
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
As described here it's possible to run Django Channels on Elastic Beanstalk using an Application Load Balancer.
In a simplified form, it's basically:
Create an ALB
Add 2 target groups: one that points to port 80, and one that points to the Daphne port, i.e. 8080.
Create 2 path rules. Let the default rule point to target group 1 (port 80), and set the second to use a relative path, e.g. /ws/, and point it to target group 2.
Add Daphne and its workers to supervisord (or another init system).
Done! Access Daphne/WebSockets through the relative URL ws://example.com/ws/ (a routing sketch follows below).
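For completeness, a rough sketch of what the Channels routing behind that /ws/ rule could look like, assuming a recent Channels (3.x-style) setup; ChatConsumer and the module path are placeholders, and older 1.x projects use route() lists instead:
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from myapp.consumers import ChatConsumer  # placeholder consumer

application = ProtocolTypeRouter({
    'websocket': URLRouter([
        # Matches the /ws/ path rule forwarded by the ALB to Daphne.
        path('ws/chat/', ChatConsumer.as_asgi()),
    ]),
})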
I suppose the ALB is the only way. The reason is that with the SSL protocol listener on the Classic Load Balancer, session stickiness and the X-Forwarded headers won't be forwarded, which results in a proxy server redirect loop. The doc is here:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
I'll update the answer if I find out a way with the existing CLB.