How to list all deployed AWS resources with AWS CDK - Python

I have an AWS CDK deployer application that deploys multiple resources in AWS. The application uses a config file as input and, based on that, deploys multiple AWS ECS tasks in a Fargate cluster and places them behind an Application Load Balancer.
Is there any way to get all the components/AWS services that are being deployed when I run cdk deploy --all? I'm trying to understand whether AWS CDK provides a way to do this without using a separate boto3 function.

After synth, from the cdk.out CloudAssembly:
import aws_cdk.cx_api as cx_api

# Load the cloud assembly produced by cdk synth and pull the Resources
# section out of each synthesized stack template
cloud_assembly = cx_api.CloudAssembly("cdk.out")
resources = [stack.template["Resources"] for stack in cloud_assembly.stacks]
After deploy, with the DescribeStackResources or ListStackResources APIs:
aws cloudformation describe-stack-resources --stack-name MyStack
Both return lists of CloudFormation Resource information, but with different content. The CloudAssembly resources are from the local template generated by cdk synth. The Resources returned from boto3 or the CLI are the deployed cloud resources with resolved names.
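For reference, a minimal boto3 sketch of the post-deploy route, assuming default AWS credentials and reusing the example stack name MyStack from the CLI command above:
import boto3

cfn = boto3.client("cloudformation")

# Page through the deployed resources of a single stack
paginator = cfn.get_paginator("list_stack_resources")
for page in paginator.paginate(StackName="MyStack"):
    for summary in page["StackResourceSummaries"]:
        # Logical ID from the template, resource type, and the resolved physical name
        print(summary["LogicalResourceId"], summary["ResourceType"], summary["PhysicalResourceId"])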

Related

Zip deploy an Azure Function from a storage account

I'm trying to zip deploy an azure function from a blob storage.
I have set SCM_DO_BUILD_DURING_DEPLOYMENT to true.
I have also set WEBSITE_RUN_FROM_PACKAGE to the remote url.
I am able to deploy easily if the function is in a remote url. However, I can't seem to do it if I have it as a blob on azure.
The preferable runtime for this is Python.
To zip deploy from a storage account, you need to navigate to your .zip blob in the storage account and get the generated SAS token for that blob.
Then set WEBSITE_RUN_FROM_PACKAGE in your Function App's application settings to that URL.
NOTE: This option is the only one supported for running from a package on Linux hosted in a Consumption plan.
For more information, you can refer to Run your functions from a package file in Azure.
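As a rough sketch of generating that SAS URL with the azure-storage-blob Python SDK (the account, container, blob, and key below are placeholders, not values from the question):
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholder names -- replace with your own storage account, container, blob, and key
account_name = "mystorageaccount"
container_name = "function-packages"
blob_name = "functionapp.zip"
account_key = "<storage-account-key>"

sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(days=365),
)

# Set this full URL as the value of WEBSITE_RUN_FROM_PACKAGE in the Function App settings
package_url = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"
print(package_url)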

Convert Flask-based API to AWS Lambda

I currently have an AWS EC2 instance exposing a Flask API with blueprints running different things on different ports. I am wondering which is the best solution architecture-wise in order to convert the endpoint to Lambda. For instance, should I remove the blueprints? If so, how can I call the different functionalities on the different ports?
Here is a Python package that you can use to deploy your Flask application to AWS Lambda with minimal configuration:
https://github.com/Miserlou/Zappa
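As a rough sketch of the usual Zappa workflow (the stage name production is just an example; zappa init generates a zappa_settings.json for you):
pip install zappa
zappa init                 # interactively generates zappa_settings.json
zappa deploy production    # first deployment of the stage
zappa update production    # push subsequent changes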

Azure App Service for Kafka producer Python API deployment

Scenario:
I want to deploy a Kafka Python producer API on Azure through a pipeline. I have an artifact, which is producer Python code that needs to be deployed on Azure App Service.
Question:
Is deploying this code on Azure App Service really recommended? (knowing that this is not a web app but just a Kafka producer for an internal application)
What service can alternatively be used to run such Python code on Azure?
It seems that Kafka itself is the server, not the code project. So I recommend using a custom Docker image and deploying it on Azure Web App for Containers.
You can create the custom Docker image with Kafka installed inside it and then deploy the Web App for Containers from that custom image. Once it finishes, the web app is accessible and you do not need to deploy code to it separately.
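For reference, a minimal sketch of the kind of producer code such an image would run, using the kafka-python library; the broker address, topic, and environment variable names are placeholders, not values from the question:
import json
import os

from kafka import KafkaProducer

# Placeholder configuration -- in App Service these would come from application settings
bootstrap_servers = os.environ.get("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")
topic = os.environ.get("KAFKA_TOPIC", "example-topic")

producer = KafkaProducer(
    bootstrap_servers=bootstrap_servers,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send(topic, {"event": "hello from the containerized producer"})
producer.flush()  # ensure the message is delivered before the process exits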

Docker deployment of flask app with Celery and Redis in AWS, DB in AWS RDS

I need to deploy a Flask app with Celery and Redis to Amazon AWS. I'm used to working with AWS Lightsail and this will be my option.
On the other side, I must (as per company policy) deploy my Postgres DB to AWS RDS.
I'm planning to use Docker with Nginx and Gunicorn in AWS Lightsail to deploy the app, which as I said uses Celery and Redis. So all of this will be in Docker in Lightsail.
On the other side, the DB will be in RDS without using Docker.
What I want with this approach is quick deployment of changes and upgrades to the app.
What I want to know is this:
1. Is this a good approach for production that will help me with quick deployments?
2. Does anybody know of some examples of docker-compose files that could help me with this?
3. Could someone please let me know some limitations of this approach, and
4. Is Lightsail a good option in AWS for a Docker deployment of Flask apps like the one described here?
Thanks
When I asked this question, I was looking for some examples of easy deployment of medium-complexity apps to AWS. The app itself used Celery, Redis, and an Amazon AWS RDS Postgres DB. The deployment to Amazon Lightsail was pretty simple after I watched a YouTube video from an Amazon engineer. I basically created the container on my local laptop, used an init script while deploying an OS-only Ubuntu instance, and that script loaded a daemon so Ubuntu "system" could daemonize my Docker deployment when restarting. I created 3 YouTube videos where I explained everything.
If someone needs help with this, see the videos:
Dockerize Flask API, NGINX, GUNICORN, CELERY, REDIS to Amazon AWS - Part 1:3
Dockerize Flask API, NGINX, GUNICORN, CELERY, REDIS to Amazon AWS - Part 2:3
Dockerize Flask API, NGINX, GUNICORN, CELERY, REDIS to Amazon AWS - Part 3:3
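On question 2, here is a minimal docker-compose sketch of such a stack; the service layout, the app:app module path, the Celery app path, and the nginx.conf file are assumptions, not taken from the videos:
version: "3.8"

services:
  web:
    build: .                                   # Flask app image (assumed Dockerfile in the repo root)
    command: gunicorn -b 0.0.0.0:8000 app:app  # app:app is a placeholder module:object path
    env_file: .env                             # e.g. DATABASE_URL pointing at the RDS instance
    depends_on:
      - redis

  worker:
    build: .
    command: celery -A app.celery worker --loglevel=info  # placeholder Celery app path
    env_file: .env
    depends_on:
      - redis

  redis:
    image: redis:6-alpine

  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro  # assumed reverse-proxy config pointing at web:8000
    depends_on:
      - web
The Postgres database is deliberately not a service here, since it lives in RDS outside Docker.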

Migrating from Amazon S3 to Azure Storage (Django web app)

I maintain this Django web app where users congregate and chat with one another. They can post pictures too if they want. I process these photos (i.e. optimize their size) and store them on an Amazon S3 bucket (like a 'container' in Azure Storage). To do that, I set up the bucket on Amazon, and included the following configuration code in my settings.py:
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_S3_FORCE_HTTP_URL = True
AWS_QUERYSTRING_AUTH = False
AWS_SECRET_ACCESS_KEY = os.environ.get('awssecretkey')
AWS_ACCESS_KEY_ID = os.environ.get('awsaccesskeyid')
AWS_S3_CALLING_FORMAT='boto.s3.connection.OrdinaryCallingFormat'
AWS_STORAGE_BUCKET_NAME = 'dakadak.in'
Additionally Boto 2.38.0 and django-storages 1.1.8 are installed in my virtual environment. Boto is a Python package that provides interfaces to Amazon Web Services, whereas django-storages is a collection of custom storage backends for Django.
I now want to stop using Amazon S3 and instead migrate to Azure Storage. I.e. I need to migrate all existing files from S3 to Azure Storage, and next I need to configure my Django app to save all new static assets on Azure Storage.
I can't find precise documentation on what I'm trying to achieve, though I do know django-storages supports Azure. Has anyone done this kind of migration before, and can you point out where I need to begin and what steps to follow to get everything up and running?
Note: Ask me for more information if you need it.
Per my experience, there are two steps for migrating a Django web app from Amazon S3 to Azure Storage.
The first step is moving all files from S3 to Azure Blob Storage. There are two ways you can try.
Using the tools for S3 and Azure Storage to move files from S3 to a local directory, and then to Azure Blob Storage.
The tools below for S3 can help with moving files to a local directory.
AWS Command Line(https://aws.amazon.com/cli/): aws s3 cp s3://BUCKET/FOLDER localfolder --recursive
S3cmd Tools(http://s3tools.org/): s3cmd -r sync s3://BUCKET/FOLDER localfolder
S3 Browser (http://s3browser.com/): this is a GUI client.
For moving local files to Azure Blob Storage, you can use the AzCopy command-line utility for high-performance uploading, downloading, and copying of data to and from Microsoft Azure Blob, File, and Table storage; please refer to the document https://azure.microsoft.com/en-us/documentation/articles/storage-use-azcopy/.
Example: AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer
Migrating programmatically with the Amazon S3 and Azure Blob Storage APIs in a language you are familiar with, like Python or Java. Please refer to their API usage docs https://azure.microsoft.com/en-us/documentation/articles/storage-python-how-to-use-blob-storage/ and http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html.
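For example, a rough Python sketch of that programmatic route, using the newer boto3 and azure-storage-blob SDKs rather than the Boto 2.x setup from the question; the connection string is a placeholder, and the bucket and container names are reused from elsewhere in this thread:
import boto3
from azure.storage.blob import BlobServiceClient

s3 = boto3.resource("s3")
bucket = s3.Bucket("dakadak.in")  # source S3 bucket from the question

# Placeholder connection string; the target container is assumed to already exist
blob_service = BlobServiceClient.from_connection_string("<azure-storage-connection-string>")
container = blob_service.get_container_client("mycontainer")

for obj in bucket.objects.all():
    data = obj.get()["Body"].read()                                  # download the object from S3
    container.upload_blob(name=obj.key, data=data, overwrite=True)   # upload it under the same key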
The second step is following the document http://django-storages.readthedocs.org/en/latest/backends/azure.html to reconfigure the Django settings.py file. django-storages will allow any uploads you do to be automatically stored in your storage container.
DEFAULT_FILE_STORAGE='storages.backends.azure_storage.AzureStorage'
AZURE_ACCOUNT_NAME='myaccount'
AZURE_ACCOUNT_KEY='account_key'
AZURE_CONTAINER='mycontainer'
You can find these settings on Azure Portal as #neolursa said.
Edit: (screenshots showing where to find these settings on the old and new Azure portals were included here.)
The link you've shared has the configuration for Django to Azure Blobs. When I did this, all I had to do was go to the Azure portal and, under the storage account, get the access keys. Then create a container and give the container name to the Django configuration. This should be enough. However, I did this a while ago.
For the second part, migrating the current files from the S3 bucket to Blob storage, there are a couple of tools you can use:
If you are using Visual Studio, you can find the blobs under the Server Explorer after entering your Azure account credentials.
Alternatively, you can use third-party tools like Azure Storage Explorer or CloudBerry Explorer.
Hope this helps!
