I have a currently working set of AWS lambda functions.
I want these functions to be hosted on a GitHub repo and to use GitHub actions to automatically deploy on AWS lambda when I push a new version.
Now I would like different behaviour for the master branch compared to the develop branch. In particular, pushing to develop should deploy the zipped Lambda functions to a 'non-prod' setup. I seem to understand this should be achieved through Lambda function versions, but I could not find a definitive answer in the docs or on the internet for such a setup.
Does anybody know how to achieve this?
I have a question about Python, Maven and AWS Lambda. Basically, I am trying to build dependency trees for my repos using the terminal command
mvn dependency:tree
This command is run via Python using the os library, i.e.
import os
os.system('mvn dependency:tree')
Now comes the issue - I need to run this on AWS Lambda.
Being aware that AWS Lambda is serverless and that each function's deployment package (code plus layers) is limited to 250 MB unzipped: 1) is it possible to run terminal commands via Lambda without spinning up any sort of server? and 2) Maven usually needs to be installed on a system, so is it possible, or even viable, to run Maven on AWS Lambda?
Any input will be appreciated.
Thanks
is it possible to run terminal commands via lambda without spinning up any sort of server?
Yes, you can run terminal commands in a Lambda function.
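For instance, here is a minimal sketch assuming a Python runtime; the handler shape and the default command are illustrative, and any binary you invoke (such as mvn) must be shipped in the deployment package, a layer, or a container image:

```python
import subprocess

def run_command(cmd):
    """Run a shell command and return (exit_code, stdout)."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.returncode, result.stdout

def handler(event, context):
    # e.g. event = {"cmd": "mvn dependency:tree"}; 'echo hello' is just a
    # harmless placeholder default for illustration
    code, output = run_command(event.get("cmd", "echo hello"))
    return {"exitCode": code, "output": output}
```

Keep in mind that /tmp is the only writable path inside Lambda, so Maven would need its local repository pointed there, e.g. mvn -Dmaven.repo.local=/tmp/.m2 dependency:tree.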
maven usually needs to be installed on a system, thus is it possible, or even viable, to run maven on AWS Lambda?
You can create a custom Lambda container image that includes dependencies.
Additional AWS Blog Post: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/
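As a hedged sketch of such an image (the base-image tag, Maven version, JDK package name, and handler name are all assumptions and may need adjusting for your base image):

```dockerfile
# Sketch only -- version numbers and package names are assumptions
FROM public.ecr.aws/lambda/python:3.9

# Maven needs a JDK; install one, then unpack a Maven binary distribution
RUN yum install -y java-11-amazon-corretto-headless tar gzip && \
    curl -sL https://archive.apache.org/dist/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz \
      | tar xz -C /opt && \
    ln -s /opt/apache-maven-3.8.8/bin/mvn /usr/local/bin/mvn

COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.handler"]
```

Container images can be up to 10 GB, so the 250 MB package limit is no longer the constraint here.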
I have some python libraries that I need to invoke in NodeJS on a Lambda function. I need to do this because some of the python functions are doing external API calls and may take a while to finish, and using NodeJS speeds this up quite a lot thanks to promises.
I have read it is possible to create custom runtimes as Layers, but I cannot find any samples for NodeJS 12 + Python 3.7, for example. How can I do that? Is there a list of already published and available runtimes somewhere?
I think the newly announced option to run Docker images in AWS Lambda might be the best solution here.
You can use either the Python or the NodeJS base image provided by Amazon and then install the rest of the required dependencies. Put it into AWS ECR and then run your Lambda using the Docker image.
Check the news article.
I have been searching for this for a few days and found some approaches like Serverless or LocalStack, but what I would really like is to be able to code everything using AWS API Gateway and Lambdas for a cloud-based version of my software (which is solved) and not manage my deployments.
Then...
A customer wants to host a copy of it inside their own private network, so I want to use the very same Lambda code (which makes no use of other AWS 'magic' services like DynamoDB, only "regular" dependencies), injecting it into a container running "an API Gateway"-like piece of software (perhaps Python/Flask parsing the exported API Gateway config?).
I am willing to build this layer unless a better idea shows up. I would put my Lambdas in a folder, let's say "aws_lambda", and my container would know how to transform the HTTP payload into an AWS event payload, import the module, and call 'lambda_handler'; hopefully that is it. With another container running MySQL and another running Nginx (emulating CloudFront for the static website), I would be done. The whole solution in a can.
Any suggestions? Am I crazy?
Does anyone know some existing software solution to solve this?
If you are willing to use AWS SAM, the AWS SAM CLI offers what you're looking for.
The AWS SAM CLI implements its own API Gateway equivalent and runs the AWS Lambda functions in Docker containers. While it's mainly intended for testing, I don't see any reason, why you shouldn't be able to use it for your use-case as well.
Besides the various serverless plugins and LocalStack, you can try the AWS SAM CLI to run a local API Gateway. The command is sam local start-api (https://docs.aws.amazon.com/lambda/latest/dg/test-sam-cli.html). It probably would not scale, I have never tried it myself, and it is intended for testing.
Curiously, what you are considering (transforming a Lambda into a normal Flask server) is the opposite of Zappa, a serverless package that converts a normal Flask server into a Lambda function and uploads it to AWS. If you succeed in your original idea of converting a Flask request into a Lambda event and care to package your code, it could be called unzappa. While Zappa is a mature and large package, it would probably be easier to 'invert' some lightweight thing like awsgi: https://github.com/slank/awsgi/blob/master/awsgi/init.py
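As a minimal sketch of that inversion, assuming the API Gateway proxy-event shape (v1); lambda_handler below is a stand-in for a real handler imported from your "aws_lambda" folder, and all names are illustrative:

```python
import json

def make_apigw_event(method, path, headers=None, query=None, body=None):
    """Build a minimal API Gateway proxy-style event from plain HTTP parts."""
    return {
        "httpMethod": method,
        "path": path,
        "headers": headers or {},
        "queryStringParameters": query or None,
        "body": body,
        "isBase64Encoded": False,
    }

def lambda_handler(event, context):
    # stand-in for the real handler your container shim would import
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"hello": name})}

def dispatch(method, path, **kwargs):
    """What the shim would do: translate request, invoke handler, unwrap."""
    event = make_apigw_event(method, path, **kwargs)
    response = lambda_handler(event, None)
    return response["statusCode"], json.loads(response["body"])
```

A Flask app would call dispatch() from a catch-all route and map the returned status code and body back onto the HTTP response.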
@Lovato, I have been using https://github.com/lambci/docker-lambda, which is a Docker image that mimics the Lambda environment. lambci seems to maintain a good set of Lambda images for NodeJS, Java, Python, .NET, and even Go. So you can technically reuse your entire Lambda code in a Docker container running a Lambda-"like" environment. I call it Lambda-like mostly because AWS doesn't fully publish every piece of information about how Lambda works, but this is the nearest approximation I've seen. I use it for local development and testing of Lambdas, and I have tested a trial "offline" Lambda. Let me know if this works for you.
I do prefer to use the docker files and create my docker images for my use.
I am trying to connect to an MSSQL database from AWS Lambda (using Python) and am really struggling to proceed.
I tried many options with pyodbc, pypyodbc, and pymssql; they work on my local development machine (Windows 7), but AWS Lambda is unable to find the required packages when deployed. I use Zappa for deployment of the Lambda package.
I searched through many forums but was unable to find anything that moved me ahead; any help on this would be highly appreciated.
Many thanks,
Akshay
I tried different trial-and-error steps and ended up with the following, which works fine in AWS Lambda; I am using the pymssql package only.
1) ran 'pip install pymssql' on an Amazon EC2 instance, since under the hood Amazon uses Linux AMIs to run its Lambda functions.
2) copied the generated '.so' files and packaged them inside the Lambda deployment package
Below is the folder structure of my lambda deployment package
Let me know if you further need help with this.
Try importing cython together with pymssql in your code.
So I'm currently working on building the deployer for our AWS Lambda functions.
Since Lambda versions of a function all share a configuration, this requires having multiple functions (foo_prod, foo_staging, foo_whatever) as the various versions of our code, instead of using aliases like I want to.
So my question is:
1) Whether or not there's a sane way to re-deploy code (i.e. staging to prod) without downloading it to my desktop first and then re-uploading it.
2) Whether or not I'm wrong about that shared-configuration bit, or whether it's possible to tell which alias the function is running under inside the actual Lambda, such that I can create per-environment values for my environment variables.
You can deploy Lambda functions in a lot of different ways that don't involve downloading and re-uploading code. If you use something like SAM (http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-use-app-spec.html) you can just point to an S3 bucket that holds your code and build functions from that. You can also hook CloudFormation up to a git repository like GitHub or AWS CodeCommit and have it automatically update your functions when you push commits to the repository. And there are other systems like Serverless (https://serverless.com) that can abstract and automate deploys in repeatable and manageable ways.
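As a sketch of the S3-based route with boto3 (the function name, bucket, and key below are placeholders, and the call is deferred so nothing touches AWS until you invoke it):

```python
def build_update_args(function_name, bucket, key):
    """Arguments for lambda.update_function_code when the code already
    lives in S3, so nothing needs downloading or re-uploading locally."""
    return {
        "FunctionName": function_name,
        "S3Bucket": bucket,
        "S3Key": key,
        "Publish": True,  # publish a new version so an alias can pin it
    }

def promote(function_name, bucket, key):
    # boto3 imported here so the helper above works without AWS deps
    import boto3
    client = boto3.client("lambda")
    return client.update_function_code(
        **build_update_args(function_name, bucket, key)
    )
```

With Publish set, each deploy creates a new numbered version, and you can then point a staging or prod alias at whichever version you want.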
The Lambda's version is available in the context object. You should be able to tell which alias was called by looking at the ARN. ARNs have the alias as a suffix, such as:
arn:aws:lambda:aws-region:acct-id:function:helloworld:PROD
Info here: http://docs.aws.amazon.com/lambda/latest/dg/python-context-object.html
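For example, a small sketch: the invoked_function_arn attribute on the context object is real, while the parsing helper and the "PROD" fallback are my own illustration:

```python
def alias_from_arn(invoked_function_arn):
    """Return the alias (or version) suffix of an invoked-function ARN,
    or None when the function was invoked unqualified."""
    # qualified:   arn:aws:lambda:region:acct:function:name:ALIAS (8 parts)
    # unqualified: arn:aws:lambda:region:acct:function:name       (7 parts)
    parts = invoked_function_arn.split(":")
    return parts[7] if len(parts) > 7 else None

def handler(event, context):
    # pick per-environment settings based on the alias the caller used
    alias = alias_from_arn(context.invoked_function_arn) or "PROD"
    return {"alias": alias}
```

The handler can then look the alias up in a config dict, sidestepping the one-set-of-environment-variables limitation.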