I'm trying to deploy a Python .zip package as an AWS Lambda function.
I chose the hello-python blueprint.
I created the first Lambda with the inline code, then tried to switch to uploading a deployment .zip.
The package I used is a .zip containing a single file called hello_python.py with the same code as the default inline sample, shown below:
from __future__ import print_function
import json

print('Loading function')

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    print("value1 = " + event['key1'])
    print("value2 = " + event['key2'])
    print("value3 = " + event['key3'])
    return event['key1']  # Echo back the first key value
    #raise Exception('Something went wrong')
After I click "Save and test", nothing happens except a red ribbon at the top of the page, with no other substantive error message. The logs and run results do not change when I modify the source, repackage it, and upload it again.
Lambda functions require a handler in the format <FILE-NAME-NO-EXTENSION>.<FUNCTION-NAME>. In your case the handler is set to lambda_function.lambda_handler, which is the default value assigned by AWS Lambda. However, you've named your file hello_python.py, so AWS Lambda is looking for a Python file named lambda_function.py and finding nothing.
To fix this, either:
Rename your hello_python.py file to lambda_function.py, or
Change your Lambda function handler to hello_python.lambda_handler
You can see an example of how this works in the documentation, where they create a Python function called my_handler() inside the file hello_python.py and create a Lambda function to call it with the handler hello_python.my_handler.
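If you'd rather script the second option than edit the Handler field in the console, it can be done with boto3. This is just a sketch; the function name below is a placeholder, not something from your setup:

import boto3

client = boto3.client('lambda')

# Point the existing function at the handler inside hello_python.py
# ('hello-python' is a hypothetical function name; substitute your own)
client.update_function_configuration(
    FunctionName='hello-python',
    Handler='hello_python.lambda_handler'
)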
Using Python 3.8 and CDK 2.19.0.
I want to create an A Record against a hosted zone that's already in my AWS account.
I am doing the following:
hosted_zone = route53.HostedZone.from_hosted_zone_attributes(self, "zone",
    zone_name="my.awesome.zone.",
    hosted_zone_id="ABC12345DEFGHI"
)

route53.ARecord(self, "app_record_set",
    target=self.lb.load_balancer_dns_name,  # this is declared above, and works fine.
    zone=hosted_zone,
    record_name="test-cdk.my.awesome.zone"
)
Inside my app.py I have:
env_EU = cdk.Environment(account="12345678901112", region="eu-west-1")
app = cdk.App()
create_a_record = DomianName(app, "DomianName", env=env_EU)
When I run cdk synth I get the following error:
➜ cdk synth
jsii.errors.JavaScriptError:
Error: Expected object reference, got "${Token[TOKEN.303]}"
File ".../.venv/lib/python3.8/site-packages/jsii/_kernel/providers/process.py", line 326, in send
...(full traceback)
Subprocess exited with error 1
I've tried from_lookup (rather than from_hosted_zone_attributes) and Python 3.9 with Node 17/16/12 (just in case), but nothing helps. I get the same error every time.
If I comment out the A Record creation, then the synth completes as expected.
cdk.context.json also caches the correct hosted zone, but only if I comment out the A Record creation.
The ARecord target parameter expects a RecordTarget, but you are passing a string (a token). Wrap the load balancer in RecordTarget.from_alias with a LoadBalancerTarget:
import aws_cdk.aws_route53 as route53
import aws_cdk.aws_route53_targets as targets
import aws_cdk.aws_elasticloadbalancingv2 as elbv2

# zone: route53.HostedZone
# lb: elbv2.ApplicationLoadBalancer

route53.ARecord(self, "AliasRecord",
    zone=zone,
    target=route53.RecordTarget.from_alias(targets.LoadBalancerTarget(lb))
)
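An alias record needs both the DNS name and the canonical hosted zone ID of the load balancer, which is why the target has to be a RecordTarget built from a LoadBalancerTarget rather than the raw load_balancer_dns_name token.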
We can upload a file using the telegram-upload library with the following command in the terminal:
telegram-upload file1.mp4 /path/to/file2.mkv
But how do I call this inside a Python function? That is, if a user passes the file path as an argument to a Python function, that function should be able to upload the file to the Telegram server. This is not mentioned in the documentation.
In other words, how do I execute or run shell commands from inside a Python function?
For telegram-upload you can use the upload method in telegram_upload.management, and
for telegram-download the download method in the same file.
Or you can see how they are implemented there:
from telegram_upload.client import Client
from telegram_upload.config import default_config, CONFIG_FILE
from telegram_upload.exceptions import catch
from telegram_upload.files import NoDirectoriesFiles, RecursiveFiles

DIRECTORY_MODES = {
    'fail': NoDirectoriesFiles,
    'recursive': RecursiveFiles,
}

def upload(files, to, config, delete_on_success, print_file_id, force_file, forward, caption, directories,
           no_thumbnail):
    """Upload one or more files to Telegram using your personal account.

    The maximum file size is 1.5 GiB and by default they will be saved in
    your saved messages.
    """
    client = Client(config or default_config())
    client.start()
    files = DIRECTORY_MODES[directories](files)
    if directories == 'fail':
        # Validate now
        files = list(files)
    client.send_files(to, files, delete_on_success, print_file_id, force_file, forward, caption, no_thumbnail)
I found the solution. Using the os module we can run command-line strings inside a Python function, i.e. os.system('telegram-upload file1.mp4 /path/to/file2.mkv')
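If you want more control than os.system gives you (a checked return code, no shell parsing), the standard library's subprocess module works too. A minimal sketch, assuming telegram-upload is on the PATH and the file paths below are just examples:

import subprocess

def upload_files(*paths):
    # Pass the arguments as a list so no shell is involved;
    # check=True raises CalledProcessError on a non-zero exit code
    result = subprocess.run(['telegram-upload', *paths], check=True)
    return result.returncode

upload_files('file1.mp4', '/path/to/file2.mkv')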
I'm trying to create a process that dynamically watches Jupyter notebooks, compiles them on modification, and imports them into my current file. However, I can't seem to execute the updated code; it only executes the first version that was loaded.
There's a file called producer.py that calls this function repeatedly:
import fs.fs_util as fs_util

while True:
    fs_util.update_feature_list()
In fs_util.py I do the following:
from fs.feature import Feature
import inspect
from importlib import reload
import os

def is_subclass_of_feature(o):
    return inspect.isclass(o) and issubclass(o, Feature) and o is not Feature

def get_instances_of_features(name):
    module = __import__(COMPILED_MODULE, fromlist=[name])
    module = reload(module)
    feature_members = getattr(module, name)
    all_features = inspect.getmembers(feature_members, predicate=is_subclass_of_feature)
    return [f[1]() for f in all_features]
This function is called by:
def update_feature_list(name):
    os.system("jupyter nbconvert --to script {}{} --output {}{}"
              .format(PATH + "/" + s3.OUTPUT_PATH, name + JUPYTER_EXTENSION, PATH + "/" + COMPILED_PATH, name))
    features = get_instances_of_features(name)
    for f in features:
        try:
            feature = f.create_feature()
        except Exception as e:
            print(e)
There is other irrelevant code that checks whether a file has been modified, etc.
I can tell the file is being reloaded correctly because when I use inspect.getsource(f.create_feature) on the class it displays the updated source code, however during execution it returns older values. I've verified this by changing print statements as well as comparing the return values.
Also, for some more context, here is the file I'm trying to import:
from fs.feature import Feature

class SubFeature(Feature):
    def __init__(self):
        Feature.__init__(self)

    def create_feature(self):
        return "hello"
I was wondering what I was doing incorrectly.
So I found out what I was doing wrong.
When calling reload I was reloading the module I had newly imported, which was fairly idiotic I suppose. With a non-empty fromlist, __import__ returns the package itself, so reload(module) was reloading the package rather than the compiled submodule that actually holds the class. The correct solution (in my case) was to reload the submodule from sys.modules, so it would be something like reload(sys.modules[COMPILED_MODULE + "." + name])
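For reference, a minimal sketch of the corrected get_instances_of_features, using the same names as above (COMPILED_MODULE being the package the notebooks are compiled into):

import sys
import inspect
from importlib import reload

def get_instances_of_features(name):
    # importing registers the submodule in sys.modules
    __import__(COMPILED_MODULE, fromlist=[name])
    # reload the submodule itself, not the package that contains it
    module = reload(sys.modules[COMPILED_MODULE + "." + name])
    all_features = inspect.getmembers(module, predicate=is_subclass_of_feature)
    return [f[1]() for f in all_features]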
I have created an Event Grid triggered Azure Function in Python. I have deployed my solution to Azure successfully and the execution works fine. But I have an issue with calling another Python script in the same folder. My code is given below:
import os, json, subprocess
import logging

import azure.functions as func

def main(event: func.EventGridEvent):
    try:
        correctionsMessages = event.get_json()
        for correctionMessage in correctionsMessages:
            strMessage = json.dumps(correctionMessage)
            full_path_to_script = os.path.join(os.path.dirname(os.path.realpath(__file__)) + '/' + correctionMessage['ScriptName'] + '.py')
            logging.info('Script Path: %s', full_path_to_script)
            logging.info('Parameter: %s', strMessage)
            subprocess.check_call('python ' + full_path_to_script + ' ' + json.dumps(strMessage))
        result = json.dumps({
            'id': event.id,
            'data': event.get_json(),
            'topic': event.topic,
            'subject': event.subject,
            'event_type': event.event_type,
        })
        logging.info('Python EventGrid trigger processed an event: %s', result)
    except Exception as e:
        logging.info('Error: %s', e)
The above code gives an error for subprocess.check_call. The error is "Error: [Errno 2] No such file or directory: 'python /home/site/wwwroot/Detections/Script1.py'". Script1.py is in the same folder as __init__.py. When I run this function locally, it works absolutely fine.
In my experience, the error was caused by subprocess.check_call not knowing the path of the python executable, not by the path of Script1.py.
In your local Azure Functions development environment, the python path is configured in the local environment variables, so subprocess.check_call can invoke python by searching for the executable in the directories of the PATH environment variable. But in the cloud there is no python path pre-configured in that environment variable; only the Azure Functions host knows the real absolute path to Python.
So one solution is to find out the real absolute path of Python and use it instead of python in your code.
However, in the Azure Functions Python runtime, I don't think it's a good idea to use subprocess.check_call to spawn a child process to handle a given message. The safe and correct way is to define a function in Script1.py, or directly in __init__.py, and pass the given message to it as a parameter to achieve the same result.
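For instance, a minimal sketch of the in-process approach; the run function and its name here are hypothetical, not part of the asker's code:

# Script1.py (same folder as __init__.py)
def run(message):
    # hypothetical entry point that does the actual processing
    ...

# __init__.py
import azure.functions as func
from .Script1 import run  # relative import within the function folder

def main(event: func.EventGridEvent):
    for correctionMessage in event.get_json():
        run(correctionMessage)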
I need to create an AWS Lambda version of an existing Python 2.7 program written by someone else who has left the company.
Here is one function I need to convert, as an example:
#!/usr/bin/env python
from aws_common import get_profiles, get_regions
from aws_ips import get_all_public_ips
import sys

def main(cloud_type):
    # csv header
    output_header = "profile,region,public ip"
    profiles = get_profiles(cloud_type)
    regions = get_regions(cloud_type)
    print output_header
    for profile in profiles:
        for region in regions:
            # public_ips = get_public_ips(profile,region)
            public_ips = get_all_public_ips(profile, region)
            for aws_ip in public_ips:
                print "%s,%s,%s" % (profile, region, aws_ip)

if __name__ == "__main__":
    cloud_type = 'commercial'
    if len(sys.argv) > 1:
        if sys.argv[1] == 'govcloud':
            cloud_type = 'govcloud'
    main(cloud_type)
I need to know how to create this as an AWS handler with event and context arguments from the code above.
If I could get some pointers on how to do this it would be appreciated.
You can simply start writing your Python code inside the handler of an AWS Lambda function.
In the handler, define your functions and variables, and upload a zip file to Lambda if there are any dependencies.
You can change the Python version in Lambda to match, since you are using Python 2.7.
I would suggest the Serverless Framework for uploading your code to Lambda; it makes dependency and code management from your local machine much easier.
Since you are importing aws_common and aws_ips here, you have to check whether they are part of the AWS SDK or your own modules that need to be packaged.
You can import the AWS SDK for Python (boto3) and use it inside the handler:

import boto3

def lambda_handler(event, context):
    pass

Inside the handler you can write your loops in Python and go from there.
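To make that concrete, here is a minimal sketch of wrapping the existing main() logic in a handler. It assumes the code has been ported to Python 3 and that cloud_type arrives in the event payload; both are assumptions, not part of the original program:

from aws_common import get_profiles, get_regions
from aws_ips import get_all_public_ips

def lambda_handler(event, context):
    # the event replaces sys.argv: {"cloud_type": "govcloud"} selects GovCloud
    cloud_type = event.get('cloud_type', 'commercial')
    rows = []
    for profile in get_profiles(cloud_type):
        for region in get_regions(cloud_type):
            for aws_ip in get_all_public_ips(profile, region):
                rows.append("%s,%s,%s" % (profile, region, aws_ip))
    # return the csv rows instead of printing them to stdout
    return {"header": "profile,region,public ip", "rows": rows}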