Create utils.py in AWS Lambda - Python

I had a def hello() function in my home/file.py file. I created a home/common/utils.py file and moved the function there.
Now I want to import it in file.py.
I imported it as from utils import hello and as from common.utils import hello, and neither import throws an error locally. However, when I run it on AWS Lambda, I get this error:
Runtime.ImportModuleError: Unable to import module 'file': No module named 'utils'
How can I fix this, without having to use EC2 or something similar?
data "archive_file" "file_zip" {
type = "zip"
source_file = "${path.module}/src/file.py"
output_file_mode = "0666"
output_path = "${path.module}/bin/file.zip"
}

The deployment package that you're uploading only contains your main Python script (file.py). Specifically, it does not include any dependencies such as common/utils.py. That's why the import fails when the code runs in Lambda.
Modify the creation of your deployment package (file.zip) so that it includes all needed dependencies.
For example:
data "archive_file" "file_zip" {
type = "zip"
output_file_mode = "0666"
output_path = "${path.module}/bin/file.zip"
source {
content = file("${path.module}/src/file.py")
filename = "file.py"
}
source {
content = file("${path.module}/src/common/utils.py")
filename = "common/utils.py"
}
}
If all of your files happen to be in a single folder, then you can use source_dir instead of listing the individual files.
Note: I don't use Terraform so the file(...) with embedded interpolation may not be 100% correct, but you get the idea.
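If you'd rather build the archive outside Terraform, here is a minimal Python sketch (the src/ and bin/ paths are taken from the Terraform snippets above) that produces the same layout using only the standard library:

import os
import zipfile

os.makedirs("bin", exist_ok=True)  # output folder, mirroring output_path above
# Put the handler at the zip root and utils.py under common/, so that
# "from common.utils import hello" resolves inside the Lambda runtime.
with zipfile.ZipFile("bin/file.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("src/file.py", "file.py")
    zf.write("src/common/utils.py", "common/utils.py")
    zf.writestr("common/__init__.py", "")  # optional on Python 3, but harmless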

First of all, follow the standard guide: https://docs.aws.amazon.com/lambda/latest/dg/python-package.html (refer to the section titled "Deployment package with dependencies").
At the end of that section, notice this command:
zip -g my-deployment-package.zip lambda_function.py
Run the same command for your utils file:
zip -g my-deployment-package.zip common/
zip -g my-deployment-package.zip common/utils.py
Ensure that, in lambda_function, you are using a proper import statement, like:
from common.utils import util_function_name
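To sanity-check the archive before uploading, a short Python snippet will do (a sketch, assuming the zip sits in the current directory):

import zipfile

# Confirm that common/utils.py actually made it into the package.
print(zipfile.ZipFile("my-deployment-package.zip").namelist())
# Expect the list to include 'lambda_function.py' and 'common/utils.py'.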
Now you can upload this zip and test it for yourself. It should run.
Hope this helps.

Related

Boto3 Glue client create_job() module path error `ModuleNotFoundError`

I'm trying to use the boto3 client create_job() to create a Glue job; this is the script:
job = client.create_job(
    Name=xxx,
    Role=xxx,
    Command={
        'Name': 'glueetl',
        'ScriptLocation': 's3://my_bucket_name/my_project_name/src/glue.py',
        'PythonVersion': '3'
    },
    DefaultArguments={
        '--job-language': 'python',
        '--extra-py-files': 's3://my_bucket_name/my_project_name/src/test.zip',
        '--conf': 'spark.yarn.executor.memoryOverhead=7g --conf spark.jars.packages=xxx',
    },
    ExecutionProperty={
        'MaxConcurrentRuns': 1
    },
    GlueVersion='1.0'
)
The structure in test.zip is a _init_.py file + a glue.py file (a duplicate of the one specified in ScriptLocation) + example.py.
Inside glue.py I have import example, and the job fails with the error "ErrorMessage":"ModuleNotFoundError: No module named 'example'".
I tried from test import example, but that doesn't work either. I'm confused and stuck here: how does Glue read and import modules? Do I need to set up something? Might someone be able to help, please? Many thanks.
The _init_.py is incorrect. It should be __init__.py (double underscores on each side), as explained in the AWS docs.
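For illustration, a minimal sketch of rebuilding test.zip with the correct marker name (the local src/ path is hypothetical):

import zipfile

with zipfile.ZipFile("test.zip", "w") as zf:
    zf.writestr("__init__.py", "")            # double underscores on both sides
    zf.write("src/example.py", "example.py")  # so "import example" resolves in glue.py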

Failed to import defined modules in __init__.py

My directory looks like this :
- HttpExample:
- __init__.py
- DBConnection.py
- getLatlong.py
I want to import DBConnection and getLatlong in __init__.py. My editor shows no error in __init__.py, but when I run it, I receive:
System.Private.CoreLib: Exception while executing function: Functions.HttpExample. System.Private.CoreLib: Result: Failure
Exception: ModuleNotFoundError: No module named 'getLatlong'
I'm trying to pass the information input by the user from __init__.py to a function in getLatlong. Below is the code:
__init__.py:
import logging

import azure.functions as func

from getLatlong import result
from DBConnection import blobService, container_name, account_key, file_path

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    section = req.params.get('section')
    bound = req.params.get('bound')
    km_location = req.params.get('km_location')
    location = req.params.get('location')
    if not section:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            section = req_body.get('section')
    if section and bound and km_location:
        result(section, km_location, bound, location).getResult()  # HERE
        return func.HttpResponse(f"Hello {section}, {bound}!")
    #elif section is None or bound is None or km_location is None:
    #    return func.HttpResponse("All values are mandatory!")
I am also receiving an import error in getLatlong.py when it imports DBConnection. The following values are passed to getLatlong.py. The code:
from DBConnection import blobService, container_name, account_key, file_path  # Another import error here: Unable to import DBConnection

class result:
    def __init__(self, section, bound, km_location, location):
        self.section = section
        self.bound = bound
        self.km_location = km_location
        self.location = location

    def getResult(self):
        print(self.section)
        print(self.bound)
        print(self.km_location)
        print(self.location)
I've tried every way to import these files and I'm losing my mind..
You get these errors because Python does not know where to look for the files you want to import. Depending on which Python version you are using, I see three ways to solve this:
You could add HttpExample to your PYTHONPATH, and then your imports should work as you have them currently.
Another way would be to use the sys module and append the path to HttpExample, e.g.
import sys
sys.path.append('PATH/TO/HttpExample')
But you would have to do this in all files where you want to import something from the parent folder.
Or you can use relative imports, which have been available since Python 2.5 (see PEP 328). Those only work inside packages, but since you have your __init__.py file, it should work. Relative imports use dots . to tell Python where to look for the import: a single dot tells Python to look in the current package (the folder the module lives in), and .. goes up one package level. One level should be enough in your case.
So in your case, changing your code to this should solve your problem.
In __init__.py:
from .getLatlong import result
from .DBConnection import blobService, container_name, account_key, file_path
In getLatlong.py:
from .DBConnection import blobService, container_name, account_key, file_path
You could try from __app__.HttpExample import getLatlong.
There is a document about how to import modules from the shared code folder; see this doc: Folder structure.
It says shared code should be kept in a separate folder in __app__, and in my test this worked for me.
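For illustration, a sketch of that layout (the SharedCode folder name here is an assumption; the point is that the shared folder sits next to HttpExample under the function app root):

__app__/
    SharedCode/
        getLatlong.py
    HttpExample/
        __init__.py

and then, inside HttpExample/__init__.py:

from __app__.SharedCode.getLatlong import result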

distutils wildcard entry for include_dir

I am setting up a package using distutils.
I need to allow access to a module that is built during the set-up process and is located in ./build/temp.linux-x86_64-3.6. I do this by including
include_dirs=["./build/temp.linux-x86_64-3.6"]
when adding the extension to the distutils Configuration.
My question: is there a way of setting this using a wildcard, such as:
include_dirs=["./build/temp.linux*"]
as when I try this, it fails with the error:
Nonexistent include directory ‘build/temp.linux*’ [-Wmissing-include-dirs]
The reason I want this is that the build folder will be named differently depending on the system. Alternatively, if anyone knows a way of figuring out what this temp build folder will be called, that would also work.
The way I have got around this problem is as follows:
def return_major_minor_python():
    import sys
    return str(sys.version_info[0]) + "." + str(sys.version_info[1])

def return_include_dir():
    from distutils.util import get_platform
    return get_platform() + '-' + return_major_minor_python()
Then when calling config.add_extension() using:
include_dirs=['build/temp.' + return_include_dir()]
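For a sense of what this resolves to, here are illustrative values (assuming 64-bit Linux and CPython 3.6; other platforms will differ):

>>> return_major_minor_python()
'3.6'
>>> return_include_dir()
'linux-x86_64-3.6'
>>> 'build/temp.' + return_include_dir()
'build/temp.linux-x86_64-3.6'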
So the whole process for adding an f90wrap-wrapped, f2py extension to a Python package is:
def setup_fort_ext(args, parent_package='', top_path=''):
    from numpy.distutils.misc_util import Configuration
    from os.path import join

    config = Configuration('', parent_package, top_path)

    fort_src = [join('PackageName/', 'fortran_source.f90')]
    config.add_library('_fortran_source', sources=fort_src,
                       extra_f90_compile_args=[args["compile_args"]],
                       extra_link_args=[args["link_args"]])

    sources = [join('PackageName', 'f90wrap_fortran_source.f90')]
    config.add_extension(name='_fortran_source',
                         sources=sources,
                         extra_f90_compile_args=[args["compile_args"]],
                         extra_link_args=[args["link_args"]],
                         libraries=['_fortran_source'],
                         include_dirs=['build/temp.' + return_include_dir()])
    return config

if __name__ == '__main__':
    import sys
    import subprocess
    import os

    install_numpy()          # installs numpy
    install_dependencies()   # calls pip to install any requirements
    from numpy.distutils.core import setup

    config = {'name': 'PackageName',
              'version': __version__,
              'project_description': 'Some Package description',
              'description': 'Some package Description',
              'long_description': open('README.txt').read(),
              'long_description_content_type': 'text/markdown',
              'author': 'Your name here',
              'author_email': 'your email here',
              'url': 'link to git repo here',
              'python_requires': '>=3.3',
              'packages': ['PackageName'],
              'package_dir': {'PackageName': 'PackageName'},
              'package_data': {'PackageName': ['*so*']}
              }

    config_fort = setup_fort_ext(args, parent_package='PackageName', top_path='')
    config2 = dict(config, **config_fort.todict())
    setup(**config2)
where the source fortran_source.f90 is wrapped beforehand, and the resulting wrapped source file (f90wrap_fortran_source.f90) is included as a library and subsequently compiled by f2py.
args in the above is just a dict with any linking or compile args you wish to pass through.

How to write my AWS Lambda function?

I am new to AWS Lambda and I am trying to move my existing code into a Lambda function. My existing code looks like:
import boto3
import slack
import slack.chat
import time
import itertools
from slacker import Slacker

ACCESS_KEY = ""
SECRET_KEY = ""
slack.api_token = ""
slack_channel = "#my_test_channel"

def gather_info_ansible():
    .
    .

def call_snapshot_creater(data):
    .
    .

def call_snapshot_destroyer(data):
    .
    .

if __name__ == '__main__':
    print "Calling Ansible Box Gather detail Method first!"
    ansible_box_info = gather_info_ansible()
    print "Now Calling the Destroyer of SNAPSHOT!! BEHOLD THIS IS HELL!!"
    call_snapshot_destroyer(ansible_box_info)
    #mapping = {i[0]: [i[1], i[2]] for i in data}
    print "Now Calling the Snapshot Creater!"
    call_snapshot_creater(ansible_box_info)
Now I try to create a Lambda function from scratch in the AWS console as follows (a hello world):
from __future__ import print_function
import json

print('Loading function')

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    print("value1 = " + event['key1'])
    print("value2 = " + event['key2'])
    print("value3 = " + event['key3'])
    print("test")
    return event['key1']  # Echo back the first key value
    #raise Exception('Something went wrong')
and the sample test event in the AWS console is:
{
    "key3": "value3",
    "key2": "value2",
    "key1": "value1"
}
I am really not sure how to put my code into AWS Lambda, because even when I add the modules in the Lambda console and run it, it throws this error:
Unable to import module 'lambda_function': No module named slack
How do I solve this and get my code into Lambda?
You have to make a zipped package consisting of the Python script containing your lambda handler plus all the modules that you import in that script, and upload the zipped package to AWS.
Whatever module you want to import, you have to include in the zip package. Only then will the import statements work.
For example, your zip package should consist of:
test_package.zip
|- test.py (script containing the lambda_handler function)
|- boto3 (module folder)
|- slack (module folder)
|- slacker (module folder)
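As a sketch of how you might assemble such a package with the standard library (the package/ staging folder is an assumption; you could fill it first with pip install --target package slack slacker):

import os
import zipfile

def build_package(src_script, deps_dir, out_zip):
    """Zip the handler script plus every staged dependency into one archive."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(src_script, os.path.basename(src_script))  # test.py at zip root
        for root, _dirs, files in os.walk(deps_dir):
            for fname in files:
                full = os.path.join(root, fname)
                # Store paths relative to deps_dir so modules sit at the zip root.
                zf.write(full, os.path.relpath(full, deps_dir))

build_package("test.py", "package", "test_package.zip")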
You receive the error because AWS Lambda does not have any information about a module called slack.
A module is a set of .py files that are stored somewhere on a computer.
In the case of Lambda, you should bring in all your libraries by creating a deployment package.
Here is another question that describes a similar case and provides several solutions:
AWS Lambda questions

Python: Importing files based on which files the user wants [duplicate]

This question already has answers here:
How to import a module in Python with importlib.import_module
(3 answers)
Closed 8 years ago.
I have the following directory structure
+ code
|
--+ plugins
|
-- __init__.py
-- test_plugin.py (has a class TestPlugin)
-- another_test_plugin.py (has a class AnotherTestPlugin)
--+ load.py
--+ __init__.py
In load.py, I want to be able to initialize only those classes that the user specifies. For example, let's say I do something like
$ python load.py -c test_plugin # Should only import test_plugin.py and initialize an object of the TestPlugin class
I am having trouble trying to use the "imp" module to do it. It keeps on saying "No such file or directory". My understanding is that it is somehow not understanding the path properly. Can someone help me out with this?
OK, your problem is path-related. You expect that the script is being run from the same directory load.py is in, which is not the case.
What you have to do is something like:
import imp, os, plugins

path = os.path.dirname(plugins.__file__)
imp.load_source('TestPlugin', os.path.join(path, 'test_plugin.py'))
where plugins is the package containing all your plugins (i.e. just the empty __init__.py); it will help you get the full path to your plugin modules' files.
Another solution, if you want a "plugins" discovery tool:
import imp, os, sys
import glob

def load_plugins(path):
    """
    Assuming `path` is the only directory in which you store your plugins,
    and assuming each name follows the syntax:
        plugin_file.py -> PluginFile
    Please note that we don't import files starting with an underscore.
    """
    plugins = {}
    plugin_files = glob.glob(path + os.sep + r'[!_]*.py')
    for plugin_path in plugin_files:
        module_name, ext = os.path.splitext(plugin_path)
        module_name = os.path.basename(module_name)
        class_name = module_name.title().replace('_', '')
        loaded_module = imp.load_source(class_name, plugin_path)  # we import the plugin
        plugins[module_name] = getattr(loaded_module, class_name)
    return plugins

plugins = load_plugins(your_path_here)
plugin_name = sys.argv[3]
plugin = plugins.get(plugin_name)
if not plugin:
    pass  # manage a non-existing plugin here
else:
    plugin_instance = plugin()  # creates an instance of your plugin
This way, you can also specify different names by changing your keys, e.g., 'test_plugins' => 'tp'. You don't have to initialize your plugins, but you can still run this function whenever you want to load your plugins at runtime.
import sys

exec('import ' + sys.argv[2])
obj = test_plugin.TestPlugin()
Here sys.argv[2] is the 'test_plugin' string from the command-line arguments.
EDIT: Another way to avoid using exec:
import importlib
mod = importlib.import_module(sys.argv[2])
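Building on that, you can then fetch the class from the imported module by name and instantiate it (TestPlugin and test_plugin come from the question's layout):

import importlib
import sys

mod = importlib.import_module(sys.argv[2])  # e.g. 'test_plugin'
cls = getattr(mod, 'TestPlugin')            # look the class up on the module
obj = cls()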
