I have a python app that uses around 15 pip libraries.
It also requires Azure Storage as it creates files.
I'd like to publish this app to Azure Functions. How do I do this?
How do Azure Functions manage all the Python libraries that I need?
I can't seem to find any sample code.
My code is basically like this:
import libA
import libB
import libC

def function1(...):
    # Commands-For-Function1

def function2(...):
    # Commands-For-Function2

def function3(...):
    # Commands-For-Function3

function1(param1, param2)  # Execution
But with Azure Function apps, it looks for an __init__.py entry point. How would I integrate my functions into an Azure Function?
Also, would Azure Container Instances not be a better solution? I'd just have to containerise my solution and publish it.
Thanks
An Azure Functions Python project has a default folder structure recommended by Microsoft, with each function in its own folder and common code in a shared_code folder.
Dependencies, libraries, and shared function code can all go in that shared_code folder, and are then imported as modules in the main function's __init__.py file.
You can import packages, libraries, and methods defined in ordinary Python files into Azure Functions Python code using the import keyword, with both absolute and relative references:
from shared_code import my_first_helper_function #(absolute)
import shared_code.my_second_helper_function #(absolute)
from . import example #(relative)
Please refer here for more information.
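To tie this back to the question: with the Python v1 programming model, each function lives in its own folder containing a function.json and an __init__.py whose entry point is a main function, and the pip packages listed in requirements.txt at the project root are installed when the app is deployed. A minimal sketch of an __init__.py that wraps the code from the question (HTTP trigger; folder and module names are assumptions, not from the original post):
import azure.functions as func

from shared_code import my_app_logic  # hypothetical module holding function1/function2/function3

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Pull parameters from the request and delegate to the existing code
    param1 = req.params.get("param1")
    param2 = req.params.get("param2")
    result = my_app_logic.function1(param1, param2)
    return func.HttpResponse(str(result))
The top-level call at the bottom of the original script moves into main, since Azure Functions invokes main for you on each trigger.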
I'm creating a Gtk4 application with python bindings on Linux. I'm using flatpak as well.
So I'm creating an extensions/plugins system where the user can define the main module to call in a specific file, and later on I'll load it. Everything works, but when the user's code imports external libraries like NumPy or pandas, the interpreter starts looking inside the Flatpak sandbox, which is expected. I'm wondering how I can tell the Python interpreter to use the system's modules for the imported plugins instead of looking in the app's modules.
The user's code and requirements should be independent of the app's requirements.
This is how I'm loading the modules:
from importlib.machinery import SourceFileLoader

extension_module = SourceFileLoader(file_name, module_path).load_module()
extension_class = getattr(extension_module, class_name)
obj = extension_class()
This is an example of a loaded class; the absolute path of this module is /home/user/.extensions/ext1/module.py:
import numpy as np

class Module1:
    def __init__(self):
        self.data = np.array([1, 2, 3, 4, 5, 6])

    def get_data(self):
        return self.data
I tried using
sys.path.append('/usr/lib64/python3.10/site-packages')
It's added, but in the sandbox environment.
I thought about looking for user imports manually: when a user imports pandas, I'd look for the installed Python package on the system and use importlib or SourceFileLoader to load it, but I don't think that's a good way to do it.
So, finally, after a day of reading the Flatpak docs, I found a way to do it.
I had to add the argument --filesystem=home. This argument gives you access to the user's home directory. When you use pip to install packages, they are installed under ~/.local/lib/python3.10/site-packages/. To let the Python interpreter search for packages in that folder, you can add it to the path like this (note that ~ has to be expanded explicitly):
import os
import sys
sys.path.append(os.path.expanduser('~/.local/lib/python3.10/site-packages/'))
Note 1: In my case this is enough, because the app is for learning and not serious, so there are no real security concerns.
I am using openSUSE Tumbleweed, so I have Python 3.10.11, while the Flatpak runtime ships Python 3.10.6. Users on an older distro, or on a distro like Ubuntu or Debian, may not have the latest Python version and may run into compatibility issues.
A better solution is to create a dedicated folder in the user's local directory, e.g. ~/.cache/myapp/packages, and add it to the Flatpak manifest with --filesystem=~/.cache/myapp:create. That maps your folder so it can be accessed from inside the sandbox, and the :create option creates the folder if it doesn't exist. Then have your Python script install the required packages into that folder, based on the imports used in the external scripts, and add the folder to the module search path with sys.path.append, as in the sketch below.
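A minimal sketch of that approach, assuming the ~/.cache/myapp/packages location above and that pip is available inside the runtime (the package name and the subprocess call are illustrative, not from the original setup):
import os
import subprocess
import sys

# Folder mapped into the sandbox via --filesystem=~/.cache/myapp:create
packages_dir = os.path.expanduser('~/.cache/myapp/packages')
os.makedirs(packages_dir, exist_ok=True)

# Install a package that an external plugin needs into that folder
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--target', packages_dir, 'numpy'])

# Make the folder visible to the interpreter before loading plugins
sys.path.insert(0, packages_dir)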
Note 2: It's not safe to import external scripts directly into your code. When you create a plugin system, it's better to spawn a subprocess that executes those scripts; that way you isolate your code from external code. You can use an IPC protocol or other techniques to exchange data between the main process and the subprocess.
I'm having a hard time trying to import modules created inside a project.
The structure of my project is as follows:
myapp/calcs/__init__.py
myapp/calcs/calculations.py
myapp/tests/__init__.py
myapp/tests/test_calcs.py
Inside test_calcs.py, I have the following import statement:
from calcs import calculations as clc
However, I am getting this error:
ModuleNotFoundError: No module named calcs
I can't understand why I am having this error, as I have included an __init__.py file inside calcs, which should make it act as a package.
I also found this part of Python extremely confusing. You basically need to make your project an installable package and do an editable install of it in your environment to import your modules this way. I would recommend this YouTube video for a great walkthrough of the process; you only need to watch 2:36-8:36 for this topic specifically, but the whole video is quite useful. Best of luck.
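For reference, a minimal sketch of what making the project installable can look like, using a setup.py at the myapp/ root (the file contents and project name are assumptions, not taken from the video):
# myapp/setup.py
from setuptools import setup

setup(
    name="myapp",
    version="0.1.0",
    packages=["calcs"],
)

# then, from the myapp/ folder, run:  pip install -e .
After the editable install, from calcs import calculations works from anywhere in that environment, including the tests.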
The first three answers in this thread describe how you can solve this pretty well!
Short version:
change from calcs import calculations as clc to from ..calcs import calculations as clc
start a terminal session in the parent folder of myapp
run python -m myapp.tests.test_calcs (no .py suffix when using -m)
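Putting those two steps together, test_calcs.py might look roughly like this (the test body is just an illustration):
# myapp/tests/test_calcs.py
from ..calcs import calculations as clc

def test_import():
    # trivial check that the module was found; real assertions go here
    assert clc is not None

if __name__ == "__main__":
    test_import()
Running it as a module from the parent folder of myapp (python -m myapp.tests.test_calcs) keeps the package context, which is what makes the relative import resolve.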
I am having an issue with importing a common utility file in my AWS Lambdas. It is a Python file and the folder structure looks something like this:
(functions folder)
    common_util.py
    (lambda 1 folder)
        lambda1
    (lambda 2 folder)
        lambda2
I need to access common_util from both of these lambdas. When I run my CDK project locally this is easy: I use .. in the import statement to tell Python the file is one directory up:
from ..common_util import (...)
When I deploy to AWS as a lambda (I package all of the above), I need to specify the import without the .. because that is the root folder of the lambda:
from common_util import(...)
I need an import statement or a solution that will work for both my CDK project and the lambda.
Here is the CDK code where the lambda is created:
const noteIntegrationLambda = new Function(this as any, "my-lambda", {
  functionName: "my-lambda",
  runtime: StackConfiguration.PYTHON_VERSION,
  handler: "my_lambda.execute",
  timeout: Duration.seconds(15),
  code: Code.fromAsset("functions/"),
  role,
  layers: [dependencyLayer],
  environment: env,
});
Lambda layers provide an ideal mechanism for solving this problem. As mentioned in https://medium.com/@manojf/sharing-code-among-lambdas-using-lambda-layers-ca097c8cd500,
Lambda layers allow us to share code among lambda functions. We just
have to upload the layer once and reference it in any lambda function.
So consider deploying your common code via a layer. That said, to structure your code, I recommend you create a common package that you install locally using pip install, as outlined at Python how to share package between multiple projects. Then you put that package into a layer that both of your lambdas reference. That completely solves the problem of how to structure code when your local file structure is different than the lambda file structure.
Also consider these resources:
Import a python module in multiple AWS Lambdas
What is the proper way to work with shared modules in Python development?
https://realpython.com/absolute-vs-relative-python-imports/
Python: sharing common code among a family of scripts
Sharing code in AWS Lambda
Installing Python packages from local file system folder to virtualenv with pip
As a layer example, suppose you wanted to include a "common_utils" library for your lambdas to reference. To make a layer, you would need to create a directory structure that contains that code, then zip the entire directory. It may be as follows:
/python
/common_utils
__init__.py
common_util.py
...
When zipped, the zip file must have the "python" folder at its root, and inside of that you put your code. If you do this and also install your common code as a package, you can import it in your local code and in your lambdas using the same import.
What I do is use pip install to install to a certain file location, the location that I then zip into a layer. For example, if I wanted to make a layer for the pymysql library I might do
pip install --target=c:\myLayers\python pymysql
That will install the library files into the location I specified, which makes it easy to know what to zip up (just create a zip that includes the "python" directory).
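If you prefer to script that zipping step, a minimal sketch in Python might be (the paths follow the example above and are assumptions):
import shutil

# Build pymysql_layer.zip with "python/" at the root of the archive,
# which is the layout Lambda expects for a Python layer
shutil.make_archive("pymysql_layer", "zip", root_dir=r"c:\myLayers", base_dir="python")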
I know this question is old, but I ran into a similar issue. My solution was to detect if the current environment is local or lambda using the os package, and then import differently based on the environment (local or cloud). Will leave here as a reference.
import os

if os.environ.get("AWS_EXECUTION_ENV") is not None:
    # For use in lambda function
    from package_a import class_a
else:
    # For local use
    from ...package_a import class_a
Credits to: How to check if Python app is running within AWS lambda function?
I'm trying to make use of an external library in my Python mapper script in an AWS Elastic MapReduce job.
However, my script doesn't seem to be able to find the modules in the cache. I archived the files into a tarball called helper_classes.tar and uploaded the tarball to an Amazon S3 bucket. When creating my MapReduce job on the console, I specified the argument as:
cacheArchive s3://folder1/folder2/helper_classes.tar#helper_classes
At the beginning of my Python mapper script, I included the following code to import the library:
import sys
sys.path.append('./helper_classes')
import geoip.database
When I run the MapReduce job, it fails with an ImportError: No module named geoip.database. (geoip is a folder in the top level of helper_classes.tar and database is the module I'm trying to import.)
Any ideas what I could be doing wrong?
This might be late for the topic.
The reason is that the module geoip.database is not installed on all the Hadoop nodes.
You can either avoid uncommon imports in your map/reduce code,
or install the needed modules on all Hadoop nodes.
I'm trying to import a company module into my software and I get the error:
ImportError: No module named config
from:
from pylons.config import config
So obviously, the module that I'm importing requires pylons.config but can't find it in my virtual environment.
If I go to the terminal and try some Python statements, I can find config if I try:
from pylons import config
but will error if I try:
import pylons.config
Why is this?
And does anybody know how or where I can get:
from pylons.config import config
to work? Bearing in mind that I cannot change the code for that module, only my own code that imports it, or my own system files.
UPDATE
If anyone finding this page has a similar problem you may find that you are trying to run two modules with different versions of Pylons.
For example, you are creating a login application called myApp. You have some Python modules which help with login handling called pyLogin.
First you install pyLogin with python setup.py install. This adds the libraries to your site packages and updates any libraries it depends on, such as SqlAlchemy.
Next you install myApp in the same way which again updates libraries and dependencies.
This problem will occur if pyLogin and myApp are using different versions of Pylons. If pyLogin uses Pylons 0.9.6 and myApp uses Pylons 1.0, for example, then the pyLogin code will be called from myApp but will run against the wrong Pylons framework; it will therefore require EITHER from pylons import config OR from pylons.config import config, but will only work with one of them. If it is using the wrong call for its Pylons version, you will end up with this error message.
So the only solution to this error is to either find earlier or later libraries which use the same Pylons version as your application or to convert your application to the same Pylons version as the libraries you are using.
There is a difference between the two usages...
import loads a Python module into its own namespace, while from loads a Python module into the current namespace.
So, using from pylons import config imports config into your current namespace. But trying to import a class or function with a plain import statement is not possible, since import only accepts modules... You can only import modules, and then use their functions or classes by referring to them through the module's namespace, like:
import pylons
....
pylons.config  # to retrieve config
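For contrast, a quick sketch of the two styles side by side (assuming Pylons is installed; this just illustrates the namespace difference described above):
# qualify the name through the module
import pylons
print(pylons.config)

# pull config directly into the current namespace
from pylons import config
print(config)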
More about import in Python