Error when invoking a Lambda function uploaded as a ZIP file - Python

This is the error I get when I try to invoke my Lambda function, which I uploaded as a ZIP file:
"The file lambda_function.py could not be found. Make sure your
handler upholds the format: file-name.method."
What am I doing wrong?

Usually this happens because of the way the files were zipped. Instead of zipping the root folder, select all the files and subfolders inside it and zip those, so that the handler file sits at the top level of the archive.
Please upload all files and subfolders this way. My example uses Node.js, but you can do the same for Python.
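For the Python case, here is a minimal packaging sketch (the build/ directory and the deployment.zip name are just examples, not part of the original answer). It zips the contents of the folder rather than the folder itself, so lambda_function.py ends up at the root of the archive, where the file-name.method handler lookup (e.g. lambda_function.lambda_handler) expects to find it:

# Minimal packaging sketch. Assumptions: the handler lives in lambda_function.py and
# any dependencies were pip-installed into ./build -- adjust the names to your project.
import os
import zipfile

build_dir = "build"          # hypothetical folder holding lambda_function.py + dependencies
zip_path = "deployment.zip"  # archive to upload to Lambda

with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(build_dir):
        for name in files:
            full_path = os.path.join(root, name)
            # arcname is relative to build_dir, so lambda_function.py sits at the
            # root of the archive instead of inside a parent folder.
            zf.write(full_path, os.path.relpath(full_path, build_dir))

# Sanity check: the handler file must appear without any folder prefix.
with zipfile.ZipFile(zip_path) as zf:
    print(zf.namelist())  # expect 'lambda_function.py', not 'build/lambda_function.py'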

Just to clarify: if I want to use Keras, all I have to do is download the Keras package directories, put my Lambda code and the Keras directories together in a ZIP file, and upload it directly from my desktop, right?
Just wanted to confirm that this is the right way to invoke Keras.

Whenever you get this kind of message, and all the files and handlers have the right name, format, location, etc., also check whether other parts of the Lambda configuration are set up properly for what the code is trying to do.
For example, you might receive this seemingly unrelated error if your code tries to reach an RDS database in a private subnet and you are missing the VPC configuration that allows connectivity to that database.


How to get a list of Folders and Files in Sharepoint

I have started to explore the Graph API, but SharePoint is quite complicated and I am not sure how to proceed. I have previously worked with OneNote through this API successfully.
Purpose: there are thousands of folders/files, and I need to go through the list in order to organize it in a better way. I am looking for a way to export this list to Excel/CSV using Python and the Graph API.
I want to dynamically get a list of all underlying Folders and files visible from this URL:
https://company1.sharepoint.com/teams/TEAM1/Shared Documents/Forms/AllItems.aspx?id=/teams/TEAMS_BI_BI-AVS/Shared Documents/Team Channel1/Folder_Name1&viewid=6fa603f8-82e2-477c-af68-8b3985cfa525
When I open this URL, I see that this folder is part of a private group called PRIVATE_GROUP1 (on the top left).
Looking at some sample API calls here:
GET /drives/{drive-id}/items/{item-id}/children -> not sure what drive-id is
GET /groups/{group-id}/drive/items/{item-id}/children -> I assume group-id refers to the private group; not sure how to get the ID
GET /sites/{site-id}/drive/items/{item-id}/children -> assuming site-id is 'company1.sharepoint.com'?
For all of the above, I am not sure what item-id refers to...
Thanks
Refer to the code below; it might help you.
https://gist.github.com/keathmilligan/590a981cc629a8ea9b7c3bb64bfcb417
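For what it's worth, here is a rough sketch of how those IDs fit together, using plain requests calls against the Graph REST endpoints. The access token (e.g. obtained via MSAL with Sites.Read.All/Files.Read.All), the host name, the site path, the library name and the folder path are all assumptions you would replace with your own values, and pagination via @odata.nextLink is not handled:

# Sketch: resolve site-id -> drive-id, then walk a folder recursively.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer " + access_token}  # token acquired elsewhere (assumption)

# 1. site-id: address the site by host name + server-relative path.
site = requests.get(f"{GRAPH}/sites/company1.sharepoint.com:/teams/TEAM1", headers=headers).json()
site_id = site["id"]

# 2. drive-id: each document library of the site is a 'drive'; pick the one you need.
drives = requests.get(f"{GRAPH}/sites/{site_id}/drives", headers=headers).json()["value"]
drive_id = next(d["id"] for d in drives if d["name"] == "Documents")  # default 'Shared Documents' library

# 3. item-id: any file or folder inside that drive; items can also be addressed by path.
def list_children(drive_id, path="", rows=None):
    rows = [] if rows is None else rows
    url = (f"{GRAPH}/drives/{drive_id}/root:/{path}:/children" if path
           else f"{GRAPH}/drives/{drive_id}/root/children")
    for item in requests.get(url, headers=headers).json()["value"]:
        rows.append((item["name"], "folder" if "folder" in item else "file"))
        if "folder" in item:
            list_children(drive_id, (path + "/" + item["name"]).lstrip("/"), rows)
    return rows

for name, kind in list_children(drive_id, "Team Channel1/Folder_Name1"):
    print(kind, name)

From there, writing the collected rows to CSV/Excel with the csv module or pandas is straightforward.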

Reading a file from the Databricks filesystem

I used the following code to read a shapefile from dbfs:
geopandas.read_file("file:/databricks/folderName/fileName.shp")
Unfortunately, I don't have access to do so, and I get the following error:
DriverError: dbfs:/databricks/folderName/fileName.shp: Permission denied
Any idea how to get access? The file exists there (I have permission to save a file there using dbutils, and I can read a file from there using Spark, but I have no idea how to read a file there using pyspark).
After adding those lines:
dbutils.fs.cp("/databricks/folderName/fileName.shp", "file:/tmp/fileName.shp", recurse = True)
geopandas.read_file("/tmp/fileName.shp")
...from the suggestion below, I get another error:
org.apache.spark.api.python.PythonSecurityException: Path 'file:/tmp/fileName.shp' uses an untrusted filesystem 'org.apache.hadoop.fs.LocalFileSystem', but your administrator has configured Spark to only allow trusted filesystems: (com.databricks.s3a.S3AFileSystem, shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem, shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem, com.databricks.adl.AdlFileSystem, shaded.databricks.V2_1_4.com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem, shaded.databricks.org.apache.hadoop.fs.azure.NativeAzureFileSystem, shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem)
GeoPandas doesn't know anything about DBFS; it works with local files. So you either need:
to use the DBFS FUSE mount to read the file from DBFS (but there are some limitations):
geopandas.read_file("/dbfs/databricks/folderName/fileName.shp")
or to use the dbutils.fs.cp command to copy the file from DBFS to the local filesystem and read it from there:
dbutils.fs.cp("/databricks/folderName/fileName.shp", "file:/tmp/fileName.shp", recurse = True)
geopandas.read_file("/tmp/fileName.shp")
P.S. If the file has already been copied to the driver node, then you just need to remove file: from the name.
Update after the updated question:
There are limitations on what can be done on AAD passthrough clusters, so your administrator needs to change the cluster configuration as described in the troubleshooting documentation if you want to copy the file from DBFS to the local filesystem.
But the /dbfs way should work for passthrough clusters as well, although it requires at least DBR 7.3 (docs).
Okay, the answer is easier than I thought:
geopandas.read_file("/dbfs/databricks/folderName")
(the folder name, since it is a folder containing all the shapefile parts)
Why does it have to be like that? Easy: enable file browsing on DBFS in the admin console ('Advanced' tab), click on the file you need, and you will get two possible paths to it. One is meant for the Spark API and the other for the File API (the latter is what I needed).
:)
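To make the distinction concrete, this is roughly how the two path styles map onto the same file (names taken from the question; whether the FUSE-mounted /dbfs path is usable depends on the cluster configuration discussed above):

# Spark API style -- understood by Spark readers and dbutils.fs:
spark_api_path = "dbfs:/databricks/folderName/fileName.shp"
# File API style -- the FUSE mount, for local-file libraries such as GeoPandas:
file_api_path = "/dbfs/databricks/folderName/fileName.shp"

import geopandas
gdf = geopandas.read_file("/dbfs/databricks/folderName")  # the folder holding all the shapefile parts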

Python UDF - import/read external files

I would like to import other Python/CSV files into my Python UDF to perform some operations.
For example, comparing the table data (which flows in as a stream, row by row) to a row of an external .csv file.
When I try to read data from the .csv file, it gives me an error:
IOError: File /home/abc/xyz/myfile.csv does not exist
The code works perfectly well when it is written as a regular Python script (not as a UDF).
If I understood it right, you can try
ADD FILE [Your complete file path]
or
ADD FILES [Your directory path]
Before referring to anything on the cluster, you must add it to the distributed cache so that the code running there can access it.
You can have a look at the Hive CLI documentation:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli
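To make the distributed-cache point concrete, here is a minimal sketch of how this usually fits together for a Hive streaming-style Python UDF; the table, column, script and file names are placeholders:

# On the Hive side, ship the files and call the script (placeholders throughout):
#
#   ADD FILE /home/abc/xyz/myfile.csv;
#   ADD FILE /home/abc/xyz/my_udf.py;
#   SELECT TRANSFORM (col1) USING 'python my_udf.py' AS (col1, matched) FROM my_table;
#
# Inside my_udf.py the shipped file lands in the task's working directory,
# so it is opened by bare file name, not by the original absolute path:
import csv
import sys

with open("myfile.csv") as f:              # NOT /home/abc/xyz/myfile.csv
    reference = {row[0] for row in csv.reader(f)}

for line in sys.stdin:                     # table rows stream in one per line, tab-separated
    value = line.strip().split("\t")[0]
    print("%s\t%s" % (value, value in reference))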
Be careful about the syntax! It can cause many problems, and unfortunately the query language interpreter is not able to show where the problem comes from; it just shows a generic error report.
Have a look at much the same problem, caused by a syntax issue in addressing the file:
Accessing external file in Python UDF

Is it possible to import Python scripts in Azure?

I have a Python script that is split across several files (one for importing, one for calculations, et cetera). These are all in the same folder, and when I need a function from another file I do something like
import file_import
file_import.do_something_usefull()
where, of course, file_import contains a function do_something_usefull() that, well, does something useful. How can I accomplish the same in Azure?
I found it out myself. It is documented on Microsoft's site here.
The steps, in short, are:
Include all the Python files you want in a .zip
Upload that zip as a dataset
Drag the dataset into the third input of the 'execute python' block (example below)
Execute the function by running import Hello (the name of the file, not the zip) and then Hello.do_something_usefull()
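Roughly, the 'execute python' block then looks like the sketch below; this assumes the classic Azure ML Studio Execute Python Script module and a Hello.py sitting at the root of the zip:

import Hello  # resolvable because the contents of the attached zip are put on the module search path

def azureml_main(dataframe1=None, dataframe2=None):
    # dataframe1/dataframe2 are the module's two optional dataset inputs
    Hello.do_something_usefull()
    return dataframe1,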
For reference, there is a similar answered thread you can refer to; please see Access Azure blob storage from within an Azure ML experiment.

Read static content from within the code of an application

Is there a way to read the contents of a static data directory or interact with that data in any way from within the code of an application?
Edit: Please excuse me if it wasn't clear at first, I mean getting a list of the files in that directory, not reading the data in them.
No. Files marked as static in app.yaml are not available to your application; they're served from separate servers.
If you just need to list them, you could build a list as part of your deploy process. If you need to actually read them, you'll need to include a second copy in your application directory (although the "copy" can be just a symlink; appcfg.py will follow symlinks and upload them).
You can just open them (read-only).
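As an illustration of the second-copy approach: keep a non-static copy of the directory alongside your code (the data/ name here is just an example, and it must not be covered by a static handler in app.yaml), then use the standard library to list and read it:

import os

DATA_DIR = os.path.join(os.path.dirname(__file__), "data")  # hypothetical bundled directory

def list_bundled_files():
    # Paths relative to data/ for every file uploaded together with the app code.
    names = []
    for root, _, files in os.walk(DATA_DIR):
        for name in files:
            names.append(os.path.relpath(os.path.join(root, name), DATA_DIR))
    return names

def read_bundled_file(rel_path):
    # Files uploaded with the application code can be opened, but only read-only.
    with open(os.path.join(DATA_DIR, rel_path)) as f:
        return f.read()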
