I'm trying to create a Snowpark UDF in Python as an object. Below is my code:
from snowflake.snowpark.functions import udf
from snowflake.snowpark.types import StringType  # needed for return_type below
from pytorch_tabnet.tab_model import TabNetRegressor
session.clear_imports()  # the Python API uses snake_case, not clearImports()
# load the pre-trained TabNet model
model = TabNetRegressor()
model.load_model(model_file)
# register an anonymous UDF that reports the device the model runs on
lib_test = udf(lambda: str(model.device), return_type=StringType())
I'm getting an error like the one below:
Failed to execute query
CREATE
TEMPORARY FUNCTION "TEST_DB"."TEST".SNOWPARK_TEMP_FUNCTION_GES3G8XHRH()
RETURNS STRING
LANGUAGE PYTHON
RUNTIME_VERSION=3.8
IMPORTS=('#"TEST_DB"."TEST".SNOWPARK_TEMP_STAGE_CR0E7FBWQ6/cloudpickle/cloudpickle.zip','#"TEST_DB"."TEST".SNOWPARK_TEMP_STAGE_CR0E7FBWQ6/TEST_DBTESTSNOWPARK_TEMP_FUNCTION_GES3G8XHRH_5843981186544791787/udf_py_1085938638.zip')
HANDLER='udf_py_1085938638.compute'
002072 (42601): SQL compilation error:
Unknown function language: PYTHON.
It throws this error because Python is apparently not available as a UDF language.
I checked the packages available in the information schema; it shows only Scala and Java. I'm not sure why Python is not listed. How do I add Python to the packages, and will adding it resolve this issue?
Can anyone help with this? Thanks.
Python UDFs are not in production yet and are only available to selected accounts.
Please reach out to your Snowflake account team to have the functionality enabled.
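In the meantime, you can verify what your account exposes. A minimal sketch, where `session` is the Snowpark session from your code:
# list the UDF languages exposed to this account via the packages view
rows = session.sql(
    "select distinct language from information_schema.packages"
).collect()
print(rows)  # 'python' will only appear once the feature is enabled for the account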
I'm using the TDengine database to process time-series data. My application is developed in Python.
I imported the TDengine Python connector, but I encounter an error when the module loads:
taos.error.InterfaceError: [0xffff]: unable to load taos C library: Could not find module taos (or one of its dependencies)
I don't know how to fix it. I checked the documentation, but found no solution.
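For context, the connector is a thin wrapper over the native TDengine client library (taos.dll / libtaos.so), which is installed separately from the pip package. A minimal snippet along these lines triggers the error (host and credentials are the TDengine defaults, used here as placeholders):
import taos  # importing the connector already tries to load the native taos C library

# if the import succeeds, connecting exercises the same native client;
# when the library is missing or not on the loader path, the
# InterfaceError above is raised instead
conn = taos.connect(host='localhost', user='root', password='taosdata')
conn.close()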
I am using the Python pandas library to read data from a Redshift warehouse. The command pd.read_sql() fails with the error:
AssertionError: Could not determine version from string 'Redshift 1.0.32574 on Amazon Linux, compiled by gcc-7.3.0'
Does anyone here know how I can resolve this?
This is a known issue from a recent cluster (patch) upgrade, where the version string format changed. Apps that parse the version string to confirm they are on a supported version can error out when the format changes. See https://forums.aws.amazon.com/thread.jspa?messageID=998260 for an AWS contact you can ping for an immediate fix. If you have a support agreement with AWS, you can also create a ticket there.
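Until the cluster is patched, a stopgap some users apply is to bypass SQLAlchemy's version parsing, which is where this AssertionError is raised (a sketch, not an official fix; the pinned version tuple is an arbitrary assumption):
from sqlalchemy.dialects.postgresql.base import PGDialect

# pd.read_sql connects through SQLAlchemy's Postgres dialect, whose version
# regex chokes on the new 'Redshift 1.0.xxxxx on Amazon Linux ...' string;
# short-circuit it with a fixed, known-good version tuple
PGDialect._get_server_version_info = lambda self, connection: (8, 0, 2)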
I am trying to create a Python UDF in Amazon Redshift. I created the UDF with no error, and I also created the required library for it successfully. But when I execute the UDF, I get the error:
No Module Named pyffx. Please look at svl_udf_log for more information
I have downloaded the library from pypi.org and uploaded it to Amazon S3. This is the link I used to download the library:
https://pypi.org/project/pyffx/#files
create library pyffx
language plpythonu
from 's3://aws-bucket/tmp/python_module/pyffx-0.3.0.zip'
credentials
'aws_iam_role=iam role'
region 'us-east-1';
CREATE OR REPLACE FUNCTION schema.ffx(src VARCHAR)
RETURNS VARCHAR
STABLE
AS $$
import pyffx
value = unicode(src)  # plpythonu is Python 2, so coerce to unicode explicitly
l = len(value)
# format-preserving encryption: output keeps the input's length and alphabet
e = pyffx.String(b'secret-key', alphabet='abcdefghijklmnopqrstuvwxyz123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ', length=l)
return e.encrypt(value)
$$ LANGUAGE plpythonu;
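For reference, a minimal local sketch of what the function computes, using pyffx's documented API (format-preserving encryption: the ciphertext keeps the plaintext's length and alphabet):
import pyffx

e = pyffx.String(b'secret-key', alphabet='abcdefghijklmnopqrstuvwxyz', length=5)
token = e.encrypt(u'hello')
assert e.decrypt(token) == u'hello'  # round-trips
print(token)  # five lowercase letters, deterministic for a given key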
I managed to get it to work... sort of.
I did the following:
Downloaded pyffx via the link you provided
Extracted the .tar.gz file and created a .zip of the files
Copied the .zip file to Amazon S3
Loaded the library using your CREATE LIBRARY command
Created the function
However, when I use the function, I receive the error:
Invalid operation: AttributeError: 'module' object has no attribute 'add_metaclass'
My research suggests that the six library (which provides Python 2/3 compatibility) is the source of this problem. The "Python Language Support for UDFs" page in the Amazon Redshift documentation indicates that six 1.3 is included in Redshift, yet the "Pip six.add_metaclass error" question says that this version does not include add_metaclass. The current version of six is 1.12.
I tried to include an updated six library in the code but wasn't successful. You might be able to wrangle it better than I did.
This is for a PySpark / Databricks project:
I've written a Scala JAR library and exposed its functions as UDFs via a simple Python wrapper; everything works as it should in my PySpark notebooks. However, when I try to use any of the functions imported from the JAR in an sc.parallelize(..).foreach(..) environment, execution keeps dying with the following error:
TypeError: 'JavaPackage' object is not callable
at this line in the wrapper:
jc = get_spark()._jvm.com.company.package.class.get_udf(function.__name__)
My suspicion is that the JAR library is not available in the parallelized context: if I replace the library path with gibberish, the error remains exactly the same.
I haven't been able to find the necessary clues in the Spark docs so far. Using sc.addFile("dbfs:/FileStore/path-to-library.jar") didn't help either.
You could try adding the JAR via the PYSPARK_SUBMIT_ARGS environment variable (before Spark 2.3 this was also doable with SPARK_CLASSPATH).
For example with:
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars <path/to/jar> pyspark-shell'
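Note that the variable has to be set before the JVM is launched, i.e. before the first SparkSession/SparkContext is created. A sketch (the JAR path is taken from the sc.addFile attempt in the question, as an assumption):
import os
from pyspark.sql import SparkSession

# must run before any SparkSession/SparkContext exists, otherwise
# spark-submit never sees the --jars option
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars dbfs:/FileStore/path-to-library.jar pyspark-shell'

spark = SparkSession.builder.getOrCreate()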
I have a project whose code communicates with a Python script and calls Python functions. In the .pro file, I've added the include path for the Python header files and added the external Python library (python27.a) to the project. However, the build gives me an error:
No rule to make target /home/pgeng/work/OpenModelica-rev.9876.CSA/OMEdit/OMEditGUI/../../../../../../usr/lib/python2.7/libpython2.a, needed by ../bin/OMEdit. Stop.
What does this mean? How can I fix it? Is this related to PyQt at all?
It seems like libpython2 is missing.
You will have to find out which package provides this library; you can Google for that, or search your distribution's repositories for it (on Debian/Ubuntu, for example, the static libpython2.7.a typically ships in the python2.7-dev package).
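One way to find the correct library path to put in the .pro file is to ask Python itself; a sketch (run it with the same Python 2.7 you are embedding):
import sysconfig

# directory that contains the libpython binaries
print(sysconfig.get_config_var('LIBDIR'))    # e.g. /usr/lib
# the static library's file name
print(sysconfig.get_config_var('LIBRARY'))   # e.g. libpython2.7.a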