Connect a Google Cloud Function to an Oracle database - Python

Does anyone know how to connect a Google Cloud Function (Python) to an Oracle database? I tried importing the cx_Oracle library in the Cloud Function, but it shows this error:
Function load error: DPI-1047: Oracle Client library cannot be loaded: libclntsh.so: cannot open shared object file
Following is the main.py code:
import cx_Oracle

def import_data(request):
    request_json = request.get_json()
    if request_json and 'message' in request_json:
        con = cx_Oracle.connect("username", "password", "host:port/SID")
        print(con.version)
        con.close()
Following is requirements.txt:
# Function dependencies, for example:
# package>=version
cx_Oracle==6.0b1

It seems Google Cloud Functions does not support native shared libraries (in other words, it only supports "pure Python" libraries), and cx_Oracle depends on one: it loads the Oracle Client library (libclntsh.so) at runtime. Sadly I haven't been able to find a pure-Python Oracle library, so for now this is not supported.
Your best bet is to use App Engine Flexible, as it is the closest equivalent service that allows non-pure Python libraries. cx_Oracle should work with it.
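For illustration, the same logic on App Engine Flexible could look roughly like the sketch below. This is only a sketch: it assumes a custom runtime whose container image has the Oracle Instant Client (libclntsh.so) installed, and the route and connection string are placeholders.

# Sketch only: assumes the Oracle Instant Client is installed in the
# App Engine Flexible container image (e.g. via a custom runtime Dockerfile);
# credentials and DSN are placeholders.
import cx_Oracle
from flask import Flask, request

app = Flask(__name__)

@app.route('/import-data', methods=['POST'])
def import_data():
    request_json = request.get_json()
    if request_json and 'message' in request_json:
        con = cx_Oracle.connect("username", "password", "host:port/SID")
        try:
            return con.version
        finally:
            con.close()
    return 'no message', 400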

Related

How to query an RDS SQL Server database in AWS Lambda using Python?

I am trying to connect to an AWS RDS SQL Server instance from AWS Lambda using a Python script to query a table, but I am not seeing any AWS API for this, so when I try "import pyodbc" I see the error below.
Unable to import module 'lambda_function': No module named 'pyodbc'
Connection:
cnxn = pyodbc.connect(
    "Driver={SQL Server};"
    "Server=data-migration-source-instance.asasasas.eu-east-1.rds.amazonaws.com;"
    "Database=sourcedb;"
    "uid=source;pwd=source1234"
)
Any pointers on how to query RDS SQL Server?
The error you're getting means that the lambda doesn't have the pyodbc module.
You should read up on dependency management in AWS Lambda. There are basically two strategies for including dependencies with your deployment: Lambda Layers, or bundling them into the deployment zip.
If you're using the Serverless Framework then Serverless-python-requirements is an excellent package for managing your dependencies and lets you choose your dependency management strategy with minimal changes to your application.
You need to upload the dependencies of the Lambda along with the code. If you deploy your Lambda manually (i.e. create a zip file / right from the console), you will need to attach the pyodbc library. (More information is available here: https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-dependencies).
If you're using any other deployment tool (Serverless, SAM, Chalice), it will be much easier: https://www.serverless.com/plugins/serverless-python-requirements, https://aws.github.io/chalice/topics/packaging.html#rd-party-packages, https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-build.html
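Once pyodbc is packaged one of those ways, a minimal handler looks roughly like this sketch. The driver name is an assumption (on Lambda's Linux environment a Linux ODBC driver such as msodbcsql has to be bundled as well); the server, credentials, and query are placeholders:

# Sketch only: assumes pyodbc and a Linux ODBC driver for SQL Server are
# packaged with the function (layer or deployment zip); all connection
# details are placeholders.
import pyodbc

def lambda_handler(event, context):
    cnxn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"  # assumed driver name
        "Server=data-migration-source-instance.asasasas.eu-east-1.rds.amazonaws.com;"
        "Database=sourcedb;"
        "uid=source;pwd=source1234"
    )
    try:
        cursor = cnxn.cursor()
        cursor.execute("SELECT TOP 5 name FROM sys.tables")  # placeholder query
        return [row[0] for row in cursor.fetchall()]
    finally:
        cnxn.close()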

What does "mysql+pymysql" mean in Flask?

I want to use MySQL in Flask, and one config is:
app.config['SQLALCHEMY_DATABASE_URI'] = "mysql+pymysql://user:password@127.0.0.1:3306/db"
If I use mysql+pymysql, it works.
But when I use only mysql, the error message is like this:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
But in my code I don't import pymysql, so what is pymysql, and why does using it make this work?
I know pymysql is a module.
Thanks for your reply!
The create_engine function (which is what consumes the URL given in the config) requires you to give it a "dialect". A "dialect" is the name of the underlying database engine that SQLAlchemy is connecting to.
However, since many databases have multiple different client libraries (in Python these implement the DBAPI), in many cases (such as for the mysql dialect) you're also required to give the name of the client you want SQLAlchemy to use. In this case, you're asking it to use the pymysql library to actually handle connectivity with MySQL; a short example follows the list below.
SQLAlchemy 1.3 supports the following dialect/DBAPI-libraries for connecting to MySQL:
mysqlclient (maintained fork of MySQL-Python)
PyMySQL
MySQL Connector/Python
CyMySQL
OurSQL
Google Cloud SQL
PyODBC
zxjdbc for Jython
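To make the split concrete, here is a minimal sketch (placeholder credentials throughout) of the same URL with and without an explicit driver name:

# URL form: dialect+driver://user:password@host:port/database
from sqlalchemy import create_engine

# Explicitly ask for the PyMySQL driver:
engine = create_engine("mysql+pymysql://user:password@127.0.0.1:3306/db")

# "mysql" alone falls back to the dialect's default DBAPI (mysqlclient);
# if that package isn't installed, connecting fails:
# engine = create_engine("mysql://user:password@127.0.0.1:3306/db")

This is also why you never import pymysql yourself: SQLAlchemy imports it for you based on the URL.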

Python code from Databricks to connect to SQL Server

I am trying to execute Python code from Databricks which primarily establishes a connection from Python to SQL Server using JDBC.
I used the 'jaydebeapi' Python library, and when I run the code it gives an error saying "JayDeBeApi throws AttributeError: '_jpype.PyJPField' object has no attribute 'getStaticAttribute'".
I searched the internet and found that the JPype library used by jaydebeapi is the problem, so I downgraded it to version 0.6.3.
But I am still getting the same error. Can anyone explain how to make this change and run it in Databricks?
Or is there any alternative library I can use?
Why not directly follow the official Databricks documents below to install the Microsoft JDBC Driver for SQL Server for the Spark Connector, and refer to the Python sample code that uses JDBC to connect to SQL Server (a rough sketch of that pattern follows the links)?
SQL Databases using the Apache Spark Connector
SQL Databases using JDBC and its Python example with the jdbc url of MS SQL Server
If you are using Azure, there are equivalent documents for Azure Databricks, as below.
SQL Databases using the Apache Spark Connector for Azure Databricks
SQL Databases using JDBC for Azure Databricks
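For reference, the generic Spark JDBC read pattern from those documents looks roughly like this in PySpark (spark is predefined in Databricks notebooks; the url, table, and credentials below are placeholders, and the SQL Server JDBC driver must be installed on the cluster):

# Sketch of the generic Spark JDBC read pattern; all values are placeholders.
jdbc_url = "jdbc:sqlserver://yourserver.example.com:1433;databaseName=yourdb"

df = (spark.read
      .format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "dbo.your_table")
      .option("user", "your_user")
      .option("password", "your_password")
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .load())

df.show(5)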
This is a known issue with JayDeBeApi; you may check out the issue on GitHub.
Due to a bug in 0.6.3, private variables were exposed as part of the interface. 0.6.3 also had a default class customizer which automatically created a property to get and set if the methods matched a Java bean pattern. This property customizer was loaded late, after many common java.lang classes were already loaded, and was not retroactive; thus only user-loaded classes that happened to load after the initializer would have the customization. The private-variable bug would mask the property customizer, as the property customizer was not supposed to override fields. Some libraries were unknowingly accessing private variables, assuming they were using the property customizer.
The customizer was both unnecessary and resulted in frequent errors for new programmers. The buggy behavior has been removed, and the problematic property customizer has been disabled by default in 0.7.
Add the lines below to your module to enable the old property behavior. But this will not re-enable the previous buggy access to private variables; code that was exploiting the previous behavior, which bypassed Java's getters/setters, will need to use the reflection API.
To enable the property customizer, use
try:
    import jpype.beans
except ImportError:
    pass
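In practice the import just has to happen before jaydebeapi connects, along these lines (the driver class, URL, credentials, and jar path in this sketch are all placeholders):

# Sketch only: all connection details and the driver jar path are placeholders.
try:
    import jpype.beans  # re-enables the old property behavior on jpype >= 0.7
except ImportError:
    pass

import jaydebeapi

conn = jaydebeapi.connect(
    "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "jdbc:sqlserver://yourserver:1433;databaseName=yourdb",
    ["your_user", "your_password"],
    "/path/to/mssql-jdbc.jar",
)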
Hope this helps.

ImportError: No module named _sqlite3 after deploying Flask app to gcloud

I've deployed an app to Google Cloud using this tutorial. The app is made using Flask and makes use of flask-sqlalchemy (and thus sqlalchemy).
I can load pages that don't make use of sqlalchemy fine, but pages that do raise a 500 error. The error page shows ImportError: No module named _sqlite3.
I suspect it has something to do with me trying to install a Python 3 library into gcloud's Python 2.7 environment, but I don't know how to fix this. Can anyone help me?
Have a look at Google's examples here for App Engine Standard, Cloud SQL and Python:
https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/cloud-sql
There's one for MySQL and another for PostgreSQL.
The tutorial link you shared does not appear to include a database component, so I'm assuming you've evolved the example and are planning to use the Google Cloud SQL database service for your backend.
You're likely missing a Python package that provides SQL connectivity (Google's samples use PyMySQL).
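For example, with PyMySQL against a Cloud SQL (MySQL) instance, the Flask config usually takes roughly the following shape. This is a sketch, and every connection value in it is a placeholder:

# Sketch only: assumes a Cloud SQL (MySQL) backend reached through the unix
# socket that App Engine provides; all connection values are placeholders.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = (
    "mysql+pymysql://db_user:db_pass@/db_name"
    "?unix_socket=/cloudsql/project:region:instance"
)
db = SQLAlchemy(app)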

Run cleanup script on AWS Oracle RDS from AWS Lambda

I'm using Apex to deploy Lambda functions in AWS. I need to write a Lambda function which runs a cleanup script on an Oracle RDS in my AWS VPC. Oracle has a very nice Python library called cx_Oracle, but I'm having some problems using it in a Lambda function (running on Python 2.7). My first step was to try to run the Oracle-described test code as follows:
from __future__ import print_function
import json
import boto3
import boto3.ec2
import os
import cx_Oracle

def handle(event, context):
    con = cx_Oracle.connect('username/password@my.oracle.rds:1521/orcl')
    print(str(con.version))
    con.close()
When I try to run this piece of test code, I get the following response:
Unable to import module 'main': /var/task/cx_Oracle.so: invalid ELF header
Google has told me that this error is caused because the cx_Oracle library is not a complete Oracle implementation for Python; rather, it requires the Oracle client (SQL*Plus) to be pre-installed, and the cx_Oracle library references components installed as part of SQL*Plus.
Obviously, pre-installing SQL*Plus might be difficult.
Apex has the hooks {} functionality, which would allow me to pre-build things, but I'm having trouble finding documentation showing what happens to those artefacts and how that works. In theory I could download the libraries into a Nexus repository or an S3 bucket, and then in my hooks {} declaration I could add them to the zip file. I could then try to install them as part of the Python script (a rough sketch of this idea appears after the questions below). However, I have a few problems with this:
1. How are the 'built' artefacts accessed inside the Lambda function? Can they be? Have I misunderstood this?
2. Does a Python 2.7 Lambda function have enough access rights to the operating system of the host container to be able to install a library?
3. If the answer to question 2 is no, is there another way to write a Lambda function to run some SQL against an Oracle RDS instance?
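(For what it's worth, the bundling idea above usually ends up looking like the sketch below. This is purely illustrative, not a confirmed answer: it assumes the Oracle Instant Client shared objects, including libclntsh.so, ship inside the deployment zip under lib/, so they land at /var/task/lib, which is on Lambda's default library search path. Connection details are the placeholders from the question.)

# Purely illustrative sketch under the assumptions stated above: the Instant
# Client .so files are bundled at lib/ in the zip, so the import below can
# resolve libclntsh.so from /var/task/lib at runtime.
import cx_Oracle

def handle(event, context):
    con = cx_Oracle.connect('username/password@my.oracle.rds:1521/orcl')
    try:
        return str(con.version)
    finally:
        con.close()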
