No module named pymysql - aws serverless framework - python

I deployed a Python Lambda function through the Serverless Framework and installed pymysql through pip. My handler is: dynamodbtoauroradb/aurora-data-management/aurora-data-management.handler
I get this error:
Unable to import module 'dynamodbtoauroradb/aurora-data-management/aurora-data-management': No module named 'pymysql'
Not sure where the mistake is.

There is a chance that pymysql is present in your system packages, so when you built the virtual environment, it picked up the system package.
Create a clean virtualenv using
virtualenv --no-site-packages envname
Or you can keep the current environment and reinstall the package into it with
pip install pymysql --no-deps --ignore-installed
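After either step, a quick check confirms the module resolves inside the environment before you repackage (a minimal sketch; envname is the virtualenv created above):
source envname/bin/activate
pip install pymysql
python -c "import pymysql; print(pymysql.VERSION)"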

Use the plugin serverless-python-requirements with Docker.
This will package all of your Python virtualenv dependencies into your Serverless package.
See this answer for more details.
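As a minimal sketch, the plugin is enabled in serverless.yml roughly like this (dockerizePip: true builds the dependencies inside a Lambda-compatible Docker image; exact options may vary with the plugin version):
plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true
With pymysql listed in a requirements.txt next to serverless.yml, the plugin bundles it into the deployment artifact.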

Related

Trying to run Airflow on Databricks but got an error

I am trying to use Airflow on Databricks.
I have installed apache-airflow 1.10.6 from https://pypi.org/project/apache-airflow/.
I am using Python 3.6 on Databricks.
But I get this error:
import airflow
ModuleNotFoundError: No module named 'werkzeug.wrappers.json'; 'werkzeug.wrappers' is not a package
I have tried the following:
Apache Airflow : airflow initdb results in "ImportError: No module named json"
Apache Airflow : airflow initdb throws ModuleNotFoundError: No module named 'werkzeug.wrappers.json'; 'werkzeug.wrappers' is not a package error
But I still get the same problem.
Thanks
Note: By default, Airflow and its dependencies are not installed on Databricks.
You need to install the packages explicitly.
Dependency installation using Databricks library utilities:
dbutils.library.installPyPI("Werkzeug")
You can install the packages in different methods.
Method 1: Installing external packages using the pip command.
Syntax: %sh /databricks/python3/bin/pip install <packagename>
%sh
/databricks/python3/bin/pip install apache-airflow
Method 2: Using Databricks library utilities.
Syntax:
dbutils.library.installPyPI("pypipackage", version="version", repo="repo", extras="extras")
dbutils.library.restartPython() # Removes Python state, but some libraries might not work without calling this function
To install apache-airflow using the Databricks library utilities, use the command below.
dbutils.library.installPyPI("apache-airflow")
Method 3: GUI method.
Go to Clusters => Select Cluster => Libraries => Install New => Library Source "PyPI" => Package "apache-airflow" => Install
Hope this helps. Do let us know if you have any further queries.

ImportError: import apache_beam as beam. Module not found

I've installed the apache_beam Python SDK and the Apache Airflow Python SDK in a Docker container.
Python Version: 3.5
Apache Airflow: 1.10.5
I'm trying to execute an apache-beam pipeline using the DataflowPythonOperator.
When I run a DAG from the Airflow UI, I get:
Import Error: import apache_beam as beam. Module not found
With the same setup I tried the DataflowTemplateOperator and it works perfectly fine.
When I tried the same Docker setup with Python 2 and Apache Airflow 1.10.3 two months back, the operator didn't return any error and worked as expected.
After SSHing into the Docker container, I checked the installed libraries (using pip freeze) and I can see the installed versions of apache-beam and apache-airflow.
apache-airflow==1.10.5
apache-beam==2.15.0
Dockerfile:
RUN pip install --upgrade pip
RUN pip install --upgrade setuptools
RUN pip install apache-beam
RUN pip install apache-beam[gcp]
RUN pip install google-api-python-client
ADD . /home/beam
RUN pip install apache-airflow[gcp_api]
airflow operator:
new_task = DataFlowPythonOperator(
    task_id='process_details',
    py_file="path/to/file/filename.py",
    gcp_conn_id='google_cloud_default',
    dataflow_default_options={
        'project': 'xxxxx',
        'runner': 'DataflowRunner',
        'job_name': "process_details",
        'temp_location': 'GCS/path/to/temp',
        'staging_location': 'GCS/path/to/staging',
        'input_bucket': 'bucket_name',
        'input_path': 'GCS/path/to/bucket',
        'input-files': 'GCS/path/to/file.csv'
    },
    dag=test_dag)
This looks like a known issue: https://github.com/GoogleCloudPlatform/DataflowPythonSDK/issues/46
please run pip install six==1.10. This is a known issue in Beam (https://issues.apache.org/jira/browse/BEAM-2964) which we are trying to get fixed upstream.
So try installing six==1.10 using pip.
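A quick way to apply and verify the pin (a sketch; run it inside the same environment the DAG executes in):
pip install six==1.10.0
python -c "import apache_beam as beam; print(beam.__version__)"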
This may not be an option for you, but I was getting the same error with Python 2. Executing the same script with Python 3 resolved the error.
I was running through the dataflow tutorial:
https://codelabs.developers.google.com/codelabs/cpb101-simple-dataflow-py/
and when I follow the instructions as specified:
python grep.py
I get the error from the title of your post. I hit it with:
python3 grep.py
and it works as expected. I hope it helps. Happy hunting if it doesn't. See the link for details on what exactly I was running.
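To confirm which interpreter actually has Beam installed, a quick diagnostic sketch:
python -c "import apache_beam"    # fails if Beam is only installed for Python 3
python3 -c "import apache_beam"   # succeeds if Beam was installed with pip3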
This GitHub link will help you solve your problem. Follow the steps below.
Read the following article on pip and virtualenv; it will help with the later steps:
https://www.dabapps.com/blog/introduction-to-pip-and-virtualenv-python/?utm_source=feedly
Create a virtual environment (note: I created it in the cloudml-samples folder and named it env):
titanium-vim-169612:~/cloudml-samples$ virtualenv env
Activate the virtual env:
titanium-vim-169612:~/cloudml-samples$ source env/bin/activate
Install cloud-dataflow using the following link (this brings in apache_beam):
https://cloud.google.com/dataflow/docs/quickstarts/quickstart-python
Now you can check that apache_beam is present in env/lib/python2.7/site-packages/:
titanium-vim-169612:~/cloudml-samples/flowers$ ls ../env/lib/python2.7/site-packages/
Run the sample
At this point, I got an error about missing tensorflow. I installed tensorflow in my virtualenv using the link below (follow the installation steps for virtualenv):
https://www.tensorflow.org/install/install_linux#InstallingVirtualenv
The sample seems to work now.
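Condensed into commands, the sequence looks roughly like this (a sketch; the package installed by the Dataflow quickstart is an assumption based on the linked page):
virtualenv env
source env/bin/activate
pip install "apache-beam[gcp]"    # per the Dataflow quickstart (assumption)
ls env/lib/python2.7/site-packages/ | grep -i apache_beam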

Local development of Google App Engine not importing built-in library

I followed the quickstart, then I simply cloned hello_world from here. I had already downloaded the google_appengine SDK from here. I extracted it, and now I have the folder google_appengine alongside hello_world,
so I execute dev_appserver.py with the hello_world app.
It runs well, apparently, until I start making requests to localhost:8080.
Then I get an ImportError.
What's wrong with it? Did I miss something?
Google says that I can use the built-in library without manually installing it with pip.
PS: it works when I just deploy it to my project on Google, and it also works if I manually install webapp2 inside lib inside hello_world as described here, then request it locally.
My Python version is 2.7.6, on Ubuntu 14.04 32-bit.
If anybody can solve this, I would appreciate it.
Seems like this is an acknowledged bug in the App Engine SDK. As a temporary workaround, you may try these steps:
Uninstalling the following pip packages resolved this issue for me.
sudo pip uninstall gcloud
sudo pip uninstall googleapis-common-protos
sudo pip uninstall protobuf
Credit to this thread:
https://groups.google.com/forum/?hl=nl#!topic/google-appengine/LucknWk8iaQ
Be sure to use the correct pip executable if you use virtualenv or have multiple Python versions installed.
Thanks to Dmytro Sadovnychyi for the answer. Uninstalling those packages doesn't work for me because I never installed them, but it made me think the built-in library might be conflicting with another package, so I decided to create a virtual environment: just a fresh environment, no need to install any packages.
Activate the environment, then execute dev_appserver.py hello_world. Now it works.
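A condensed sketch of that workaround (the dev_appserver.py path is an assumption based on the extracted google_appengine folder from the question):
virtualenv --no-site-packages env
source env/bin/activate
google_appengine/dev_appserver.py hello_world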
For now I'll stick with that until the next update, as mentioned here.

Getting ImportError: No module named azure.storage.blob when doing python manage.py syncdb

When I try to do python manage.py syncdb in my Django app, I get the error ImportError: No module named azure.storage.blob. But the thing is, the following packages are installed according to pip freeze:
azure-common==1.0.0
azure-mgmt==0.20.1
azure-mgmt-common==0.20.0
azure-mgmt-compute==0.20.0
azure-mgmt-network==0.20.1
azure-mgmt-nspkg==1.0.0
azure-mgmt-resource==0.20.1
azure-mgmt-storage==0.20.0
azure-nspkg==1.0.0
azure-servicebus==0.20.1
azure-servicemanagement-legacy==0.20.1
azure-storage==0.20.3
Clearly azure-storage is installed. Why is azure.storage.blob not available for import? I even went into my .virtualenvs directory and drilled all the way down to azure/storage/blob (i.e. ~/.virtualenvs/myvirtualenv/local/lib/python2.7/site-packages/azure/storage/blob$). It exists!
What do I do? This answer has not helped: Install Azure Python api on linux: importError: No module named storage.blob
Note: please ask for more information if you need it.
I had a similar issue. To resolve it, I followed this discussion: https://github.com/Azure/azure-storage-python/issues/51#issuecomment-148151993
Basically, try pip install azure==0.11.1 before running syncdb, and I'm confident it will work for you!
There is a thread similar to yours; please check my answer on the thread Unable to use azure SDK in Python.
Based on my experience, Python imports third-party packages from the library paths listed in sys.path, which you can inspect via import sys and sys.path in the Python interpreter. So you can try dynamically adding the path that contains the installed azure packages to sys.path at runtime: call sys.path.append('<the new path you want to add>') before import azure.
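A minimal sketch of that idea (the site-packages path is taken from the question and is an assumption; use whatever path your own virtualenv reports):
import os
import sys

# See where this interpreter currently looks for packages
print(sys.path)

# Path from the question's virtualenv; adjust to your own environment
sys.path.append(os.path.expanduser(
    '~/.virtualenvs/myvirtualenv/local/lib/python2.7/site-packages'))

import azure.storage.blob  # should now resolve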
If that has not helped, I suggest reinstalling the Python environment. On Ubuntu, you can use the commands sudo apt-get remove python python-pip and sudo apt-get install python python-pip to reinstall Python 2.7 and pip. (Note: the current major Linux distributions use Python 2.7 as the system default version.)
If Python 3.4 is your runtime for Django, the apt package names on Ubuntu are python3 and python3-pip, and you can use sudo pip3 install azure for Python 3.4.
If you have any concerns, please feel free to let me know.

ImportError: No Module named picklefield.fields (Yep pip install says it is there)

I am using Vagrant to run my Python environment. In my data models I am using the django-picklefield module.
When I run my server, it says:
ImportError: No module named picklefield.fields
I tried uninstalling and reinstalling the picklefield module, but I still have the same problem.
You should be able to install it via:
/[your python install directory]/bin/pip install django-picklefield
Doing this directly, instead of a general pip call to install django-picklefield, ensures that it is installed for the correct version of Python.
Based on your description, my best guess is that you have multiple versions of Python installed and that your install/uninstall is happening on the wrong one.
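One quick way to confirm that guess (a sketch; python here stands for whichever interpreter actually runs your server):
which python                                                  # which interpreter the shell resolves to
python -m pip install django-picklefield                      # use the pip that belongs to that interpreter
python -c "import picklefield; print(picklefield.__file__)"   # confirm where the module is imported from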
