I'm trying to follow the TFX-Airflow tutorial (https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop). When I try to start the server with airflow webserver -p 8008, it fails with the following error:
Traceback (most recent call last):
File "/home/tester/.local/lib/python3.6/site-packages/airflow/models/dagbag.py", line 243, in process_file
m = imp.load_source(mod_name, filepath)
File "/usr/lib/python3.6/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 684, in _load
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/tester/airflow/dags/taxi_pipeline_solution.py", line 26, in <module>
from tfx.components import CsvExampleGen
File "/home/tester/Airflow/tfx/tfx/components/__init__.py", line 17, in <module>
from tfx.components.bulk_inferrer.component import BulkInferrer
File "/home/tester/Airflow/tfx/tfx/components/bulk_inferrer/component.py", line 19, in <module>
from tfx.components.bulk_inferrer import executor
File "/home/tester/Airflow/tfx/tfx/components/bulk_inferrer/executor.py", line 26, in <module>
from tfx.dsl.components.base import base_executor
File "/home/tester/Airflow/tfx/tfx/dsl/components/base/base_executor.py", line 33, in <module>
from tfx.proto.orchestration import execution_result_pb2
ImportError: cannot import name 'execution_result_pb2'
My system is running Ubuntu 20.04 and I have a virtual environment with all the dependencies from the linked tutorial installed. I used port 8008 because port 8080 gave me an error saying the address was already in use.
I read that this can be caused by a missing module or by circular imports, but I'm not sure which packages are responsible. I cannot find an execution_result_pb2.py file anywhere on my system (assuming it is a file). Does anyone know how to solve this?
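One thing the traceback hints at: tfx is being imported from /home/tester/Airflow/tfx/tfx, i.e. a source checkout, rather than from the pip-installed package in site-packages. The *_pb2 modules are generated from .proto files at build time, so a plain checkout typically won't contain them. A minimal, standard-library-only sketch for checking where a module would actually be loaded from (the module name passed in is up to you):

```python
import importlib.util

def module_origin(name):
    """Return the file path a module would be loaded from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# On the affected machine, module_origin("tfx") would show whether Python
# resolves tfx to site-packages or to the source checkout under ~/Airflow.
print(module_origin("json"))
```

If the path points into the checkout, removing it from sys.path (or from the Airflow dags folder) so the installed package wins would be the thing to try.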
I am trying to deploy my Django project, which uses React on the frontend inside a Django app, to Railway. The app was deployed on Heroku before, but after adding OpenAI's whisper package the slug size became too big to use it there, so I opted for Railway instead. I have another Django project with React running there, so the problem must come from the packages.
The problem comes from a package called gpt-index, which uses the pandas package, and that's where the error occurs.
What can I do to fix this?
Nixpacks setup:
[phases.setup]
nixPkgs = ['nodejs-16_x', 'npm-8_x', 'python310', 'postgresql', 'gcc']
[phases.install]
cmds = ['npm i ', 'python -m venv /opt/venv && . /opt/venv/bin/activate && pip install -r requirements.txt']
[phases.build]
cmds = ['npm run build']
[start]
cmd = '. /opt/venv/bin/activate && python manage.py migrate && gunicorn backend.wsgi'
Full traceback:
Traceback (most recent call last):
File "/app/manage.py", line 22, in <module>
main()
File "/app/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/opt/venv/lib/python3.10/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/opt/venv/lib/python3.10/site-packages/django/core/management/__init__.py", line 420, in execute
django.setup()
File "/opt/venv/lib/python3.10/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/opt/venv/lib/python3.10/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/opt/venv/lib/python3.10/site-packages/django/apps/config.py", line 193, in create
import_module(entry)
File "/nix/store/al6g1zbk8li6p8mcyp0h60d08jaahf8c-python3-3.10.9/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/opt/venv/lib/python3.10/site-packages/gpt_index/__init__.py", line 18, in <module>
from gpt_index.indices.keyword_table import (
File "/opt/venv/lib/python3.10/site-packages/gpt_index/indices/__init__.py", line 4, in <module>
from gpt_index.indices.keyword_table.base import GPTKeywordTableIndex
File "/opt/venv/lib/python3.10/site-packages/gpt_index/indices/keyword_table/__init__.py", line 4, in <module>
from gpt_index.indices.keyword_table.base import GPTKeywordTableIndex
File "/opt/venv/lib/python3.10/site-packages/gpt_index/indices/keyword_table/base.py", line 16, in <module>
from gpt_index.indices.keyword_table.utils import extract_keywords_given_response
File "/opt/venv/lib/python3.10/site-packages/gpt_index/indices/keyword_table/utils.py", line 7, in <module>
import pandas as pd
File "/opt/venv/lib/python3.10/site-packages/pandas/__init__.py", line 48, in <module>
from pandas.core.api import (
File "/opt/venv/lib/python3.10/site-packages/pandas/core/api.py", line 47, in <module>
from pandas.core.groupby import (
File "/opt/venv/lib/python3.10/site-packages/pandas/core/groupby/__init__.py", line 1, in <module>
from pandas.core.groupby.generic import (
File "/opt/venv/lib/python3.10/site-packages/pandas/core/groupby/generic.py", line 76, in <module>
from pandas.core.frame import DataFrame
File "/opt/venv/lib/python3.10/site-packages/pandas/core/frame.py", line 172, in <module>
from pandas.core.generic import NDFrame
File "/opt/venv/lib/python3.10/site-packages/pandas/core/generic.py", line 169, in <module>
from pandas.core.window import (
File "/opt/venv/lib/python3.10/site-packages/pandas/core/window/__init__.py", line 1, in <module>
from pandas.core.window.ewm import (
File "/opt/venv/lib/python3.10/site-packages/pandas/core/window/ewm.py", line 15, in <module>
import pandas._libs.window.aggregations as window_aggregations
ImportError: libstdc++.so.6: cannot open shared object file: No such file or directory
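The failing import is pandas' compiled extension, which needs libstdc++.so.6 at load time; gcc appears in the setup phase above, but that does not guarantee its runtime library is on the loader's search path in the final image. A small diagnostic sketch (library names illustrative) for checking what the dynamic loader can resolve inside the container:

```python
import ctypes.util

def missing_shared_libs(names):
    """Return the subset of library base names the dynamic loader cannot locate."""
    return [n for n in names if ctypes.util.find_library(n) is None]

# On the failing Railway container this would presumably report "stdc++"
# as missing, matching the ImportError raised by pandas' C extension.
print(missing_shared_libs(["c", "stdc++"]))
```

If "stdc++" comes back missing, the usual direction is to add a nix package that ships the library or to extend LD_LIBRARY_PATH so the loader can find it; the exact package name depends on the Nixpacks base image.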
When I deploy a Python 3 application using Google App Engine Flex, I get the following error:
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/vmagent/app/run.py", line 8, in <module>
app = create_app(os.getenv('FLASK_CONFIG') or 'default')
File "/home/vmagent/app/application/__init__.py", line 43, in create_app
from .main import main as main_blueprint
File "/home/vmagent/app/application/main/__init__.py", line 5, in <module>
from . import cron_jobs, views
File "/home/vmagent/app/application/main/cron_jobs.py", line 4, in <module>
from google.cloud import datastore
File "/env/lib/python3.6/site-packages/google/cloud/datastore/__init__.py", line 60, in <module>
from google.cloud.datastore.batch import Batch
File "/env/lib/python3.6/site-packages/google/cloud/datastore/batch.py", line 24, in <module>
from google.cloud.datastore import helpers
File "/env/lib/python3.6/site-packages/google/cloud/datastore/helpers.py", line 29, in <module>
from google.cloud.datastore_v1.proto import datastore_pb2
File "/env/lib/python3.6/site-packages/google/cloud/datastore_v1/__init__.py", line 18, in <module>
from google.cloud.datastore_v1.gapic import datastore_client
File "/env/lib/python3.6/site-packages/google/cloud/datastore_v1/gapic/datastore_client.py", line 18, in <module>
import google.api_core.gapic_v1.client_info
File "/env/lib/python3.6/site-packages/google/api_core/gapic_v1/__init__.py", line 26, in <module>
from google.api_core.gapic_v1 import method_async # noqa: F401
File "/env/lib/python3.6/site-packages/google/api_core/gapic_v1/method_async.py", line 20, in <module>
from google.api_core import general_helpers, grpc_helpers_async
File "/env/lib/python3.6/site-packages/google/api_core/grpc_helpers_async.py", line 145, in <module>
class _WrappedStreamUnaryCall(_WrappedUnaryResponseMixin, _WrappedStreamRequestMixin, aio.StreamUnaryCall):
AttributeError: module 'grpc.experimental.aio' has no attribute 'StreamUnaryCall'
My requirements.txt file includes the following:
google-cloud-datastore==1.12.0
grpcio==1.27.2
The reason I am using grpcio 1.27.2 instead of the most recent 1.29.0 is the information shown here.
Can somebody help?
I just encountered the same issue, so this may help you out. I noticed that google-api-core is also a dependency, and it was recently updated (specifically around grpc_helpers_async), so I pinned it to version 1.17.0 and that resolved the issue. Just add this to your requirements:
google-api-core==1.17.0
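To double-check that a pin like this actually took effect in the deployed environment, a small sketch along these lines could help (assumes Python 3.8+ for importlib.metadata; the package name in the comment is just the one from this answer):

```python
from importlib import metadata

def check_pin(package, expected):
    """Return True only if `package` is installed at exactly version `expected`."""
    try:
        return metadata.version(package) == expected
    except metadata.PackageNotFoundError:
        return False

# e.g. check_pin("google-api-core", "1.17.0") inside the App Engine build
print(check_pin("package-that-is-not-installed-xyz", "1.0.0"))
```

Running a check like this at startup (or in a deploy-time smoke test) catches the case where pip silently resolved a transitive dependency to a different version than the one pinned.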
My calculations using PyMC3 fail after pushing my operators to Google Cloud Composer. The errors are provided below.
We've tried updating the PyPI packages of the compute environment and looked at the logs. These show some clear errors, which seem to indicate that a compile job is failing in the cloud.
This happens when Airflow tries to parse our files - it is a failure on import.
Traceback (most recent call last):
File "/opt/python3.6/lib/python3.6/site-packages/theano/gof/lazylinker_c.py", line 81, in <module>
actual_version, force_compile, _need_reload))
ImportError: Version check of the existing lazylinker compiled file. Looking for version 0.211, but found None.
Extra debug information: force_compile=False, _need_reload=True
Traceback (most recent call last):
File "/usr/local/lib/airflow/airflow/models.py", line 374, in process_file
m = imp.load_source(mod_name, filepath)
File "/opt/python3.6/lib/python3.6/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 684, in _load
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/gcs/dags/dag_abtesting/abtesting_orchestration.py", line 10, in <module>
from dag_abtesting.abtesting_tasks import task_analysis
File "/home/airflow/gcs/dags/dag_abtesting/abtesting_tasks.py", line 16, in <module>
from dag_abtesting.task_bernoulli import task_analyse_bernoulli
File "/home/airflow/gcs/dags/dag_abtesting/task_bernoulli.py", line 3, in <module>
from dag_abtesting.template_distributions import abstract_template_distribution, bernoulli_distribution
File "/home/airflow/gcs/dags/dag_abtesting/template_distributions/bernoulli_distribution.py", line 6, in <module>
import pymc3 as pm
File "/opt/python3.6/lib/python3.6/site-packages/pymc3/__init__.py", line 5, in <module>
from .distributions import *
File "/opt/python3.6/lib/python3.6/site-packages/pymc3/distributions/__init__.py", line 1, in <module>
from . import timeseries
File "/opt/python3.6/lib/python3.6/site-packages/pymc3/distributions/timeseries.py", line 1, in <module>
import theano.tensor as tt
File "/opt/python3.6/lib/python3.6/site-packages/theano/__init__.py", line 110, in <module>
from theano.compile import (
File "/opt/python3.6/lib/python3.6/site-packages/theano/compile/__init__.py", line 12, in <module>
from theano.compile.mode import *
File "/opt/python3.6/lib/python3.6/site-packages/theano/compile/mode.py", line 11, in <module>
import theano.gof.vm
File "/opt/python3.6/lib/python3.6/site-packages/theano/gof/vm.py", line 674, in <module>
from . import lazylinker_c
File "/opt/python3.6/lib/python3.6/site-packages/theano/gof/lazylinker_c.py", line 140, in <module>
preargs=args)
File "/opt/python3.6/lib/python3.6/site-packages/theano/gof/cmodule.py", line 2396, in compile_str
(status, compile_stderr.replace('\n', '. ')))
Exception: Compilation failed (return status=1): /usr/bin/ld: cannot find -lpython3.6m. collect2: error: ld returned 1 exit status.
It looks to me as if I'm missing something - likely a compiler or library - on my hosted environment. Can someone advise on a next action to take?
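Since Theano compiles C code at import time and the linker reports it cannot find -lpython3.6m, one diagnostic step is to ask Python itself where its link library should live (the library is normally supplied by the matching python3.x-dev package; whether that can be installed on Composer workers is a separate question). A minimal sketch:

```python
import sysconfig

# LIBDIR is where the linker would look for libpythonX.Ym; LDLIBRARY is the
# library's file name (e.g. 'libpython3.6m.so'). If that file is absent from
# LIBDIR, Theano's "cannot find -lpython3.6m" failure is expected.
libdir = sysconfig.get_config_var("LIBDIR")
ldlib = sysconfig.get_config_var("LDLIBRARY")
print(libdir, ldlib)
```

If the file really is missing and the environment cannot be changed, pointing Theano at a precompiled backend or moving the PyMC3 work out of DAG parse time (so it runs in a worker container you control) are the usual directions.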
When deploying a Python spaCy app to AWS Lambda, I get the following error during the deploy (see below). Why deploy using Zappa? The zip file is 125 MB compressed, so a direct upload from the aws-cli fails on size, and a transfer via S3 also fails because the uncompressed package is more than 250 MB.
My program itself does no multithreading or multiprocessing, and it only uses spaCy 2.0. I built and deployed on an EC2 AWS Linux t2.medium. What are the exact steps that get a round-trip answer from a spaCy AWS Lambda function?
Failure trace below:
[1520570028387] Failed to find library...right filename?
[1520570029826] [Errno 38] Function not implemented: OSError
Traceback (most recent call last):
File "/var/task/handler.py", line 509, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 237, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 129, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmp/spaciness/front.py", line 1, in <module>
import spacy
File "/tmp/spaciness/spacy/__init__.py", line 4, in <module>
from .cli.info import info as cli_info
File "/tmp/spaciness/spacy/cli/__init__.py", line 1, in <module>
from .download import download
File "/tmp/spaciness/spacy/cli/download.py", line 10, in <module>
from .link import link
File "/tmp/spaciness/spacy/cli/link.py", line 7, in <module>
from ..compat import symlink_to, path2str
File "/tmp/spaciness/spacy/compat.py", line 11, in <module>
from thinc.neural.util import copy_array
File "/tmp/spaciness/thinc/neural/__init__.py", line 1, in <module>
from ._classes.model import Model
File "/tmp/spaciness/thinc/neural/_classes/model.py", line 12, in <module>
from ..train import Trainer
File "/tmp/spaciness/thinc/neural/train.py", line 7, in <module>
from tqdm import tqdm
File "/tmp/spaciness/tqdm/__init__.py", line 1, in <module>
from ._tqdm import tqdm
File "/tmp/spaciness/tqdm/_tqdm.py", line 53, in <module>
mp_lock = mp.Lock() # multiprocessing lock
File "/var/lang/lib/python3.6/multiprocessing/context.py", line 67, in Lock
return Lock(ctx=self.get_context())
File "/var/lang/lib/python3.6/multiprocessing/synchronize.py", line 163, in __init__
SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
File "/var/lang/lib/python3.6/multiprocessing/synchronize.py", line 60, in __init__
unlink_now)
OSError: [Errno 38] Function not implemented
I was able to solve the issue with the following steps:
Increase the memory size of the Lambda function in zappa_settings.json:
{
"dev": {
"memory_size": 3008,
}
}
I had to use a newer version of tqdm. By default it was version 4.19, which had these issues as described here: https://github.com/tqdm/tqdm/issues/466
The described issue is fixed in a newer version. I only had to add tqdm to my requirements.txt and run a pip upgrade of the package:
pip install -U tqdm
When I now execute zappa deploy dev, I get the following message:
(tqdm 4.32.1 (/var/task/ve/lib/python3.6/site-packages), Requirement.parse('tqdm==4.19.1'), {'zappa'})
tqdm 4.19.1 was the version pinned by Zappa by default, and tqdm 4.32.1 is the new version containing the fix.
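For context, the underlying bug was that tqdm 4.19 created a multiprocessing lock at import time (visible at tqdm/_tqdm.py line 53 in the traceback), which fails on AWS Lambda because POSIX semaphores are not implemented there. A paraphrased sketch of the defensive pattern, not the actual tqdm source:

```python
import multiprocessing as mp

# Creating a lock eagerly at import time is what raised
# "OSError: [Errno 38] Function not implemented" on Lambda.
# Newer releases tolerate platforms without working semaphores:
try:
    mp_lock = mp.Lock()
except OSError:
    mp_lock = None  # degrade gracefully: no inter-process locking available

print(mp_lock is not None)
```

The general lesson: libraries that touch multiprocessing primitives at import time break in sandboxed runtimes like Lambda, so upgrading past the eager-initialization version (or lazy-importing the library) is the fix.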
I'm trying to run a unittest with the following imports:
from LinearProbeTable import Dictionary
import unittest
import string
import random
class TestDictionary(unittest.TestCase):
(testing code)
In LinearProbeTable I import an exception:
from Exceptions import DictFullError
The DictFullError is defined as follows (in Exceptions.py):
class DictFullError(Exception):
pass
I get the following traceback when I try to run the test:
Traceback (most recent call last):
File "/Applications/PyCharm Educational.app/Contents/helpers/pycharm/utrunner.py", line 135, in <module>
module = loadSource(a[0])
File "/Applications/PyCharm Educational.app/Contents/helpers/pycharm/utrunner.py", line 40, in loadSource
module = imp.load_source(moduleName, fileName)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/imp.py", line 171, in load_source
module = methods.load()
File "<frozen importlib._bootstrap>", line 1220, in load
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/Users/andrewsalter/.../Prac 9/Testing/task1_autotest.py", line 1, in <module>
from LinearProbeTable import Dictionary
File "/Users/andrewsalter/.../Prac 9/LinearProbeTable.py", line 1, in <module>
from Exceptions import DictFullError
ImportError: cannot import name 'DictFullError'
I don't have a circular dependency, so I'm not sure what's happening.
I'm trying to run this on my laptop, but it worked on my desktop (I have everything hosted on Google Drive - I know I should be using Git, but I haven't got around to it yet).
EDIT: Could it be because LinearProbeTable.py is in the parent folder of the testing script?
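Regarding the EDIT: that is very plausible. Python resolves bare imports through sys.path, whose first entry is typically the directory of the script being run (here the Testing folder), so a module sitting in the parent folder is not automatically importable; if a different, emptier Exceptions module shadows yours, you get exactly "cannot import name 'DictFullError'". A hedged workaround sketch (the helper name is mine, not from the question):

```python
import os
import sys

def add_parent_to_path(script_path):
    """Insert the parent of script_path's directory at the front of sys.path."""
    parent = os.path.dirname(os.path.dirname(os.path.abspath(script_path)))
    if parent not in sys.path:
        sys.path.insert(0, parent)
    return parent

# In task1_autotest.py this would run before the failing import:
#   add_parent_to_path(__file__)
#   from LinearProbeTable import Dictionary
```

The cleaner long-term fix is to make the project a package (or configure the test runner's working directory/rootdir) so path surgery like this isn't needed.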