I developed an App Engine application in Go and now I am trying to use the androidpublisher API. For this I need several dependencies:
github.com/google/google-api-go-client
github.com/golang/oauth2
google.golang.org/appengine
google.golang.org/appengine/urlfetch
I tried to set up OAuth2 authentication for google-api-go-client according to the example at https://github.com/golang/oauth2.
Everything looks fine, but I can no longer run the dev app server on my Windows development machine. It complains that the filename is too long:
INFO 2016-08-20 22:48:03,786 devappserver2.py:769] Skipping SDK update check.
INFO 2016-08-20 22:48:03,960 api_server.py:205] Starting API server at: http://localhost:64053
INFO 2016-08-20 22:48:03,969 dispatcher.py:197] Starting module "default" running at: http://localhost:8080
INFO 2016-08-20 22:48:03,974 admin_server.py:116] Starting admin server at: http://localhost:8000
Exception in thread Instance Adjustment:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\module.py", line 1485, in _loop_adjusting_instances
    self._adjust_instances()
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\module.py", line 1460, in _adjust_instances
    self._add_instance(permit_warmup=True)
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\module.py", line 1338, in _add_instance
    expect_ready_request=perform_warmup)
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_runtime.py", line 174, in new_instance
    if self._go_application.maybe_build(self._modified_since_last_build):
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 304, in maybe_build
    self._extras_hash, old_extras_hash = (self._get_extras_hash(),
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 247, in _get_extras_hash
    gab_stdout, _ = self._run_gab(gab_args, env={})
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 175, in _run_gab
    gab_extra_args, env)
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 111, in _run_gab
    env=env)
  File "C:\work\go_appengine\google\appengine\tools\devappserver2\safe_subprocess.py", line 74, in start_process
    stdin=subprocess.PIPE, startupinfo=startupinfo)
  File "C:\Python27\lib\subprocess.py", line 710, in __init__
    errread, errwrite)
  File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
    startupinfo)
WindowsError: [Error 206] The filename or extension is too long
I think my GOPATH is set up wrong, so every Go file gets passed as an argument to go-app-builder.exe.
My project is under C:\Users\me\project\; that is where GOPATH points to and where I am standing when I type:
goapp.bat serve .
Can someone help to fix this problem? Thank you.
EDIT
My project structure is like this. How should I set my GOPATH?
$GOPATH/
    app.yaml
    cron.yaml
    pkg/
    src/
        testapp/
            app.go
        golang.org/
            x/
                oauth2/
Edit 2
I tried to move my GOPATH to project-root-dir/gopath, but now I get this error message:
Exception in thread Instance Adjustment:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\module.py", line 1486, in _loop_adjusting_instances
    self._adjust_instances()
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\module.py", line 1461, in _adjust_instances
    self._add_instance(permit_warmup=True)
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\module.py", line 1339, in _add_instance
    expect_ready_request=perform_warmup)
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_runtime.py", line 176, in new_instance
    if self._go_application.maybe_build(self._modified_since_last_build):
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 304, in maybe_build
    self._extras_hash, old_extras_hash = (self._get_extras_hash(),
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 247, in _get_extras_hash
    gab_stdout, _ = self._run_gab(gab_args, env={})
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 175, in _run_gab
    gab_extra_args, env)
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 111, in _run_gab
    env=env)
  File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\safe_subprocess.py", line 74, in start_process
    stdin=subprocess.PIPE, startupinfo=startupinfo)
  File "C:\Python27\lib\subprocess.py", line 710, in __init__
    errread, errwrite)
  File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
    startupinfo)
WindowsError: [Error 87] Falscher Parameter
For non-German readers: "Falscher Parameter" means the parameter is incorrect.
I solved the problem by setting my GOPATH. It is now set like this:
GOPATH=C:\Users\me\development\appengine\gopath;C:\Users\me\project
Now everything works fine, and the uploaded file is much smaller: 2 MB instead of 11 MB. Presumably this is because the dependencies now live outside the application directory (the one containing app.yaml), so the dev server no longer passes every one of their .go files on the go-app-builder command line, and they are not uploaded together with the app.
Thanks for the tip.
Related
I am new to Airflow. I created a virtual environment and followed the steps in https://airflow.apache.org/docs/apache-airflow/stable/start.html. At the end I ran "airflow standalone" and got "ValueError: Unable to configure handler 'processor'".
(venv) hgovea155@INSML-CPXX7WW AFDocProj % airflow standalone
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 563, in configure
handler = self.configure_handler(handlers[name])
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 736, in configure_handler
result = factory(**kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/log/file_processor_handler.py", line 49, in __init__
Path(self._get_log_directory()).mkdir(parents=True, exist_ok=True)
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py", line 1273, in mkdir
self._accessor.mkdir(self, mode)
PermissionError: [Errno 13] Permission denied: '/Users/hgovea155/airflow/logs/scheduler/2023-01-02'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/__init__.py", line 46, in <module>
settings.initialize()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/settings.py", line 569, in initialize
LOGGING_CLASS_PATH = configure_logging()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/logging_config.py", line 74, in configure_logging
raise e
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/logging_config.py", line 69, in configure_logging
dictConfig(logging_config)
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 800, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 571, in configure
'%r' % name) from e
ValueError: Unable to configure handler 'processor'
I then ran export AIRFLOW_HOME=., after which I ran "airflow standalone" again and received "airflow.exceptions.AirflowConfigException: Cannot use relative path: sqlite:///./airflow.db to connect to sqlite. Please use absolute path such as sqlite:////tmp/airflow.db."
standalone | Database ready
Traceback (most recent call last):
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 48, in entrypoint
StandaloneCommand().run()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 64, in run
self.initialize_database()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 183, in initialize_database
appbuilder = cached_app().appbuilder
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/www/app.py", line 167, in cached_app
app = create_app(config=config, testing=testing)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/www/app.py", line 90, in create_app
f'Cannot use relative path: `{conf.get("database", "SQL_ALCHEMY_CONN")}` to connect to sqlite. '
airflow.exceptions.AirflowConfigException: Cannot use relative path: `sqlite:///./airflow.db` to connect to sqlite. Please use absolute path such as `sqlite:////tmp/airflow.db`.
I then tried the fix provided by @kulasangar: I navigated to user -> airflow and changed the "logs" folder to read, write and execute for all users. The "ValueError: Unable to configure handler 'processor'" error no longer occurred, but I received another error.
(venv) hgovea155@INSML-CPXX7WW AFDocProj % airflow standalone
standalone | Starting Airflow Standalone
standalone | Checking database is initialized
Traceback (most recent call last):
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1901, in _execute_context
cursor, statement, parameters, context
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: attempt to write a readonly database
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/bin/airflow", line 8, in
sys.exit(main())
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/main.py", line 39, in main
args.func(args)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 48, in entrypoint
StandaloneCommand().run()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 64, in run
self.initialize_database()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 175, in initialize_database
db.initdb()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/db.py", line 697, in initdb
_create_db_from_orm(session=session)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/db.py", line 682, in _create_db_from_orm
_create_flask_session_tbl()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/db.py", line 677, in _create_flask_session_tbl
db.create_all()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/flask_sqlalchemy/init.py", line 1094, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/flask_sqlalchemy/init.py", line 1086, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4931, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3228, in _run_ddl_visitor
conn._run_ddl_visitor(visitorcallable, element, **kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2211, in _run_ddl_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 524, in traverse_single
return meth(obj, **kw)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 855, in visit_metadata
_is_metadata_operation=True,
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 524, in traverse_single
return meth(obj, **kw)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 900, in visit_table
include_foreign_key_constraints, # noqa
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1380, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 81, in _execute_on_connection
self, multiparams, params, execution_options
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1478, in _execute_ddl
compiled,
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1944, in execute_context
e, statement, parameters, cursor, context
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2125, in handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1901, in _execute_context
cursor, statement, parameters, context
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) attempt to write a readonly database
[SQL:
CREATE TABLE session (
id INTEGER NOT NULL,
session_id VARCHAR(255),
data BLOB,
expiry DATETIME,
PRIMARY KEY (id),
UNIQUE (session_id)
)
]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
Is there a way to fix this? I believe this problem is caused by some mistake in the initial setup or configuration. Can a more permanent fix be found rather than a temporary one?
It seems the user doesn't have the privilege to write logs under your Airflow home folder. Could you try granting write permission to that folder, so that the Airflow instance can write its logs:
sudo chmod -R 777 /home/user/airflow_logs
Please change the logs directory accordingly.
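If the errors keep coming back, a quick way to confirm which directories Airflow is actually trying to write to is a small diagnostic sketch like the one below. It assumes the default AIRFLOW_HOME of ~/airflow (adjust if you export something else), and it only checks permissions, it does not change them:
import os

# Diagnostic sketch: check that AIRFLOW_HOME and its logs directory are writable.
# AIRFLOW_HOME defaults to ~/airflow; adjust if your environment variable differs.
airflow_home = os.environ.get("AIRFLOW_HOME", os.path.expanduser("~/airflow"))
log_dir = os.path.join(airflow_home, "logs")

os.makedirs(log_dir, exist_ok=True)
print("AIRFLOW_HOME:", airflow_home, "writable:", os.access(airflow_home, os.W_OK))
print("logs dir:", log_dir, "writable:", os.access(log_dir, os.W_OK))

# The SQLite file airflow.db also lives under AIRFLOW_HOME, so the later
# "attempt to write a readonly database" error usually means this directory
# (or airflow.db itself) is not writable by the user running airflow standalone.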
As part of some automation I am trying to connect to a mainframe using Python, so that I can access mainframe files and create a report; essentially using the mainframe files as a database for the Python program. To log in to the mainframe we need to provide the host details (xyz.host.com), followed by the region details (h123p) and then our credentials. I found that we can do this using the Python package py3270 and tried it, but I am getting the error below.
from py3270 import Emulator

# run headless with s3270 (use Emulator(visible=True) to see an x3270 window)
em = Emulator()
em.connect('xyx.example.com')
em.fill_field(3, 1, 'xxxx', 5)
em.send_enter()
em.fill_field(2, 1, 'xxxxxxx', 7)
em.send_enter()
em.fill_field(8, 20, 'xxxxxxxx', 8)
em.send_enter()
# if your host unlocks the keyboard before truly being ready you can use:
em.wait_for_field()
# maybe look for a status message
if not em.string_found(1, 2, 'login succesful'):
    raise RuntimeError('login failed')  # placeholder for abort()
# do something useful
# disconnect from host and kill subprocess
em.terminate()
The error:
File "C:/Users/vganr/PycharmProjects/test/mainframe.py", line 6, in
<module>
em = Emulator()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\py3270
\__init__.py", line 273, in __init__
self.app = app or self.create_app(visible, args)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\py3270
\__init__.py", line 291, in create_app
return Ws3270App(args)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\py3270
\__init__.py", line 140, in __init__
self.spawn_app()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\py3270
\__init__.py", line 145, in spawn_app
args, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE
File "C:\Program Files (x86)\Python37-32\lib\subprocess.py", line 775,
in __init__
restore_signals, start_new_session)
File "C:\Program Files (x86)\Python37-32\lib\subprocess.py", line 1178,
in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
Exception ignored in: <function Emulator.__del__ at 0x038CB810>
Traceback (most recent call last):
File "C:\Program Files (x86)\Python37-32\lib\site-packages\py3270
\__init__.py", line 285, in __del__
self.terminate()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\py3270
\__init__.py", line 320, in terminate
if not self.is_terminated:
AttributeError: 'Emulator' object has no attribute 'is_terminated'
Based on the error messages you're seeing, I suspect the problem is a missing (not found) x3270/s3270 binary.
    return Ws3270App(args)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\py3270\__init__.py", line 140, in __init__
    self.spawn_app()
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\py3270\__init__.py", line 145, in spawn_app
    args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE
  File "C:\Program Files (x86)\Python37-32\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "C:\Program Files (x86)\Python37-32\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
The above suggests that the library is trying to start ws3270, the Windows version of x3270, and is unable to do so. Make sure the required binaries are on your PATH and visible from Python.
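As a rough check, the sketch below verifies from Python whether the ws3270 executable that the traceback shows py3270 spawning can actually be found; the install directory used here is only an example, point it at wherever your wc3270/ws3270 suite is installed:
import os
import shutil

# Sketch: confirm the ws3270 executable py3270 tries to spawn is reachable.
exe = shutil.which("ws3270")
if exe is None:
    # Example install location; adjust to your own wc3270/ws3270 installation.
    os.environ["PATH"] += os.pathsep + r"C:\Program Files\wc3270"
    exe = shutil.which("ws3270")

print("ws3270 found at:", exe)  # None here is exactly what causes the FileNotFoundError

if exe:
    from py3270 import Emulator
    em = Emulator()  # should now be able to spawn the emulator process
    em.terminate()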
I am running LdaMulticore from the Python gensim library, and the script cannot seem to create more than one worker. Here is the error:
Traceback (most recent call last):
File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 97, in worker
initializer(*initargs)
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamulticore.py", line 333, in worker_e_step
worker_lda.do_estep(chunk) # TODO: auto-tune alpha?
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamodel.py", line 725, in do_estep
gamma, sstats = self.inference(chunk, collect_sstats=True)
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamodel.py", line 655, in inference
ids = [int(idx) for idx, _ in doc]
TypeError: 'int' object is not iterable
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 325, in _handle_workers
pool._maintain_pool()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 229, in _maintain_pool
self._repopulate_pool()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
w.start()
File "/usr/lib64/python2.7/multiprocessing/process.py", line 130, in start
self._popen = Popen(self)
File "/usr/lib64/python2.7/multiprocessing/forking.py", line 121, in __init__
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
I'm creating my LDA model like this:
ldamodel = LdaMulticore(corpus, num_topics=50, id2word=dictionary, workers=3)
I have actually asked another question about this script, so the full script can be found here:
Gensim LDA Multicore Python script runs much too slow
If it's relevant, I'm running this on a CentOS server. Let me know if I should include any other information.
Any help is appreciated!
OSError: [Errno 12] Cannot allocate memory sounds like you are running out of RAM. Check your available free memory and swap.
You can try to reduce the number of worker processes with the workers parameter, or the number of documents used in each training chunk with the chunksize parameter.
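A minimal sketch of that suggestion, reusing the corpus and dictionary objects from the question; the values below are only starting points to experiment with, not recommendations:
from gensim.models import LdaMulticore

# Sketch: reduce memory pressure with fewer worker processes and smaller chunks.
ldamodel = LdaMulticore(
    corpus,
    num_topics=50,
    id2word=dictionary,
    workers=1,       # start with a single worker and increase only if memory allows
    chunksize=500,   # fewer documents per training chunk than the default
)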
I followed the instructions and ran the command "tensorboard --logdir .", but there is no graph on the TensorBoard graph page and I get these errors:
Exception in thread Reloader:
Traceback (most recent call last):
File "c:\users\imkha\appdata\local\programs\python\python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "c:\users\imkha\appdata\local\programs\python\python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorboard\backend\application.py", line 350, in _reload_forever
reload_multiplexer(multiplexer, path_to_run)
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorboard\backend\application.py", line 322, in reload_multiplexer
multiplexer.AddRunsFromDirectory(path, name)
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorboard\backend\event_processing\plugin_event_multiplexer.py", line 175, in AddRunsFromDirectory
for subdir in GetLogdirSubdirectories(path):
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorboard\backend\event_processing\plugin_event_multiplexer.py", line 445, in <genexpr>
subdir
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorboard\backend\event_processing\io_wrapper.py", line 50, in ListRecursively
for dir_path, _, filenames in tf.gfile.Walk(top):
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 518, in walk
for subitem in walk(os.path.join(top, subdir), in_order):
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 518, in walk
for subitem in walk(os.path.join(top, subdir), in_order):
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 518, in walk
for subitem in walk(os.path.join(top, subdir), in_order):
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 499, in walk
listing = list_directory(top)
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 478, in list_directory
compat.as_bytes(dirname), status)
File "c:\users\imkha\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.UnknownError: FindFirstFile failed for: C:\Users\imkha\AppData\Local\Application Data : Access is denied.
; Input/output error
I have tried the debugging steps provided on the page, but they did not seem to work. Thank you in advance. (OS: Windows 10)
I had the same problem and, as Rajat suggested, I ran cmd in administrator mode. Worked like a charm.
I am trying to run tasks through the command 'airflow scheduler', and it produced this error AFTER I tried to run one of the DAGs:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 28, in <module>
args.func(args)
File "/usr/local/lib/python3.5/dist-packages/airflow/bin/cli.py", line 839, in scheduler
job.run()
File "/usr/local/lib/python3.5/dist-packages/airflow/jobs.py", line 200, in run
self._execute()
File "/usr/local/lib/python3.5/dist-packages/airflow/jobs.py", line 1309, in _execute
self._execute_helper(processor_manager)
File "/usr/local/lib/python3.5/dist-packages/airflow/jobs.py", line 1441, in _execute_helper
self.executor.heartbeat()
File "/usr/local/lib/python3.5/dist-packages/airflow/executors/base_executor.py", line 132, in heartbeat
self.sync()
File "/usr/local/lib/python3.5/dist-packages/airflow/executors/celery_executor.py", line 88, in sync
state = async.state
File "/home/userName/.local/lib/python3.5/site-packages/celery/result.py", line 436, in state
return self._get_task_meta()['status']
File "/home/userName/.local/lib/python3.5/site-packages/celery/result.py", line 375, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "/home/userName/.local/lib/python3.5/site-packages/celery/backends/amqp.py", line 156, in get_task_meta
binding.declare()
File "/home/userName/.local/lib/python3.5/site-packages/kombu/entity.py", line 605, in declare
self._create_queue(nowait=nowait, channel=channel)
File "/home/userName/.local/lib/python3.5/site-packages/kombu/entity.py", line 614, in _create_queue
self.queue_declare(nowait=nowait, passive=False, channel=channel)
File "/home/userName/.local/lib/python3.5/site-packages/kombu/entity.py", line 649, in queue_declare
nowait=nowait,
File "/home/userName/.local/lib/python3.5/site-packages/amqp/channel.py", line 1147, in queue_declare
nowait, arguments),
File "/home/userName/.local/lib/python3.5/site-packages/amqp/abstract_channel.py", line 50, in send_method
conn.frame_writer(1, self.channel_id, sig, args, content)
File "/home/userName/.local/lib/python3.5/site-packages/amqp/method_framing.py", line 166, in write_frame
write(view[:offset])
File "/home/userName/.local/lib/python3.5/site-packages/amqp/transport.py", line 258, in write
self._write(s)
ConnectionResetError: [Errno 104] Connection reset by peer
I am using Python 3.5, Airflow 1.8, Celery 4.1.0, and RabbitMQ 3.5.7 as the broker. It looks like I am having a problem with RabbitMQ, but I cannot figure out the reason.
The reported error seems to be a known issue that was fixed in Airflow 1.10.0.
I had the same issue. Your DAG makes many API calls to a server, and your Airflow scheduler has a limit it must stay within. There isn't a specific number of simultaneous requests to abide by; you have to use trial and error to find the number that works for your Airflow environment. This usually occurs when a DAG has many tasks running alongside each other simultaneously. The issue was not resolved for me by any of the upgrades claimed in other answers; I was getting the error even on the latest release. A sketch of how to throttle a DAG is shown below.
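As one hedged way to apply that trial-and-error throttling, a DAG can cap how many of its task instances run at the same time; the DAG id, dates and numbers below are illustrative only (Airflow 1.x-style arguments, matching the Airflow 1.8 in the question):
from datetime import datetime

from airflow import DAG

# Illustrative sketch: limit simultaneous task instances so the scheduler and
# message broker are not flooded. Values are examples, not recommendations.
dag = DAG(
    dag_id="example_throttled_dag",
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
    concurrency=4,        # max running task instances for this DAG
    max_active_runs=1,    # only one active DAG run at a time
)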