I am trying to run tasks through the command 'airflow scheduler'. It produced this error after I tried to run one of the DAGs:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 28, in <module>
args.func(args)
File "/usr/local/lib/python3.5/dist-packages/airflow/bin/cli.py", line 839, in scheduler
job.run()
File "/usr/local/lib/python3.5/dist-packages/airflow/jobs.py", line 200, in run
self._execute()
File "/usr/local/lib/python3.5/dist-packages/airflow/jobs.py", line 1309, in _execute
self._execute_helper(processor_manager)
File "/usr/local/lib/python3.5/dist-packages/airflow/jobs.py", line 1441, in _execute_helper
self.executor.heartbeat()
File "/usr/local/lib/python3.5/dist-packages/airflow/executors/base_executor.py", line 132, in heartbeat
self.sync()
File "/usr/local/lib/python3.5/dist-packages/airflow/executors/celery_executor.py", line 88, in sync
state = async.state
File "/home/userName/.local/lib/python3.5/site-packages/celery/result.py", line 436, in state
return self._get_task_meta()['status']
File "/home/userName/.local/lib/python3.5/site-packages/celery/result.py", line 375, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "/home/userName/.local/lib/python3.5/site-packages/celery/backends/amqp.py", line 156, in get_task_meta
binding.declare()
File "/home/userName/.local/lib/python3.5/site-packages/kombu/entity.py", line 605, in declare
self._create_queue(nowait=nowait, channel=channel)
File "/home/userName/.local/lib/python3.5/site-packages/kombu/entity.py", line 614, in _create_queue
self.queue_declare(nowait=nowait, passive=False, channel=channel)
File "/home/userName/.local/lib/python3.5/site-packages/kombu/entity.py", line 649, in queue_declare
nowait=nowait,
File "/home/userName/.local/lib/python3.5/site-packages/amqp/channel.py", line 1147, in queue_declare
nowait, arguments),
File "/home/userName/.local/lib/python3.5/site-packages/amqp/abstract_channel.py", line 50, in send_method
conn.frame_writer(1, self.channel_id, sig, args, content)
File "/home/userName/.local/lib/python3.5/site-packages/amqp/method_framing.py", line 166, in write_frame
write(view[:offset])
File "/home/userName/.local/lib/python3.5/site-packages/amqp/transport.py", line 258, in write
self._write(s)
ConnectionResetError: [Errno 104] Connection reset by peer
I am using Python 3.5, Airflow 1.8, Celery 4.1.0, and RabbitMQ 3.5.7 as the message broker.
It looks like I am having a problem with RabbitMQ, but I cannot figure out the reason.
The reported error seems to be a known issue that was fixed in Airflow 1.10.0.
Had the same issue.
It usually occurs when your DAG has a number of tasks that run alongside each other simultaneously. If your DAG makes many API calls to a server at once, the Airflow scheduler has a limit it can sustain; there is no specific number of requests to abide by, so use trial and error to find the number that works for your Airflow environment. A sketch of capping the parallelism is shown below.
In my case the issue was not resolved by the upgrades claimed in other answers; I was getting the error even on the latest release.
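For what it's worth, here is a minimal sketch of capping that parallelism at the DAG level under Airflow 1.x. The DAG id, dates, and numbers are illustrative, not from the question:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

dag = DAG(
    dag_id="api_heavy_dag",           # hypothetical name
    start_date=datetime(2017, 1, 1),
    schedule_interval="@daily",
    concurrency=4,        # at most 4 task instances of this DAG run at once
    max_active_runs=1,    # only one DAG run in flight at a time
)

# Twenty tasks that would otherwise all be scheduled simultaneously.
tasks = [DummyOperator(task_id="call_api_%d" % i, dag=dag) for i in range(20)]

Lower concurrency until the broker stops resetting connections; global caps also exist in airflow.cfg (parallelism, dag_concurrency).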
I am new to Airflow. I created a virtual environment and followed the steps in https://airflow.apache.org/docs/apache-airflow/stable/start.html. In the end I ran "airflow standalone" and got "ValueError: Unable to configure handler 'processor'".
(venv) hgovea155@INSML-CPXX7WW AFDocProj % airflow standalone
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 563, in configure
handler = self.configure_handler(handlers[name])
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 736, in configure_handler
result = factory(**kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/log/file_processor_handler.py", line 49, in __init__
Path(self._get_log_directory()).mkdir(parents=True, exist_ok=True)
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py", line 1273, in mkdir
self._accessor.mkdir(self, mode)
PermissionError: [Errno 13] Permission denied: '/Users/hgovea155/airflow/logs/scheduler/2023-01-02'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/__init__.py", line 46, in <module>
settings.initialize()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/settings.py", line 569, in initialize
LOGGING_CLASS_PATH = configure_logging()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/logging_config.py", line 74, in configure_logging
raise e
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/logging_config.py", line 69, in configure_logging
dictConfig(logging_config)
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 800, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/opt/python#3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 571, in configure
'%r' % name) from e
ValueError: Unable to configure handler 'processor'
I then ran export AIRFLOW_HOME=., after which I ran "airflow standalone" again and received "airflow.exceptions.AirflowConfigException: Cannot use relative path: sqlite:///./airflow.db to connect to sqlite. Please use absolute path such as sqlite:////tmp/airflow.db."
standalone | Database ready
Traceback (most recent call last):
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 48, in entrypoint
StandaloneCommand().run()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 64, in run
self.initialize_database()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 183, in initialize_database
appbuilder = cached_app().appbuilder
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/www/app.py", line 167, in cached_app
app = create_app(config=config, testing=testing)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/www/app.py", line 90, in create_app
f'Cannot use relative path: `{conf.get("database", "SQL_ALCHEMY_CONN")}` to connect to sqlite. '
airflow.exceptions.AirflowConfigException: Cannot use relative path: `sqlite:///./airflow.db` to connect to sqlite. Please use absolute path such as `sqlite:////tmp/airflow.db`.
I then tried the fix provided by @kulasangar.
I navigated to user -> airflow and changed the "logs" folder to read, write and execute for all users. The "ValueError: Unable to configure handler 'processor'" error no longer occurred, but I received another error.
(venv) hgovea155@INSML-CPXX7WW AFDocProj % airflow standalone
standalone | Starting Airflow Standalone
standalone | Checking database is initialized
Traceback (most recent call last):
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1901, in _execute_context
cursor, statement, parameters, context
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: attempt to write a readonly database
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/bin/airflow", line 8, in
sys.exit(main())
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/main.py", line 39, in main
args.func(args)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 48, in entrypoint
StandaloneCommand().run()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 64, in run
self.initialize_database()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/cli/commands/standalone_command.py", line 175, in initialize_database
db.initdb()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/db.py", line 697, in initdb
_create_db_from_orm(session=session)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/db.py", line 682, in _create_db_from_orm
_create_flask_session_tbl()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/airflow/utils/db.py", line 677, in _create_flask_session_tbl
db.create_all()
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/flask_sqlalchemy/init.py", line 1094, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/flask_sqlalchemy/init.py", line 1086, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4931, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3228, in _run_ddl_visitor
conn._run_ddl_visitor(visitorcallable, element, **kwargs)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2211, in _run_ddl_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 524, in traverse_single
return meth(obj, **kw)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 855, in visit_metadata
_is_metadata_operation=True,
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 524, in traverse_single
return meth(obj, **kw)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 900, in visit_table
include_foreign_key_constraints, # noqa
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1380, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 81, in _execute_on_connection
self, multiparams, params, execution_options
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1478, in _execute_ddl
compiled,
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1944, in execute_context
e, statement, parameters, cursor, context
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2125, in handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise
raise exception
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1901, in _execute_context
cursor, statement, parameters, context
File "/Users/hgovea155/PycharmProjects/AFDocProj/venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) attempt to write a readonly database
[SQL:
CREATE TABLE session (
id INTEGER NOT NULL,
session_id VARCHAR(255),
data BLOB,
expiry DATETIME,
PRIMARY KEY (id),
UNIQUE (session_id)
)
]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
Is there a way to fix this? I believe this problem is caused by some mistake in the initial setup or configuration. Can a more permanent fix be found rather than a temporary one?
It seems the user doesn't have the privilege to write logs under your Airflow home folder.
Try granting write permission to the folder so that the Airflow instance can write its logs:
sudo chmod -R 777 /home/user/airflow_logs
Please change the logs directory accordingly.
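If you prefer not to open the folder to everyone with 777, a narrower sketch is to pre-create the log directory as your own user. The paths here are assumptions; AIRFLOW_HOME defaults to ~/airflow:

import os
from pathlib import Path

# Resolve the same Airflow home the standalone command will use.
airflow_home = Path(os.environ.get("AIRFLOW_HOME", str(Path.home() / "airflow")))

# Create the logs directory as the current user so Airflow can write into it.
logs_dir = airflow_home / "logs"
logs_dir.mkdir(parents=True, exist_ok=True)
os.chmod(str(logs_dir), 0o755)  # owner read/write, others read/traverse only

If the directory or airflow.db was previously created by another user (for example via sudo), the chmod above will fail and you will need to chown the files back to yourself first.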
I am running LDAMulticore from the python gensim library, and the script cannot seem to create more than one thread. Here is the error:
Traceback (most recent call last):
File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 97, in worker
initializer(*initargs)
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamulticore.py", line 333, in worker_e_step
worker_lda.do_estep(chunk) # TODO: auto-tune alpha?
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamodel.py", line 725, in do_estep
gamma, sstats = self.inference(chunk, collect_sstats=True)
File "/usr/lib64/python2.7/site-packages/gensim/models/ldamodel.py", line 655, in inference
ids = [int(idx) for idx, _ in doc]
TypeError: 'int' object is not iterable
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 325, in _handle_workers
pool._maintain_pool()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 229, in _maintain_pool
self._repopulate_pool()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
w.start()
File "/usr/lib64/python2.7/multiprocessing/process.py", line 130, in start
self._popen = Popen(self)
File "/usr/lib64/python2.7/multiprocessing/forking.py", line 121, in __init__
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
I'm creating my LDA model like this:
ldamodel = LdaMulticore(corpus, num_topics=50, id2word=dictionary, workers=3)
I have actually asked another question about this script, so the full script can be found here:
Gensim LDA Multicore Python script runs much too slow
If it's relevant, I'm running this on a CentOS server. Let me know if I should include any other information.
Any help is appreciated!
OSError: [Errno 12] Cannot allocate memory sounds like you are running out of RAM.
Check your available free memory and swap.
You can try to reduce the number of worker processes with the workers parameter, or the number of documents used in each training chunk with the chunksize parameter.
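A hedged sketch of the same model call with both knobs turned down (the values are illustrative; corpus and dictionary are the objects from the question):

from gensim.models import LdaMulticore

ldamodel = LdaMulticore(
    corpus,
    num_topics=50,
    id2word=dictionary,
    workers=1,       # fewer forked worker processes, so less memory is duplicated
    chunksize=500,   # fewer documents held in memory per training chunk
)

Each worker is a forked process, so on a memory-constrained box the fork itself can fail with Errno 12 even when the parent process still fits in RAM.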
I have a problem with Airflow (v1.9.0dev0+apache.incubating):
everything looks fine until the scheduler picks up a job, at which point it crashes with this log:
[2017-03-15 15:54:18,075] {jobs.py:1329} INFO - Waiting up to 5s for processes to exit...
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 4, in <module>
__import__('pkg_resources').run_script('airflow==1.9.0.dev0+apache.incubating', 'airflow')
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 738, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1499, in run_script
exec(code, namespace, namespace)
File "/usr/local/lib/python2.7/dist-packages/airflow-1.9.0.dev0+apache.incubating-py2.7.egg/EGG-INFO/scripts/airflow", line 28, in <module>
args.func(args)
File "/usr/local/lib/python2.7/dist-packages/airflow-1.9.0.dev0+apache.incubating-py2.7.egg/airflow/bin/cli.py", line 839, in scheduler
job.run()
File "/usr/local/lib/python2.7/dist-packages/airflow-1.9.0.dev0+apache.incubating-py2.7.egg/airflow/jobs.py", line 200, in run
self._execute()
File "/usr/local/lib/python2.7/dist-packages/airflow-1.9.0.dev0+apache.incubating-py2.7.egg/airflow/jobs.py", line 1309, in _execute
self._execute_helper(processor_manager)
File "/usr/local/lib/python2.7/dist-packages/airflow-1.9.0.dev0+apache.incubating-py2.7.egg/airflow/jobs.py", line 1441, in _execute_helper
self.executor.heartbeat()
File "/usr/local/lib/python2.7/dist-packages/airflow-1.9.0.dev0+apache.incubating-py2.7.egg/airflow/executors/base_executor.py", line 132, in heartbeat
self.sync()
File "/usr/local/lib/python2.7/dist-packages/airflow-1.9.0.dev0+apache.incubating-py2.7.egg/airflow/executors/celery_executor.py", line 88, in sync
state = async.state
File "/usr/local/lib/python2.7/dist-packages/celery/result.py", line 431, in state
return self._get_task_meta()['status']
File "/usr/local/lib/python2.7/dist-packages/celery/result.py", line 370, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 156, in get_task_meta
binding.declare()
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 604, in declare
self._create_exchange(nowait=nowait, channel=channel)
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 611, in _create_exchange
self.exchange.declare(nowait=nowait, channel=channel)
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 185, in declare
nowait=nowait, passive=passive,
File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 630, in exchange_declare
wait=None if nowait else spec.Exchange.DeclareOk,
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 64, in send_method
conn.frame_writer(1, self.channel_id, sig, args, content)
File "/usr/local/lib/python2.7/dist-packages/amqp/method_framing.py", line 174, in write_frame
write(view[:offset])
File "/usr/local/lib/python2.7/dist-packages/amqp/transport.py", line 269, in write
self._write(s)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 104] Connection reset by peer
On rabbitmq log I see:
=INFO REPORT==== 15-Mar-2017::15:54:17 ===
accepting AMQP connection <0.492.0> (127.0.0.1:42970 -> 127.0.0.1:5672)
=WARNING REPORT==== 15-Mar-2017::15:54:18 ===
closing AMQP connection <0.492.0> (127.0.0.1:42970 -> 127.0.0.1:5672):
connection_closed_abruptly
This looks like the client is doing something weird when it closes the connection.
Before 1.9.0 I used 1.7.1.3, which reported the same problem (https://issues.apache.org/jira/browse/AIRFLOW-342).
Has somebody fixed this in some way?
Any idea where to start?
Based on the logs, the connection closing seems to have been initiated by RabbitMQ. You may want to run tcpdump to confirm this.
It might be something related to security settings (username/password or permissions). Did you configure RabbitMQ, creating the user and granting privileges?
rabbitmqctl add_user rabbitmq_user_name rabbitmq_password
rabbitmqctl add_vhost rabbitmq_virtual_host_name
rabbitmqctl set_user_tags rabbitmq_user_name rabbitmq_tag_name
rabbitmqctl set_permissions -p rabbitmq_virtual_host_name rabbitmq_user_name ".*" ".*" ".*"
Taken from https://stlong0521.github.io/20161023%20-%20Airflow.html
If that's not the case, then it could be that, for example, RabbitMQ is configured to use TLS and Airflow/Celery is not, or something like that. Try a protocol analyzer such as Wireshark (formerly Ethereal), which supports the AMQP protocol; see for example https://www.rabbitmq.com/amqp-wireshark.html.
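As a quick way to test the credentials and vhost outside of Airflow, here is a small sketch using kombu. The URL pieces are placeholders; fill in whatever your broker_url is set to in airflow.cfg:

from kombu import Connection

# Should match airflow.cfg's broker_url exactly.
broker_url = "amqp://rabbitmq_user_name:rabbitmq_password@localhost:5672/rabbitmq_virtual_host_name"

with Connection(broker_url, connect_timeout=5) as conn:
    conn.connect()  # raises on bad credentials, missing vhost, or TLS mismatch
    print("AMQP connection OK")

If this fails, the problem lies in the broker settings rather than in Airflow itself.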
Hope this helps.
I developed an App Engine application in Go and now I am trying to use the androidpublisher API. For this I need many dependencies, such as:
github.com/google/google-api-go-client
github.com/golang/oauth2
google.golang.org/appengine
google.golang.org/appengine/urlfetch
I tried to set up OAuth2 authentication for google-api-go-client according to the example in https://github.com/golang/oauth2.
Everything looks fine, but I can't run the dev app server anymore on my Windows development machine. It complains about filenames that are too long:
INFO 2016-08-20 22:48:03,786 devappserver2.py:769] Skipping SDK update check.
INFO 2016-08-20 22:48:03,960 api_server.py:205] Starting API server at: http://localhost:64053
INFO 2016-08-20 22:48:03,969 dispatcher.py:197] Starting module "default" running at: http://localhost:8080
INFO 2016-08-20 22:48:03,974 admin_server.py:116] Starting admin server at: http://localhost:8000
Exception in thread Instance Adjustment:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\work\go_appengine\google\appengine\tools\devappserver2\module.py", line 1485, in _loop_adjusting_instances
self._adjust_instances()
File "C:\work\go_appengine\google\appengine\tools\devappserver2\module.py", line 1460, in _adjust_instances
self._add_instance(permit_warmup=True)
File "C:\work\go_appengine\google\appengine\tools\devappserver2\module.py", line 1338, in _add_instance
expect_ready_request=perform_warmup)
File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_runtime.py", line 174, in new_instance
if self._go_application.maybe_build(self._modified_since_last_build):
File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 304, in maybe_build
self._extras_hash, old_extras_hash = (self._get_extras_hash(),
File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 247, in _get_extras_hash
gab_stdout, _ = self._run_gab(gab_args, env={})
File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 175, in _run_gab
gab_extra_args, env)
File "C:\work\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 111, in _run_gab
env=env)
File "C:\work\go_appengine\google\appengine\tools\devappserver2\safe_subprocess.py", line 74, in start_process
stdin=subprocess.PIPE, startupinfo=startupinfo)
File "C:\Python27\lib\subprocess.py", line 710, in __init__
errread, errwrite)
File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
startupinfo)
WindowsError: [Error 206] The filename or extension is too long
I think my GOPATH is set up wrong, so it passes all Go files as arguments to go-app-builder.exe.
My project is under C:\Users\me\project\; that's where GOPATH points to and where I am standing when I type:
goapp.bat serve .
Can someone help to fix this problem? Thank you.
EDIT
My project structure is like this:
How should I set my GOPATH?
$GOPATH
    app.yaml
    cron.yaml
    pkg
    src
        testapp
            app.go
        golang.org
            x
                oauth2
Edit 2
I tried to move my GOPATH to project-root-dir/gopath, but now I get this error message:
Exception in thread Instance Adjustment:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\module.py", line 1486, in _loop_adjusting_instances
self._adjust_instances()
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\module.py", line 1461, in _adjust_instances
self._add_instance(permit_warmup=True)
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\module.py", line 1339, in _add_instance
expect_ready_request=perform_warmup)
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_runtime.py", line 176, in new_instance
if self._go_application.maybe_build(self._modified_since_last_build):
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 304, in maybe_build
self._extras_hash, old_extras_hash = (self._get_extras_hash(),
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 247, in _get_extras_hash
gab_stdout, _ = self._run_gab(gab_args, env={})
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 175, in _run_gab
gab_extra_args, env)
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\go_application.py", line 111, in _run_gab
env=env)
File "C:\Users\Indra\development\tools\go_appengine\google\appengine\tools\devappserver2\safe_subprocess.py", line 74, in start_process
stdin=subprocess.PIPE, startupinfo=startupinfo)
File "C:\Python27\lib\subprocess.py", line 710, in __init__
errread, errwrite)
File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
startupinfo)
WindowsError: [Error 87] Falscher Parameter
For all non-German users: "Falscher Parameter" means it complains about an incorrect parameter.
I solved the problem by setting my GOPATH.
It's set now like this:
GOPATH=C:\Users\me\development\appengine\gopath;C:\Users\me\project
Now everything works fine, and the uploaded file is much smaller: 2 MB vs. 11 MB.
Thanks for the tip.
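For reference, a sketch of the layout that two-entry GOPATH implies (the dependency workspace contents are assumed, based on the packages listed in the question):

C:\Users\me\development\appengine\gopath\
    src\
        golang.org\x\oauth2\                      (fetched dependencies live here)
        github.com\google\google-api-go-client\
C:\Users\me\project\
    app.yaml
    cron.yaml
    src\
        testapp\
            app.go

Keeping the fetched dependencies in a workspace separate from the application presumably shortens what the dev server passes to go-app-builder.exe, which would explain both the disappearing error and the smaller upload.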
I'm using Celery 3.1.19 with scheduled tasks. I start the process like so:
celery beat --app=my_app.celery.app:app --pidfile=/usr/local/celerybeat.pid --schedule=/usr/local/celerybeat-schedule -l INFO
I've had a couple of occurrences where the celery process terminates after an nslookup failure. This causes future scheduled tasks not to run. Eventually I notice and restart celery beat.
As far as I can tell, the hostname it is trying to look up is my RabbitMQ host. The nslookup failures are temporary; the hostname is correct, and evidently there was a blip in name resolution. Ideally that would not crash the process; instead it would retry until the hostname lookup succeeded.
Questions:
Is this expected behavior?
Is there a common way to ensure that the scheduler keeps running?
Do people have a system to watch the process and restart if it crashes?
Stack trace:
Message Error: Couldn't apply scheduled task ping: Error opening socket: hostname lookup failed
File "/usr/local/bin/celery", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/celery/__main__.py", line 30, in main
main()
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 770, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 311, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 762, in handle_argv
return self.execute(command, argv)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 694, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 315, in run_from_argv
sys.argv if argv is None else argv, command)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 377, in handle_argv
return self(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 274, in __call__
ret = self.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/beat.py", line 79, in run
return beat().run()
File "/usr/local/lib/python2.7/dist-packages/celery/apps/beat.py", line 83, in run
self.start_scheduler()
File "/usr/local/lib/python2.7/dist-packages/celery/apps/beat.py", line 112, in start_scheduler
beat.start()
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 473, in start
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 221, in tick