I am trying to build a SLAM algorithm using Python scripts and ROS. I have a top-level workspace with two packages: one is the rplidar_ros package from GitHub, and the other is a slam package I created with catkin_create_pkg. The slam package has an src directory containing two Python files, icp.py and mapping.py, a msg directory with a custom message defined in Custom.msg, and a launch directory with icp.launch. My problem is that when I use roslaunch to launch all three nodes, rplidar_ros and icp_node launch fine and stay alive, but map_node dies as soon as it starts, respawns, dies again, and keeps repeating this cycle.
When each node is run independently, without roslaunch, starting icp_node before map_node produces the same cycle of map_node shutting down and restarting. But if map_node is started before icp_node, it gives this error:
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 154, in run
(client_sock, client_addr) = self.server_sock.accept()
File "/usr/lib/python2.7/socket.py", line 206, in accept
sock, addr = self._sock.accept()
File "/usr/lib/python2.7/socket.py", line 174, in _dummy
raise error(EBADF, 'Bad file descriptor')
error: [Errno 9] Bad file descriptor
Any idea what could be causing this error?
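For context, map_node boils down to roughly the following (a simplified sketch rather than my exact code; the topic name and the callback body are placeholders):

#!/usr/bin/env python
import rospy
from slam.msg import Custom  # generated from msg/Custom.msg in the slam package

def callback(msg):
    # update the map from the ICP result published by icp_node
    pass

if __name__ == '__main__':
    rospy.init_node('map_node')
    rospy.Subscriber('icp_result', Custom, callback)
    rospy.spin()  # keeps the node alive until shutdown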
I have set up airflow on an Ubuntu server. I started the webserver just fine as a daemon process. I can start the scheduler using
airflow scheduler
and it works fine and the DAGs run. I then stop it and remove all of the airflow-scheduler files from $AIRFLOW_HOME (airflow-scheduler.err, airflow-scheduler.log, airflow-scheduler.out).
I then try to start it as a daemon process using
airflow scheduler -D
It appears to start okay without error. However, when I go to the webserver it says:
"The scheduler does not appear to be running. Last heartbeat was received 2 minutes ago.
The DAGs list may not update, and new tasks will not be scheduled."
When I look in airflow-scheduler.err I see:
Traceback (most recent call last):
File "/home/emauser/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 503, in <lambda>
File "/home/emauser/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 702, in _finalize_fairy
File "/usr/lib/python3.6/logging/__init__.py", line 1337, in error
File "/usr/lib/python3.6/logging/__init__.py", line 1444, in _log
File "/usr/lib/python3.6/logging/__init__.py", line 1454, in handle
File "/usr/lib/python3.6/logging/__init__.py", line 1516, in callHandlers
File "/usr/lib/python3.6/logging/__init__.py", line 865, in handle
File "/usr/lib/python3.6/logging/__init__.py", line 1071, in emit
File "/usr/lib/python3.6/logging/__init__.py", line 1061, in _open
NameError: name 'open' is not defined
Any idea why I'm getting an error on the built-in open function from the logging module?
Before restarting your scheduler in daemon mode, make sure no other scheduler processes are running:
ps aux | grep airflow-scheduler
If there are any, kill them and then start your scheduler as a daemon.
I've been using Fabric to run setup commands on an EC2 instance during some automated unittests. These tests ran fine for months, and then a few days ago, suddenly started failing with the error:
Traceback (most recent call last):
File "tests.py", line 207, in test_setup
run('./setup_server')
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/network.py", line 682, in host_prompting_wrapper
return func(*args, **kwargs)
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/operations.py", line 1091, in run
shell_escape=shell_escape, capture_buffer_size=capture_buffer_size,
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/operations.py", line 934, in _run_command
capture_buffer_size=capture_buffer_size)
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/operations.py", line 816, in _execute
worker.raise_if_needed()
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/thread_handling.py", line 26, in raise_if_needed
six.reraise(e[0], e[1], e[2])
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/thread_handling.py", line 13, in wrapper
callable(*args, **kwargs)
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/io.py", line 31, in output_loop
OutputLooper(*args, **kwargs).loop()
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/io.py", line 152, in loop
self._flush(end_of_line + "\n")
File "/usr/local/myproject/src/buildbot/worker3/myproject_runtests/build/.env/local/lib/python2.7/site-packages/fabric/io.py", line 57, in _flush
self.stream.flush()
IOError: [Errno 32] Broken pipe
The command I'm running is pip install -r requirements.txt, and it appears to run just fine. If I run the test locally, it completes without error. However, it now fails half-way through every time it runs on AWS.
What would cause this? Since it's an IOError, and these can be caused by virtually any kind of minor network interruption, I'm not sure how to diagnose it. If Fabric lost connection temporarily, that would explain it, but it wouldn't explain why it's repeatable. I've re-run this script several dozen times, and it fails each time after initially connecting to the EC2 instance perfectly.
Is there some sort of configuration in Fabric I can change to improve connection error handling?
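For what it's worth, the only related knobs I've found so far are Fabric 1.x connection settings along these lines (the values are arbitrary examples, and I haven't confirmed that they help with this particular failure):

from fabric.api import env, run

env.keepalive = 30            # send an SSH keepalive every 30 seconds to avoid idle drops
env.connection_attempts = 3   # retry the initial connection instead of failing immediately
env.timeout = 30              # network timeout, in seconds, per connection attempt

def setup():
    run('./setup_server')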
I am running a Python 2.7.10 script on my Asus AC68 box. I just call speedtest-cli.py (the speedtest.net Python script) from inside a script of mine.
It was working before I rebooted the box, and now every time I run it I get this error:
Testing download speed.......Exception in thread Thread-1:
Traceback (most recent call last):
File "/opt/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/opt/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/jffs/scripts/LaMetric/speedtestcli.py", line 268, in producer
thread.start()
File "/opt/lib/python2.7/threading.py", line 745, in start
_start_new_thread(self.__bootstrap, ())
error: can't start new thread
Just to be clear, my Asus runs Merlin, an enhanced version of Asuswrt - the firmware used by all recent Asus routers.
The code can be found here: https://github.com/sivel/speedtest-cli
Some tests with ulimit (thanks, Google) showed that setting
ulimit -s 256
solved it.
Can anybody confirm whether there might be any side effects?
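For completeness, roughly the same thing can be done from inside the script by capping the per-thread stack size before any threads are started (threading.stack_size is in the standard library; 256 KB here simply mirrors the ulimit value above, and I can't say whether it has side effects of its own):

import threading

# Must be called before the worker threads are created.
# The size must be at least 32768 bytes (32 KB).
threading.stack_size(256 * 1024)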
Questions with a similar issue:
Parallel Python - too many files and Python too many open files (subprocesses)
I am using Parallel Python [V1.6.2] to run tasks. Each task processes an input file and writes out a log/report. Say there are 10 folders, each with 5000 to 20000 files, which are read in parallel, processed, and have their logs written out. Each file is approximately 50 KB to 250 KB.
After roughly 6 hours of running, Parallel Python fails with the following error.
File "/usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py", line 342, in __init__
File "/usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py", line 506, in set_ncpus
File "/usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py", line 140, in __init__
File "/usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py", line 146, in start
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
File "/usr/lib/python2.7/subprocess.py", line 1135, in _execute_child
File "/usr/lib/python2.7/subprocess.py", line 1091, in pipe_cloexec
OSError: [Errno 24] Too many open files
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 66, in apport_excepthook
ImportError: No module named fileutils
Original exception was:
Traceback (most recent call last):
File "PARALLEL_TEST.py", line 746, in <module>
File "/usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py", line 342, in __init__
File "/usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py", line 506, in set_ncpus
File "/usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py", line 140, in __init__
File "/usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py", line 146, in start
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
File "/usr/lib/python2.7/subprocess.py", line 1135, in _execute_child
File "/usr/lib/python2.7/subprocess.py", line 1091, in pipe_cloexec
OSError: [Errno 24] Too many open files
While I understand this could be the subprocess issue pointed out at http://bugs.python.org/issue2320, it seems the fix is only part of Python 3.2. I am currently tied to Python 2.7.
I would like to know whether the following suggestions from [1] help:
[1] http://www.parallelpython.com/component/option,com_smf/Itemid,1/topic,313.0
*) Adding worker.t.close() in destroy() method of /usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/pp.py
*) Increasing BROADCAST_INTERVAL in /usr/local/lib/python2.7/dist-packages/pp-1.6.2-py2.7.egg/ppauto.py
I would like to know if there is a fix or workaround available for this issue in Python 2.7.
Thanks in Advance
My team recently stumbled upon a similar file handle exhaustion issue while running celeryd task queue jobs. I believe the OP has nailed it: it is most likely the messy code in the subprocess.py library in Python 2.7 and Python 3.1.
As suggested in Python Bug #2320, pass close_fds=True everywhere you call subprocess.Popen(). In fact, that became the default in Python 3.2, which also fixed the underlying race condition. See that ticket for more details.
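A minimal illustration of the change (the command being run is just a placeholder):

import subprocess

# On Python 2.7, close_fds defaults to False, so every child process inherits all
# of the parent's open file descriptors; close_fds=True prevents that leak.
# Python 3.2+ makes True the default.
proc = subprocess.Popen(['ls', '-l'], close_fds=True)
proc.wait()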
I had also left out the lines that destroy the job servers. Calling job_server.destroy() fixes the issue.
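For example, something along these lines (a sketch only; process_file and input_paths are placeholders for the real task function and file list):

import pp

def process_file(path):
    # placeholder for the real per-file work
    with open(path) as f:
        return len(f.read())

input_paths = ['file1.txt', 'file2.txt']  # placeholder list of input files

job_server = pp.Server(ncpus=4)
try:
    jobs = [job_server.submit(process_file, (path,)) for path in input_paths]
    results = [job() for job in jobs]
finally:
    # shuts down the worker subprocesses and releases their pipes/file handles
    job_server.destroy()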
When I run the Bottle development server, I notice a warning showing up.
Can anyone figure out what exactly the problem is?
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 525, in __bootstrap_inner
self.run()
File "/usr/local/lib/python2.6/dist-packages/bottle-0.8.1-py2.6.egg/bottle.py", line 1406, in run
if path: files[path] = mtime(path)
File "/usr/local/lib/python2.6/dist-packages/bottle-0.8.1-py2.6.egg/bottle.py", line 1401, in <lambda>
mtime = lambda path: os.stat(path).st_mtime
OSError: [Errno 20] Not a directory: '/usr/local/lib/python2.6/dist-packages/github2-0.1.2-py2.6.egg/github2/issues.py'
This is a bug in Bottle (fixed in 0.8.2). The reloading feature checks for modified module files and is confused by paths that point into egg archives. Update to 0.8.2 or disable the reloading feature to work around it.
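If updating is not an option right away, the dev server can simply be started with the reloader left off, e.g. (a minimal sketch; the route is only an example):

from bottle import route, run

@route('/')
def index():
    return 'hello'

# reloader=False (the default) skips the file-mtime checks that trip over egg paths
run(host='localhost', port=8080, reloader=False)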