When checking the compatibility of the software I use with Python 3.11, I stumbled upon an issue that I am not sure how to solve or debug. There already is a corresponding issue in the upstream project, but no apparent solution: https://github.com/hsnr-gamera/gamera-4/issues/54.
The library partially uses the Python C API for performance. The build process does not error out, but as soon as the shared object is imported, the following error is raised:
tests/test_color.py:3: in <module>
from gamera.core import *
/opt/hostedtoolcache/Python/3.11.0/x64/lib/python3.11/site-packages/gamera-4.0.0-py3.11-linux-x86_64.egg/gamera/core.py:51: in <module>
from gamera.gameracore import UNCLASSIFIED, AUTOMATIC, HEURISTIC, MANUAL
E SystemError: initialization of gameracore raised unreported exception
Python 3.10 works fine here. There seems to have been a similar issue with psycopg2 on Python 3.11 (see the Fedora bug tracker), so this might be related to changes in Python 3.11. The build environment is Ubuntu 20.04.5 via GitHub Actions; I also ran some local tests on Ubuntu 22.04.
As there does not seem to be much guidance on how to tackle such issues, the following questions arise for me:
How can these failures be debugged and fixed in a general manner? Following some basic tutorials on how to use GDB with Python did not help me much here, although my experience is rather limited there.
Is there any known change from Python 3.10 to Python 3.11 which would cause such issues?
Description
Hi all, I'm running into a strange error while trying to run the following:
import sentence_transformers as st
encoder = st.SentenceTransformer("all-mpnet-base-v2") # also tried sentence-t5-base and all-MiniLM-L6-v2
I get an error: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV), and if I run it as a script, I get zsh: segmentation fault python. I also get a warning: multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appears to be 1 leaked semaphore objects to clean up at shutdown.
Reproducibility
I'm afraid I don't know how to reproduce this problem; it was working until it stopped working. Here are my specs:
MacBook Pro 2019, Intel Core i9, 32GB RAM
macOS Monterey 12.0.1
python 3.10.2
sentence_transformers==2.2.0
torch==1.11.0
What I've tried so far
clearing the cache from ~/.cache/torch/sentence_transformers
reinstalling sentence_transformers (also tried downgrading to 2.1.0)
reinstalling torch
recreating my virtual environment
rebooting
I've also managed to trace this problem to the following point in the torch codebase.
I've also started a discussion in this GitHub issue.
Any help would be appreciated, thanks in advance!
Update: I managed to fix the problem by downgrading to Python 3.9.11. I'm not actually sure whether the Python version is to blame here, or whether just having a fresh Python environment did the trick (perhaps the virtual environment wasn't enough). Anyway, this isn't really a solution but a workaround, so if anyone has any better suggestions, please let us know!
Lately I've had this problem when running my lambdas on AWS.
I have checked the lambda: it starts but does not run.
I tried increasing the lambda's memorySize, but that is not the problem. I leave you an image of the output in CloudWatch.
Thanks in advance.
Error: Runtime exited with error: signal: segmentation fault (core dumped) Runtime.ExitError
One big reason for this core-dumped error is your lambda reaching the maximum memory allowed.
You may experience a case where the memory-usage graph does not show it, yet this is still the cause.
Check your lambda's logic, and logic run from a third-party dependency; excessive memory usage there can cause the segmentation fault (core dumped) crash.
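If you want to confirm this from inside the function, here is a minimal sketch (the handler name and return value are placeholders, not from the original post):

import resource

def handler(event, context):  # hypothetical handler name
    # ... your lambda logic ...
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"peak RSS: {peak_kb} kB")  # on Linux, ru_maxrss is reported in kilobytes
    return {"statusCode": 200}

Comparing the printed peak against the configured memorySize tells you whether you are near the limit even when the graph does not show it.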
"Segmentation fault (core dumped)" is the error that Linux prints when a program exits with a SIGSEGV signal and you have core creation enabled. This means some program has crashed.
In this case, the Python interpreter has crashed unexpectedly. There are only a few reasons this can happen:
You're using a third-party extension module written in C, and that extension module has crashed.
You're (directly or indirectly) using the built-in module ctypes and calling external code that crashes (see the sketch after this list).
There's something wrong with your Python installation.
You've discovered a bug in Python itself or, more probably, in one of the Python libraries you are using, which you should report.
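As a hedged illustration of the ctypes case, a single call through a bad pointer is enough to kill the interpreter with SIGSEGV instead of raising a Python exception:

import ctypes

# Reading memory at address 0 bypasses Python's safety checks entirely;
# the process dies with "Segmentation fault (core dumped)" instead of a traceback.
ctypes.string_at(0)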
Things you can try to fix it
If you have updated the libraries you are using, check by downgrading them one by one.
If you are using Serverless, upgrade it to the latest version.
If you are using Node and npm, upgrade them to the latest version.
I'm trying to execute a Python script, but I am getting the following error:
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
I'm using Python 3.5.2 on Linux Mint 18.1 Serena.
Can someone tell me why this happens, and how I can solve it?
The SIGSEGV signal indicates a "segmentation violation" or a "segfault". More or less, this equates to a read or write of a memory address that's not mapped in the process.
This indicates a bug in your program. In a Python program, this is either a bug in the interpreter or in an extension module being used (and the latter is the most common cause).
To fix the problem, you have several options. One option is to produce a minimal, self-contained, complete example which replicates the problem and then submit it as a bug report to the maintainers of the extension module it uses.
Another option is to try to track down the cause yourself. gdb is a valuable tool in such an endeavor, as is a debug build of Python and all of the extension modules in use.
After you have gdb installed, you can use it to run your Python program:
gdb --args python <more args if you want>
And then use gdb commands to track down the problem. If you use run then your program will run until it would have crashed and you will have a chance to inspect the state using other gdb commands.
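A lighter-weight complement to gdb (a sketch using only the standard library) is the faulthandler module, which prints the Python-level traceback at the moment of the crash and often identifies the failing line without a debug build:

import faulthandler
faulthandler.enable()  # dump a Python traceback if the process receives SIGSEGV

# ... run the code that crashes ...

Running the script with python -X faulthandler achieves the same thing without editing the source.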
Another possible cause (which I encountered today) is that you're trying to read/write a file that is already open. In this case, simply closing the file and rerunning the script solved the issue.
After some time I discovered that I was running a new TensorFlow version that errors out on older computers. I solved the problem by downgrading TensorFlow to 1.4.
When I encountered this problem, I realized there were some memory issues. Rebooting the PC solved it for me.
This can also happen if your C-level code (e.g. Cython) tries to access a variable out of bounds:

ctypedef struct ReturnRows:
    double[10] your_value

cdef ReturnRows s_ReturnRows  # allocate memory for the struct
s_ReturnRows.your_value = [0] * 12  # writes 12 elements into a 10-element array
will fail with
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
In my case, I was using the OpenCV library to apply SIFT.
In my code, I replaced cv2.SIFT() with cv2.SIFT_create() and the problem was gone.
Deleting the Python interpreter and the 'venv' folder solved my error.
I got this error in PHP, while running PHPUnit. The reason was a circular dependency.
I received the same error when trying to connect to an Oracle DB using the pyodbc module:
connection = pyodbc.connect()
The error occurred on the following occasions:
The DB connection had been opened multiple times in the same Python file
While in debug mode, a breakpoint was reached while the connection to the DB was open
The error message could be avoided with the following approaches:
Open the DB only once and reuse the connection in all the places it is needed
Properly close the DB connection after using it (a sketch of both points follows below)
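A minimal sketch of both points (the connection string is a placeholder, not from the original post):

import pyodbc

# Hypothetical DSN; substitute your real Oracle connection details.
connection = pyodbc.connect("DSN=my_oracle_dsn;UID=user;PWD=secret")
try:
    cursor = connection.cursor()
    cursor.execute("SELECT 1 FROM dual")
    print(cursor.fetchone())
finally:
    connection.close()  # always release the native handle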
Hope that will help someone!
11: SIGSEGV - This signal arises when a memory segment is illegally accessed.
There is a module named signal in Python through which you can handle this kind of OS signal.
If you want to ignore this SIGSEGV signal, you can do this:
import signal

signal.signal(signal.SIGSEGV, signal.SIG_IGN)
However, ignoring the signal can cause some inappropriate behaviour in your code, so it is better to handle the SIGSEGV signal with your own handler, like this:
def SIGSEGV_signal_arises(signalNum, stack):
    print(f"{signalNum} : SIGSEGV arises")
    # Your code

signal.signal(signal.SIGSEGV, SIGSEGV_signal_arises)
I encountered this problem when I was trying to run my code on an external GPU which was disconnected. I had set os.environ['PYOPENCL_CTX'] = '2', where GPU 2 was not connected. So I just needed to change the code to os.environ['PYOPENCL_CTX'] = '1'.
For me, these few lines of code already reproduced the error, no matter how much free memory was available:
import numpy as np
from sklearn.cluster import KMeans
X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
kmeans = KMeans(n_clusters=1, random_state=0).fit(X)
I could solve the issue by removing and reinstalling the scikit-learn package. A very similar solution to this.
This can also occur if you compound threads using concurrent.futures, for example by calling .map inside another .map call; a sketch of the pattern follows below.
This can be solved by removing one of the .map calls.
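A hedged sketch of the pattern (the worker functions are made up for illustration):

from concurrent.futures import ThreadPoolExecutor

def inner(x):
    return x * x

def outer(xs):
    # A second executor used from inside a task of the outer one:
    # this is the ".map inside another .map" nesting described above.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(inner, xs))

with ThreadPoolExecutor() as pool:
    print(list(pool.map(outer, [[1, 2], [3, 4]])))

Flattening this to a single level of fan-out removes the nesting.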
I had the same issue working with kmeans from scikit-learn.
Upgrading from scikit-learn 1.0 to 1.0.2 solved it for me.
This issue is often caused by incompatible libraries in your environment. In my case, it was the pyspark library.
In my case, reverting my most recent conda installs fixed the situation.
I got this error when importing monai. It was solved after I created a new conda environment. Possible reasons I could imagine were either that there was some conflict between different packages, or that my environment name was the same as the name of the package I wanted to import (monai).
Found on another page.
interpreter: python 3.8

import cv2

cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

This solved the issue for me.
I was getting SIGSEGV with 2.7, upgraded my Python to 3.8, then got a different error with OpenCV, and found the answer on OpenCV 4.0.0 SystemError: <class 'cv2.CascadeClassifier'> returned a result with an error set.
But eventually one line of code fixed it.
I'm working on a Variscite board with a Yocto distribution and Python 2.7.3.
I sometimes get a Bus error message from the Python interpreter.
My program runs normally for at least some hours or days before the error occurs.
But once I get it, I get it immediately whenever I try to restart my program.
I have to reboot before the system works again.
My program uses only a serial port, a bit of USB communication, and some TCP sockets.
I can switch to other hardware and get the same problems.
I also ran the Python self-test with
python -c "from test import testall"
And I get errors for these two tests
test_getattr (test.test_builtin.BuiltinTest) ... ERROR
test_nameprep (test.test_codecs.NameprepTest) ... ERROR
And the self-test always stops at
test_callback_register_double (ctypes.test.test_callbacks.SampleCallbacksTestCase) ... Segmentation fault
But when the system has been running for some hours, the self-test stops earlier, at
ctypes.macholib.dyld
Bus error
I checked the RAM with memtester; it seems to be okay.
How can I find the cause of the problems?
Bus errors are generally caused by applications trying to access memory that the hardware cannot physically address. In your case there is a segmentation fault, which may be caused by dereferencing a bad pointer or something similar that ends up touching a memory address that is not physically addressable. I'd start by root-causing the segmentation fault first, as the bus error is the secondary symptom.
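For reference, a self-contained (Linux-specific) sketch that provokes a bus error rather than a segfault from Python is to touch a memory mapping whose backing file has been truncated away:

import mmap
import os

# Create a one-page file and map it into memory.
with open("/tmp/bus_demo", "wb") as f:
    f.write(b"\0" * 4096)

f = open("/tmp/bus_demo", "r+b")
m = mmap.mmap(f.fileno(), 4096)
os.ftruncate(f.fileno(), 0)  # the mapped page loses its backing storage
m[0]                         # touching it now delivers SIGBUS ("Bus error")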
A year later I found the indirect cause for the problems.
I wrote a crc16 module which used:
from ctypes import c_ushort
...
value = c_ushort(crcValue >> 8)
...
In the case of a bus error, this was the problematic part.
I don't assume that the c_ushort() function itself causes the problem; it's only the function that shows that something is broken.
The problem was gone after upgrading the system to Linux version 3.14.38-6QP+g8740b9f (test#Yocto) (gcc version 4.9.2 (GCC)).