I am working with wxPython, specifically a wx.Grid, and when I attempt to delete a column from the grid the program crashes and the terminal says "Segmentation Error".
dataGrid.CreateGrid(30, 20)
...
dataGrid.DeleteCols()
That is pretty much the code. I can delete rows, just not columns.
If I remove the line that deletes the column, it works fine.
According to the following thread, you may have to set the column labels yourself or you could get an error:
https://groups.google.com/forum/?fromgroups=#!topic/wxpython-users/IpARv4wVoqw
Then in this newer thread, it's said that you may have to call IncRef: https://groups.google.com/forum/?fromgroups=#!topic/wxpython-users/I1kndNvIEEQ
According to the API documentation at http://www.wxpython.org/docs/api/wx.grid.Grid-class.html, the method takes several parameters: DeleteCols(self, pos, numCols, updateLabels)
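For what it's worth, here is a minimal sketch of deleting a column with all three arguments spelled out, rather than relying on the defaults. The window setup is illustrative, not from the original code:
import wx
import wx.grid

app = wx.App(False)
frame = wx.Frame(None, title="Grid demo")
dataGrid = wx.grid.Grid(frame)
dataGrid.CreateGrid(30, 20)

# Pass the position and count explicitly instead of relying on defaults;
# updateLabels=True asks the grid to refresh the remaining column labels.
dataGrid.DeleteCols(pos=0, numCols=1, updateLabels=True)

frame.Show()
app.MainLoop()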
I have searched a lot for this error, on Stack Overflow and other websites, but I cannot seem to find a solution to my problem.
Basically, I have a program in Python, and I am using Python's rpy2 module to call some R functions from Python.
The problem is that when I run the code I sometimes, but not always, encounter this error. I am on Windows. Sometimes after I restart my PC the code runs further, but eventually the error pops up again. What should I do?
I have Python 3.6.7 with PyCharm 2018.3.3. However, I doubt the problem is PyCharm, because the same thing happens when I run my program from cmd, except that the program halts immediately without showing the message "Process finished with exit code -1073741819 (0xC0000005)". That message only appears in PyCharm, but still.
I have rpy2 version 2.9.5.
Code description
I know roughly which part of the code is causing this, but I cannot optimize it further. In this part of the code, inside cross-validation, I am over-populating each of the train and validation sets in a certain way. To do that, I combine X_train and y_train back into one data frame, over-populate that data frame, then split it back into the updated, over-populated X_train and y_train and perform my analysis on those. I suspect that combining the two numpy arrays into a pandas data frame and then splitting them apart again is creating this memory error. It is also important to note that this happens in each fold, and I am doing 10-fold, 10-repeat cross-validation. However, the same thing happens even when I run this on a desktop PC rather than on my laptop, and I have plenty of GB free on my laptop. Could this be a Python/rpy2 error?
Code snippet
# I am calling this function inside each fold
df_combined = self.prepare_data(X_train, y_train)
and then after calling prepare_data() I do as follows:
# THE apply_f1(), apply_f2(), apply_f3(), and apply_f4() ARE THE FUNCTIONS
# THAT USE rpy2 INTERNALLY
if self.f1:
    X_train_inner, y_train_inner = self.apply_f1(df_combined)
elif self.f2:
    X_train_inner, y_train_inner = self.apply_f2(df_combined)
elif self.f3:
    X_train_inner, y_train_inner = self.apply_f3(df_combined)
else:
    X_train_inner, y_train_inner = self.apply_f4(df_combined)
The prepare_data() function:
# imports used by this snippet
import numpy as np
import pandas as pd
from rpy2.robjects import pandas2ri

def prepare_data(self, X_train, y_train):
    '''
    concatenates X_train and y_train into one numpy array and turns it into a
    data frame, so we can process it with SMOGN, RandUnder, GN, or SMOTER
    '''
    # reshape y so both arrays are 2-D and can be concatenated column-wise
    X_train_samp = X_train
    y_train_samp = y_train.reshape(-1, 1)
    # combine the two numpy arrays into one
    combined = np.concatenate((X_train_samp, y_train_samp), axis=1)
    # transform X_train + y_train into a pandas dataframe
    column_names = self.other + [self.target_variable]
    df_combined = pd.DataFrame(combined, columns=column_names)
    # convert the combined pandas dataframe to an R data.frame (rpy2 2.9.x API)
    df_combined = pandas2ri.py2ri(df_combined)
    return df_combined
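For completeness, the reverse step described above (converting the over-populated R data.frame back and splitting it into X_train_inner and y_train_inner again) might look roughly like the following sketch; it assumes the rpy2 2.9.x conversion API, and the helper name is mine, not from the original code:
from rpy2.robjects import pandas2ri

def split_back(df_r, target_variable):
    # Convert the R data.frame back to pandas (rpy2 2.9.x API), then
    # separate the target column from the feature columns again.
    df_back = pandas2ri.ri2py(df_r)
    y = df_back[target_variable].values
    X = df_back.drop(columns=[target_variable]).values
    return X, y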
I have had this same error message, "Process finished with exit code -1073741819 (0xC0000005)", with PyCharm 2021.1.
It happened because I had selected Python 3.9 as the interpreter while PyCharm was actually trying to use Python 3.10, and in fact I only had Python 3.8 installed.
As far as I can tell, the error disappeared after I selected Python 3.8 as the interpreter.
For some time I have been getting the following error (warning?):
ERROR! Session/line number was not unique in database. History logging moved to new session <XXXX>
when working with Jupyter Notebook (<XXXX> is a number, e.g. 9149).
As the same error has been reported for Spyder (Spyder's Warning: "Session/line number not unique in database") my guess is that there is some problem with the IPython kernel logging.
The question is: may there be any relation between running my code and the error?
Is it likely the error is caused by my code? I touch the IPython API as follows:
import IPython

def beep():
    IPython.display.display(IPython.display.Audio(url="http://www.w3schools.com/html/horse.ogg", autoplay=True))

def play_sound(self, etype, value, tb, tb_offset=None):
    self.showtraceback((etype, value, tb), tb_offset=tb_offset)
    beep()

get_ipython().set_custom_exc((Exception,), play_sound)
I use the beep() function in my code. I also work with large data which results in MemoryError exceptions.
And more importantly, may the error affect my code behaviour (given I do not try to access the logs)?
[EDIT]
It seems the issue is different than Spyder's Warning: "Session/line number not unique in database" as I am able to reproduce it with Jupyter Notebook but not with Spyder.
This is only a partial answer - the bounty is still available.
The error does depend on my code - at least when there is a SyntaxError.
I have reproduced it with three following cells.
In [31]: print(1)
1
In [31]: print 2
File "<ipython-input-32-9d8034018fb9>", line 1
print 2
^
SyntaxError: Missing parentheses in call to 'print'
In [32]: print(2)
2
ERROR! Session/line number was not unique in database. History logging moved to new session 7
As you can see, the line counter was not increased in the second cell (the one with the syntax issue).
Inspired by #zwer's comment, I have queried the $HOME/.ipython/profile_default/history.sqlite database:
sqlite> select session, line, source from history where line > 30;
6|31|print(1)
6|32|print 2
7|32|print(2)
It is clear that the line counter for the second cell was increased in the database, but not in the notebook.
Thus, when the third cell was executed successfully, the notebook attempted to store its source with the same line number, which violated the PRIMARY KEY constraint:
sqlite> .schema history
CREATE TABLE history
(session integer, line integer, source text, source_raw text,
PRIMARY KEY (session, line));
As a result, a failsafe was triggered which issued the warning and created a new session.
I guess the issue does not affect my code's behaviour; however, I lack a credible source for that statement.
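For the curious, the offending constraint is easy to reproduce outside of IPython entirely, using Python's sqlite3 module with the same schema:
import sqlite3

# Recreate the history table and insert the same (session, line) pair twice,
# mirroring what the notebook attempted after the SyntaxError cell.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE history
               (session integer, line integer, source text, source_raw text,
                PRIMARY KEY (session, line))""")
con.execute("INSERT INTO history VALUES (6, 32, 'print 2', 'print 2')")
try:
    con.execute("INSERT INTO history VALUES (6, 32, 'print(2)', 'print(2)')")
except sqlite3.IntegrityError as exc:
    print(exc)  # UNIQUE constraint failed: history.session, history.line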
I experienced the same error when I was trying to run some asyncio code in a Jupyter notebook. The gist was like this (it might make sense to those familiar with asyncio):
cell #1
output = loop.run_until_complete(future)
cell #2
print(output)
Run both cells together, and I would get OP's error.
Merge the cells together like so, and it ran cleanly
cell #1
output = loop.run_until_complete(future)
print(output)
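For anyone who wants to try this outside a notebook, here is a self-contained version of the merged cell as a plain script; the compute() coroutine is a stand-in for whatever the original future awaited:
import asyncio

async def compute():
    # Stand-in for the real work behind `future` in the snippet above.
    await asyncio.sleep(0.1)
    return 42

loop = asyncio.new_event_loop()
output = loop.run_until_complete(compute())
print(output)
loop.close()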
This problem arises in Jupyter Notebook when cells end up with the same line number.
If you are in Jupyter Notebook, you can simply restart the kernel.
That will resolve the error.
I'm not looking for a solution here as I found a workaround; mostly I'd just like to understand why my original approach didn't work given that the work around did.
I have a dataframe of 2803 rows with the default numeric index. I want to replace that with the values in column 0, namely TKR.
So I use f.set_index('TKR') and get
f.set_index('TKR')
Traceback (most recent call last):
File "<ipython-input-4-39232ca70c3d>", line 1, in <module>
f.set_index('TKR')
TypeError: 'str' object is not callable
So I think maybe there's some noise in my TKR column, and rather than scrolling through 2803 rows I try f.head().set_index('TKR').
When that works I try f.head(100).set_index('TKR'), which also works. I continue with arguments of 500, 1000, and 1500, all of which work. So do 2800, 2801, 2802 and 2803. Finally I settle on
f.head(len(f)).set_index('TKR')
which works and will handle a different size dataframe next month. I would just like to understand why this works and the original, simpler, and (I thought) by the book method doesn't.
I'm using Python 3.6 (64 bit) and Pandas 0.18.1 on a Windows 10 machine
You might have accidentally assigned a value to pd.DataFrame.set_index.
An example of this mistake: f.set_index = 'intended_col_name'
As a result, for the rest of your code .set_index is a str, which is not callable, hence this error.
Try restarting your notebook, remove the offending code, and replace it with f.set_index('TKR')
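To illustrate, here is a minimal reproduction of the mistake, along with one possible way to recover without a restart. The recovery relies on the shadowing value landing as a plain instance attribute, which is my understanding of the assignment, so treat it as a sketch rather than a guarantee:
import pandas as pd

f = pd.DataFrame({'TKR': ['AAPL', 'MSFT'], 'px': [1.0, 2.0]})

f.set_index = 'TKR'      # the mistake: a string now shadows the method on this object
# f.set_index('TKR')     # would raise TypeError: 'str' object is not callable

del f.set_index          # remove the shadowing instance attribute
f = f.set_index('TKR')   # the real method is visible again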
I know it's been a long while, but I think some people may need the answer in the future.
What you are doing with f.set_index('TKR') is totally right, as long as 'TKR' is a column of the DataFrame f.
That is to say, this is a bug you are not supposed to hit. It almost always means you rebound some built-in method or function of Python in an earlier step (possibly 'set_index'). So the way to fix it is to review your code and find the offending part.
If you are using Jupyter Notebook, restarting it and re-running only this block can fix the problem.
I believe I have a solution for you.
I ran into the same problem and I was constructing my dataframes from a dictionary, like this:
df_beta = df['Beta']
df_returns = df['Returns']
then, trying to do df_beta.set_index('Date') would fail. My workaround was
df_beta = df['Beta'].copy()
df_returns = df['Returns'].copy()
So apparently, if you build your dataframes as a "view" of another existing dataframe, you can't set the index, and it raises the 'Series' object is not callable error. If instead you create an explicit new object by copying the original dataframes, then you can call set_index, which is effectively what you do when you compute the head.
Hope this helps, 2 years later :)
I have the same problem here.
import tushare as ts
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
ts.set_token('*************************************')
tspro = ts.pro_api()
gjyx = tspro.daily(ts_code='000516.SZ', start_date='20190101')
# this doesn't work
# out:'str' object is not callable
gjyx = gjyx.set_index('trade_date')
# this works
gjyx = gjyx.head(len(gjyx)).set_index('trade_date')
jupyter notebook 6.1.6, python 3.9.1, miniconda3, win10
But when I upload this ipynb to ubuntu on AWS, it works.
I once had this same issue.
This simple line of code kept throwing the TypeError: 'Series' object is not callable error again and again.
df = df.set_index('Date')
I had to shut down my kernel and restart the Jupyter notebook to fix it.
I tried writing the code for a problem, but the module won't run. It says invalid syntax, but it's not highlighting anything.
The code: http://pastebin.com/cJVNBcYE
The problem: http://pastebin.com/p8E0E0Nj
I don't understand why it's not working.
I have numDealers set as a variable so that the info can be entered in the program. The arrays are all defined. I have index=0 and x=1 to set up the loop over the numDealers arrays for sales and commission. I have another array=index section to calculate commissions. And then I have the prints set up.
Why isn't the program working? I don't understand.
Please post your code in the question in future, with a full traceback of the error. However:
else print(sales[index]) and print(comm[index])
should be:
else:
    print(sales[index]) and print(comm[index])
i.e. you are missing a colon
I'm a bit puzzled by the and. Since print() returns None, which is falsy, the and short-circuits and the second print will never be executed. Did you mean:
else:
    print(sales[index])
    print(comm[index])
?
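A quick demonstration of that short-circuit behaviour:
# print() returns None, which is falsy, so `and` stops after the first call.
result = print("first") and print("second")   # only "first" is printed
print(result)                                 # None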
By the way, it appears you are not using arrays but lists. The Python standard library includes a module called array (https://docs.python.org/3/library/array.html), which you do not appear to be using. So don't name a list array; that collides with the standard library module name and can cause no end of confusion.
My Python script uses an ADODB.Recordset object. I use an ADODB.Command object with a collection of ADODB.Parameter objects to update a record in the set. After that, I check the state of the recordset, and it was 1, which is adStateOpen. But when I call MyRecordset.Close(), I get an exception complaining that the operation is invalid in the set's current state. What state could an open recordset be in that would make it invalid to close it, and what can I do to fix it?
Code is scattered between a couple of files. I'll work on getting an illustration together.
Yes, that was the problem. Once I change the value of one of a recordset's ADODB.Field objects, I have to either commit the change with ADODB.Recordset.Update() or discard it with CancelUpdate() before closing the recordset.
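In Python that sequence might look like the sketch below, via pywin32's COM support; the connection string, table, and column names are placeholders, not from my actual code:
import win32com.client

conn = win32com.client.Dispatch("ADODB.Connection")
conn.Open("Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI")

rs = win32com.client.Dispatch("ADODB.Recordset")
rs.Open("SELECT * FROM MyTable", conn, 1, 3)  # adOpenKeyset=1, adLockOptimistic=3

rs.Fields("SomeColumn").Value = "new value"   # the recordset now holds a pending edit

# Closing now would raise the "invalid in the current state" error;
# commit or discard the pending edit first.
rs.Update()        # or rs.CancelUpdate()
rs.Close()
conn.Close()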
The reason I'm going through all this rigmarole with the ADODB.Command object is that ADODB.Recordset.Update() fails at what seem to me to be random times, complaining that "query-based update failed because row to update cannot be found". I've never been able to predict when that will happen or find a reliable way to keep it from happening. My only choice when it happens is to replace the ADODB.Recordset.Update() call with the construction of a complete update query, executed using an ADODB.Connection or ADODB.Command object.