While following the Jupyter notebooks for the course
I hit upon an error when these lines are run.
I know that the cnn_learner line has no errors whatsoever; the problem lies in the lr_find() part.
It seems that learn.lr_find() does not want to return two values, even though its documentation says it returns a tuple. That is my problem.
These are the lines of code:
learn = cnn_learner(dls, resnet34, metrics=error_rate)
lr_min,lr_steep = learn.lr_find()
The error says:
not enough values to unpack (expected 2, got 1)
for the second line.
Also, I get a graph with a single marker, which I suppose is one of the values of lr_min or lr_steep.
When I run learn.lr_find() on its own, i.e. without capturing the output in lr_min and lr_steep, it runs fine, but then I do not get the min and steep learning rates (which are really important for me).
I read through what lr_find does, and it is clear that it returns a tuple. Its docstring says:
Launch a mock training to find a good learning rate and return suggestions based on suggest_funcs as a named tuple
I had duplicated the original notebook, and when I hit this error, I ran the original notebook, with the same results. I updated the notebooks as well, but no change!
Wherever I have searched for this online, no mention of this error has popped up. The only relevant thing I found is that lr_find() returns different learning-rate values after every run, which is perfectly fine.
I was having the same problem, and I found that the output of lr_find() has changed. You can replace the second line with lrs = learn.lr_find(suggest_funcs=(minimum, steep, valley, slide)), and then substitute lrs.minimum and lrs.steep wherever you were using lr_min and lr_steep, respectively. This should work fine and solve your problem.
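For reference, a minimal sketch of the updated call, assuming a recent fastai version where the course-style star import also exposes the suggestion functions (otherwise import them from fastai.callback.schedule). Newer fastai appears to return only the valley suggestion by default, which would explain both the single marker on the graph and the "expected 2, got 1" error.

from fastai.vision.all import *  # course-style import; assumed to expose minimum, steep, valley, slide

learn = cnn_learner(dls, resnet34, metrics=error_rate)

# Newer fastai returns a named tuple whose fields match the suggest_funcs passed in.
lrs = learn.lr_find(suggest_funcs=(minimum, steep, valley, slide))
print(lrs.minimum, lrs.steep)  # use these wherever lr_min and lr_steep were used before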
If you want to read more about it, you can see this post on the fastai forum.
Related
Well, first off, I used issues = jira.get_project_issuekey_all("project_name") to obtain all the issues in my project, but I am only getting 50 results back. How would I go about getting all of the results?
Secondly, once this is figured out, I am wondering how I would be able to check whether an issue has a certain label. The furthest I have gotten is jira.issue_field_value('issue key', 'labels').
I want to combine these two conditions so that it will return all the issues under a certain project with a specific label. Any thoughts?
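In case it helps, a hedged sketch of one way to do this, assuming the atlassian-python-api client the calls above come from and that your version's jql() method supports start/limit paging (check its signature); the URL, credentials, project key and label below are placeholders:

from atlassian import Jira

jira = Jira(url="https://your-jira.example.com", username="user", password="api_token")  # placeholder credentials

def issues_with_label(project_key, label, page_size=50):
    # Let JQL filter by project and label on the server, and page through the results.
    matching, start = [], 0
    while True:
        page = jira.jql(f'project = "{project_key}" AND labels = "{label}"',
                        start=start, limit=page_size)
        issues = page.get("issues", [])
        matching.extend(issue["key"] for issue in issues)
        if not issues or start + len(issues) >= page.get("total", 0):
            break
        start += len(issues)
    return matching

print(issues_with_label("project_name", "my-label"))  # placeholder project and label names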
In my code, I'm calculating the F1 score for my multi-labelled dataset with the following command:
f1_score_results.append(f1_score(y_train[col], y_pred_train[col_idx], average='macro'))
This code gives me the output, but an error also appears, which is given below.
I also wrote the following code
print(len(np.unique(X_train)))
print(len(np.unique(X_test)))
print(len(np.unique(y_train)))
print(len(np.unique(y_test)))
The output that I am getting is:
Kindly help me resolve this issue.
The error happens because you have too little testing data. Hence the warning, which essentially says: I don't have enough test data, so I can't give you the F-score and will just put in a 0.0 value, since I have nothing to compare to.
What you should do is try to increase the size of your y_train and y_test datasets, perhaps by imposing fewer conditions on them.
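To make the behaviour concrete, here is a hedged sketch with made-up labels showing why the warning appears and two common ways to handle it in scikit-learn (zero_division requires scikit-learn 0.22 or later):

import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 1, 2, 2, 1])  # class 0 occurs in the true labels...
y_pred = np.array([1, 1, 2, 2, 1])  # ...but is never predicted, so its F1 is undefined

# Option 1: make the undefined case explicit; missing classes contribute 0.0 to the macro average.
print(f1_score(y_true, y_pred, average='macro', zero_division=0))

# Option 2: average only over labels seen in both the true and predicted sets
# (this changes what the metric measures, so use it deliberately).
present = np.intersect1d(np.unique(y_true), np.unique(y_pred))
print(f1_score(y_true, y_pred, labels=present, average='macro', zero_division=0))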
I'm currently trying to learn more about deep learning/CNNs/Keras through what I thought would be a fairly simple project: just training a CNN to detect a single specific sound. It's been a lot more of a headache than I expected.
I'm currently reading through this, ignoring the second section about GPU usage; the first part definitely seems like exactly what I need. But when I go to run the script (mine is pretty much lifted wholesale from the section in the link above that says "Putting the pieces together, you may end up with something like this:"), it gives me this error:
AttributeError: 'DataFrame' object has no attribute 'file_path'
I can't find anything in the pandas documentation about a DataFrame.file_path function. So I'm confused as to what that part of the code is attempting to do.
My CSV file contains two columns, one with the paths and then a second column denoting the file paths as either positive or negative.
Sidenote: I'm also aware that this entire guide just may not be the thing I'm looking for. I'm having a very hard time finding any material that is useful for the specific project I'm trying to do and if anyone has any links that would be better I'd be very appreciative.
The expression df.file_path means that you want to access the file_path column of your DataFrame. It seems that your DataFrame does not contain this column. With df.head() you can check whether your DataFrame contains the needed fields.
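A minimal sketch of how to check and fix this, assuming the guide's script expects columns named file_path and label (adjust the names to whatever your script actually accesses; the CSV file name is a placeholder):

import pandas as pd

df = pd.read_csv("dataset.csv")      # placeholder file name
print(df.head())                     # see what the columns are actually called
print(df.columns.tolist())

# If the CSV has no header row, re-read it and supply the expected names.
df = pd.read_csv("dataset.csv", header=None, names=["file_path", "label"])

# Or rename existing columns to match what the script accesses.
# df = df.rename(columns={"path": "file_path", "class": "label"})  # hypothetical original names

print(df.file_path.head())           # df.file_path now resolves to the column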
So, I'm currently working with a Pyomo model with multiple instances that are being solved in parallel. The issue is that solving them takes Pyomo quite a long time (around 2 to 3 seconds, even though the actual solve by Gurobi takes about 0.08 s). I've found that by exporting a Pyomo instance to an .mps file and then handing it to gurobipy, I can get roughly a 30% increase in overall speed.
The problem comes later, when I want to work with the variables of the solved model, because I've noticed that when the original instance is exported from Pyomo to an .mps file, the variable names get lost; they all get named "x" (so, for example, model.Delta, model.Pg, model.Alpha, etc. get turned into x1, x2, ..., x9999 instead of Delta[0], Delta[1], ..., Alpha[99,99]).
Is there a way to keep the original variable name when exporting the model?
Managed to solve it!
To anyone who might find this useful: I passed a dictionary with "symbolic_solver_labels" as the io_options argument of the write method, like this:
instance.write(filename=str(es_) + ".mps", io_options={"symbolic_solver_labels": True})
Now my variables are correctly labeled in the .mps file!
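In case anyone needs the follow-up step, a hedged sketch of reading the labeled .mps back with gurobipy and mapping the solution to variable names (the exact way Pyomo mangles indexed names in the MPS file may differ, so inspect a few names first):

import gurobipy as gp

model = gp.read(str(es_) + ".mps")   # same file name as in the snippet above
model.optimize()

# With symbolic_solver_labels=True the names carry over, so the solution can be
# mapped back to the original components instead of anonymous x1, x2, ...
solution = {v.VarName: v.X for v in model.getVars()}
for name, value in list(solution.items())[:10]:
    print(name, value)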
I'm currently trying to fit a set of (positive) data with the powerlaw.Fit() function from the powerlaw package. However, every single time I do this I obtain the following message:
<powerlaw.Fit at 0x25eac6d3e80>
whose meaning I've been trying to figure out for ages, but obviously without success. Another issue I've been facing is that whenever I plot my CCDF using
powerlaw.plot_ccdf()
and my PDF using
powerlaw.plot_pdf()
with my data, I only obtain a plot for the CCDF but nothing for the PDF. Why are all of these things happening? My data is within a NumPy array and looks as follows:
array([ 9.90857053e-06, 3.45336391e-05, 4.06757403e-05, ...,
6.91411789e-02, 6.92511375e-02, 7.45046008e-02])
I doubt there is any kind of issue with my data, since, as I said, I get the plot for the CCDF more than fine. Any kind of help would be highly appreciated. Thanks in advance. (Edit: the data is composed of 1908 non-integer values)
It probably helps to read the documentation. http://pythonhosted.org/powerlaw/
powerlaw.Fit is a class, so when you call powerlaw.Fit(...), you will get an object with associated methods. Save the object in a variable, then pull the results you want from it. For example:
results = powerlaw.Fit(data)
print(results.find_xmin())
The 'message' you are getting is just the default representation of the Fit object that is created.
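A hedged sketch of typical usage built on that idea, following the powerlaw package's documented Fit interface (data stands for your NumPy array):

import matplotlib.pyplot as plt
import powerlaw

fit = powerlaw.Fit(data)          # continuous (non-integer) data is handled by default

print(fit.power_law.alpha)        # fitted exponent
print(fit.xmin)                   # lower cutoff chosen during the fit

# Plot both the PDF and the CCDF from the same Fit object on one axis.
ax = fit.plot_pdf(label="PDF")
fit.plot_ccdf(ax=ax, label="CCDF")
ax.legend()
plt.show()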