Hello, this question is about Python.
Is there a way to ignore all kinds of exceptions? I know I could wrap the whole code in one huge try/except, but I want execution to continue even if one part fails and, as a result, some other parts fail too. One way to achieve this would be to put every single line in its own try/except statement. But is there another, more elegant way to do this?
Well, you can:
1 - Put each separate part in its own try/except:
try:
    # something
except:
    pass
2 - Put everything in one bigger try/except:
try:
    # do something
    # do something else
    # do yet another thing
except:
    pass
Or you can use contextlib.suppress (https://docs.python.org/3/library/contextlib.html#contextlib.suppress), as Random Davis suggested, to ignore certain types of exceptions.
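For illustration, contextlib.suppress silences only the exception types you name, and anything else still propagates (the FileNotFoundError and file name here are just examples):

```python
import contextlib
import os

# Only FileNotFoundError is suppressed; any other exception still propagates.
with contextlib.suppress(FileNotFoundError):
    os.remove("somefile.tmp")  # fine even if the file does not exist

print("still running")
```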
But ignoring all exceptions is a really bad idea; instead you should do:
try:
    # something
except:
    # something else
As far as I know, there is no other "elegant" way to ignore exceptions.
(This could have been a comment, but I lack the rep to post comments.)
Besides using try, my Python professor will generally comment out any function he isn't testing so he can test a specific one. If part of your code isn't working, try adding a breakpoint before debugging; then you can run your code up until the breakpoint to see whether a certain line is doing what you want it to. In Visual Studio Code you can do this by clicking in the empty space just to the left of the line numbers; you will see a red dot on the line if done correctly.
If you mean that a function isn't doing what you want it to do, employing these methods will help you find the error, tracking the input all the way through until you get the output you want, going function by function.
If you mean that your code is simply too broken to run correctly, your code will always run until it hits an error; and if the error occurs early in the process of converting an input to an output, there can be many reasons that stop the functions that are supposed to run afterwards from working correctly. If that is the case, comment out the later functions until the first one is working, and keep working chronologically to debug the errors. In the future, I highly recommend posting your exact code when asking a coding question; otherwise it can be hard for others to extract the information needed to answer your question effectively. Good luck!
In reference to Global-Occult's answer: although you can simply try something and except the extraneous information, you really don't want to code like that, because as you develop higher-level programs that extra information will no longer be extraneous; in fact it could be very important data that lets you develop the program much further.
Is there a pythonic way to throw an exception to other developers to warn them about using a piece of code with bugs in it?
For example:
def times_two(x):
    raise BrokenException("Attn. This code is unreliable. Only works on positive numbers")
    x = abs(x) * 2
    return x
I understand that I can raise a generic exception with a message, or even derive my own exception classes, but I just want to know if there is a built-in, Pythonic way to do something like this.
Also, I understand why the actual times_two function doesn't work; that was just an example function.
This is not about validating input parameters or returned values.
It is simply about marking a function as potentially unreliable.
The code must be used in some areas under very specific circumstances, but devs who run across this function while writing code should be warned of its limitations.
Your example is pretty flawed for any use case in which alerting the developers would be needed; it would instead need to alert the user not to input a negative number.
def times_two(x):
    if x < 0:
        raise BrokenException("Attn user. Don't give me negative numbers.")
    return x * 2
That said, if your example more accurately described an actual error needing developer attention, you should just fix it rather than put the code into production knowing there is an error in it.
sentry.io, on the other hand, can help find errors and help developers fix them while in production. You may want to look into it if warnings aren't for you. From their README:
Sentry fundamentally is a service that helps you monitor and fix crashes in realtime. The server is in Python, but it contains a full API for sending events from any language, in any application.
The built-in exception ValueError is the one that should be used here.
def times_two(x):
    if x < 0:
        raise ValueError('{} is not a positive number.'.format(x))
    return x * 2
This seems like an XY problem. The original problem is that you have some code which is incomplete or otherwise known not to work. If it is something you are currently working on, then the correct tool to use here is your version control. With Git, you would create a new branch, which would only be merged into master and prepared for release to production after you have completed the work. You shouldn't release a partial implementation.
Do you want to stop execution when the function is called? If so, then some sort of exception, like the BrokenException in your example is a good way of doing this.
But if you want to warn the caller, and then continue on anyway, then you want a Warning instead of an exception. You can still create your own:
class BrokenCodeWarning(Warning):
    pass
When you issue a BrokenCodeWarning (via warnings.warn), execution will not be halted by default, but the warning will be printed to stderr.
The warnings filter controls whether warnings are ignored, displayed, or turned into errors (raising an exception).
https://docs.python.org/3.7/library/warnings.html#the-warnings-filter
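A minimal sketch of how that looks in practice (the function body and messages here are illustrative):

```python
import warnings

class BrokenCodeWarning(Warning):
    pass

def times_two(x):
    # Warn the caller, then carry on executing anyway.
    warnings.warn("times_two only works reliably on positive numbers",
                  BrokenCodeWarning)
    return abs(x) * 2

print(times_two(-3))  # warning goes to stderr; the call still returns 6

# The warnings filter can escalate this warning into a real exception:
warnings.simplefilter("error", BrokenCodeWarning)
try:
    times_two(-3)
except BrokenCodeWarning as w:
    print("now raised as an error:", w)
```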
I was able to write a program in Python to do my data analyses. The program runs fine with a small MCVE dataset from beginning to end. But when I run it on my big dataset, all works well until somewhere the data structure becomes faulty and I get a TypeError. Since the program is big and creates several datasets on the fly, I am not able to track at which specific line of the big dataset the data structure really gets messed up.
Problem: I want to know at what line of my data the data structure is wrong. Is there any easy way to do it?
I can tell which function the problem is coming from. But my problem isn't with the function; it's with the data, which probably has a subtle structural problem somewhere. The data runs through several times until it hits the problem, but I cannot tell where. I tried adding a print call to visually trace it down, but the data is so huge, with lots of similar patterns, that it is really hard to trace back to the main big dataset.
I am not sure if I should put my scripts here, but I think there are suggestions I can receive without posting my whole program on SE.
Any info appreciated.
Code would help, but without it, all I can think of is to keep track of the line number and include it in your error message. Use a try:
line_number = 0
for line in your_file:
    line_number += 1
    try:
        <do your thing>
    except TypeError:
        print("Error at line number {}".format(line_number))
EDIT: This will simply print the line number and keep going. You could also re-raise the error if you want to halt processing.
I am trying to write a function that imports data from a Stata .dta file using the pandas read_stata function. I would like to detect any problems with the read process (for example, the file doesn't exist) using something akin to:
try:
    data = read_stata('filename.dta')
except someTypeOfException:
    print "Error"
    exit(0)
so I can print a message and exit gracefully. However, I can't find any information about the exceptions raised by read_stata when there is a problem. I'm new to Python and pandas, and I may not be expressing my web searches correctly. Or I may be barking up the wrong tree altogether, of course. Can anyone point me in the right direction, please?
Thanks in advance.
I think your question is too broad. There are too many possible exceptions: some of them may be related to read_stata(), some may not. The one you mentioned, the file not existing, would result in an IOError, which is not even read_stata related.
To see all the possible exceptions that may be raised by read_stata(), go check its source code, located in <path to pandas>/io/stata.py. That should give you a good place to start.
Simple question:
Is there some code or function I can add into most scripts that would let me know it's running?
So after you execute foo.py, most people would just see a blinking cursor. I am currently running a new large script and it seems to be working, but I won't know until either an error is thrown or it finishes (it might not finish).
I assume you could put a simple print "foo-bar" at the end of each for loop in the script?
Any other neat visual read-out tricks?
I like clint.progress.bar. For logging, you can check Lggr.
The print "foo-bar" trick is basically what people do for quick&dirty scripts. However, if you have lots and lots of loops, you don't want to print a line for each one. Besides the fact that it'll fill the scrollback buffer of your terminal, on many terminals it's hard to see whether anything is happening when all it's doing is printing the same line over and over. And if your loops are quick enough, it may even mean you're spending more time printing than doing the actual work.
So, there are some common variations to this trick:
Print characters or short words instead of full lines.
Print something that's constantly changing.
Only print every N times through the loop.
To print a word without a newline, you just print 'foo',. To print a character with neither a newline nor a space, you have to sys.stdout.write('.'). Either way, people can see the cursor zooming along horizontally, so it's obvious how fast the feedback is.
If you've got a for n in … loop, you can print n. Or, if you're progressively building something, you can print len(something), or outfile.tell(), or whatever. Even if it's not objectively meaningful, the fact that it's constantly changing means you can tell what's going on.
The easiest way to not print all the time is to add a counter, and do something like counter += 1; if counter % 250 == 0: print 'foo'. Variations on this include checking the time, and printing only if it's been, say, 1 or more seconds since the last print, or breaking the task into subtasks and printing at the end of each subtask.
And obviously you can mix and match these.
But don't put too much effort into it. If this is anything but a quick&dirty aid to your own use, you probably want something that looks more professional. As long as you can expect to be on a reasonably usable terminal, you can print a \r without a \n and overwrite the line repeatedly, allowing you to draw nice progress bars or other feedback (ala curl, wget, scp, and other similar Unix tools). But of course you also need to detect when you're not on a terminal, or at least write this stuff to stderr instead of stdout, so if someone redirects or pipes your script they don't get a bunch of garbage. And you might want to try to detect the terminal width, and if you can detect it and it's >80, you can scale the progress bar or show more information. And so on.
This gets complicated, so you probably want to look for a library that does it for you. There are a bunch of choices out there, so look through PyPI and/or the ActiveState recipes.
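As a rough sketch of the \r trick combined with the print-every-N idea (the total, width, and update interval are made up for illustration; output goes to stderr so pipes stay clean):

```python
import sys
import time

total = 50
for i in range(1, total + 1):
    time.sleep(0.01)                      # stand-in for the real work
    if i % 5 == 0 or i == total:          # only update every N iterations
        bar = '#' * (i * 20 // total)
        # \r returns to the start of the line; no \n, so we overwrite in place
        sys.stderr.write('\r[{:<20}] {}/{}'.format(bar, i, total))
        sys.stderr.flush()
sys.stderr.write('\n')
```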
You could log things to a file. You could print out dots, like print ".".
I'm thinking about where to write the log record around an operation. Here are two different styles. The first one writes the log before the operation.
Before:
log.info("Perform operation XXX")
operation()
And here is a different style, which writes the log after the operation.
After:
operation()
log.info("Operation XXX is done.")
With the before-style, the logging records say what the program is about to do. The pro of this style is that when something goes wrong, the developer can locate it easily, because they know what the program was doing at that point. The con is that you cannot be sure the operation finished correctly; if something goes wrong inside the operation, for example a function call that blocks and never returns, you can never tell from the logging records. With the after-style, you are sure the operation is done.
Of course, we can mix the two styles together:
Both:
log.info("Perform operation XXX")
operation()
log.info("Operation XXX is done.")
But I feel that is kind of verbose, since it produces duplicate logging records. So here is my question: what is a good logging style? I would like to know what you think.
I'd typically use two different log levels.
The first one I put on a "debug" level, and the second one on an "info" level. That way typical production machines would only log what's being done, but I can turn on the debug logging and see what it tries to do before it errors out.
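A minimal sketch of that split using the standard logging module (the logger name and operation body are arbitrary):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("app")

def operation():
    pass  # stand-in for the real work

log.debug("Performing operation XXX")   # visible only when debug logging is on
operation()
log.info("Operation XXX is done.")      # visible at the normal production level
```

With the level set to logging.INFO instead, only the second line would appear.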
It all depends on what you want to log. If you're interested in knowing that the code got to the point where it's about to do an operation, log before. If you want to make sure the operation succeeded, log after. If you want both, do both.
Maybe you could use something like a try/except? Here's a naive Python example:
try:
    operation()
    log.info("Operation XXX is done.")
except Exception:
    log.info("Operation XXX failed.")
    raise  # optional: re-raise if you want to propagate the failure to another try/except and/or crash eventually
The operation will be launched.
If it doesn't fail (no exception raised), you get a success statement in the logs.
If it fails (by raising an exception, like the disk being full or whatever your operation hits), the exception is caught and you get a failure statement.
The log is more meaningful: you keep the verbosity to a one-liner and still get to know whether the operation succeeded. Best of all choices.
Oh, and you get a hook point where you can add some code to be executed in case of failure.
I hope it helps.
There's another style that I've seen used in Linux boot scripts and in strace. It's got the advantages of your combined style with less verbosity, but you've got to make sure that your logging facility isn't doing any buffering. I don't know log.info, so here's a rough example with print:
print "Doing XXX... ", # Note lack of newline :)
operation()
print "Done."
(Since in most cases print uses buffering, using this example verbatim won't work properly. You won't see "Doing XXX" until you see the "Done". But you get the general idea.)
The other disadvantage of this style is that things can get mixed up if you have multiple threads writing to the same log.
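The buffering caveat can be worked around by flushing explicitly; here is a Python 3 version of the same idea (the original snippet is Python 2 style):

```python
import time

def operation():
    time.sleep(0.1)  # stand-in for the real work

print("Doing XXX... ", end="", flush=True)  # no newline; flushed immediately
operation()
print("Done.")
```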