equivalent of Python's "with" in Ruby

In Python, the with statement is used to make sure that clean-up code always gets called, regardless of exceptions being thrown or function calls returning. For example:
with open("temp.txt", "w") as f:
f.write("hi")
raise ValueError("spitespite")
Here, the file is closed, even though an exception was raised. A better explanation is here.
Is there an equivalent for this construct in Ruby? Or can you code one up, since Ruby has continuations?

Ruby has syntactically lightweight support for literal anonymous procedures (called blocks in Ruby). Therefore, it doesn't need a new language feature for this.
So, what you normally do is write a method which takes a block of code, allocates the resource, executes the block of code in the context of that resource, and then closes the resource.
Something like this:
def with(klass, *args)
  yield r = klass.open(*args)
ensure
  r.close
end
You could use it like this:
with File, 'temp.txt', 'w' do |f|
  f.write 'hi'
  raise 'spitespite'
end
However, this is a very procedural way to do this. Ruby is an object-oriented language, which means that the responsibility of properly executing a block of code in the context of a File should belong to the File class:
File.open 'temp.txt', 'w' do |f|
  f.write 'hi'
  raise 'spitespite'
end
This could be implemented something like this:
def File.open(*args)
  f = new(*args)
  return f unless block_given?
  yield f
ensure
  f.close if block_given?
end
This is a general pattern that is implemented by lots of classes in the Ruby core library, standard libraries and third-party libraries.
A closer correspondence to the generic Python context manager protocol would be:
def with(ctx)
  yield ctx.setup
ensure
  ctx.teardown
end

class File
  def setup; self end
  alias_method :teardown, :close
end

with File.open('temp.txt', 'w') do |f|
  f.write 'hi'
  raise 'spitespite'
end
Note that this is virtually indistinguishable from the Python example, but it didn't require the addition of new syntax to the language.
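In fact, you can play the same trick in Python without the with statement, by passing a plain function where Ruby would pass a block; a rough sketch (run_with and body are made-up names here, not a real API):

def run_with(ctx, body):
    # set up, run the body, and always tear down, mirroring the Ruby helper
    thing = ctx.__enter__()
    try:
        body(thing)
    finally:
        ctx.__exit__(None, None, None)

run_with(open('temp.txt', 'w'), lambda f: f.write('hi'))

The with statement is essentially syntactic sugar for this pattern, just as Ruby's block syntax is sugar for passing an anonymous procedure.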

The equivalent in Ruby would be to pass a block to the File.open method.
File.open(...) do |file|
  # do stuff with file
end # file is closed
This is the idiom that Ruby uses and one that you should get comfortable with.

You could use Block Arguments to do this in Ruby:
class Object
  def with(obj)
    obj.__enter__
    begin
      yield
    ensure
      obj.__exit__ # ensure runs the cleanup even if the block raises
    end
  end
end
Now, you could add __enter__ and __exit__ methods to another class and use it like this:
with GetSomeObject("somefile.text") do |foo|
  do_something_with(foo)
end

I'll just add some more explanations for others; credit should go to them.
Indeed, in Ruby, clean-up code goes, as others said, in an ensure clause; but wrapping things in blocks is ubiquitous in Ruby, and this is how it is done most efficiently and most in the spirit of Ruby. When translating, don't translate directly word-for-word, or you will get some very strange sentences. Similarly, don't expect everything in Python to have a one-to-one correspondence in Ruby.
From the link you posted:
class controlled_execution:
    def __enter__(self):
        set things up
        return thing
    def __exit__(self, type, value, traceback):
        tear things down

with controlled_execution() as thing:
    some code
The Ruby way would be something like this (man, I'm probably doing this all wrong :D):
def controlled_executor
  begin
    do_setup
    yield
  ensure
    do_cleanup
  end
end

controlled_executor do
  some_code
end
Obviously, you can add arguments to both the controlled executor (to be called in the usual fashion) and to yield (in which case you need to add arguments to the block as well). Thus, to implement what you quoted above:
class File
  def self.my_open(file, mode = "r")
    handle = open(file, mode)
    begin
      yield handle
    ensure
      handle.close
    end
  end
end
File.my_open("temp.txt", "w") do |f|
  f.write("hi")
  raise Exception.new("spitespite")
end

It's possible to open, write, and close a file in a single call in Ruby, like so:
File.write("temp.txt", "hi")
raise 'spitespite'
Writing code like this means that it is impossible to accidentally leave a file open.

You could always use a try..catch..finally block, where the finally section contains code to clean up.
Edit: sorry, misspoke: you'd want begin..rescue..ensure.

I believe you are looking for ensure.

What is the difference between 'open("file_path")' and 'with open("file_path")' in Python 3.8.10 and which one is most suitable to use? [duplicate]

This question already has an answer here:
What is the purpose of a context manager in python [duplicate]
I am studying Python and I found there are two ways of opening a file.
The first one is,
myreadfile = open("bear.txt", "r")
content = myreadfile.read()
The second method is
with open("bear.txt") as file:
content = file.read()
I want to know whether there is any difference between these two methods and which one is more suitable to use.
They are context managers.
Explanation:
The with statement uses a context manager: if you use it for reading or writing files, it will automatically close the file, so you don't need to add a file.close() line, as mentioned in the docs:
Context managers allow you to allocate and release resources precisely when you want to. The most widely used example of context managers is the with statement. Suppose you have two related operations which you’d like to execute as a pair, with a block of code in between. Context managers allow you to do specifically that.
Examples:
There are examples in the docs, a regular with statement:
with open('some_file', 'w') as opened_file:
    opened_file.write('Hola!')
Is equivalent to:
file = open('some_file', 'w')
try:
    file.write('Hola!')
finally:
    file.close()
It says that:
While comparing it to the first example we can see that a lot of boilerplate code is eliminated just by using with. The main advantage of using a with statement is that it makes sure our file is closed without paying attention to how the nested block exits.
A brief introduction of the implementation:
As mentioned in the docs, the context manager could be implemented with a class:
At the very least a context manager has an __enter__ and __exit__ method defined.
As shown there, an example context manager implementation in a class would be something like:
class File(object):
    def __init__(self, file_name, method):
        self.file_obj = open(file_name, method)
    def __enter__(self):
        return self.file_obj
    def __exit__(self, type, value, traceback):
        self.file_obj.close()

with File('demo.txt', 'w') as opened_file:
    opened_file.write('Hola!')
The code can behave like a context manager due to the magic methods, __enter__ and __exit__.
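The same protocol can also be implemented with a generator instead of a class, via contextlib.contextmanager; a minimal sketch (the helper name opened is made up here):

from contextlib import contextmanager

@contextmanager
def opened(file_name, method):
    file_obj = open(file_name, method)   # runs when the with block is entered
    try:
        yield file_obj                   # the body of the with block runs here
    finally:
        file_obj.close()                 # runs on exit, even if the body raises

with opened('demo.txt', 'w') as opened_file:
    opened_file.write('Hola!')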
The first one does not close the file automatically. The second one does.
As far as I understand, with the first one you need to close the file after you're done with the operations, while with the latter the file is automatically closed after the indented block executes.
The second method is the recommended one. This with syntax is known as a context manager and will automatically close the file once the context is exited, including when something goes wrong during the operation.
In the first case, the file is opened and read. It stays open afterwards.
In the second case, you use the file object as a so-called "context manager". Special methods get called on entering and leaving the with block: on leaving, it is closed. This is superior to the other, even superior to
myreadfile = open("bear.txt", "r")
content = myreadfile.read()
myreadfile.close()
because the close() line isn't reached when read() throws an exception.
It is more like
myreadfile = open("bear.txt", "r")
try:
    content = myreadfile.read()
finally:
    myreadfile.close()
but easier to use.

Run some common code before and after a block

In a current project, I found myself often writing code like so:
statement_x()
do_something()
do_other_thing()
statement_y()
# ...
statement_x()
do_third_thing()
do_fourth_thing()
statement_y()
As you can see, statement_x and statement_y often get repeated, and they are always paired, but I am unable to condense them into a single statement. What I would really like is a language construct like this:
def env wrapping:
    statement_x()
    run_code
    statement_y()
In this case, I'm pretending env is a Python keyword indicating a special "sandwich function" that runs certain statements before and after a given block, the point of entry of the block being indicated by the second keyword run_code.
My above program can now be made more readable using this construct:
env wrapping:
    do_something()
    do_other_thing()

env wrapping:
    do_third_thing()
    do_fourth_thing()
Which I mean to have the exact same behavior.
As far as I know such a construct does not exist, and the point of my question is not to speculate on future Python features. However, surely this situation of "run some common code before and after a variable block" must occur often enough that Python has a convenient way of dealing with it! What is this way? Or is the Pythonic solution to simply give up and accept the repetition?
PS: I realize that I could write a function that takes the variable statements as an argument, but that would not be very user-friendly - I would end up writing huge lists of statements inside the parens of my function.
You can use a with statement.
Example using contextlib.contextmanager:
import contextlib

@contextlib.contextmanager
def doing_xy():
    print('statement_x')
    yield
    print('statement_y')
Example usage:
>>> with doing_xy():
...     print('do_something')
...     print('do_other_thing')
...
statement_x
do_something
do_other_thing
statement_y
>>> with doing_xy():
...     print('do_third_thing')
...     print('do_fourth_thing')
...
statement_x
do_third_thing
do_fourth_thing
statement_y
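One caveat: as written, doing_xy skips statement_y when the block raises an exception. If the after-code must always run, wrap the yield in try/finally (reusing the import above):

@contextlib.contextmanager
def doing_xy():
    print('statement_x')
    try:
        yield
    finally:
        print('statement_y')  # now runs even if the block raises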

Can I use python with statement for conditional execution?

I'm trying to write code that supports the following semantics:
with scope('action_name') as s:
    do_something()
    ...
do_some_other_stuff()
The scope, among other things (setup, cleanup), should decide if this section should run.
For instance, if the user configured the program to bypass 'action_name', then after scope() is evaluated, do_some_other_stuff() will be executed without calling do_something() first.
I tried to do it using this context manager:
from contextlib import contextmanager

@contextmanager
def scope(action):
    if action != 'bypass':
        yield
but got a RuntimeError: generator didn't yield exception (when action is 'bypass').
I am looking for a way to support this without falling back to the more verbose optional implementation:
with scope('action_name') as s:
    if s.should_run():
        do_something()
    ...
do_some_other_stuff()
Does anyone know how I can achieve this?
Thanks!
P.S. I am using python2.7
EDIT:
The solution doesn't necessarily have to rely on with statements. I just didn't know exactly how to express it without it. In essence, I want something in the form of a context (supporting setup and automatic cleanup, unrelated to the contained logic) and allowing for conditional execution based on parameters passed to the setup method and selected in the configuration.
I also thought about a possible solution using decorators. Example:
@scope('action_name')  # if 'action_name' is in allowed actions, do:
                       #   setup()
                       #   do_action_name()
                       #   cleanup()
                       # otherwise return
def do_action_name():
    do_something()
but I don't want to enforce too much of the internal structure (i.e., how the code is divided to functions) based on these scopes.
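For concreteness, the decorator I have in mind might look something like this sketch, where setup, cleanup, and ALLOWED_ACTIONS are placeholders for the real configuration, not an existing API:

from functools import wraps

ALLOWED_ACTIONS = {'action_name'}  # placeholder for the user's configuration

def scope(action):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if action not in ALLOWED_ACTIONS:
                return  # bypass: skip setup, the body, and cleanup
            setup()
            try:
                return func(*args, **kwargs)
            finally:
                cleanup()
        return wrapper
    return decorator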
Does anybody have some creative ideas?
You're trying to modify the expected behaviour of a basic language construct. That's never a good idea; it will just lead to confusion.
There's nothing wrong with your work-around, but you can simplify it just a bit.
@contextmanager
def scope(action):
    yield action != 'bypass'

with scope('action_name') as s:
    if s:
        do_something()
    ...
do_some_other_stuff()
Your scope could instead be a class whose __enter__ method returns either a useful object or None and it would be used in the same fashion.
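A minimal sketch of that class-based variant (the name Scope is made up here):

class Scope(object):
    def __init__(self, action):
        self.action = action
    def __enter__(self):
        # setup would go here; returning None signals "skip this section"
        return self if self.action != 'bypass' else None
    def __exit__(self, exc_type, exc_value, traceback):
        # cleanup would go here; returning False propagates any exception
        return False

with Scope('action_name') as s:
    if s:
        do_something()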
The following seems to work:
from contextlib import contextmanager

@contextmanager
def skippable():
    try:
        yield
    except RuntimeError as e:
        if e.message != "generator didn't yield":
            raise

@contextmanager
def context_if_condition():
    if False:
        yield True

with skippable(), context_if_condition() as ctx:
    print "won't run"
Considerations:
needs someone to come up with better names
context_if_condition can't be used without skippable but there's no way to enforce that/remove the redundancy
it could catch and suppress the RuntimeError from a deeper function than intended (a custom exception could help there, but that makes the whole construct messier still)
it's not any clearer than just using @Mark Ransom's version
I don't think this can be done. I tried implementing a context manager as a class and there's just no way to force the block to raise an exception which would subsequently be squelched by the __exit__() method.
I have the same use case as you, and came across the conditional library that someone has helpfully developed in the time since you posted your question.
From the site, its use is as:
with conditional(CONDITION, CONTEXTMANAGER()):
    BODY()

Self Modifying Python? How can I redirect all print statements within a function without touching sys.stdout?

I have a situation where I am attempting to port some big, complex python routines to a threaded environment.
I want to be able to, on a per-call basis, redirect the output from the function's print statement somewhere else (a logging.Logger to be specific).
I really don't want to modify the source for the code I am compiling, because I need to maintain backwards compatibility with other software that calls these modules (which is single threaded, and captures output by simply grabbing everything written to sys.stdout).
I know the best option is to do some rewriting, but I really don't have a choice here.
Edit -
Alternatively, is there any way I can override the local definition of print to point to a different function?
I could then define the local print as the system print unless it is overridden by a kwarg, which would only involve modifying a few lines at the beginning of each routine.
In Python2.6 (and 2.7), you can use
from __future__ import print_function
Then you can change the code to use the print() function as you would for Python3
This allows you to create a module-global or local function called print, which will be used in preference to the builtin function. For example:
from __future__ import print_function

def f(x, print=print):
    print(x*x)

f(5)
L=[]
f(6, print=L.append)
print(L)
Modifying the source code doesn't need to imply breaking backward compatibility.
What you need to do is first replace every print statement with a call to a function that does the same thing:
import sys

def _print(*args, **kw):
    sep = kw.get('sep', ' ')
    end = kw.get('end', '\n')
    file = kw.get('file', sys.stdout)
    file.write(sep.join(map(str, args)))  # convert arguments the way print does
    file.write(end)
def foo():
    # print "whatever","you","want"
    _print("whatever", "you", "want")
Then the second step is to stop using the _print function directly and make it a keyword argument:
def foo(_print=_print):
    ...
and make sure to change all internal function calls to pass the _print function around.
Now all the existing code will continue to work and will use print, but you can pass in whatever _print function you want.
Note that the signature of _print is exactly that of the print function in more recent versions of Python, so as soon as you upgrade you can just change it to use print(). Also you may get away with using 2to3 to migrate the print statements in the existing code which should reduce the editing required.
Someone in the sixties had an idea about how to solve this, but it requires a bit of alien technology. Unfortunately, Python has no "current environment" concept, and this means you cannot provide context unless you specify it in calls as a parameter.
For handling just this specific problem, what about replacing stdout with a file-like object that behaves depending on a thread-specific context? This way the source code remains the same, but you can get, for example, a separate log for each thread. It's even easy to do this in a specific per-call way... for example:
import sys
from threading import current_thread

separate_logs = {}        # maps a thread to its own log; filled in per call
old_stdout = sys.stdout   # keep the real stream for unregistered threads

class MyFakeStdout:
    def write(self, s):
        try:
            separate_logs[current_thread()].write(s)
        except KeyError:
            old_stdout.write(s)
and then have a helper that installs a log for the current thread for the duration of a call, usable with a with statement.
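A sketch of that helper, reusing the separate_logs dictionary from above (thread_log is a made-up name):

from contextlib import contextmanager

sys.stdout = MyFakeStdout()  # install the dispatching stream once at startup

@contextmanager
def thread_log(log_file):
    # route this thread's prints to log_file for the duration of the block
    separate_logs[current_thread()] = log_file
    try:
        yield
    finally:
        del separate_logs[current_thread()]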
PS: I saw the "without touching stdout" in the title but I thought this was because you wanted only some thread to be affected. Touching it while still allowing other threads to work unaffected seems to me compatible with the question.

Is this an acceptable pythonic idiom?

I have a class that assists in importing a special type of file, and a 'factory' class that allows me to do these in batch. The factory class uses a generator so the client can iterate through the importers.
My question is, did I use the iterator correctly? Is this an acceptable idiom? I've just started using Python.
class FileParser:
    """ uses an open filehandle to do stuff """
class BatchImporter:
    def __init__(self, files):
        self.files = files

    def parsers(self):
        for file in self.files:
            try:
                fh = open(file, "rb")
                parser = FileParser(fh)
                yield parser
            finally:
                fh.close()

    def verifyfiles(
    def cleanup(
---
importer = BatchImporter(filelist)
for p in importer.parsers():
    p.method1()
    ...
You could make one thing a little simpler: Instead of try...finally, use a with block:
with open(file, "rb") as fh:
yield FileParser(fh)
This will close the file for you automatically as soon as the with block is left.
It's absolutely fine to have a method that's a generator, as you do. I would recommend making all your classes new-style (if you're on Python 2, either set __metaclass__ = type at the start of your module, or add (object) to all your base-less class statements), because legacy classes are "evil";-); and, for clarity and conciseness, I would also recommend coding the generator differently...:
def parsers(self):
    for afile in self.files:
        with open(afile, "rb") as fh:
            yield FileParser(fh)
but neither of these bits of advice condemns in any way the use of generator methods!-)
Note the use of afile in lieu of file: the latter is a built-in identifier, and as a general rule it's better to get used to not "hide" built-in identifiers with your own (it doesn't bite you here, but it will in many nasty ways in the future unless you get into the right habit!-).
The design is fine if you ask me, though using finally the way you use it isn't exactly idiomatic. Use except and maybe re-raise the exception (using the raise keyword alone, otherwise you mess up the stack trace), and for bonus points, don't use a bare except: but except Exception: (otherwise you also catch SystemExit and KeyboardInterrupt).
Or simply use the with-statement as shown by Tim Pietzcker.
In general, it isn't safe to close the file after you yield a parser object that will try to read it. Consider this code:
parsers = list(importer.parsers())
for p in parsers:
    # the file object that p holds will already be closed!
If you're not writing a long-running daemon process, most of the time you don't need to worry about closing files -- they will all get closed when your program exits, or when the file objects are garbage-collected. (And if you use CPython, that will happen as soon as all references to them are lost, since CPython uses reference counting.)
Nevertheless, taking care to free resources is a good habit to acquire, so I would probably write the FileParser class this way:
class FileParser:
    def __init__(self, file_or_filename, closing=False):
        if hasattr(file_or_filename, 'read'):
            self.f = file_or_filename
            self._need_to_close = closing
        else:
            self.f = open(file_or_filename, 'rb')
            self._need_to_close = True

    def close(self):
        if self._need_to_close:
            self.f.close()
            self._need_to_close = False
and then BatchImporter.parsers would become
def parsers(self):
    for file in self.files:
        yield FileParser(file)
or, if you love functional programming
def parsers(self):
    return itertools.imap(FileParser, self.files)  # needs: import itertools
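Incidentally, FileParser itself could support the with statement by adding the two protocol methods; a sketch building on the class above:

class FileParser:
    # __init__ and close as above ...
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.close()   # guaranteed cleanup, even when the body raises
        return False   # let any exception propagate

with FileParser("some.file") as p:
    pass  # parse here; the underlying file is closed on exit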
An aside: if you're new to Python, I recommend you take a look at the Python style guide (also known as PEP 8). Two-space indents look weird.
