Best practice for using an argument's class method - Python

I am trying to figure out the solution for the following problem:
#ExampleA.py
class a:
    def my_great_method_A(self):
        pass

#ExampleB.py
def functionX(inst_a):  # Argument 'inst_a' will always be an instance of ExampleA's class a.
    inst_a.my_great_method_A()  # <---
I use LiClipse as a Python editor. When I am typing the last line, "inst_a.my_gr...", I want the editor's auto-completion feature to kick in and suggest "my_great_method_A()". However, it does not suggest anything.
I understand why, because the editor has no clue that 'inst_a' is an instance of class 'a'. To deal with this issue, I could do the following to make the auto-completion work:
#ExampleA.py
class a:
    def my_great_method_A(self):
        pass

#ExampleB.py
import ExampleA

def functionX(inst_a):  # Argument 'inst_a' will always be an instance of ExampleA's class a.
    ExampleA.a.my_great_method_A(inst_a)  # <--- then auto-completion works
However, for readability I would rather use the inst_a.my_great_method_A() form, and I believe everyone else does too. But I do not know how everyone deals with this. Many times I have to go into the imported file and copy & paste the method name, which is tedious. Obviously I am missing something that everyone else is aware of. By the way, this is my first time posting on Stack Overflow; I hope this is a valid thing to ask here.

LiClipse/PyDev can recognize type hints in docstrings (as explained in http://www.pydev.org/manual_adv_type_hints.html) or using the new PEP 484 type hints (https://www.python.org/dev/peps/pep-0484/)... So, if you use one of those, it should work.
Note: I personally like docstrings better, but it's probably a matter of taste and both should be recognizable by LiClipse/PyDev.
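For instance, either of the following sketches (reusing the names from the question; functionY is just for illustration) should give LiClipse/PyDev enough information to offer my_great_method_A in the completion list:

#ExampleB.py
from ExampleA import a

# Option 1: a PEP 484 annotation on the parameter
def functionX(inst_a: a):
    inst_a.my_great_method_A()  # completion should now work here

# Option 2: a Sphinx-style docstring hint, the format described in the PyDev manual linked above
def functionY(inst_a):
    """
    :type inst_a: a
    """
    inst_a.my_great_method_A()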

I don't know of a way to make this editor guess the type you're expecting, and since Python is dynamically typed it will always only be guessing.
However, notice that your workaround of calling the method explicitly through the class is not good practice: it will not let you pass subclasses of ExampleA.a in the future as your code evolves.
So it's about more than just readability.
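To make that concrete, here is a small sketch (the subclass name is hypothetical) showing how the explicit-class call bypasses an override:

import ExampleA

class BetterA(ExampleA.a):
    def my_great_method_A(self):
        print("subclass behaviour")

inst = BetterA()
inst.my_great_method_A()             # dispatches to the subclass override
ExampleA.a.my_great_method_A(inst)   # silently runs the base-class version instead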

Related

Setting a variable to a parameter value inline when calling a function

In other languages, like Java, you can do something like this:
String path;
if (exists(path = "/some/path"))
    my_path = path;
the point being that path is being set as part of specifying a parameter to a method call. I know that this doesn't work in Python. It is something that I've always wished Python had.
Is there any way to accomplish this in Python? What I mean here by "accomplish" is to be able to write both the call to exists and the assignment to path, as a single statement with no prior supporting code being necessary.
I'll be OK with it if a way of doing this requires the use of an additional call to a function or method, including anything I might write myself. I spent a little time trying to come up with such a module, but failed to come up with anything that was less ugly than just doing the assignment before calling the function.
UPDATE: #BrokenBenchmark's answer is perfect if one can assume Python 3.8 or better. Unfortunately, I can't yet do that, so I'm still searching for a solution to this problem that will work with Python 3.7 and earlier.
Yes, you can use the walrus operator if you're using Python 3.8 or above:
import os

if os.path.isdir((path := "/some/path")):
    my_path = path
I've come up with something that has some issues, but does technically get me where I was looking to be. Maybe someone else will have ideas for improving this to make it fully cool. Here's what I have:
# In a utility module somewhere
def v(varname, arg=None):
    if arg is not None:
        if not hasattr(v, 'vals'):
            v.vals = {}
        v.vals[varname] = arg
    return v.vals[varname]

# At point of use
if os.path.exists(v('path1', os.path.expanduser('~/.harmony/mnt/fetch_devqa'))):
    fetch_devqa_path = v('path1')
As you can see, this fits my requirement of no extra lines of code. The "variable" involved, path1 in this example, is stored on the function that implements all of this, on a per-variable-name basis.
One can question whether this is concise and readable enough to be worth the bother. For me, the jury is still out. If not for the need to call the v() function a second time, I think I'd be good with it structurally.
The only functional problem I see with this is that it isn't thread-safe. Two copies of the code could run concurrently and run into a race condition between the two calls to v(). The same problem is greatly magnified if one fails to choose unique variable names every time this is used. That's probably the deal killer here.
Can anyone see how to use this to get to a similar solution without the drawbacks?

passing a dict of available classes to a function

I want to pass a dict with available classes to a function, so that I can construct them using their name, without importing them.
My idea was to do this:
from construct_classes import construct_classes

class A:
    def __init__(self):
        print('A')

class B:
    def __init__(self):
        print('B')

if __name__ == '__main__':
    construct_classes({'A': A, 'B': B})
And in construct_classes.py:
def construct_classes(my_classes):
    a = my_classes['A'].__init__(my_classes['A'])
    b = my_classes['B'].__init__(my_classes['B'])
This seems to work, but it looks hacky to me.
Are there any arguments against using this and if so is there another way to accomplish this behaviour?
Based on what I feel your question is about, this is what occurs to me:
If it looks hacky to you, maybe it is. But the downvotes might also have happened because of the __init__ call (follow the fix from Klaus's comment, sketched below).
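The fix alluded to there is simply to call the classes themselves rather than their __init__ methods, roughly like this:

def construct_classes(my_classes):
    # Calling a class runs __init__ for you and returns a new instance.
    a = my_classes['A']()
    b = my_classes['B']()
    return a, b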
I don't recall seeing expressions like "plug-in map" or "plug-in dict", although we know that creatures like that can exist. A "list of installed plug-ins", on the other hand, is another thing entirely, if you see what I mean...
If you just want to add functionality without touching some class code, use a Decorator pattern (a Python decorator, that is).
If you are building a list of plug-ins for a user or client code to choose from and run, you need to abstract a common API for plugin management and use. So that you can choose from the list obtained with .list(), or iterate the list, and for example .run() any one of them.
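A rough sketch of what such a plug-in API could look like (all names here are invented):

class PluginRegistry:
    def __init__(self):
        self._plugins = {}

    def register(self, name, cls):
        self._plugins[name] = cls

    def list(self):
        return sorted(self._plugins)

    def run(self, name, *args, **kwargs):
        # Instantiate the registered class and delegate to its run() method.
        return self._plugins[name]().run(*args, **kwargs)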
Also check namespace packages. I don't think they apply directly here, but plug-ins seem to be mostly a name-management issue; maybe you can structure your plug-in ideas differently and then make good use of them. They are just subfolders with common names and no __init__.py, bringing the names/modules defined inside them into a common top-level name.

Are there any dangers associated with using kwarg=kwarg in Python functions?

I've sometimes seen code with kwarg=kwarg in one of the functions as shown below:
def func1(foo, kwarg):
    return foo + kwarg

def func2(bar, kwarg):
    return func1(bar * 2, kwarg=kwarg)

print(func2(4, 5))
I've normally tried to avoid this notation (e.g. by using kwarg1=kwarg2) in order to avoid any possible bugs, but is this actually necessary?
There's nothing wrong with it - in this case kwarg is just a variable name - it's not reserved. There may be a bit of confusion with it, though, since def func(**kwargs): is the common syntax for collecting all the "keyword arguments" passed to a function into a dictionary. Since you're not doing that here, using such a similar name is unnecessarily confusing. Although it's not clear you're talking about using that exact name, so maybe this is just an issue with the example.
But broadly speaking, passing something=something is fairly common practice. You'll see it in lots of places: for example, if you're iterating through a color palette in Matplotlib, you might pass color=color into plot, or if you're building a list of headers in Pandas you might pass columns=columns into DataFrame.
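Here is a small self-contained sketch of that pattern (no real plotting library involved, the plot function is made up):

def plot(values, color="black"):
    print(f"plotting {values} in {color}")

palette = ["red", "green", "blue"]
series = [[1, 2], [3, 4], [5, 6]]

for color, values in zip(palette, series):
    # The local name happens to match the parameter name, and that's fine.
    plot(values, color=color)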
Bottom line is it should be clear. If it is, it's good. If it's not, it isn't.

Intellisense for returned python object's methods

I am new to Python and I love this language very much, but I encountered one annoying issue recently when working with PyDev in Eclipse.
Some method returns an instance of some class, but I cannot get intellisense for the instance's methods.
For example:
import openpyxl
from openpyxl.reader.excel import load_workbook
from openpyxl.worksheet import Worksheet
xlsFile='hello.xlsx'
wbook = load_workbook(xlsFile)
wsheet1=wbook.get_sheet_by_name('mysheet')
wsheet1.cell('A9').hyperlink=r'\\sharefolder'
wsheet2=Worksheet()
wsheet2.cell('A1').hyperlink=r'\\sharefolder'
In this code, I can get the prompt for method cell() with wsheet2, but not with wsheet1. Though they are both of Worksheet type which I have already imported. It seems python or PyDev cannot properly detect the type of the returned object.
Is this a language limitation? Or is there something I did wrong? For now, I have to dig into the source code and see what the real type of the return value is. And then check the methods defined in that type. It's very tedious.
I wrote a small test to reproduce this issue. Strangely, the intellisense seems to work there.
It's a consequence of the fact that Python is dynamically typed.
In a statically-typed language such as C#, methods are annotated with their type signatures. (Aside: in some systems types can be inferred by the type checker.) The compiler knows the return type of the function, and the types the arguments are meant to have, without running your code, because you wrote the types down! This enables your tooling to not only check the types of your programs, but also to build up metadata about the methods in your program and their types; Intellisense works by querying this metadata harvested from the text of your program.
Python has no static type system built in to the language. This makes it much harder for tooling to give you hints without running the code. For example, what is the return type of this function?
def spam(eggs):
    if eggs:
        return "ham"
    return 42
Sometimes spam returns a string; sometimes it returns an integer. What methods should Intellisense display on the return value of a call to spam?
What are the available attributes on this class?
class Spam:
    def __getattr__(self, name):
        if len(name) > 5:
            return "foo"
        raise AttributeError(name)
Spam sometimes dynamically generates attributes: what should Intellisense display for an instance of Spam?
In these cases there is no correct answer. You might be able to volunteer some guesses (for example, you could show a list containing both str and int's methods on the return value of spam), but you can't give suggestions that will be right all the time.
So Intellisense tooling for Python is reduced to best-guesses. In the example you give, your IDE doesn't know enough about the return type of get_sheet_by_name to give you information about wsheet1. However, it does know the type of wsheet2 because you just instantiated it to a Worksheet. In your second example, Intellisense is simply making a (correct) guess about the return type of f1 by inspecting its source code.
Incidentally, auto-completion in an interactive shell like IPython is more reliable. This is because IPython actually runs the code you type. It can tell what the runtime type of an object is because the analysis is happening at runtime.
You can use an assert to tell intellisense what class you want it to be. Of course, now it will throw an error if it isn't, but that's a good thing.
assert isinstance(my_variable, class_i_want_it_to_be)
This will give you the auto-complete and ctrl-click to jump to the function that you have been looking for. (At least this is how it works now in 2022, some other answers are 5 years old).
Here is a quick example.
#!/usr/bin/python3

class FooMaker():
    def make_foo(self):
        return "foo"

# this makes a list of instances constructed literally, so the IDE can infer their type
list1 = [FooMaker(), FooMaker()]

# even if the result is the same, these are built dynamically
list2 = []
for i in range(2):
    list2.append(FooMaker())

# intellisense knows this is a FooMaker
m1 = list1[0]

# now intellisense isn't sure what this object is
m2 = list2[0]

# make_foo is highlighted for m1 and not for m2
m1.make_foo()
m2.make_foo()

# now make_foo is highlighted for m2
assert isinstance(m2, FooMaker)
m2.make_foo()
The color difference is subtle in my VS Code, but it is there.
tldr:
So many online answers just say "no" that it took me a while to say: "this is ridiculous, I don't have to deal with this in C, there must be a better way".
Yes, Python is dynamically typed, but that doesn't mean intellisense has to be banned from suggesting "you probably want this".
It also doesn't mean you have to "just deal with it" because you chose Python.
Furthermore, throwing down a lot of assert statements is good practice and will shorten your development time when things start to get complicated. You might pass a variable a long way down a chain of functions before you get a type error, and then you have to dig a long way back up to find the cause. Just say what it is when you decide what it is, and that's where the error will be thrown when something goes wrong.
It's also much easier to show other developers what you are trying to do. I even see asserts like this in C libraries, and always wondered why they bothered in a strongly typed language; now it makes a lot more sense. I would also speculate there is little performance hit to adding an assert (compiler stuff, blah blah, I'll leave that for the comments).
Well, technically in Python a method may return anything and the result of an operation is defined only when the operation is completed.
Consider this simple function:
def f(a):
    if a == 1:
        return 1          # returns int
    elif a == 2:
        return "2"        # returns string
    else:
        return object()   # returns an `object` instance
The function is perfectly valid Python and its result is strictly defined, but only once the function has finished executing. Indeed:
>>> type(f(1))
<type 'int'>
>>> type(f(2))
<type 'str'>
>>> type(f(3))
<type 'object'>
Certainly this flexibility is not needed all the time, and most methods return something predictable a priori. An intelligent IDE could analyze the code (and other hints such as docstrings which may specify argument and return types), but this will always be a guess with a certain level of confidence. There is also PEP 484, which introduces type hints at the language level, but it's optional, relatively new, and legacy code generally doesn't use it.
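For example, a minimal sketch of such a hint (the function here is made up):

def read_config(path: str) -> dict:
    # The annotations tell both readers and tooling what goes in and what comes out.
    return {"path": path}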
If PyDev doesn't work for a particular case, well, it's a pity, but it's something you have to accept if you choose such a dynamic language as Python. Maybe it's worth trying a different, more intelligent IDE, or keeping a console with an interactive Python prompt open next to your IDE to test your code on the fly. I would suggest using a sophisticated Python shell like bpython.

Hot swapping python code (duck type functions?)

I've been thinking about this far too long and haven't gotten any idea, maybe some of you can help.
I have a folder of python scripts, all of which have the same surrounding body (literally, I generated it from a shell script), but have one chunk that's different than all of them. In other words:
Top piece of code (always the same)
Middle piece of code (changes from file to file)
Bottom piece of code (always the same)
And I realized today that this is a bad idea; for example, if I want to change something in the top or bottom sections, I need to write a shell script to do it. (Not that that's hard, it just seems very bad, code-wise.)
So what I want to do, is have one outer python script that is like this:
Top piece of code
Dynamic function that calls the middle piece of code (based on a parameter)
Bottom piece of code
And then every other Python file in the folder can simply be the middle piece of code. However, a normal module import wouldn't work here (unless I'm mistaken), because I would get the code I need to execute from an argument, which would be a string, and thus I wouldn't know which function to run until runtime.
So I thought up two more solutions:
I could write up a bunch of if statements, one to run each script based on a certain parameter. I rejected this, as it's even worse than the previous design.
I could use:
os.system("python scriptName.py")
which would run the script, but calling python to call python doesn't seem very elegant to me.
So does anyone have any other ideas? Thank you.
If you know the name of the function as a string and the name of module as a string, then you can do
mod = __import__(module_name)
fn = getattr(mod, fn_name)
fn()
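For example, applied to the question's layout (the module and function names here are invented):

import sys

# top piece of code goes here

module_name = sys.argv[1]            # e.g. "middle_foo", one of the per-file scripts
mod = __import__(module_name)
fn = getattr(mod, "middle")          # assuming each such module defines a middle() function
fn()

# bottom piece of code goes here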
Another possible solution is to have each of your repetitive files import the functionality from the main file
from topAndBottom import top, bottom
top()
# do middle stuff
bottom()
In addition to the several answers already posted, consider the Template Method design pattern: make an abstract class such as
class Base(object):
    def top(self): ...
    def bottom(self): ...
    def middle(self): raise NotImplementedError
    def doit(self):
        self.top()
        self.middle()
        self.bottom()
Every pluggable module then makes a class which inherits from this Base and must override middle with the relevant code.
Perhaps not warranted for this simple case (you do still have to import the right module in order to instantiate its class and call doit on it), but still worth keeping in mind (together with its many Pythonic variations, which I have amply explained in many tech talks now available on youtube) for cases where the number or complexity of "pluggable pieces" keeps growing -- Template Method (despite its horrid name;-) is a solid, well-proven and highly scalable pattern [[sometimes a tad too rigid, but that's exactly what I address in those many tech talks -- and that problem doesn't apply to this specific use case]].
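For instance, a pluggable module might look roughly like this (the file, module and class names are invented for the sketch):

# middle_foo.py: one hypothetical plug-in
from base import Base   # wherever the Base class above lives

class MiddleFoo(Base):
    def middle(self):
        print("doing the foo-specific work")

# the driver would then import this module, instantiate MiddleFoo and call doit() on it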
However, a normal module import wouldn't work here (unless I'm mistaken), because I would get the code I need to execute from an argument, which would be a string, and thus I wouldn't know which function to run until runtime.
It will work just fine - use the __import__ builtin or, if you have a very complex layout, the imp module to import your script. And then you can get the function with module.__dict__[funcname], for example.
Importing a module (as explained in other answers) is definitely the cleaner way to do this, but if for some reason that doesn't work, as long as you're not doing anything too weird you can use exec. It basically runs the content of another file as if it were included in the current file at the point where exec is called. It's the closest thing Python has to a source statement of the kind included in many shells. As a bare minimum, something like this should work:
exec(open(filename).read(None))
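In the question's setup that might look something like this (how the filename arrives is just a sketch):

import sys

# top piece of code goes here

exec(open(sys.argv[1]).read())   # run the chosen "middle" script in the current namespace

# bottom piece of code goes here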
How about this?
def do_thing_one():
    pass

def do_thing_two():
    pass

dispatch = {
    "one": do_thing_one,
    "two": do_thing_two,
}
# do something to get your string from the command line (optparse, argv, whatever)
# and put it in variable "mystring"
# do top thing
f = dispatch[mystring]
f()
# do bottom thing
