Functions with dependencies passed as parameters - python

I'm working on a project where I'm batch generating XML files which can import to the IDE of an industrial touchscreen.
Each XML file represents a screen. Most screens require the same functions, and the process for dealing with them is the same, except that each screen type has a unique configuration function.
I'm using a ScreenType class to hold attributes specific to a screen type, so I decided to write a unique configuration function for each type and pass it as a parameter to the __init__() of this class. This way, when I pass around my ScreenType as it is needed, its configuration function will stay bundled with it and can be used whenever needed.
But I'm not sure what will happen if my configuration function itself has a dependency. For example:
def configure_inputdiag(a, b, c):
    numerical_formatting = get_numerics(a)
    # ...
    return configured_object
Then, when it comes time to create an instance of a ScreenType
myscreentype = ScreenType(foo, man, shoe, configure_inputdiag)
get_numerics is a module scoped function, but myscreentype could (and does) get passed within other modules.
Does this create a problem with dependencies? I'd try to test it myself, but it seems like I don't have a fundamental understanding behind what's going on when I pass a function as a parameter. I don't want to draw incorrect conclusions about what's happening.
What I've tried: Googling and searching SO, but I didn't find anything specifically about this for Python.
Thanks in advance.

There's no problem.
The function configure_inputdiag will always refer to get_numerics in the context where it was defined. So, even if you call configure_inputdiag from some other module which knows nothing about get_numerics, it will work fine.
Passing a function as a parameter produces a reference to that function. Through that reference, you can call the function as if you had called it by name, without actually knowing the name (or the module from which it came). The reference is valid for the lifetime of the program, and will always refer to the same function. If you store the function reference, it basically becomes a different name for the same function.
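For example (a minimal sketch with made-up module names):
# helpers.py (hypothetical module)
def get_numerics(a):
    return "numeric formatting for %s" % a

def configure_inputdiag(a, b, c):
    # get_numerics is looked up in helpers.py's namespace, no matter
    # where configure_inputdiag ends up being called from
    return get_numerics(a)

# other_module.py - knows nothing about get_numerics
from helpers import configure_inputdiag

print(configure_inputdiag(1, 2, 3))   # works fine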

What you are trying to do works in a very natural way in Python.
In the example above, you don't need to have the "get_numerics" function imported in the namespace (module) where "configure_inputdiag" is defined - you just pass it in as a normal parameter (say, call it "function"), like in this example:
Module A:
def get_numerics(parm):
    ...

input_diag = module_B.configure_inputdiag(get_numerics, a)
Module B:
def configure_inputdiag(function, parm):
    result = function(parm)
Oh - I see your doubt was the other way around - anyway, there is no problem: in Python, functions are first-class objects, just like ints and strings, and they can be passed around as parameters to other functions in other modules as you wish. I think the example above clarifies that.

get_numerics is resolved in the scope of the function body, so it does not also need to be in the scope of the caller.

Related

when should I make a function that has parameters / arguments?

when should we actually create a function that has parameters / arguments?
Today I was working on a programming project, and it occurred to me: when should I actually create a function that has parameters? I usually create one when there is a global value / variable that must be used in some function, and then I make that value the argument of the function. Did I do it right or wrong? If wrong, what are the best practices for doing it?
varGlobal = "test"

def foo():
    print(varGlobal)

# or

def foo(parm):
    print(parm)  # parm -> varGlobal

def foo():
    ask = input("ask")
    print(ask)

# or

def foo(parm):
    print(parm)  # parm -> global user input
It's usually a good idea to use parameters. Consider what the purpose of the function is. Parameterized functions are more generally useful than non-parameterized functions.
In the first case, is whatever foo does applicable only to a single value, or could it be useful for arbitrary values, regardless of what variable might refer to them? In the former case, you are stuck using varGlobal. In the latter, the call can always use foo(varGlobal) if that's the necessary argument.
In the second case, might foo be useful to someone who already has a value and doesn't need to call input? In the former case, you are stuck calling input. In the latter, the caller can always use foo(input()) or the like if they really need to call input.
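For example, using the second foo from the question:
# Non-parameterized: foo is stuck calling input() itself.
def foo_fixed():
    ask = input("ask: ")
    print(ask)

# Parameterized: works with any value; a caller who really wants
# interactive input can still write foo(input("ask: ")).
def foo(value):
    print(value)

foo("already have a value")
foo(input("ask: "))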
I would strongly suggest that you use parameters and arguments in every function. It simply makes the whole process of design simpler.
You can clearly see what data the function uses, and what it returns.
The only uses of global values (either module globals, or globals imported from other modules) are:
Module or application wide constants
Module or application wide functions or classes (which in Python are effectively module-level 'globals')
Your functions should always return values and never change a global value (by definition, if you stick to the above list, you won't be changing anything).
In my opinion, using the 'global' keyword is never needed (in 8 years of coding I have never needed it, or identified a reason to use it).
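A small illustration of that last point, returning a value instead of touching a global:
total = 0

def add_score_global(points):
    # mutating a global: harder to follow and to test
    global total
    total += points

def add_score(current_total, points):
    # returning a value instead: the caller decides what to do with it
    return current_total + points

total = add_score(total, 10)
print(total)   # 10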
Using global variables is bad practice in any language (see GlobalVariablesAreBad).
Global variables can be used if you need to access or modify the variable in several methods/classes in the same module.
Remember you need to declare global my_global_variable to modify the variable.
Parameters are variables needed in the method to do the processing. These variables should live locally in the method. If you need to retrieve something from the method, you should add a return statement. Also, if you need to return several variables, you can return them as a tuple.
So, in this way, you're organizing your code and making all variables visible to other people. I also recommend you use docstrings to fully document your methods, variables and processing.
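For example (a small sketch of both points):
counter = 0

def bump_counter():
    global counter   # needed to modify the module-level variable
    counter += 1

def min_max(values):
    # returning several results as a tuple
    return min(values), max(values)

bump_counter()
lo, hi = min_max([3, 1, 4])
print(counter, lo, hi)   # 1 1 4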
Use parameters when you need to solve the same sort of problem with different arguments, so you don't have to write the same function over and over again. Let's say you want to write a function that will return the square of the number provided as an argument.
So you write
def square(num):
    return num * num
So every time you need the square of a number, you just put that number in place of the argument instead of writing the whole function again.

Python - Clone a function while passing a single argument

I would like to copy an existing function from an existing module in the following way:
def foo(a, b, c=1, d=3, *arg):
    return True

myClass.foo = lambda b, c, d, *arg: foo(my_value_of_a, b, c, d, *arg)
However, there are several problems with this approach namely:
I am doing this in a loop and I don't know the arguments of most functions
I am losing the default values - which I absolutely cannot afford to do
The __docs__ and other attributes would be nice to keep too
I tried to do something like this:
handler = getattr(mod,'foo')
handler.__defaults__ = tuple([my_value_of_a] + list(handler.__defaults__))
myClass.foo = handler
which is almost enough for my use case (just because I always modify the first argument only). The problem is that if I call mod.foo() it also has my_value_of_a as the default value for a!
I tried using the copy module to do handler = deepcopy(handler), but even that didn't work: modifying the default values of handler also modifies the default values of the module function itself.
Any suggestions on how to do this in a "pythonic" way? I probably cannot use decorators either, since I'm looping over functions from external modules (several, actually).
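One possible direction (just a sketch, assuming the goal is to pre-bind the first argument without touching the original function; foo, my_value_of_a and MyClass here are stand-ins for the question's mod.foo, value and class):
import functools

def foo(a, b, c=1, d=3, *args):
    """Original function whose defaults must stay intact."""
    return (a, b, c, d, args)

my_value_of_a = 42

class MyClass:
    pass

# functools.partial wraps foo without touching foo.__defaults__, so the
# module-level foo keeps its own defaults; update_wrapper copies __doc__
# and other metadata onto the wrapper.
bound = functools.partial(foo, my_value_of_a)
functools.update_wrapper(bound, foo)

# partial objects used as class attributes behave like static methods,
# so they are not turned into bound methods on instance lookup.
MyClass.foo = bound

print(MyClass.foo(2))   # (42, 2, 1, 3, ())
print(foo(1, 2))        # (1, 2, 1, 3, ()) - original defaults untouched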

how to reference previous arguments of a function call in later arguments?

This is in MicroPython.
I'm creating an API to control some hardware. The API will be implemented in C with an interface in MicroPython.
One example of my API is:
device.set(curr_chan.BipolarRange, curr_chan.BipolarRange.state.ON)
I'd like to be able to achieve the same functionality but shorten the second path by somehow implicitly referencing the first argument:
device.set(curr_chan.BipolarRange, <first arg?>.state.ON)
Is there anyway to do this?
The only way to do something like this now would be
device.set(curr_chan.BipolarRange.state.ON)
and then put an upward pointing C-pointer on both the ON C-object and state C-object so that I know which entry in curr_chan is being referenced.
The MicroPython runtime - and I assume the CPython one - doesn't keep the entire object "tree" available to the developer in memory.
You could have special values for the second (state) argument which tell the function implementation to derive the state from the first argument. You could also introduce a completely separate function which has this behavior.
Or you could have a helper function which determines the state and passes it down to the set function, something like this:
device.set(*state_ON(curr_chan.BipolarRange))
Here, state_ON would return a tuple (curr_chan.BipolarRange, curr_chan.BipolarRange.state.ON).
In any case, there is no direct support for what you are trying to do in Python itself.
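A sketch of such a helper (the classes here are hypothetical stand-ins for the channel objects in the question):
class State:
    ON = "ON"
    OFF = "OFF"

class BipolarRange:
    state = State

def state_ON(rng):
    # returns the (attribute, state) pair that device.set(...) expects
    return (rng, rng.state.ON)

# device.set(*state_ON(curr_chan.BipolarRange)) would then unpack the pair
print(state_ON(BipolarRange()))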
Pass the name of the attribute you want as the second argument. Call getattr (or PyObject_GetAttr) repeatedly to get each element of the .-separated string:
device.set(curr_chan.BipolarRange, 'state.ON')
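For instance, the dotted string can be resolved with repeated getattr calls (a sketch with hypothetical stand-in classes):
class State:
    ON = "ON"

class BipolarRange:
    state = State

def resolve_attr(obj, dotted):
    # walk the dot-separated path with repeated getattr calls
    for name in dotted.split('.'):
        obj = getattr(obj, name)
    return obj

print(resolve_attr(BipolarRange(), 'state.ON'))   # ON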

Explanation of importing Python classes

So I know this could be considered quite a broad question, for which I am sorry, but I'm having problems understanding the whole importing and __init__ and self. things and all that... I've tried reading through the Python documentation and a few other tutorials, but this is my first language, and I'm a little (a lot) confused.
So far through my first semester at university I have learnt the very basics of Python, functions, numeric types, sequence types, basic logic stuff. But it's moving slower than I would like, so I took it upon myself to try learn a bit more and create a basic text based, strategy, resource management sorta game inspired by Ogame.
First problem I ran into was how to define each building, for example each mine, which produces resources. I did some research and found classes were useful, so I have something like this for each building:
class metal_mine:
    level = 1
    base_production = 15
    cost_metal = 40
    cost_crystal = 10
    power_use = 10
    def calc_production():
        metal_mine.production = ...  # a formula goes here
    def calc_cost_metal():
        ...  # etc., same for crystal
    def calc_power_use():
        metal_mine.power_use = ...  # blah blah
    def upgrade():
        ...  # various things
        solar_plant.calc_available_power()
It's kinda long, so I left a lot out. Anyway, the important bit is that last part: when I upgrade the mine, to determine if it has enough power to run, I calculate the power output of the solar plant, which is in its own class (solar_plant.calc_output()) containing many things similar to the metal_mine class. If I throw everything in the same module, this all works fantastically; however, with many buildings and research levels and the like, it gets very long and I get lost in it.
So I tried to split it into different modules: one for mines, one for storage buildings, one for research levels, etc. This makes everything very tidy; however, I still need a way to call the functions in classes which are now part of a different module. My initial solution was to put, for example, from power import *, which for the most part made the solar_plant class available to the metal_mine class. I say for the most part because, depending on the order in which I try to do things, sometimes it seems this doesn't work. The solar_plant class itself calls on variables from the metal_mine class; now I know this is getting very spaghetti-ish, but I don't know of any better conventions to follow yet.
Anyway, sometimes when I call the solar_plant class, and it in turn tries to call the metal_mine class, it says that metal_mine is not defined, which leads me to think somehow the modules or classes need to be initialized? There seems to be a bit of looping between things in the code. And depending on the order in which I try and 'play the game', sometimes I am unintentionally doing this, sometimes I'm not. I haven't yet been taught the conventions and details of importing and reloading and all that..so I have no idea if I am taking the right approach or anything.
Provided everything I just said made sense, could I get some input on how I would properly go about making the contents of these various modules freely available and modifiable to others? Am I perhaps trying to split things into different modules which you wouldn't normally do, and I should just deal with the large module? Or am I importing things wrong? Or...?
And on a side note, in most tutorials and places I look for help on this, I see classes or functions full of self.something and the __init__ function. Can I get an explanation of this? Or a link to a simple first-time-programmer's tutorial?
==================UPDATE=================
Ok so too broad, like I thought it might be. Based on the help I got, I think I can narrow it down.
I sorted out what I think need to be the class variables, those which don't change - name, base_cost_metal, and base_cost_crystal, all the rest would depend on the players currently selected building of that type (supposing they could have multiple settlements).
To take a snippet of what I have now:
class metal_mine:
    name = 'Metal Mine'
    base_cost_metal = 60
    base_cost_crystal = 15
    def __init__(self):
        self.level = 0
        self.production = 30
        self.cost_metal = 60
        self.cost_crystal = 15
        self.power_use = 0
        self.efficiency = 1
    def calc_production(self):
        self.production = int(30 + (self.efficiency * int(30 * self.level * 1.1 * self.level)))
    def calc_cost_metal(self):
        self.cost_metal = int(metal_mine.base_cost_metal * 1.5 ** self.level)
So to my understanding, this is now a more correctly defined class? I define the instance variables with their starting values, which are then changed as the user plays.
In my main function where I begin the game, I would create an instance of each mine, say, player_metal_mine = metal_mine(), and then I call all the functions and variables with the likes of
>>> player_metal_mine.level
0
>>> player_metal_mine.upgrade()
>>> player_metal_mine.level
1
So if this is correctly defined, do I now just import each of my modules with these new templates for each building? And once they are imported and an instance created, are all the new instances and their variables contained within the scope (right terminology?) of the main module, meaning no need for new importing or reloading?
Provided the answer to that is yes, and I do just need to import, what method should I use? I understand there is just import mines for example, but that means I would have to use mines.player_metal_mine.upgrade() to use it, which is a tiny bit more typing than using the likes of from mines import *, or more particularly, from mines import metal_mine, though that last option means I need to individually import every building from every module. So like I said, provided, yes, I am just importing it, what method is best?
==================UPDATE 2================= (You can probably skip the wall of text and just read this)
So I went through everything, corrected all my classes, everything seems to be importing correctly using from module import *, but I am having issues with the scope of my variables representing the resource levels.
If everything was in one module, right at the top I would declare each variable and give it its starting value, e.g. metal = 1000. Then, in any method of my classes which alters this (such as upgrading a building and spending resources), or in any function which alters this (like the one which periodically adds all the production to the current resource levels), I put global metal, for example, at the top. Then, within the function, I can read and alter the value of metal no problem.
However, now that I am importing these classes and functions from various modules all into one module, functions can't find these variables. What I thought would happen was that in the process of importing I would basically be saying: take everything in this module, pretend it's now in this one, and work with it. But apparently that's not what is happening.
In my main module, I import all my modules using from mines import * for example and define the value of, say, metal, to be 1000. Now I create an instance of a metal mine, metal_mine_one = metal_mine(), and I can call its methods and variables, e.g.
>>> metal_mine_one.production
30
But when I try to call a method like metal_mine_one.upgrade(), which contains global metal and then metal -= self.cost_metal, it gives me an error saying metal is not defined. Like I said, if this is all in one module, this problem doesn't happen, but if I try to import things, it does.
So how can I import these modules in a way which doesn't cause this problem, and makes variables in the global scope of my main module available to all functions and methods within all imported modules?
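To make the problem concrete, here is a stripped-down version of what I'm doing (file names are just placeholders):
# mines.py
class metal_mine:
    def __init__(self):
        self.cost_metal = 60
    def upgrade(self):
        global metal              # looked up in mines.py's globals, not main's
        metal -= self.cost_metal

# main.py
from mines import *

metal = 1000                      # this name lives in main's namespace only
mine = metal_mine()
mine.upgrade()                    # NameError: name 'metal' is not defined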
First, a little background on object-oriented programming, i.e. classes. You should think of a class like a blueprint: it shows how to make something. When you make a class, it describes to the program how to make an object. A simple class in Python might look like this:
class foo:
    def __init__(self, bars_starting_value):
        self.bar = bars_starting_value
    def print_bar(self):
        print(self.bar)
This tells Python how to make a foo object. The __init__ function is called a constructor. It is called when you make a new foo. The self is a way of referencing the foo that is running the function. In this case every foo has its own bar, which can be accessed from within a foo by using self.bar. Note that you have to put self as the first argument of the function definition; this makes those functions belong to a single foo and not to all of them.
One might use this class like this:
my_foo = foo(15)
my_other_foo = foo(100)
my_foo.print_bar()
my_foo.bar = 20
print(my_foo.bar)
my_other_foo.print_bar()
This would output
15
20
100
As far as imports go, they take the things that are defined in one file and make them available in another. This is useful: if you put a class definition in a file, you can import it into your main program file and make objects from there.
As far as making variables available to others, you could pass the power that has been generated from all the generators to the mine's function to determine if it has enough power.
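For example (a rough sketch of that idea, with made-up class and method names):
class solar_plant:
    def __init__(self, output=50):
        self.output = output

class metal_mine:
    def __init__(self):
        self.power_use = 10
    def can_upgrade(self, available_power):
        # the power value is passed in, so this class never needs to
        # know about solar_plant directly
        return available_power >= self.power_use

plant = solar_plant()
mine = metal_mine()
print(mine.can_upgrade(plant.output))   # True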
Hope this helps.
A lot of things to cover here. __init__ is a built-in method that is automatically called when an instance of a class is created. In the code you provided you've created a class; now you need to create an instance of that class. A simpler example:
class Test:
    def __init__(self):
        print("this is called when you create an instance of this class")
    def a_method(self):
        return True

>>> class_instance = Test()
this is called when you create an instance of this class
>>> class_instance.a_method()
True
The first argument in a class method is always the instance itself. By convention we just call that argument 'self'. Your methods did not accept any arguments; make sure they accept self (or have the decorator @staticmethod above them). Also, make sure you refer to attributes (in your case, methods) by self.a_method or class_instance.a_method.
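A small illustration of the two options, using made-up methods:
class MetalMine:
    def upgrade(self):
        # instance method: receives the instance as 'self'
        self.level = getattr(self, "level", 0) + 1

    @staticmethod
    def base_cost():
        # static method: no 'self', just a plain function kept in the class namespace
        return 60

mine = MetalMine()
mine.upgrade()
print(mine.level, MetalMine.base_cost())   # 1 60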

Parameter and Function with the same name

On a Python assignment, I had to make the following two functions:
move(board, move)
undomove(board, move)
Having an argument with the same name as the function seems like bad practice to me. I already contacted the professor to change it, but out of curiosity, is it possible to call the move function from inside undomove, or to use recursion inside move? Inside these functions, move refers to the argument.
(Python 3, if it matters)
You can get a handle on move (the function), but it will require some additional gymnastics.
def move(move):
    print(move, "inside move")

def undomove(move):
    print(move, "inside undomove")
    this_mod = __import__(__name__)
    this_mod.move(move)

if __name__ == '__main__':
    move(1)
    undomove(2)
Generally though, I would definitely avoid naming a local variable with the same name as a function that I will need in that function.
As far as style is concerned, creating a function def move(move): ... is definitely a little weird, and it would make the casual reader think that you're trying to write a recursive function, so I would definitely avoid that. Writing undomove(move) when move is already defined in the module scope as a function is a little less weird, but it still might cause confusion at a quick glance (is it a local variable? is it the function?) so I would probably avoid that one as well.
There are a number of options here, but ruling out the simplest (renaming move), there are a few others.
Firstly, you could create another name for the function, so you can use that when move gets overridden:
def move(...):
    ...

move_ = move

def undomove(..., move):
    move_(...)
This works as functions in Python are objects like any other - so you can just assign them to variables.
Another option would be to place the functions in a class, so they are in a namespace - meaning you access the method as self.move() and the parameter as move. That said, if your assignment requires that the functions be top-level, that isn't an option.
You could reach the function move by calling globals()['move'] from within undomove (or any other function). Not very elegant...
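For example (a quick sketch):
def move(move):
    print("argument:", move)

def undomove(move):
    # globals()['move'] reaches the module-level function,
    # even though the local name 'move' shadows it here
    globals()['move'](move)

undomove("e2e4")   # prints: argument: e2e4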
