Best approach to pass and handle arguments to function - python

After some reading, I found myself struggling with two different approaches to passing a list of arguments to a function. This is what I have figured out so far:
Actual code:
file caller.py:
import worker

worker.version_check(iserver, login, password, proxyUser, proxyPass,
                     proxyServer, packageInfo)
worker.version_get(iserver, login, password, proxyUser, proxyPass,
                   proxyServer, packageInfo)
worker.version_send(iserver, login, password, proxyUser, proxyPass,
                    proxyServer, packageInfo)
File: worker.py:
def version_check(iserver, login, password, proxyUser, proxyPass, proxyServer, service):
    # code and more code

def version_get(iserver, login, password, proxyUser, proxyPass, proxyServer, service):
    # code and more code

def version_send(iserver, login, password, proxyUser, proxyPass, proxyServer, service):
    # code and more code
And now I have:
file caller.py:
import worker
args = (env, family, host, password, prefix, proxyServer,
        proxyUser, proxyPass, option, jokerVar)

worker.version_check(*args)
worker.version_get(*args)
worker.version_send(*args)
File: worker.py:
def version_check(*args):
    env = args[0]
    family = args[1]
    host = args[2]
    password = args[3]
    prefix = args[4]
    proxyServer = args[5]
    proxyUser = args[6]
    proxyPass = args[7]
    option = args[8]
    jokerVar = args[9]
    # code and more code

def version_get(*args):
    env = args[0]
    family = args[1]
    host = args[2]
    password = args[3]
    prefix = args[4]
    proxyServer = args[5]
    proxyUser = args[6]
    proxyPass = args[7]
    option = args[8]
    jokerVar = args[9]
    # code and more code

def version_send(*args):
    env = args[0]
    family = args[1]
    host = args[2]
    password = args[3]
    prefix = args[4]
    proxyServer = args[5]
    proxyUser = args[6]
    proxyPass = args[7]
    option = args[8]
    jokerVar = args[9]
    # code and more code
Using the old approach (the actual code) I believe it is more "friendly" to call a function in one line only (as you can see in worker.py). But using the new approach, I think the code gets more extensive, because for each function I have to define all the same variables. Is this the best practice? I'm still learning Python on a slow curve, so sorry for any mistakes in the code.
And one important thing: most of the variables are retrieved from a database, so they are not static.

I really don't recommend defining functions like def version_check(*args): unless you specifically need to. Quick, without reading the source: what order are the arguments in? How do you specify a default value for proxyServer? Remember, "explicit is better than implicit".
The one time I routinely deviate from that rule is when I'm wrapping another function like:
def foo(bar):
    print('Bar:', bar)

def baz(qux, *args):
    print('Qux:', qux)
    foo(*args)
I'd never do it for such a simple example, but suppose foo is a function from a 3rd-party package outside my control with lots of defaults, keyword arguments, etc. In that case, I'd rather punt the argument parsing to Python than attempt it myself.
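For instance, a minimal sketch of that wrapping pattern with keyword arguments forwarded as well; fancy_render here is a stand-in I made up for the third-party function:
# Stand-in for a third-party function with lots of defaults and keyword arguments.
def fancy_render(text, width=80, color='black', bold=False):
    print(f'{text!r} (width={width}, color={color}, bold={bold})')

def render_header(title, *args, **kwargs):
    # Handle the part we care about, then punt the rest of the
    # argument parsing to Python instead of attempting it ourselves.
    print('Header:', title)
    fancy_render(*args, **kwargs)

render_header('Report', 'Q3 results', color='blue', bold=True)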
Personally, I'd write that as a class like:
class Worker(object):
    def __init__(self, iserver, login, password, proxyUser, proxyPass, proxyServer, service):
        self.iserver = iserver
        self.login = login
        self.password = password
        self.proxyUser = proxyUser
        self.proxyPass = proxyPass
        self.proxyServer = proxyServer
        self.service = service

    def version_check(self): ...
    def version_get(self): ...
    def version_send(self): ...
And then in the client, write:
from worker import Worker

w = Worker(iserver, login, password, proxyUser, proxyPass, proxyServer, service)
w.version_check()
w.version_get()
w.version_send()
If you really need to write functions with lots of arguments instead of encapsulating that state in a class - which is the more typically Pythonic way to do it - then consider the namedtuple type from the collections module. It lets you define a tuple whose items are addressable by name, and it can make for some very clean, elegant code.
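A minimal sketch of the namedtuple approach, reusing the argument names from the question:
from collections import namedtuple

Config = namedtuple('Config', 'iserver login password proxyUser proxyPass proxyServer service')

cfg = Config('srv1', 'me', 'secret', 'puser', 'ppass', 'proxy:8080', 'svc')
print(cfg.iserver)           # items are addressable by name...
iserver, login, *rest = cfg  # ...and it still unpacks like a plain tuple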

There are many approaches, depending on what those arguments represent.
If they are just a grab-bag of arguments (especially if some are optional), use keyword arguments:
myargs = {'iserver': 'server', 'login': 'username', 'password': 'Pa2230rd'}
version_get(**myargs)
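If some of the arguments really are optional, you can also give them defaults right in the signature, so callers may omit them; a sketch using names from the question:
def version_get(iserver, login, password, proxyUser=None, proxyPass=None,
                proxyServer=None, service=None):
    # Falls back to a direct connection when no proxy is given.
    print(iserver, login, proxyServer)

version_get('server', 'username', 'Pa2230rd')                      # proxies omitted
version_get('server', 'username', 'Pa2230rd', proxyServer='p:80')  # one keyword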
If they represent some thing with its own state, then use classes:
If the arguments represent a single state that your functions are modifying, then accept the arguments in the object constructor and make your version_* methods functions of that class:
class Version(object):
    def __init__(self, iserver, login, password,
                 proxyUser, proxyPass, proxyServer, service):
        self.iserver = iserver
        self.login = login
        # etc.

    def check(self):
        self.iserver
    def get(self):
        pass
    # etc.

myversion = Version('iserver', 'login', ...)
myversion.check()
If the arguments represent some kind of resource that your functions are merely using, then use a separate class and supply an instance of it as a parameter to your functions:
class Connection(object):
    def __init__(self, iserver, ...):
        self.iserver = iserver  # etc.

myconn = Connection('iserver', ...)
version_check(myconn)
Most likely, these are two different resources and should be two classes. In this case you can combine these approaches:
# Connection() class as above

class Version(object):
    def __init__(self, connection):
        self.connection = connection

    def check(self):
        self.connection.iserver  # ....

myconn = Connection('iserver', ...)
conn_versioner = Version(myconn)
conn_versioner.check()
Possibly, your arguments represent more than one object (e.g., a connection and a transparent proxy object). In that case, try to create an object with the smallest public interface that methods like version_* need, and encapsulate the state represented by the other arguments using object composition.
For example, if you have proxy connections, you can create a Connection() class which just knows about server, login and password, and a ConnectionProxy() class which has all the methods of a Connection, but forwards to another Connection object. This allows you to separate the proxy* arguments, and means that your version_* functions can be ignorant of whether they're using a proxy or not.
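A hedged sketch of that composition; all names here are illustrative:
class Connection(object):
    def __init__(self, server, login, password):
        self.server = server
        self.login = login
        self.password = password

    def fetch(self, path):
        # Stand-in for real network I/O.
        return f'{self.login}@{self.server}{path}'

class ConnectionProxy(object):
    # Same interface as Connection, but forwards to a wrapped Connection.
    def __init__(self, inner, proxy_server):
        self.inner = inner
        self.proxy_server = proxy_server

    def fetch(self, path):
        # Route through the proxy, then delegate to the wrapped object.
        return f'via {self.proxy_server}: ' + self.inner.fetch(path)

def version_check(conn):
    # Ignorant of whether it is talking through a proxy or not.
    print(conn.fetch('/version'))

version_check(Connection('srv', 'me', 'pw'))
version_check(ConnectionProxy(Connection('srv', 'me', 'pw'), 'proxy:8080'))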
If your arguments are just state and don't have any methods proper to them, consider using a namedtuple(). This will act like a smarter tuple (including tuple unpacking, slicing, etc) and have minimal impact on your existing code while still being easier to use.
from collections import namedtuple

Connection = namedtuple('Connection', 'iserver login password etc')
myconn = Connection('iserver', 'loginname', 'passw3rd', 'etc')
version_check(*myconn)

You can create a simple namespace object to carry the values, or define a class of your own. E.g. (a plain object() instance would not accept new attributes, so the standard library's types.SimpleNamespace is used here):
file caller.py:
import types

import worker

info = types.SimpleNamespace()
info.env = 0
info.family = 'something'
info.host = 'something'
info.password = '***'
info.prefix = ''
info.proxyServer = ''
info.proxyUser = ''
info.proxyPass = ''
info.option = ''
info.jokerVar = ''

worker.version_check(info)
worker.version_get(info)
worker.version_send(info)
file worker.py:
def version_check(info):
    # you may access values from info, e.g. info.env
    # code and more code

def version_get(info):
    # code and more code

def version_send(info):
    # code and more code

Related

Python Generic functions with generic parameters

I'm not sure if I used the right terms in the title. This may be a known way to program interface functions for a subsystem or module, but because I don't know the keywords, I'm not finding the results in my search queries.
I want to create a function whose intention is clearly described by its name, but whose parameters are flexible. I want to write the function to be generic enough that it can complete its intention with whatever parameters it receives from whichever caller.
Let's take a function do_foo.
do_foo can take in some_obj, whose attributes allow do_foo to do its work. Alternatively, do_foo can just take in the individual attributes it cares about, like obj_attr0 or obj_attr1, and perform the same work. In both cases, the expected result is the same.
So this would look something like this:
class SomeObj:
    def __init__(self, obj_attr0, obj_attr1, obj_attrN):
        self.obj_attr0 = obj_attr0
        self.obj_attr1 = obj_attr1
        self.obj_attrN = obj_attrN  # denotes an N number of attributes

def do_foo(params):
    # unpack params. do_foo requires obj_attr0 and obj_attr1, so it searches
    # for them in the params iterable
    # determine which data is passed in
    # execute the work the same way regardless of what form the data is passed in
    pass

obj_attr0 = None
obj_attr1 = None
obj_attrN = None
some_obj = SomeObj(obj_attr0, obj_attr1, obj_attrN)

# One can call with either a subset of the attributes that make up SomeObj,
# or SomeObj itself if all the information is there. E.g.:
params = (some_obj,)
do_foo(params)
# OR
params = (obj_attr0, obj_attr1)
do_foo(params)
I know Python offers *args and **kwargs facilities that provide the flexibility above. I'm looking for examples where the implementation lends itself to reducing pitfalls. What is a good way to implement the above? And if there are resources out there, what are examples/articles/terms that describe this style of programming? Clearly, I'm trying to write my interface functions to be generic and usable in multiple logic paths, where callers have their data in different forms and sticking to a specific parameter list is limiting.
Short answer:
You can use function decorators to do this
Long answer:
I have a concrete example for you. It might not be the prettiest code but it does something similar to what you are asking for.
Mini HTTP Testing library
I made a mini HTTP testing library because I write my REST HTTP tests in Python, and I realized that I kept writing the same code again and again. So I made a more general setup.
The core
The core is kind of ugly, and it is the part I don't want to write again and again.
Just skip this part quickly and check how it is used in the interface section.
Then, if you like it, you can go back and try to understand how it is all tied together.
# base.py
import json, requests, inspect

# This function drops invalid parameters
def request(*args, **kwargs):
    allowed = inspect.signature(requests.Session.request).parameters
    return {k: v for (k, v) in kwargs.items() if k in allowed}

def response(r, code):
    if r.status_code != code:
        print(r.text)
        return
    data = r.json()
    if data:
        print(json.dumps(data, indent=2, ensure_ascii=False))
    return data

# This is the core function. It is not pretty, but it creates all the
# abstraction in multiple levels of decoration.
def HTTP(base_url):
    def outer(func_one):
        def over(*args_one, **kwargs_one):
            req, url, code = func_one(*args_one, **kwargs_one)
            url = base_url + url
            def inner(func_two):
                def under(*args_two, **kwargs_two):
                    allowed = inspect.signature(func_two).parameters
                    kwparams = {k: v for (k, v) in kwargs_two.items() if k in allowed}
                    from_inner = func_two(*args_two, **kwparams)
                    u = url.format(id=kwargs_two.pop('_id')) if '{id}' in url else url
                    r = req(u, **request(**kwargs_two, **from_inner))
                    return response(r, code)
                return under
            return inner
        return over
    return outer
The interface
The interface functions are each decorated by the HTTP function, which makes them HTTP caller functions. They are still abstract, since each returns a function.
Note: "interface" is just what I call it; these are really just functions which return functions, built on the HTTP decorator.
BASE_URL = "https://example.com"

@HTTP(BASE_URL)
def POST(url, code=200): return requests.post, url, code

@HTTP(BASE_URL)
def PUT(url, code=200): return requests.put, url, code

@HTTP(BASE_URL)
def DELETE(url, code=200): return requests.delete, url, code

@HTTP(BASE_URL)
def GET(url, code=200): return requests.get, url, code
A middleware function
When one of the interface functions is decorated with this one, it needs a token.
def AUTH(func):
    def inner(token, *args, **kwargs):
        headers = {'Authorization': f'bearer {token}'}
        return func(*args, **kwargs, headers=headers)
    return inner
The implementation
The interface can be used for many implementations.
Here I use the interface of POST, PUT, GET and DELETE for the user model.
This is the final decoration, and the functions returned will actually return content instead of other functions.
# users.py
from httplib.base import (
    POST,
    GET,
    DELETE,
    PUT,
    AUTH,
    request
)

@POST('/users', 200)
def insert(user):
    return request(json=user)

@AUTH
@GET('/users')
def find(_filter={}):
    return request(params=_filter)

@AUTH
@GET('/users/{id}')
def find_one(_id):
    return request()

@AUTH
@DELETE('/users/{id}')
def delete(_id):
    return request()

@AUTH
@PUT('/users/{id}')
def update(_id, updates={}):
    return request(json=updates)
Operation
Here you can see how the users.delete, users.insert, users.find and users.find_one functions work.
from httplib import users

def create_and_delete_users(token, n): return [
    users.delete(token, _id=x['user']['id'])
    for x in [
        users.insert(user={
            'username': f'useruser{str(i).zfill(2)}',
            'password': 'secretpassword',
            'email': f'useruser{str(i).zfill(2)}@mail.com',
            'gender': 'male',
        }) for i in range(n)]
]

def find_all_and_then_find_each(token): return [
    users.find_one(token, _id=x['id'])
    for x in users.find(token)['data']
]
I hope this was helpful.

Unable to use same SQLite connection across multiple objects in Python

I'm working on a Python desktop app using wxPython and SQLite. The SQLite db is basically being used as a save file for my program, so I can save, back up and reload the data being entered. I've created separate classes for parts of my UI to make them easier to manage from the "main" window. The problem I'm having is that each control needs to access the database, but the filename, and therefore the connection name, needs to be dynamic. I originally created a DBManager class that hardcoded a class variable with the connection string, which worked but didn't let me change the filename. For example:
import sqlite3

class DBManager:
    conn = sqlite3.Connection('my_file.db')

# This could then be passed to other objects as needed
class Control1:
    file = DBManager()

class Control2:
    file = DBManager()
etc.
However, I'm running into a lot of problems trying to create this object with a dynamic filename while also using the same connection across all controls. Some examples of this I've tried...
class DBManager:
    conn = None

    def __init__(self):
        pass

    def __init__(self, filename):
        self.conn = sqlite3.Connection(filename)

class Control1:
    file = DBManager()

class Control2:
    file = DBManager()
The above doesn't work because Python doesn't allow overloading constructors, so I always have to pass a filename. I tried adding some code to the constructor to act differently based upon whether the filename passed was blank or not.
class DBManager:
    conn = None

    def __init__(self, filename):
        if filename != '':
            self.conn = sqlite3.Connection(filename)

class Control1:
    file = DBManager('')

class Control2:
    file = DBManager('')
This ran, but the controls only had an empty connection; the conn object was None. It seems like I can't change a class variable after it's been created? Or am I just doing something wrong?
I've thought about creating one instance of DBManager that I then pass into each control, but that would be a huge mess if I need to load a new DB after starting the program. Also, it's just not as elegant.
So, I'm looking for ideas on achieving the one-connection path with a dynamic filename. For what it's worth, this is entirely for personal use, so it doesn't really have to follow "good" coding convention.
Explanation of your last example
You get None in the last example because you are instantiating DBManager in Control1 and Control2 with empty strings as input, and the DBManager constructor has an if-statement saying that a connection should not be created if filename is just an empty string. This leads to the self.conn instance variable never being set, and any referral to conn resolves to the conn class variable, which is indeed set to None.
self.conn would create an instance variable, only accessible by the specific object.
DBManager.conn would refer to the class variable, and this is what you want to update.
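A quick illustration of that distinction (a minimal, self-contained sketch):
class DBManager:
    conn = None  # class variable, shared by every instance

m1 = DBManager()
m1.conn = 'instance connection'      # creates an instance variable that shadows the class one
print(DBManager.conn)                # None; the class variable is untouched

DBManager.conn = 'class connection'  # this is what actually updates the shared state
m2 = DBManager()
print(m2.conn)                       # 'class connection' (found on the class)
print(m1.conn)                       # 'instance connection' (shadowed by the instance variable)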
Example solution
If you only want to keep one connection, you would need to do it with e.g. a class variable, and update the class variable every time you interact with a new db.
import sqlite3
from sqlite3 import Connection

class DBManager:
    conn = None

    def __init__(self, filename):
        if filename != '':
            self.filename = filename

    def load(self) -> Connection:
        DBManager.conn = sqlite3.Connection(self.filename)  # updating class variable with new connection
        print(DBManager.conn, f" used for {self.filename}")
        return DBManager.conn

class Control1:
    db_manager = DBManager('control1.db')
    conn = db_manager.load()

class Control2:
    db_manager = DBManager('control2.db')
    conn = db_manager.load()

if __name__ == "__main__":
    control1 = Control1()
    control2 = Control2()
would output the below. Note that the class variable conn refers to different memory addresses upon instantiating each control, showing that it's updated.
<sqlite3.Connection object at 0x10dc1e1f0> used for control1.db
<sqlite3.Connection object at 0x10dc1e2d0> used for control2.db
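Loading a new database after startup then just means calling load() again; anything that reads DBManager.conn afterwards sees the new connection (the 'new_save.db' filename here is hypothetical):
# Later, e.g. when the user opens a different save file:
DBManager('new_save.db').load()
cursor = DBManager.conn.cursor()  # uses the freshly loaded connection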

Is it possible to create sub-methods in Python?

I want to write a method to parse a site with the requests library. The method should take a part of the URL, join it with base_url, and perform the GET request on the result; the main problem is that I do not know how to structure this well.
What I have in mind now is:
import requests

class Response:
    # ...
    def site_parser(self, atom):
        base_url = "https://example.com/"
        def category1(self):
            return requests.get(base_url + "category1/" + atom).text
        def category2(self):
            return requests.get(base_url + "category2/" + atom).text

if __name__ == "__main__":
    def main():
        result = Response()
        result.site_parser.category1("atom")
        result.site_parser.category2("atom")
So the data I need lives under the same base URL but in different dirs, and I only want to hit each dir when the corresponding method is called. Is there a way of doing this properly? I would like to avoid making base_url a global variable.
It seems to me that what you need is another class.
class Response:
    # ... Some dark magic here ...
    def site_parser(self, atom):
        return ResponseParser(self, atom)

class ResponseParser:
    def __init__(self, res, atom):
        self.atom = atom
        self.res = res
        self.base_url = "https://example.com/"

    def category1(self):
        pass  # ... Do stuff ...

    def category2(self):
        pass  # ... Do stuff ...
Then you call it with
result = Response()
result.site_parser("atom").category1()
If you really insist on getting rid of the parentheses on the site_parser call, you could move the "atom" bit to the categoryN methods and turn site_parser into a property, but IMO that would probably just confuse people more than anything.
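For completeness, a sketch of that property variant (with the network call stubbed out):
class ResponseParser:
    def __init__(self, res):
        self.res = res
        self.base_url = "https://example.com/"

    def category1(self, atom):
        # Stand-in for requests.get(self.base_url + "category1/" + atom).text
        return self.base_url + "category1/" + atom

class Response:
    @property
    def site_parser(self):
        # No atom here; each categoryN method takes it instead.
        return ResponseParser(self)

result = Response()
print(result.site_parser.category1("atom"))  # no parentheses on site_parser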
As a functional programmer, I love nested functions and closures as much as the next guy, but it seems to me that, based on the limited example you've given, having a second helper class is probably going to be the more readable way to go about this in this case.

Python class __init__ layout?

In python, is it bad form to write an __init__ definition like:
import os

class someFileType(object):
    def __init__(self, path):
        self.path = path
        self.filename = self.getFilename()
        self.client = self.getClient()
        self.date = self.getDate()
        self.title = self.getTitle()
        self.filetype = self.getFiletype()

    def getFilename(self):
        '''Returns entire file name without extension'''
        filename = os.path.basename(self.path)
        filename = os.path.splitext(filename)
        filename = filename[0]
        return filename

    def getClient(self):
        '''Returns client name associated with file'''
        client = self.filename.split()
        client = client[1]  # Assuming filename is formatted "date client - docTitle"
        return client
where the initialized variables are calls to functions returning strings? Or is it considered lazy coding? It's mostly to save me from writing something.filetype as something.getFiletype() whenever I want to reference some aspect of the file.
This code is to sort files into folders by client, then by document type, and other manipulations based on data in the file name.
Nope, I don't see why that would be bad form. Calculating those values only once when the instance is created can be a great idea, in fact.
You could also postpone the calculations until needed by using caching properties:
class SomeFileType(object):
    _filename = None
    _client = None

    def __init__(self, path):
        self.path = path

    @property
    def filename(self):
        if self._filename is None:
            filename = os.path.basename(self.path)
            self._filename = os.path.splitext(filename)[0]
        return self._filename

    @property
    def client(self):
        '''Returns client name associated with file'''
        if self._client is None:
            client = self.filename.split()
            self._client = client[1]  # Assuming filename is formatted "date client - docTitle"
        return self._client
Now, accessing somefiletypeinstance.client will trigger calculation of self.filename as needed, as well as cache the result of its own calculation.
In this specific case, you may want to make .path a property as well; one with a setter that clears the cached values:
class SomeFileType(object):
    _filename = None
    _client = None

    def __init__(self, path):
        self._path = path

    @property
    def path(self):
        return self._path

    @path.setter
    def path(self, value):
        # clear all private instance attributes
        for key in [k for k in vars(self) if k[0] == '_']:
            delattr(self, key)
        self._path = value

    @property
    def filename(self):
        if self._filename is None:
            filename = os.path.basename(self.path)
            self._filename = os.path.splitext(filename)[0]
        return self._filename

    @property
    def client(self):
        '''Returns client name associated with file'''
        if self._client is None:
            client = self.filename.split()
            self._client = client[1]  # Assuming filename is formatted "date client - docTitle"
        return self._client
Because property-based caching does add some complexity overhead, you need to consider if it is really worth your while; for your specific, simple example, it probably is not. The calculation cost for your attributes is very low indeed, and unless you plan to create large quantities of these classes, the overhead of calculating the properties ahead of time is negligible, compared to the mental cost of having to maintain on-demand caching properties.
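As an aside, on Python 3.8 and later the standard library's functools.cached_property implements this same on-demand caching with less boilerplate; a minimal sketch:
import functools, os

class SomeFileType:
    def __init__(self, path):
        self.path = path

    @functools.cached_property
    def filename(self):
        # Computed on first access, then stored on the instance;
        # `del instance.filename` clears the cache.
        return os.path.splitext(os.path.basename(self.path))[0]

f = SomeFileType('/home/me/document.txt')
print(f.filename)  # 'document'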
Your code is doing two different things:
a) Simplifying the class API by exposing certain computed attributes as variables, rather than functions.
b) Precomputing their values.
The first task is what properties are for; a straightforward use would make your code simpler, not more complex, and (equally important) would make the intent clearer:
class someFileType(object):
    @property
    def filename(self):
        return os.path.basename(self.path)
You can then write var.filename and you will dynamically compute the filename from the path.
Martijn's solution adds caching, which also takes care of part b (precomputation). In your example, at least, the calculations are cheap, so I don't see any benefit in doing so.
On the contrary, caching or precomputation raises consistency issues. Consider the following snippet:
something = someFileType("/home/me/document.txt")
print(something.filename)  # prints `document`
...
something.path = "/home/me/document-v2.txt"
print(something.filename)  # STILL prints `document` if you cache values
What should the last statement print? If you cache your computations, you will still get document instead of document-v2! Unless you are certain that nobody will try to change the value of the underlying variable, you need to either avoid caching or take measures to ensure consistency. The easiest way is to prohibit modifications to path, which is one of the things that properties are designed to do.
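A minimal sketch of that read-only variant:
import os

class SomeFileType(object):
    def __init__(self, path):
        self._path = path

    @property
    def path(self):
        # No setter defined, so `instance.path = ...` raises AttributeError
        # and values derived from path can never go stale.
        return self._path

    @property
    def filename(self):
        return os.path.basename(self._path)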
Conclusion: Use properties to simplify your interface. Don't cache computations unless it's necessary for performance reasons. If you do cache, take measures to ensure consistency, e.g. by making the underlying value read-only.
PS. The issues are analogous to database normalization (non-normalized designs raise consistency issues), but in python you have more resources for keeping things in sync.

Passing arguments into SUDS client statement

I am using SUDS (a SOAP client) to test WSDL files. The methods contain types that are linked to further functions. I am not sure how to access the variables stored in the types that are displayed. Some sample code is below:
from suds.client import Client

client = Client('http://eample.wsdl')
print(client)
The response is:
Ports (1):
(PTZ)
Methods (4):
AbsoluteMove(ns4:ReferenceToken ProfileToken, ns4:PTZVector Destination, ns4:PTZSpeed Speed, )
Types (303):
ns4:PTZSpeed
I am able to get access to these functions, but I cannot find any documentation on how to test functions in SUDS. I want to test whether the functions work and check their return values. Does anyone know how to do this?
I used the command below to display the child structure:
client.factory.create('AbsoluteMove.PTZSpeed.Speed.PanTilt')
My main problem is basically passing values into the functions and getting return values.
I have tried to pass the arguments but the parameters have attributes stored in the attributes. Below shows the layout for the structure of the parameters I'm trying to access.
(AbsoluteMove){
   ProfileToken = None
   Destination =
      (PTZVector){
         PanTilt =
            (Vector2D){
               _x = ""
               _y = ""
               _space = ""
            }
         Zoom =
            (Vector1D){
               _x = ""
               _space = ""
            }
      }
   Speed =
      (PTZSpeed){
         PanTilt =
            (Vector2D){
               _x = ""
               _y = ""
               _space = ""
            }
         Zoom =
            (Vector1D){
               _x = ""
               _space = ""
            }
      }
}
The parameters are more complex than just entering simple values.
Try to invoke the method on the service:
from suds.client import Client

client = Client('http://eample.wsdl')
res = client.service.AbsoluteMove(profile_token, destination, speed)
print(res)
You'll need to determine what values to put in for those arguments to the AbsoluteMove method.
Client.factory.create is for the instantiation of object types that are internal to the service you are utilizing. If you're just doing a method call (which it seems you are), invoke it directly.
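A hedged sketch of building those complex arguments with the factory and passing them in; the type names come from the WSDL dump above, but the exact prefixes, field values, and the 'profile1' token are assumptions:
from suds.client import Client

client = Client('http://eample.wsdl')

# Build the complex arguments from their WSDL types, then fill in the fields.
destination = client.factory.create('ns4:PTZVector')
destination.PanTilt._x = 1.0
destination.PanTilt._y = 1.0

speed = client.factory.create('ns4:PTZSpeed')
speed.PanTilt._x = 0.5
speed.PanTilt._y = 0.5

res = client.service.AbsoluteMove('profile1', destination, speed)
print(res)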
