PySpark applyInPandas/grouped_map pandas_udf too many arguments - python

I'm trying to use the pyspark applyInPandas in my python code. Problem is, the function that I want to pass to it exists in the same class, and so it is defined as def func(self, key, df). This becomes an issue because applyInPandas will error out saying I'm passing too many arguments to the underlying func (at most it allows a key and df params, so the self is causing the issue). Is there any way around this?
The underlying goal is to process a pandas function on dataframe groups in parallel.

As OP mentioned, one way is to just use @staticmethod, which may not be desirable in some cases.
The pyspark source code for creating pandas_udf uses inspect.getfullargspec().args (lines 386 and 436); this includes self even when the method is called on an instance. I would think this is a bug on their part (maybe worthwhile to raise a ticket).
To overcome this, the easiest way is to use functools.partial which can help change the argspec, i.e. remove the self argument and restore the number of args to 2.
This is based on the idea that calling an instance method is the same as calling the method directly on the class and supplying the instance as the first argument (because of the descriptor magic):
A.func(A(), *args, **kwargs) == A().func(*args, **kwargs)
In a concrete example:
import functools
import inspect

class A:
    def __init__(self, y):
        self.y = y

    def sum(self, a: int, b: int):
        return (a + b) * self.y

    def x(self):
        # call the method via the class, supplying the instance as the self argument
        f = functools.partial(A.sum, self)
        print(f(1, 2))
        print(inspect.getfullargspec(f).args)

A(2).x()
This will print
6 # can still use 'self.y'
['a', 'b'] # 2 arguments (without 'self')
Then, in OP's case, one can simply do the same for key, df parameters:
class A:
    def __init__(self):
        ...

    def func(self, key, df):
        ...

    def x(self):
        f = functools.partial(A.func, self)
        self.df.groupby(...).applyInPandas(f)
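For completeness, a minimal end-to-end sketch of this pattern (the SparkSession setup, the column names, and the Processor/run names are illustrative assumptions, not from the question; note that applyInPandas also requires an output schema, here reused from the input):

import functools
import pandas as pd
from pyspark.sql import SparkSession

class Processor:
    def __init__(self, df):
        self.df = df  # a Spark DataFrame, assumed here to have 'id' and 'v' columns

    def func(self, key, pdf: pd.DataFrame) -> pd.DataFrame:
        # per-group pandas logic; 'key' is a tuple of the grouping values
        return pdf.assign(v=pdf["v"] - pdf["v"].mean())

    def run(self):
        # bind self so the argspec pyspark inspects has only (key, pdf)
        f = functools.partial(Processor.func, self)
        return self.df.groupby("id").applyInPandas(f, schema=self.df.schema)

spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame([(1, 1.0), (1, 2.0), (2, 3.0)], ["id", "v"])
Processor(sdf).run().show()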

Related

Modify an attribute of an already defined class in Python (and run its definition again)

I am trying to modify an already defined class by changing an attribute's value. Importantly, I want this change to propagate internally.
For example, consider this class:
class Base:
    x = 1
    y = 2 * x
    # Other attributes and methods might follow

assert Base.x == 1
assert Base.y == 2
I would like to change x to 2, making it equivalent to this.
class Base:
    x = 2
    y = 2 * x

assert Base.x == 2
assert Base.y == 4
But I would like to make it in the following way:
Base = injector(Base, x=2)
Is there a way to achieve this WITHOUT recompiling the original class source code?
The effect you want to achieve belongs to the realm of "reactive programming" - a programming paradigm (from where the now-ubiquitous JavaScript library got its name).
While Python has a lot of mechanisms to allow that, one needs to write code that actually makes use of these mechanisms.
By default, plain Python code like the one in your example uses the imperative paradigm, which is eager: whenever an expression is encountered, it is executed, and the result of that expression is used (in this case, the result is stored in the class attribute).
Python's flexibility can also make it so that once you write a codebase that allows some reactive code to take place, users of your codebase don't have to be aware of that, and things work more or less "magically".
But, as stated above, that is not free. For the case of being able to redefine y when x changes in
class Base:
    x = 1
    y = 2 * x
There are a couple of paths that can be followed - the most important point is that, at the time the "*" operator is executed (which happens when Python executes the class body), at least one side of the operation must no longer be a plain number, but a special object which implements a custom __mul__ (or __rmul__) method. Then, instead of storing a resulting number in y, the expression is stored somewhere, and when y is retrieved as a class attribute, other mechanisms force the expression to resolve.
If you want this at instance level, rather than at class level, it would be easier to implement. But keep in mind that you'd have to define each operator on your special "source" class for primitive values.
Also, both this and the easier instance-descriptor approach using property are "lazily evaluated": that is, the value for y is calculated when it is to be used (it can be cached if it will be used more than once). If you want to evaluate it whenever x is assigned (and not when y is consumed), that will require other mechanisms, although caching the lazy approach can mitigate the need for eager evaluation to the point where it should not be needed.
1 - Before digging there
Python's easiest way to do code like this is simply to write the expressions to be calculated as functions - and use the property built-in as a descriptor to retrieve their values. The drawback is small: you just have to wrap your expressions in a function (and then wrap that function in something that will add the descriptor behavior to it, such as property). The gain is huge: you are free to use any Python code inside your expression, including function calls, object instantiation, I/O, and the like. (Note that the other approach requires wiring up each desired operator just to get started.)
The plain "101" approach to have what you want working for instances of Base is:
class Base:
    x = 1

    @property
    def y(self):
        return self.x * 2

b = Base()
b.y
-> 2
Base.x = 3
b.y
-> 6
The work of property can be rewritten so that retrieving y from the class, instead of an instance, achieves the effect as well (this is still easier than the other approach).
If this will work for you somehow, I'd recommend doing it. If you need to cache y's value until x actually changes, that can be done with normal coding.
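For the class-level variant mentioned above, one minimal sketch (my illustration, not part of the original answer) is to define the property on a metaclass, since attribute lookup on a class goes through its metaclass:

class Meta(type):
    @property
    def y(cls):
        # recomputed on every access, so it always reflects the current cls.x
        return cls.x * 2

class Base(metaclass=Meta):
    x = 1

assert Base.y == 2
Base.x = 3
assert Base.y == 6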
2 - Exactly what you asked for, with a metaclass
As stated above, Python would need to know about the special status of your y attribute when calculating its expression 2 * x. At assignment time, it would already be too late.
Fortunately, Python 3 allows class bodies to run in a custom namespace for attribute assignment, by implementing the __prepare__ method in a metaclass; that namespace can then record everything that takes place and replace primitive attributes of interest with specially crafted objects implementing __mul__ and the other special methods.
Going this way could even allow values to be eagerly calculated, so they can work as plain Python objects, but register information so that a special injector function could recreate the class redoing all the attributes that depend on expressions. It could also implement lazy evaluation, somewhat as described above.
from collections import UserDict
import operator

class Reactive:
    def __init__(self, value):
        self._initial_value = value
        self.values = {}

    def __set_name__(self, owner, name):
        self.name = name
        self.values[owner] = self._initial_value

    def __get__(self, instance, owner):
        return self.values[owner]

    def __set__(self, instance, value):
        raise AttributeError("value can't be set directly - call 'injector' to change this value")

    def value(self, cls=None):
        return self.values.get(cls, self._initial_value)

    op1 = value

    @property
    def result(self):
        return self.value

    # dynamically populate magic methods for operator overloading:
    for name in "mul add sub truediv pow contains".split():
        op = getattr(operator, name)
        locals()[f"__{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(self, other, operator)))(op)
        locals()[f"__r{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(other, self, operator)))(op)

class ReactiveExpr(Reactive):
    def __init__(self, value, op2, operator):
        self.op2 = op2
        self.operator = operator
        super().__init__(value)

    def result(self, cls):
        op1, op2 = self.op1(cls), self.op2
        if isinstance(op1, Reactive):
            op1 = op1.result(cls)
        if isinstance(op2, Reactive):
            op2 = op2.result(cls)
        return self.operator(op1, op2)

    def __get__(self, instance, owner):
        return self.result(owner)

class AuxDict(UserDict):
    def __init__(self, *args, _parent, **kwargs):
        self.parent = _parent
        super().__init__(*args, **kwargs)

    def __setitem__(self, item, value):
        if isinstance(value, self.parent.reacttypes) and not item.startswith("_"):
            value = Reactive(value)
        super().__setitem__(item, value)

class MetaReact(type):
    reacttypes = (int, float, str, bytes, list, tuple, dict)

    def __prepare__(*args, **kwargs):
        return AuxDict(_parent=__class__)

    def __new__(mcls, name, bases, ns, **kwargs):
        pre_registry = {}
        cls = super().__new__(mcls, name, bases, ns.data, **kwargs)
        # for name, obj in ns.items():
        #     if isinstance(obj, ReactiveExpr):
        #         pre_registry[name] = obj
        #         setattr(cls, name, obj.result())
        for name, reactive in pre_registry.items():
            _registry[cls, name] = reactive
        return cls

def injector(cls, inplace=False, **kwargs):
    original = cls
    if not inplace:
        cls = type(cls.__name__, cls.__bases__, dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        if isinstance(attr, Reactive):
            if isinstance(attr, ReactiveExpr) and name in kwargs:
                raise AttributeError("Expression attributes can't be modified by injector")
            attr.values[cls] = kwargs.get(name, attr.values[original])
    return cls

class Base(metaclass=MetaReact):
    x = 1
    y = 2 * x
And, after pasting the snippet above in a REPL, here is the result of using injector:
In [97]: Base2 = injector(Base, x=5)
In [98]: Base2.y
Out[98]: 10
The idea is complicated by the fact that the Base class is declared with dependent, dynamically evaluated attributes. While we can inspect a class's static attributes, I think there's no way of getting the dynamic expression except by parsing the class's source code, finding and replacing the "injected" attribute name with its value, and exec/eval-ing the definition again. But that's the approach you wanted to avoid (moreover, you presumably expect injector to work uniformly for all classes).
If you want to rely on dynamically evaluated attributes, define the dependent attribute as a lambda function.
class Base:
    x = 1
    y = lambda: 2 * Base.x

Base.x = 2
print(Base.y())  # 4

How to initialize an object that requires __new__ and __init__

I'm creating a class sequence, which inherits from the builtin list and will hold an ordered collection of a second class, d0, which inherits from int. In addition to its int value, d0 must contain a secondary value, i, which denotes where it sits in the sequence, and a reference to the sequence itself.
My understanding is that because int is an immutable type, I have to use the __new__ method, and because it will have other attributes, I need to use __init__.
I've been trying for a while to get this to work and I've explored a few options.
Attempt 1:
class sequence(list):
    def __init__(self, data):
        for i, elem in enumerate(data):
            self.append(d0(elem, i, self))

class d0(int):
    def __new__(self, val, i, parent):
        self.i = i
        self.parent = parent
        return int.__new__(d0, val)

x = sequence([1,2,3])
print([val.i for val in x])
This was the most intuitive to me, but every time self.i is assigned, it overwrites the i attribute for all other instances of d0 in sequence. Though I'm not entirely clear why this happens, I understand that __new__ is not the place to instantiate an object.
Attempt 2:
class sequence(list):
    def __init__(self, data):
        for i, val in enumerate(data):
            self.append(d0(val, i, self))

class d0(int):
    def __new__(cls, *args):
        return super().__new__(cls, *args)

    def __init__(self, *args):
        self = args[0]
        self.i = args[1]
        self.parent = args[2]

x = sequence([1,2,3])
print([val.i for val in x])
This raises TypeError: int() takes at most 2 arguments (3 given), though I'm not sure why.
Attempt 3:
class sequence(list):
    def __init__(self, data):
        for i, val in enumerate(data):
            temp = d0.__new__(d0, val)
            temp.__init__(i, self)
            self.append(temp)

class d0(int):
    def __new__(cls, val):
        return int.__new__(d0, val)

    def __init__(self, i, parent):
        self.i = i
        self.parent = parent

x = sequence([1,2,3])
print([val.i for val in x])
This accomplishes the task, but is cumbersome and otherwise just feels strange to have to explicitly call __new__ and __init__ to instantiate an object.
What is the proper way to accomplish this? I would also appreciate any explanation for the undesired behavior in attempts 1 and 2.
First, your sequence isn’t much of a type so far: calling append on it won’t preserve its indexed nature (let alone sort or slice assignment!). If you just want to make lists that look like this, just write a function that returns a list. Note that list itself behaves like such a function (it was one back in the Python 1 days!), so you can often still use it like a type.
So let’s talk just about d0. Leaving aside the question of whether deriving from int is a good idea (it’s at least less work than deriving from list properly!), you have the basic idea correct: you need __new__ for an immutable (base) type, because at __init__ time it’s too late to choose its value. So do so:
class d0(int):
    def __new__(cls, val, i, parent):
        return super().__new__(cls, val)
Note that this is a class method: there’s no instance yet, but we do need to know what class we’re instantiating (what if someone inherits from d0?). This is what attempt #1 got wrong: it thought the first argument was an instance to which to assign attributes.
Note also that we pass only one (other) argument up: int can’t use our ancillary data. (Nor can it ignore it: consider int('f',16).) Thus failed #2: it sent all the arguments up.
We can install our other attributes now, but the right thing to do is use __init__ to separate manufacturing an object from initializing it:
# d0 continued
    def __init__(self, val, i, parent):
        # super().__init__(val)
        self.i = i
        self.parent = parent
Note that all the arguments appear again, even val which we ignore. This is because calling a class involves only one argument list (cf. d0(elem,i,self)), so __new__ and __init__ have to share it. (It would therefore be formally correct to pass val to int.__init__, but what would it do with it? There’s no use in calling it at all since we know int is already completely set up.) Using #3 was painful because it didn’t follow this rule.
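Putting the answer's two fragments together into one runnable sketch (the checks at the end are my addition, just to demonstrate the behavior):

class d0(int):
    def __new__(cls, val, i, parent):
        # int.__new__ only receives the value; i and parent are handled in __init__
        return super().__new__(cls, val)

    def __init__(self, val, i, parent):
        self.i = i
        self.parent = parent

class sequence(list):
    def __init__(self, data):
        for i, elem in enumerate(data):
            self.append(d0(elem, i, self))

x = sequence([1, 2, 3])
print([int(v) for v in x])  # [1, 2, 3]
print([v.i for v in x])     # [0, 1, 2]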

Custom Indexing Python Data Structure

I have a class that wraps around the Python deque from collections. When I create a deque, x = deque(), and want to reference the first element....
In[78]: x[0]
Out[78]: 0
My question is how can use the [] for referencing in the following example wrapper
class deque_wrapper:
    def __init__(self):
        self.data_structure = deque()

    def newCustomAddon(x):
        return len(self.data_structure)

    def __repr__(self):
        return repr(self.data_structure)
I.e., continuing from the above example:
In[75]: x[0]
Out[76]: TypeError: 'deque_wrapper' object does not support indexing
I want to customize my own referencing, is that possible?
You want to implement the __getitem__ method:
class DequeWrapper:
    def __init__(self):
        self.data_structure = deque()

    def newCustomAddon(x):
        return len(self.data_structure)

    def __repr__(self):
        return repr(self.data_structure)

    def __getitem__(self, index):
        # etc
Whenever you do my_obj[x], Python will actually call my_obj.__getitem__(x).
You may also want to consider implementing the __setitem__ method, if applicable. (When you write my_obj[x] = y, Python will actually run my_obj.__setitem__(x, y).)
The documentation on the Python data model contains more information on which methods you need to implement in order to make custom data structures in Python.
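A minimal completed sketch (my completion of the "# etc" above, assuming indexing should simply delegate to the wrapped deque; the example data is illustrative):

from collections import deque

class DequeWrapper:
    def __init__(self):
        self.data_structure = deque()

    def __repr__(self):
        return repr(self.data_structure)

    def __getitem__(self, index):
        # delegate to the wrapped deque; raises IndexError when out of range
        return self.data_structure[index]

    def __setitem__(self, index, value):
        self.data_structure[index] = value

d = DequeWrapper()
d.data_structure.extend([10, 20, 30])
print(d[0])  # 10
d[0] = 99
print(d)     # deque([99, 20, 30])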

How to keep help strings the same when applying decorators?

How can I keep help strings in functions to be visible after applying a decorator?
Right now the doc string is (partially) replaced with that of the inner function of the decorator.
def deco(fn):
    def x(*args, **kwargs):
        return fn(*args, **kwargs)
    x.func_doc = fn.func_doc
    x.func_name = fn.func_name
    return x

@deco
def y(a, b):
    """This is Y"""
    pass

def z(c, d):
    """This is Z"""
    pass

help(y)  # 1
help(z)  # 2
In the Y function, required arguments aren't shown in the help. The user may assume it takes any arguments, while actually it doesn't.
y(*args, **kwargs)   <= y(a, b) is desired
    This is Y
z(c, d)
    This is Z
I use help() and dir() a lot, since it's faster than pdf manuals, and want to make reliable document strings for my library and tools, but this is an obstacle.
Give the decorator module a peek; I believe it does exactly what you want.
In [1]: from decorator import decorator

In [2]: @decorator
   ...: def say_hello(f, *args, **kwargs):
   ...:     print "Hello!"
   ...:     return f(*args, **kwargs)
   ...:

In [3]: @say_hello
   ...: def double(x):
   ...:     return 2*x
   ...:
and the info for double shows "double(x)" in it.
What you're requesting is very hard to do "properly", because help gets the function signature from inspect.getargspec which in turn gets it from introspection which cannot directly be fooled -- to do it "properly" would mean generating a new function object on the fly (instead of a simple wrapper function) with the right argument names and numbers (and default values). Extremely hard, advanced, black-magic bytecode hacking required, in other words.
I think it may be easier to do it by monkeypatching (never a pleasant prospect, but sometimes the only way to perform customization tasks that are otherwise so difficult as to prove almost impossible, like the one you require) -- replace the real inspect.getargspec with your own lookalike function which uses a look-aside table (mapping the wrapper functions you generate to the wrapped functions' argspecs and otherwise delegating to the real thing).
import functools
import inspect

realgas = inspect.getargspec
lookaside = dict()

def fakegas(f):
    if f in lookaside:
        return lookaside[f]
    return realgas(f)

inspect.getargspec = fakegas

def deco(fn):
    @functools.wraps(fn)
    def x(*args, **kwargs):
        return fn(*args, **kwargs)
    lookaside[x] = realgas(fn)
    return x

@deco
def x(a, b=23):
    """Some doc for x."""
    return a + b

help(x)
This prints, as required:
Help on function x in module __main__:
x(a, b=23)
Some doc for x.
(END)
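For what it's worth, on Python 3 (3.4+) the monkeypatching is no longer needed: functools.wraps stores the original function in the wrapper's __wrapped__ attribute, and inspect.signature (which help() uses) follows it, so the original signature shows through automatically:

import functools

def deco(fn):
    @functools.wraps(fn)
    def x(*args, **kwargs):
        return fn(*args, **kwargs)
    return x

@deco
def y(a, b):
    """This is Y"""

help(y)  # shows: y(a, b) -- This is Y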

Set function signature in Python

Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customized signature.
More detail
Given a list l and a dictionary d I want to be able to:
Set the non-keyword arguments of f2 to the strings in l
Set the keyword arguments of f2 to the keys in d and the default values to the values of d
I.e., suppose we have:
l = ["x", "y"]
d = {"opt": None}
def f(*args, **kwargs):
    # My code
Then I would want a function with signature:
def f2(x, y, opt=None):
    # My code
A specific use case
This is just a simplified version of my specific use case. I am giving this as an example only.
My actual use case (simplified) is as follows. We have a generic initiation function:
def generic_init(self, *args, **kwargs):
    """Function to initiate a generic object"""
    for name, arg in zip(self.__init_args__, args):
        setattr(self, name, arg)
    for name, default in self.__init_kw_args__.items():
        if name in kwargs:
            setattr(self, name, kwargs[name])
        else:
            setattr(self, name, default)
We want to use this function in a number of classes. In particular, we want to create a function __init__ that behaves like generic_init, but has the signature defined by some class variables at creation time:
class my_class:
    __init_args__ = ["x", "y"]
    __kw_init_args__ = {"my_opt": None}

__init__ = create_initiation_function(my_class, generic_init)
setattr(my_class, "__init__", __init__)
We want create_initiation_function to create a new function with the signature defined using __init_args__ and __kw_init_args__. Is it possible to write create_initiation_function?
Please note:
If I just wanted to improve the help, I could set __doc__.
We want to set the function signature on creation. After that, it doesn't need to be changed.
Instead of creating a function like generic_init, but with a different signature we could create a new function with the desired signature that just calls generic_init
We want to define create_initiation_function. We don't want to manually specify the new function!
Related
Preserving signatures of decorated functions: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value
From PEP-0362, there actually does appear to be a way to set the signature in py3.3+, using the fn.__signature__ attribute:
from inspect import signature
from functools import wraps

def shared_vars(*shared_args):
    """Decorator factory that defines shared variables that are
    passed to every invocation of the function"""
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            full_args = shared_args + args
            return f(*full_args, **kwargs)
        # Override signature
        sig = signature(f)
        sig = sig.replace(parameters=tuple(sig.parameters.values())[1:])
        wrapper.__signature__ = sig
        return wrapper
    return decorator
Then:
>>> @shared_vars({"myvar": "myval"})
... def example(_state, a, b, c):
...     return _state, a, b, c
>>> example(1,2,3)
({'myvar': 'myval'}, 1, 2, 3)
>>> str(signature(example))
'(a, b, c)'
Note: the PEP is not exactly right; Signature.replace moved the params from a positional arg to a kw-only arg.
For your use case, having a docstring in the class/function should work -- that will show up in help() okay, and can be set programmatically (func.__doc__ = "stuff").
I can't see any way of setting the actual signature. I would have thought the functools module would have done it if it was doable, but it doesn't, at least in py2.5 and py2.6.
You can also raise a TypeError exception if you get bad input.
Hmm, if you don't mind being truly vile, you can use compile()/eval() to do it. If your desired signature is specified by arglist=["foo","bar","baz"], and your actual function is f(*args, **kwargs), you can manage:
argstr = ", ".join(arglist)
fakefunc = "def func(%s):\n return real_func(%s)\n" % (argstr, argstr)
fakefunc_code = compile(fakefunc, "fakesource", "exec")
fakeglobals = {}
eval(fakefunc_code, {"real_func": f}, fakeglobals)
f_with_good_sig = fakeglobals["func"]
help(f) # f(*args, **kwargs)
help(f_with_good_sig) # func(foo, bar, baz)
Changing the docstring and func_name should get you a complete solution. But, uh, eww...
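The last step alluded to there is just two attribute assignments (using the Python 3 attribute names here; in Python 2 they were func_name and func_doc):

f_with_good_sig.__name__ = f.__name__
f_with_good_sig.__doc__ = f.__doc__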
I wrote a package named forge that solves this exact problem for Python 3.5+:
With your current code looking like this:
l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
And your desired code looking like this:
def f2(x, y, opt=None):
    # My code
Here is how you would solve that using forge:
f2 = forge.sign(
    forge.arg('x'),
    forge.arg('y'),
    forge.arg('opt', default=None),
)(f)
As forge.sign is a wrapper, you could also use it directly:
@forge.sign(
    forge.arg('x'),
    forge.arg('y'),
    forge.arg('opt', default=None),
)
def func(*args, **kwargs):
    # signature becomes: func(x, y, opt=None)
    return (args, kwargs)

assert func(1, 2) == ((), {'x': 1, 'y': 2, 'opt': None})
Have a look at makefun, it was made for that (exposing variants of functions with more or less parameters and accurate signature), and works in python 2 and 3.
Your example would be written like this:
try:  # python 3.3+
    from inspect import signature, Signature, Parameter
except ImportError:
    from funcsigs import signature, Signature, Parameter

from makefun import create_function

def create_initiation_function(cls, gen_init):
    # (1) check which signature we want to create
    params = [Parameter('self', kind=Parameter.POSITIONAL_OR_KEYWORD)]
    for mandatory_arg_name in cls.__init_args__:
        params.append(Parameter(mandatory_arg_name, kind=Parameter.POSITIONAL_OR_KEYWORD))
    for default_arg_name, default_arg_val in cls.__opt_init_args__.items():
        params.append(Parameter(default_arg_name, kind=Parameter.POSITIONAL_OR_KEYWORD, default=default_arg_val))
    sig = Signature(params)

    # (2) create the init function dynamically
    return create_function(sig, gen_init)

# ----- let's use it

def generic_init(self, *args, **kwargs):
    """Function to initiate a generic object"""
    assert len(args) == 0
    for name, val in kwargs.items():
        setattr(self, name, val)

class my_class:
    __init_args__ = ["x", "y"]
    __opt_init_args__ = {"my_opt": None}

my_class.__init__ = create_initiation_function(my_class, generic_init)
and works as expected:
# check
o1 = my_class(1, 2)
assert vars(o1) == {'y': 2, 'x': 1, 'my_opt': None}
o2 = my_class(1, 2, 3)
assert vars(o2) == {'y': 2, 'x': 1, 'my_opt': 3}
o3 = my_class(my_opt='hello', y=3, x=2)
assert vars(o3) == {'y': 3, 'x': 2, 'my_opt': 'hello'}
You can't do this with live code.
That is, you seem to be wanting to take an actual, live function that looks like this:
def f(*args, **kwargs):
    print args[0]
and change it to one like this:
def f(a):
    print a
The reason this can't be done--at least without modifying actual Python bytecode--is because these compile differently.
The former results in a function that receives two parameters: a list and a dict, and the code you're writing operates on that list and dict. The second results in a function that receives one parameter, and which is accessed as a local variable directly. If you changed the function "signature", so to speak, it'd result in a function like this:
def f(a):
    print a[0]
which obviously wouldn't work.
If you want more detail (though it doesn't really help you), a function that takes *args or **kwargs has one or two bits set in f.func_code.co_flags; you can examine this yourself. The function that takes a regular parameter has f.func_code.co_argcount set to 1; the *args version is 0. This is what Python uses to figure out how to set up the function's stack frame when it's called, to check parameters, etc.
If you want to play around with modifying the function directly--if only to convince yourself that it won't work--see this answer for how to create a code object and live function from an existing one to modify bits of it. (This stuff is documented somewhere, but I can't find it; it's nowhere in the types module docs...)
That said, you can dynamically change the docstring of a function. Just assign to func.__doc__. Be sure to only do this at load time (from the global context or--most likely--a decorator); if you do it later on, tools that load the module to examine docstrings will never see it.
Maybe I didn't understand the problem well, but if it's about keeping the same behavior while changing the function signature, then you can do something like this:
# define a function
def my_func(name, age):
    print "I am %s and I am %s" % (name, age)

# label the function with a backup name
save_func = my_func

# rewrite the function with a different signature
def my_func(age, name):
    # use the backup name to call the old function and keep the old behavior
    save_func(name, age)

# you can use the new signature
my_func(35, "Bob")
This outputs :
I am Bob and I am 35
We want create_initiation_function to change the signature
Please don't do this.
We want to use this function in a number of classes
Please use ordinary inheritance.
There's no value in having the signature "changed" at run time.
You're creating a maintenance nightmare. No one else will ever bother to figure out what you're doing. They'll simply rip it out and replace it with inheritance.
Do this instead. It's simple and obvious and makes your generic init available in all subclasses in an obvious, simple, Pythonic way.
class Super( object ):
    def __init__( self, *args, **kwargs ):
        # the generic __init__ that we want every subclass to use
        pass

class SomeSubClass( Super ):
    def __init__( self, this, that, **kwdefaults ):
        super( SomeSubClass, self ).__init__( this, that, **kwdefaults )

class AnotherSubClass( Super ):
    def __init__( self, x, y, **kwdefaults ):
        super( AnotherSubClass, self ).__init__( x, y, **kwdefaults )
Edit 1: Answering new question:
You ask how you can create a function with this signature:
def fun(a, b, opt=None):
    pass
The correct way to do that in Python is thus:
def fun(a, b, opt=None):
    pass
Edit 2: Answering explanation:
"Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customised signature."
def f(*args, **kw):
    pass
OK, then f2 looks like so:
def f2(a, b, opt=None):
    f(a, b, opt=opt)
Again, the answer to your question is so trivial that you obviously want to know something different from what you are asking. You really do need to stop asking abstract questions and explain your concrete problem.
